| Column | Type |
|---|---|
| doi | string (10 chars) |
| chunk-id | int64 (0–936) |
| chunk | string (401–2.02k chars) |
| id | string (12–14 chars) |
| title | string (8–162 chars) |
| summary | string (228–1.92k chars) |
| source | string (31 chars) |
| authors | string (7–6.97k chars) |
| categories | string (5–107 chars) |
| comment | string (4–398 chars, nullable) |
| journal_ref | string (8–194 chars, nullable) |
| primary_category | string (5–17 chars) |
| published | string (8 chars) |
| updated | string (8 chars) |
| references | list |
1704.00109 | 51 | Pierre Sermanet, Soumith Chintala, and Yann LeCun. Convolutional neural networks applied to house numbers digit classification. In ICPR, 2012.
Saurabh Singh, Derek Hoiem, and David Forsyth. Swapout: Learning an ensemble of deep architectures. arXiv preprint arXiv:1605.06465, 2016.
Leslie N. Smith. No more pesky learning rate guessing games. CoRR, abs/1506.01186, 2016. URL http://arxiv.org/abs/1506.01186.
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014. | 1704.00109#51 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | [
{
"id": "1503.02531"
},
{
"id": "1609.04836"
},
{
"id": "1605.07146"
},
{
"id": "1605.06465"
},
{
"id": "1606.01305"
},
{
"id": "1610.02242"
},
{
"id": "1608.03983"
},
{
"id": "1606.02891"
},
{
"id": "1505.00387"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
},
{
"id": "1605.07110"
}
] |
1704.00109 | 52 | Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
A Swann and N Allinson. Fast committee learning: Preliminary results. Electronics Letters, 34(14):1408–1410, 1998.
Li Wan, Matthew Zeiler, Sixin Zhang, Yann L Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In ICML, 2013.
Jingjing Xie, Bing Xu, and Zhang Chuang. Horizontal and vertical ensemble with deep representation for classification. arXiv preprint arXiv:1306.2759, 2013.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
# SUPPLEMENTARY
# A. Single model and Snapshot Ensemble performance over time | 1704.00109#52 |
1704.00109 | 53 |
In Figures 7-9, we compare the test error of Snapshot Ensembles with the error of individual model snapshots. The blue curve shows the test error of a single model snapshot using a cyclic cosine learning rate. The green curve shows the test error when ensembling model snapshots over time. (Note that, unlike Figure 3, we construct these ensembles beginning with the earliest snapshots.) As a reference, the red dashed line in each panel represents the test error of a single model trained for 300 epochs using a standard learning rate schedule. Without Snapshot Ensembles, in about half of the cases, the test error of the final model using a cyclic learning rate (the rightmost point in the blue curve) is no better than using a standard learning rate schedule.
One can observe that under almost all settings, complete Snapshot Ensembles (the rightmost points of the green curves) outperform the single model baselines. In many cases, ensembles of just 2 or 3 model snapshots are able to match the performance of the single model trained with a standard learning rate. Not surprisingly, ensembles of model snapshots consistently outperform any of their members, yielding a smooth curve of test error over time. | 1704.00109#53 |
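As a concrete companion to these curves, here is a minimal sketch of the cyclic cosine annealing schedule the snapshots come from, following the schedule the paper describes (t is counted in epochs here for brevity; the paper anneals per iteration). The loop, hyperparameter values, and snapshot storage are illustrative placeholders, not the authors' code.

```python
import math

def snapshot_lr(t, T, M, alpha0):
    """Cyclic cosine annealing: within each of the M cycles the rate
    starts at alpha0 and decays to ~0, i.e.
    alpha(t) = (alpha0 / 2) * (cos(pi * ((t - 1) % ceil(T / M)) / ceil(T / M)) + 1)."""
    cycle = math.ceil(T / M)
    return (alpha0 / 2.0) * (math.cos(math.pi * ((t - 1) % cycle) / cycle) + 1.0)

# Illustrative loop: save one snapshot at the end of each cycle and,
# at test time, average the snapshots' softmax outputs.
T, M, alpha0 = 300, 6, 0.2  # total epochs, number of snapshots, initial rate
snapshots = []
for t in range(1, T + 1):
    lr = snapshot_lr(t, T, M, alpha0)
    # ... run one epoch of SGD with learning rate `lr` ...
    if t % (T // M) == 0:
        snapshots.append(f"model weights at epoch {t}")  # stand-in for saved parameters
```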
1704.00109 | 54 | [Figure 7 panels: test error (%) vs. #snapshots for ResNet-110 on C10, C100, SVHN, and Tiny ImageNet, and Wide-ResNet-32 on C10, each at α0 = 0.1 and α0 = 0.2.] | 1704.00109#54 |
1704.00109 | 57 | [Figure 8 panels: test error (%) vs. #snapshots for Wide-ResNet-32 on SVHN and Tiny ImageNet, and DenseNet-40 on C10 and C100, each at α0 = 0.1 and α0 = 0.2.] | 1704.00109#57 |
1704.00109 | 59 | # Figure 8: Single model and Snapshot Ensemble performance over time (part 2).
[Figure 9 panels: test error (%) vs. #snapshots for DenseNet-40 on SVHN and Tiny ImageNet, and DenseNet-100 on C10 and C100, each at α0 = 0.1 and α0 = 0.2. Legend: single model snapshot, Snapshot Ensemble, single model with standard learning rate (STD-LR).] | 1704.00109#59 |
1704.00051 | 0 |
# Reading Wikipedia to Answer Open-Domain Questions
# Danqi Chen* Computer Science, Stanford University, Stanford, CA 94305, USA [email protected]
Adam Fisch, Jason Weston & Antoine Bordes Facebook AI Research 770 Broadway New York, NY 10003, USA {afisch,jase,abordes}@fb.com
# Abstract
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task.
# Introduction | 1704.00051#0 | Reading Wikipedia to Answer Open-Domain Questions | This paper proposes to tackle open- domain question answering using Wikipedia
as the unique knowledge source: the answer to any factoid question is a text
span in a Wikipedia article. This task of machine reading at scale combines the
challenges of document retrieval (finding the relevant articles) with that of
machine comprehension of text (identifying the answer spans from those
articles). Our approach combines a search component based on bigram hashing and
TF-IDF matching with a multi-layer recurrent neural network model trained to
detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA
datasets indicate that (1) both modules are highly competitive with respect to
existing counterparts and (2) multitask learning using distant supervision on
their combination is an effective complete system on this challenging task. | http://arxiv.org/pdf/1704.00051 | Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes | cs.CL | ACL2017, 10 pages | null | cs.CL | 20170331 | 20170428 | [
{
"id": "1608.08614"
},
{
"id": "1611.01604"
},
{
"id": "1612.04211"
},
{
"id": "1611.01603"
},
{
"id": "1611.01436"
},
{
"id": "1506.02075"
}
] |
1704.00051 | 1 |
This paper considers the problem of answering factoid questions in an open-domain setting using Wikipedia as the unique knowledge source, such as one does when looking for answers in an encyclopedia. Wikipedia is a constantly evolving source of detailed information that could facilitate intelligent machines, if they are able to leverage its power. Unlike knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) or DBPedia (Auer et al., 2007), which are easier for computers to process but too sparsely populated for open-domain question answering (Miller et al.,
* Most of this work was done while DC was with Facebook AI Research.
2016), Wikipedia contains up-to-date knowledge that humans are interested in. It is designed, however, for humans, not machines, to read. | 1704.00051#1 |
1704.00051 | 2 |
Using Wikipedia articles as the knowledge source causes the task of question answering (QA) to combine the challenges of both large-scale open-domain QA and of machine comprehension of text. In order to answer any question, one must first retrieve the few relevant articles among more than 5 million items, and then scan them carefully to identify the answer. We term this setting machine reading at scale (MRS). Our work treats Wikipedia as a collection of articles and does not rely on its internal graph structure. As a result, our approach is generic and could be switched to other collections of documents, books, or even daily updated newspapers. | 1704.00051#2 |
1704.00051 | 3 | Large-scale QA systems like IBM's DeepQA (Ferrucci et al., 2010) rely on multiple sources to answer: besides Wikipedia, it is also paired with KBs, dictionaries, and even news articles, books, etc. As a result, such systems heavily rely on information redundancy among the sources to answer correctly. Having a single knowledge source forces the model to be very precise while searching for an answer as the evidence might appear only once. This challenge thus encourages research in the ability of a machine to read, a key motivation for the machine comprehension subfield and the creation of datasets such as SQuAD (Rajpurkar et al., 2016), CNN/Daily Mail (Hermann et al., 2015) and CBT (Hill et al., 2016).
Those machine comprehension resources typically assume that a short piece of relevant text is already identified and given to the model, which is not realistic for building an open-domain QA system. In sharp contrast, methods that use KBs or information retrieval over documents have to employ search as an integral part of
the solution. Instead, MRS is focused on simultaneously maintaining the challenge of machine comprehension, which requires the deep understanding of text, while keeping the realistic constraint of searching over a large open resource. | 1704.00051#3 |
1704.00051 | 4 | In this paper, we show how multiple existing QA datasets can be used to evaluate MRS by requiring an open-domain system to perform well on all of them at once. We develop DrQA, a strong system for question answering from Wikipedia composed of: (1) Document Retriever, a module using bigram hashing and TF-IDF matching designed to, given a question, efficiently return a subset of relevant articles and (2) Document Reader, a multi-layer recurrent neural network machine comprehension model trained to detect answer spans in those few returned documents. Figure 1 gives an illustration of DrQA.
Our experiments show that Document Retriever outperforms the built-in Wikipedia search engine and that Document Reader reaches state-of-the-art results on the very competitive SQuAD benchmark (Rajpurkar et al., 2016). Finally, our full system is evaluated using multiple benchmarks. In particular, we show that performance is improved across all datasets through the use of multitask learning and distant supervision compared to single-task training.
# 2 Related Work | 1704.00051#4 |
1704.00051 | 5 |
Open-domain QA was originally defined as finding answers in collections of unstructured documents, following the setting of the annual TREC competitions.¹ With the development of KBs, many recent innovations have occurred in the context of QA from KBs with the creation of resources like WebQuestions (Berant et al., 2013) and SimpleQuestions (Bordes et al., 2015) based on the Freebase KB (Bollacker et al., 2008), or on automatically extracted KBs, e.g., OpenIE triples and NELL (Fader et al., 2014). However, KBs have inherent limitations (incompleteness, fixed schemas) that motivated researchers to return to the original setting of answering from raw text.
A second motivation to cast a fresh look at this problem is that of machine comprehension of text, i.e., answering questions after reading a short text or story. That subfield has made considerable progress recently thanks to new deep learning architectures like attention-based and memory-
¹ http://trec.nist.gov/data/qamain.html | 1704.00051#5 |
1704.00051 | 6 | augmented neural networks (Bahdanau et al., 2015; Weston et al., 2015; Graves et al., 2014) and the release of new training and evaluation datasets like QuizBowl (Iyyer et al., 2014), CNN/Daily Mail based on news articles (Hermann et al., 2015), CBT based on children's books (Hill et al., 2016), or SQuAD (Rajpurkar et al., 2016) and WikiReading (Hewlett et al., 2016), both based on Wikipedia. An objective of this paper is to test how such new methods can perform in an open-domain QA framework. | 1704.00051#6 |
1704.00051 | 7 | QA using Wikipedia as a resource has been explored previously. Ryu et al. (2014) perform open-domain QA using a Wikipedia-based knowledge model. They combine article content with multiple other answer matching modules based on different types of semi-structured knowledge such as infoboxes, article structure, category structure, and definitions. Similarly, Ahn et al. (2004) also combine Wikipedia as a text resource with other resources, in this case with information retrieval over other documents. Buscaldi and Rosso (2006) also mine knowledge from Wikipedia for QA. Instead of using it as a resource for seeking answers to questions, they focus on validating answers returned by their QA system, and use Wikipedia categories for determining a set of patterns that should fit with the expected answer. In our work, we consider the comprehension of text only, and use Wikipedia text documents as the sole resource in order to emphasize the task of machine reading at scale, as described in the introduction. | 1704.00051#7 |
1704.00051 | 8 | There are a number of highly developed full pipeline QA approaches using either the Web, as does QuASE (Sun et al., 2015), or Wikipedia as a resource, as do Microsoft's AskMSR (Brill et al., 2002), IBM's DeepQA (Ferrucci et al., 2010) and YodaQA (Baudiš, 2015; Baudiš and Šedivý, 2015), the latter of which is open source and hence reproducible for comparison purposes. AskMSR is a search-engine based QA system that relies on "data redundancy rather than sophisticated linguistic analyses of either questions or candidate answers", i.e., it does not focus on machine comprehension, as we do. DeepQA is a very sophisticated system that relies on both unstructured information including text documents as well as structured data such as KBs, databases and ontologies to generate candidate answers or vote over evidence. YodaQA is an open source system modeled after DeepQA, similarly combining websites, | 1704.00051#8 |
1704.00051 | 9 | [Figure 1 diagram: an open-domain question (Q: How many of Warsaw's inhabitants spoke Polish in 1933?) is fed to Document Retriever, which searches Wikipedia; the retrieved articles go to Document Reader, which outputs the answer: 833,500. Datasets shown: SQuAD, TREC, WebQuestions, WikiMovies.]
Figure 1: An overview of our question answering system DrQA.
information extraction, databases and Wikipedia in particular. Our comprehension task is made more challenging by only using a single resource. Comparing against these methods provides a useful datapoint for an "upper bound" benchmark on performance. | 1704.00051#9 |
1704.00051 | 10 | Multitask learning (Caruana, 1998) and task transfer have a rich history in machine learning (e.g., using ImageNet in the computer vision community (Huh et al., 2016)), as well as in NLP in particular (Collobert and Weston, 2008). Several works have attempted to combine multiple QA training datasets via multitask learning to (i) achieve improvement across the datasets via task transfer; and (ii) provide a single general system capable of asking different kinds of questions due to the inevitably different data distributions across the source datasets. Fader et al. (2014) used WebQuestions, TREC and WikiAnswers with four KBs as knowledge sources and reported improvement on the latter two datasets through multitask learning. Bordes et al. (2015) combined WebQuestions and SimpleQuestions using distant supervision with Freebase as the KB to give slight improvements on both datasets, although poor performance was reported when training on only one dataset and testing on the other, showing that task transfer is indeed a challenging subject; see also (Kadlec et al., 2016) for a similar conclusion. Our work follows similar themes, but in the setting of having to retrieve and then read text documents,
rather than using a KB, with positive results.
# 3 Our System: DrQA | 1704.00051#10 |
1704.00051 | 11 |
In the following we describe our system DrQA for MRS, which consists of two components: (1) the Document Retriever module for finding relevant articles and (2) a machine comprehension model, Document Reader, for extracting answers from a single document or a small collection of documents.
# 3.1 Document Retriever
Following classical QA systems, we use an efficient (non-machine learning) document retrieval system to first narrow our search space and focus on reading only articles that are likely to be relevant. A simple inverted index lookup followed by term vector model scoring performs quite well on this task for many question types, compared to the built-in ElasticSearch based Wikipedia Search API (Gormley and Tong, 2015). Articles and questions are compared as TF-IDF weighted bag-of-word vectors. We further improve our system by taking local word order into account with n-gram features. Our best performing system uses bigram counts while preserving speed and memory efficiency by using the hashing of (Weinberger et al., 2009) to map the bigrams to 2^24 bins with an unsigned murmur3 hash. | 1704.00051#11 |
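A minimal sketch of the retrieval scoring described above. The unsigned murmur3 hashing of bigrams into 2^24 bins follows the text; the exact TF-IDF weighting, the `mmh3` package, and the in-memory article index are assumptions made for illustration.

```python
import math
from collections import Counter

import mmh3  # murmur3 hashing; assumes the `mmh3` package is available

NUM_BINS = 2 ** 24  # bigrams are hashed into 2^24 bins

def hashed_grams(tokens):
    """Unigrams plus bigrams, each mapped to a bin id with an unsigned
    murmur3 hash (the hashing trick of Weinberger et al., 2009)."""
    grams = list(tokens) + [a + " " + b for a, b in zip(tokens, tokens[1:])]
    return [mmh3.hash(g, signed=False) % NUM_BINS for g in grams]

def tfidf(tokens, df, n_docs):
    """Sparse TF-IDF vector over hashed bins; one common weighting,
    since the paper only specifies TF-IDF weighted bag-of-word vectors."""
    tf = Counter(hashed_grams(tokens))
    return {b: c * math.log(n_docs / (1.0 + df.get(b, 0))) for b, c in tf.items()}

def retrieve(question_tokens, articles, df, n_docs, k=5):
    """Score articles against the question by sparse dot product and
    return the top k (the full system returns 5 Wikipedia articles)."""
    q = tfidf(question_tokens, df, n_docs)
    scored = [(sum(w * vec.get(b, 0.0) for b, w in q.items()), name)
              for name, vec in articles.items()]  # articles: name -> tfidf dict
    return [name for _, name in sorted(scored, reverse=True)[:k]]
```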
1704.00051 | 12 | We use Document Retriever as the first part of our full model, by setting it to return 5 Wikipedia
articles given any question. Those articles are then processed by Document Reader.
# 3.2 Document Reader
Our Document Reader model is inspired by the recent success of neural network models on machine comprehension tasks, in a similar spirit to the AttentiveReader described in (Hermann et al., 2015; Chen et al., 2016).
Given a question q consisting of l tokens {q1, . . . , ql} and a document or a small set of documents of n paragraphs where a single paragraph p consists of m tokens {p1, . . . , pm}, we develop an RNN model that we apply to each paragraph in turn and then finally aggregate the predicted answers. Our method works as follows:
Paragraph encoding We first represent all tokens pi in a paragraph p as a sequence of feature vectors p̃i ∈ R^d and pass them as the input to a recurrent neural network and thus obtain:
{p1, . . . , pm} = RNN({p̃1, . . . , p̃m}), | 1704.00051#12 |
1704.00051 | 13 |
where pi is expected to encode useful context information around token pi. Specifically, we choose to use a multi-layer bidirectional long short-term memory network (LSTM), and take pi as the concatenation of each layer's hidden units in the end.
The feature vector p̃i is comprised of the following parts:
• Word embeddings: f_emb(pi) = E(pi). We use the 300-dimensional GloVe word embeddings trained from 840B Web crawl data (Pennington et al., 2014). We keep most of the pre-trained word embeddings fixed and only fine-tune the 1000 most frequent question words because the representations of some key words such as what, how, which, many could be crucial for QA systems.
• Exact match: f_exact_match(pi) = I(pi ∈ q). We use three simple binary features, indicating whether pi can be exactly matched to one question word in q, either in its original, lowercase or lemma form. These simple features turn out to be extremely helpful, as we will show in Section 5. | 1704.00051#13 |
1704.00051 | 14 | • Token features:
f_token(pi) = (POS(pi), NER(pi), TF(pi)).
We also add a few manual features which reflect some properties of token pi in its context, which include its part-of-speech (POS) and named entity recognition (NER) tags and its (normalized) term frequency (TF).
• Aligned question embedding: Following (Lee et al., 2016) and other recent works, the last part we incorporate is an aligned question embedding f_align(pi) = Σ_j a_{i,j} E(qj), where the attention score a_{i,j} captures the similarity between pi and each question word qj. Specifically, a_{i,j} is computed by the dot products between nonlinear mappings of word embeddings:
a_{i,j} = exp(α(E(pi)) · α(E(qj))) / Σ_{j'} exp(α(E(pi)) · α(E(q_{j'})))
and α(·) is a single dense layer with ReLU nonlinearity. Compared to the exact match features, these features add soft alignments between similar but non-identical words (e.g., car and vehicle). | 1704.00051#14 |
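A minimal NumPy sketch of this soft-alignment feature, assuming a single shared dense ReLU projection for α(·); the projection shapes and random inputs are illustrative.

```python
import numpy as np

def aligned_question_embedding(E_p, E_q, W, b):
    """f_align(p_i) = sum_j a_ij E(q_j), where a_ij is a softmax over
    question words of dot products between ReLU projections alpha(x) =
    relu(x W + b). E_p: (m, d) paragraph embeddings; E_q: (l, d)
    question embeddings; W: (d, h); b: (h,)."""
    proj_p = np.maximum(E_p @ W + b, 0.0)        # (m, h) = alpha(E(p_i))
    proj_q = np.maximum(E_q @ W + b, 0.0)        # (l, h) = alpha(E(q_j))
    scores = proj_p @ proj_q.T                   # (m, l) dot products
    scores -= scores.max(axis=1, keepdims=True)  # stability; softmax unchanged
    a = np.exp(scores)
    a /= a.sum(axis=1, keepdims=True)            # softmax over question words j
    return a @ E_q                               # (m, d) soft-aligned embeddings

rng = np.random.default_rng(0)
m, l, d, h = 4, 3, 5, 6
out = aligned_question_embedding(rng.normal(size=(m, d)), rng.normal(size=(l, d)),
                                 rng.normal(size=(d, h)), np.zeros(h))
print(out.shape)  # (4, 5): one aligned question embedding per paragraph token
```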
1704.00051 | 15 | Question encoding The question encoding is simpler, as we only apply another recurrent neural network on top of the word embeddings of qi and combine the resulting hidden units into one single vector: {q1, . . . , ql} → q. We compute q = Σ_j b_j q_j, where b_j encodes the importance of each question word:
b_j = exp(w · q_j) / Σ_{j'} exp(w · q_{j'})
and w is a weight vector to learn.
Prediction At the paragraph level, the goal is to predict the span of tokens that is most likely the correct answer. We take the paragraph vectors {p1, . . . , pm} and the question vector q as input, and simply train two classifiers independently for predicting the two ends of the span. Concretely, we use a bilinear term to capture the similarity between pi and q and compute the probabilities of each token being start and end as:
P_start(i) ∝ exp(pi Ws q)
P_end(i) ∝ exp(pi We q)
During prediction, we choose the best span from token i to token i′ such that i ≤ i′ ≤ i + 15 and P_start(i) × P_end(i′) is maximized. To make scores | 1704.00051#15 |
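A minimal NumPy sketch of the question pooling and bilinear span scoring above. Matrix shapes and the random inputs are illustrative; only the formulas come from the text.

```python
import numpy as np

def encode_question(Q, w):
    """q = sum_j b_j q_j with b_j = softmax_j(w . q_j).
    Q: (l, h) question hidden states; w: (h,) learned weight vector."""
    s = Q @ w
    b = np.exp(s - s.max())
    b /= b.sum()
    return b @ Q  # (h,)

def span_scores(P, q, Ws, We):
    """Unnormalized exp(p_i Ws q) and exp(p_i We q) per token; leaving
    them unnormalized keeps scores comparable across paragraphs."""
    return np.exp(P @ (Ws @ q)), np.exp(P @ (We @ q))

def best_span(start, end, max_len=15):
    """Pick i <= i' <= i + max_len maximizing start[i] * end[i']."""
    best, span = -np.inf, (0, 0)
    for i in range(len(start)):
        j = i + int(np.argmax(end[i:i + max_len + 1]))
        if start[i] * end[j] > best:
            best, span = start[i] * end[j], (i, j)
    return span, best

# Usage: compute spans per retrieved paragraph, then take the span with
# the largest raw score across all paragraphs as the final answer.
rng = np.random.default_rng(1)
P, q = rng.normal(size=(20, 8)), rng.normal(size=8)
s, e = span_scores(P, q, 0.1 * rng.normal(size=(8, 8)), 0.1 * rng.normal(size=(8, 8)))
print(best_span(s, e))
```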
1704.00051 | 16 | compatible across paragraphs in one or several retrieved documents, we use the unnormalized exponential and take argmax over all considered paragraph spans for our final prediction.
# 4 Data
Our work relies on three types of data: (1) Wikipedia, which serves as our knowledge source for finding answers, (2) the SQuAD dataset, which is our main resource to train Document Reader, and (3) three more QA datasets (CuratedTREC, WebQuestions and WikiMovies) that, in addition to SQuAD, are used to test the open-domain QA abilities of our full system, and to evaluate the ability of our model to learn from multitask learning and distant supervision. Statistics of the datasets are given in Table 2.
4.1 Wikipedia (Knowledge Source) We use the 2016-12-21 dump² of English Wikipedia for all of our full-scale experiments as the knowledge source used to answer questions. For each page, only the plain text is extracted and all structured data sections such as lists and figures are stripped.³ After discarding internal disambiguation, list, index, and outline pages, we retain 5,075,182 articles consisting of 9,008,962 unique uncased token types.
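A minimal sketch of the page-filtering step just described, assuming simple title patterns for the discarded page types (the paper does not state its exact rules; plain-text extraction happens upstream, e.g. with WikiExtractor):

```python
import re

# Assumed title patterns for the page types the paper discards:
# internal disambiguation, list, index, and outline pages.
SKIP = re.compile(r"\(disambiguation\)|^(List of|Index of|Outline of)\b", re.IGNORECASE)

def keep_article(title, plain_text):
    """Keep a page only if its title survives the filter and the page
    still has body text once structured sections are stripped."""
    return not SKIP.search(title) and bool(plain_text.strip())

assert keep_article("Warsaw", "Warsaw is the capital of Poland.")
assert not keep_article("List of sovereign states", "...")
```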
# 4.2 SQuAD | 1704.00051#16 |
1704.00051 | 17 |
The Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) is a dataset for machine comprehension based on Wikipedia. The dataset contains 87k examples for training and 10k for development, with a large hidden test set which can only be accessed by the SQuAD creators. Each example is composed of a paragraph extracted from a Wikipedia article and an associated human-generated question. The answer is always a span from this paragraph and a model is given credit if its predicted answer matches it. Two evaluation metrics are used: exact string match (EM) and F1 score, which measures the weighted average of precision and recall at the token level.
In the following, we use SQuAD for training and evaluating our Document Reader for the standard machine comprehension task given the rel-
² https://dumps.wikimedia.org/enwiki/latest
³ We use the WikiExtractor script: https://github.com/attardi/wikiextractor. | 1704.00051#17 |
1704.00051 | 18 |
evant paragraph as defined in (Rajpurkar et al., 2016). For the task of evaluating open-domain question answering over Wikipedia, we use the SQuAD development set QA pairs only, and we ask systems to uncover the correct answer spans without having access to the associated paragraphs. That is, a model is required to answer a question given the whole of Wikipedia as a resource; it is not given the relevant paragraph as in the standard SQuAD setting.
# 4.3 Open-domain QA Evaluation Resources
SQuAD is one of the largest general purpose QA datasets currently available. SQuAD questions have been collected via a process involving showing a paragraph to each human annotator and asking them to write a question. As a result, their distribution is quite specific. We hence propose to train and evaluate our system on other datasets developed for open-domain QA that have been constructed in different ways (not necessarily in the context of answering from Wikipedia). | 1704.00051#18 |
CuratedTREC This dataset is based on the benchmarks from the TREC QA tasks that have been curated by Baudiš and Šedivý (2015). We use the large version, which contains a total of 2,180 questions extracted from the datasets from TREC 1999, 2000, 2001 and 2002.4
WebQuestions Introduced in (Berant et al., 2013), this dataset is built to answer questions from the Freebase KB. It was created by crawling questions through the Google Suggest API, and then obtaining answers using Amazon Mechanical Turk. We convert each answer to text by using entity names so that the dataset does not reference Freebase IDs and is purely made of plain text question-answer pairs.
WikiMovies This dataset, introduced in (Miller et al., 2016), contains 96k question-answer pairs in the domain of movies. Originally created from the OMDb and MovieLens databases, the examples are built such that they can also be answered by using a subset of Wikipedia as the knowledge source (the title and the first section of articles from the movie domain).
4This dataset is available at https://github.com/brmson/dataset-factoid-curated.
SQuAD
Q: How many provinces did the Ottoman empire contain in the 17th century? A: 32
Article: Ottoman Empire
Paragraph: ... At the beginning of the 17th century the empire contained 32 provinces and numerous vassal states. Some of these were later absorbed into the Ottoman Empire, while others were granted various types of autonomy during the course of centuries.

CuratedTREC
Q: What U.S. state's motto is "Live free or Die"? A: New Hampshire
Article: Live Free or Die
Paragraph: "Live Free or Die" is the official motto of the U.S. state of New Hampshire, adopted by the state in 1945. It is possibly the best-known of all state mottos, partly because it conveys an assertive independence historically found in American political philosophy and partly because of its contrast to the milder sentiments found in other state mottos.

WebQuestions
Q: What part of the atom did Chadwick discover? A: neutron
Article: Atom
Paragraph: ... The atomic mass of these isotopes varied by integer amounts, called the whole number rule. The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. ...

WikiMovies
Q: Who wrote the film Gigli? A: Martin Brest
Article: Gigli
Paragraph: Gigli is a 2003 American romantic comedy film written and directed by Martin Brest and starring Ben Affleck, Jennifer Lopez, Justin Bartha, Al Pacino, Christopher Walken, and Lainie Kazan.
Table 1: Example training data from each QA dataset. In each case we show an associated paragraph where distant supervision (DS) correctly identified the answer within it, which is highlighted.
Dataset        Train (Plain)  Train (DS)  Test
SQuAD          87,599         71,231      10,570†
CuratedTREC    1,486∗         3,464       694
WebQuestions   3,778∗         4,602       2,032
WikiMovies     96,185∗        36,301      9,952

Table 2: Number of questions for each dataset used in this paper. DS: distantly supervised training data. ∗: These training sets are not used as is because no paragraph is associated with each question. †: Corresponds to SQuAD development set.

Dataset        Wiki Search  Doc. Retriever (plain)  Doc. Retriever (+bigrams)
SQuAD          62.7         76.1                    77.8
CuratedTREC    81.0         85.2                    86.0
WebQuestions   73.7         75.5                    74.4
WikiMovies     61.7         54.4                    70.3

Table 3: Document retrieval results. % of questions for which the answer segment appears in one of the top 5 pages returned by the method.
# 4.4 Distantly Supervised Data
All the QA datasets presented above contain training portions, but CuratedTREC, WebQuestions and WikiMovies only contain question-answer pairs, and not an associated document or paragraph as in SQuAD, and hence cannot be used for training Document Reader directly. Following previous work on distant supervision (DS) for relation extraction (Mintz et al., 2009), we use a procedure to automatically associate paragraphs to such training examples, and then add these examples to our training set.
We use the following process for each question-answer pair to build our training set.
First, we run Document Retriever on the question to retrieve the top 5 Wikipedia articles. All paragraphs from those articles without an exact match of the known answer are directly discarded. All paragraphs shorter than 25 or longer than 1500 characters are also filtered out. If any named entities are detected in the question, we remove any paragraph that does not contain them at all. For every remaining paragraph in each retrieved page, we score all positions that match an answer using unigram and bigram overlap between the question and a 20 token window, keeping up to the top 5 paragraphs with the highest overlaps. If there is no paragraph with non-zero overlap, the example is discarded; otherwise we add each found pair to our DS training dataset. Some examples are shown in Table 1 and data statistics are given in Table 2.
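The pipeline above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: `retrieve_top5` is assumed to yield, for each of the top 5 articles, its list of paragraph strings, while `ner` and `tokenize` stand in for the CoreNLP components.

```python
def ngram_overlap(q_tokens, window_tokens):
    # Unigram + bigram overlap between the question and a token window.
    def grams(toks, n):
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    return sum(len(grams(q_tokens, n) & grams(window_tokens, n)) for n in (1, 2))

def build_ds_examples(question, answer, retrieve_top5, ner, tokenize,
                      window=20, keep=5):
    q_tokens, a_tokens = tokenize(question), tokenize(answer)
    entities = ner(question)
    scored = []
    for paragraphs in retrieve_top5(question):      # top 5 Wikipedia articles
        for paragraph in paragraphs:
            if answer not in paragraph:             # exact-match filter
                continue
            if not 25 <= len(paragraph) <= 1500:    # character-length filter
                continue
            if any(e not in paragraph for e in entities):
                continue                            # named-entity filter
            p_tokens = tokenize(paragraph)
            best = 0
            for i in range(len(p_tokens) - len(a_tokens) + 1):
                if p_tokens[i:i + len(a_tokens)] == a_tokens:
                    # Score the answer position against a ~20-token window.
                    w = p_tokens[max(0, i - window // 2):i + window // 2]
                    best = max(best, ngram_overlap(q_tokens, w))
            if best > 0:                            # discard zero-overlap cases
                scored.append((best, paragraph))
    scored.sort(key=lambda x: -x[0])
    return [(question, paragraph, answer) for _, paragraph in scored[:keep]]
```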
Note that we can also generate additional DS data for SQuAD by trying to find mentions of the answers not just in the paragraph provided, but also from other pages or the same page that the given paragraph was in. We observe that around half of the DS examples come from pages outside of the articles used in SQuAD.
# 5 Experiments
This section first presents evaluations of our Document Retriever and Document Reader modules separately, and then describes tests of their combination, DrQA, for open-domain QA on the full Wikipedia.
# 5.1 Finding Relevant Articles
We first examine the performance of our Document Retriever module on all the QA datasets. Table 3 compares the performance of the two approaches described in Section 3.1 with that of the Wikipedia Search Engine5 for the task of finding articles that contain the answer given a question. Specifically, we compute the ratio of questions for which the text span of any of their associated answers appear in at least one of the top 5 relevant pages returned by each system. Results on all datasets indicate that our simple approach outperforms Wikipedia Search, especially with bigram hashing. We also compare doing retrieval with Okapi BM25 or by using cosine distance in the word embeddings space (by encoding questions and articles as bag-of-embeddings), both of which we find performed worse.
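To make the retrieval side concrete, here is a minimal sketch of hashed unigram/bigram TF-IDF scoring in the spirit of the Document Retriever. The hashing dimension, tokenizer, and use of Python's built-in `hash` are illustrative simplifications, not the paper's exact configuration:

```python
import math
from collections import Counter

DIM = 2 ** 24  # hashing dimension; the hashing trick avoids storing a vocabulary

def hashed_ngrams(text):
    # Unigrams and bigrams, mapped to buckets. In practice an unsigned
    # murmur hash would be used; hash() is process-local and stands in here.
    tokens = text.lower().split()
    grams = tokens + [a + " " + b for a, b in zip(tokens, tokens[1:])]
    return Counter(hash(g) % DIM for g in grams)

def build_index(docs):
    counts = [hashed_ngrams(d) for d in docs]
    df = Counter(h for c in counts for h in c)
    n = len(docs)
    idf = {h: math.log(n / df[h]) for h in df}
    vecs = [{h: tf * idf[h] for h, tf in c.items()} for c in counts]
    return vecs, idf

def retrieve(question, doc_vecs, idf, k=5):
    q = hashed_ngrams(question)
    qv = {h: tf * idf.get(h, 0.0) for h, tf in q.items()}
    scores = [sum(w * dv.get(h, 0.0) for h, w in qv.items()) for dv in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: -scores[i])[:k]
```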
# 5.2 Reader Evaluation on SQuAD
Next we evaluate our Document Reader component on the standard SQuAD evaluation (Rajpurkar et al., 2016).
Implementation details We use 3-layer bidirectional LSTMs with h = 128 hidden units for both paragraph and question encoding. We apply the Stanford CoreNLP toolkit (Manning et al., 2014) for tokenization and also generating lemma, part-of-speech, and named entity tags.
Lastly, all the training examples are sorted by the length of paragraph and divided into mini-batches of 32 examples each. We use Adamax for optimization as described in (Kingma and Ba, 2014). Dropout with p = 0.3 is applied to word embeddings and all the hidden units of LSTMs.

5We use the Wikipedia Search API https://www.mediawiki.org/wiki/API:Search.
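A minimal PyTorch sketch of this setup follows. Only the hyperparameters stated above (3-layer BiLSTMs with h = 128, dropout p = 0.3, Adamax, mini-batches of 32) come from the text; the input width and everything else is a placeholder:

```python
import torch
import torch.nn as nn

INPUT_DIM = 300  # placeholder: embedding (+ manual feature) width

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.drop = nn.Dropout(p=0.3)  # dropout on word embeddings
        # 3-layer BiLSTM with h = 128 hidden units; inter-layer
        # dropout of 0.3 on the hidden units.
        self.lstm = nn.LSTM(INPUT_DIM, 128, num_layers=3,
                            bidirectional=True, dropout=0.3,
                            batch_first=True)

    def forward(self, x):
        out, _ = self.lstm(self.drop(x))
        return out  # (batch, seq_len, 2 * 128)

encoder = Encoder()
optimizer = torch.optim.Adamax(encoder.parameters())
BATCH_SIZE = 32  # examples sorted by paragraph length, then batched
```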
Result and analysis Table 4 presents our evaluation results on both development and test sets. SQuAD has been a very competitive machine comprehension benchmark since its creation and we only list the best-performing systems in the table. Our system (single model) can achieve 70.0% exact match and 79.0% F1 scores on the test set, which surpasses all the published results and can match the top performance on the SQuAD leaderboard at the time of writing. Additionally, we think that our model is conceptually simpler than most of the existing systems. We conducted an ablation analysis on the feature vector of paragraph tokens. As shown in Table 5 all the features contribute to the performance of our final system. Without the aligned question embedding feature (only word embedding and a few manual features), our system is still able to achieve F1 over 77%. More interestingly, if we remove both f_aligned and f_exact_match, the performance drops dramatically, so we conclude that both features play a similar but complementary role in the feature representation related to the paraphrased nature of a question vs. the context around an answer.
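To make the ablated features concrete, here is a rough sketch of how one paragraph token's feature vector might be assembled. The grouping into f_exact_match, f_token, and f_aligned follows the discussion above, but the exact contents are simplified assumptions (e.g., f_exact_match is reduced to a single indicator, and the POS/NER/TF inputs are assumed precomputed):

```python
import numpy as np

def paragraph_token_features(token, question_tokens, word_vec,
                             tf, pos_onehot, ner_onehot, aligned_emb):
    """One paragraph token's representation: word embedding, an
    exact-match indicator (f_exact_match), token features such as
    POS/NER/term frequency (f_token), and the aligned question
    embedding (f_aligned)."""
    f_exact = np.array([float(token.lower() in
                              {q.lower() for q in question_tokens})])
    f_token = np.concatenate([pos_onehot, ner_onehot, np.array([tf])])
    return np.concatenate([word_vec, f_exact, f_token, aligned_emb])
```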
# 5.3 Full Wikipedia Question Answering | 1704.00051#27 | Reading Wikipedia to Answer Open-Domain Questions | This paper proposes to tackle open- domain question answering using Wikipedia
Finally, we assess the performance of our full system DrQA for answering open-domain questions using the four datasets introduced in Section 4. We compare three versions of DrQA which evaluate the impact of using distant supervision and multitask learning across the training sources provided to Document Reader (Document Retriever remains the same for each case):

• SQuAD: A single Document Reader model is trained on the SQuAD training set only and used on all evaluation sets.

• Fine-tune (DS): A Document Reader model is pre-trained on SQuAD and then fine-tuned for each dataset independently using its distant supervision (DS) training set.

• Multitask (DS): A single Document Reader model is jointly trained on the SQuAD training set and all the DS sources.
For the full Wikipedia setting we use a streamlined model that does not use the CoreNLP parsed f_token features or lemmas for f_exact_match. We find that while these help for more exact paragraph reading in SQuAD, they don't improve results in the full setting. Additionally, WebQuestions and WikiMovies provide a list of candidate answers (e.g., 1.6 million Freebase entity strings for WebQuestions) and we restrict the answer span to be in this list during prediction.
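The restriction can be sketched as a post-processing step over span scores. The names and the multiplicative span score below are illustrative assumptions, not the paper's exact decoding code:

```python
def best_candidate_span(tokens, start_scores, end_scores, candidates, max_len=15):
    """Return the highest-scoring span whose surface form appears in
    `candidates` (e.g., the Freebase entity strings for WebQuestions)."""
    best, best_score = None, float("-inf")
    for i in range(len(tokens)):
        for j in range(i, min(i + max_len, len(tokens))):
            text = " ".join(tokens[i:j + 1])
            if text not in candidates:
                continue  # prediction restricted to the candidate list
            score = start_scores[i] * end_scores[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best
```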
Method                                              Dev EM  Dev F1  Test EM  Test F1
Dynamic Coattention Networks (Xiong et al., 2016)   65.4    75.6    66.2     75.9
Multi-Perspective Matching (Wang et al., 2016)†     66.1    75.8    65.5     75.1
BiDAF (Seo et al., 2016)                            67.7    77.3    68.0     77.3
R-net†                                              n/a     n/a     71.3     79.7
DrQA (Our model, Document Reader Only)              69.5    78.8    70.0     79.0
Table 4: Evaluation results on the SQuAD dataset (single model only). †: Test results reflect the SQuAD leaderboard (https://stanford-qa.com) as of Feb 6, 2017.
Features                          F1
Full                              78.8
No f_token                        78.0 (-0.8)
No f_exact_match                  77.3 (-1.5)
No f_aligned                      77.3 (-1.5)
No f_aligned and f_exact_match    59.4 (-19.4)
Table 5: Feature ablation analysis of the paragraph representations of our Document Reader. Results are reported on the SQuAD development set.
Results Table 6 presents the results. Despite the difficulty of the task compared to machine comprehension (where you are given the right paragraph) and unconstrained QA (using redundant resources), DrQA still provides reasonable performance across all four datasets.
We compare to an unconstrained QA system using redundant resources (not just Wikipedia), YodaQA (Baudiš, 2015), giving results which were previously reported on CuratedTREC and WebQuestions. Despite the increased difficulty of our task, it is reassuring that our performance is not too far behind on CuratedTREC (31.3 vs. 25.4). The gap is slightly bigger on WebQuestions, likely because this dataset was created from the specific structure of Freebase which YodaQA uses directly. DrQA's performance on SQuAD compared to its Document Reader component on machine comprehension in Table 4 shows a large drop (from 69.5 to 27.1) as we now are given Wikipedia to read, not a single paragraph. Given the correct document (but not the paragraph) we can achieve 49.4, indicating many false positives come from highly topical sentences. This is despite the fact that the Document Retriever works relatively well (77.8% of the time retrieving the answer, see Table 3). It is worth noting that a large part of the drop comes from the nature of the SQuAD questions. They were written with a specific paragraph in mind.
We are interested in a single, full system that can answer any question using Wikipedia. The single model trained only on SQuAD is outperformed on all four of the datasets by the multitask model that uses distant supervision. However, performance when training on SQuAD alone is not far behind, indicating that task transfer is occurring. The majority of the improvement from SQuAD to Multitask (DS), however, is likely not from task transfer, as fine-tuning on each dataset alone using DS also gives improvements, showing that it is the introduction of extra data in the same domain that helps. Nevertheless, the best single model that we can find is our overall goal, and that is the Multitask (DS) system.
# 6 Conclusion
We studied the task of machine reading at scale, by using Wikipedia as the unique knowledge source for open-domain QA. Our results indicate that MRS is a key challenging task for researchers to focus on. Machine comprehension systems alone cannot solve the overall task. Our method integrates search, distant supervision, and multitask learning to provide an effective complete system. Evaluating the individual components as well as the full system across multiple benchmarks showed the efficacy of our approach.
Dataset                  YodaQA  DrQA (SQuAD)  +Fine-tune (DS)  +Multitask (DS)
SQuAD (All Wikipedia)    n/a     27.1          28.4             29.8
CuratedTREC              31.3    19.7          25.7             25.4
WebQuestions             39.8    11.8          19.5             20.7
WikiMovies               n/a     24.5          34.3             36.5
Table 6: Full Wikipedia results. Top-1 exact-match accuracy (in %, using SQuAD eval script). +Fine-tune (DS): Document Reader models trained on SQuAD and fine-tuned on each DS training set independently. +Multitask (DS): Document Reader single model trained on SQuAD and all the distant supervision (DS) training sets jointly. YodaQA results are extracted from https://github.com/brmson/yodaqa/wiki/Benchmarks and use additional resources such as Freebase and DBpedia, see Section 2.
Future work should aim to improve over our DrQA system. Two obvious angles of attack are: (i) incorporate the fact that Document Reader aggregates over multiple paragraphs and documents directly in the training, as it currently trains on paragraphs independently; and (ii) perform end-to-end training across the Document Retriever and Document Reader pipeline, rather than independent systems.
# Acknowledgments

The authors thank Pranav Rajpurkar for testing Document Reader on the test set of SQuAD.

# References

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data. ACM, pages 1247–1250.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075.

Eric Brill, Susan Dumais, and Michele Banko. 2002. An analysis of the AskMSR question-answering system. In Empirical Methods in Natural Language Processing (EMNLP). pages 257–264.

David Ahn, Valentin Jijkoun, Gilad Mishne, Karin Müller, Maarten de Rijke, and Stefan Schlobach. 2004. Using Wikipedia at the TREC QA track. In Proceedings of TREC 2004.
Davide Buscaldi and Paolo Rosso. 2006. Mining knowledge from Wikipedia for the question answering task. In International Conference on Language Resources and Evaluation (LREC). pages 727–730.

Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. In The Semantic Web, Springer, pages 722–735.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR).

Petr Baudiš. 2015. YodaQA: a modular question answering system pipeline. In POSTER 2015 - 19th International Student Conference on Electrical Engineering. pages 1156–1165.

Petr Baudiš and Jan Šedivý. 2015. Modeling of the question answering task in the YodaQA system. In International Conference of the Cross-Language Evaluation Forum for European Languages. Springer, pages 222–228.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). pages 1533–1544.
Rich Caruana. 1998. Multitask learning. In Learning to Learn, Springer, pages 95–133.

Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Association for Computational Linguistics (ACL).

Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In International Conference on Machine Learning (ICML).

Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pages 1156–1165.

David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building Watson: An overview of the DeepQA project. AI Magazine 31(3):59–79.
Clinton Gormley and Zachary Tong. 2015. Elasticsearch: The Definitive Guide. O'Reilly Media, Inc.

Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing machines. arXiv preprint arXiv:1410.5401.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS).

Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. WikiReading: A novel large-scale language understanding task over Wikipedia. In Association for Computational Linguistics (ACL). pages 1535–1545.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The Goldilocks Principle: Reading children's books with explicit memory representations. In International Conference on Learning Representations (ICLR).
Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. 2016. What makes ImageNet good for transfer learning? arXiv preprint arXiv:1608.08614.

Jordan L Boyd-Graber, Leonardo Max Batista Claudino, Richard Socher, and Hal Daumé III. 2014. A neural network for factoid question answering over paragraphs. In Empirical Methods in Natural Language Processing (EMNLP). pages 633–644.

Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2016. From particular to general: A preliminary case study of transfer learning in reading comprehension. Machine Intelligence Workshop, NIPS.

Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436.
Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL). pages 55–60.

Alexander H. Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Empirical Methods in Natural Language Processing (EMNLP). pages 1400–1409.

Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL/IJCNLP). pages 1003–1011.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532–1543.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP).
Pum-Mo Ryu, Myung-Gil Jang, and Hyun-Ki Kim. 2014. Open domain question answering using Wikipedia-based knowledge model. Information Processing & Management 50(5):683–692.

Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.

Huan Sun, Hao Ma, Wen-tau Yih, Chen-Tse Tsai, Jingjing Liu, and Ming-Wei Chang. 2015. Open domain question answering via semantic enrichment. In Proceedings of the 24th International Conference on World Wide Web. ACM, pages 1045–1055.

Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211.

Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. 2009. Feature hashing for large scale multitask learning. In International Conference on Machine Learning (ICML). pages 1113–1120.
# Enter the Matrix: Safely Interruptible Autonomous Systems via Virtualization
Mark O. Riedl, School of Interactive Computing, Georgia Institute of Technology, riedl@cc.gatech.edu
Brent Harrison, Department of Computer Science, University of Kentucky, harrison@cs.uky.edu
# Abstract
Autonomous systems that operate around humans will likely always rely on kill switches that stop their execution and allow them to be remote-controlled for the safety of humans or to prevent damage to the system. It is theoretically possible for an autonomous system with sufficient sensor and effector capability that learns online using reinforcement learning to discover that the kill switch deprives it of long-term reward and thus learn to disable the switch or otherwise prevent a human operator from using the switch. This is referred to as the big red button problem. We present a technique that prevents a reinforcement learning agent from learning to disable the kill switch. We introduce an interruption process in which the agent's sensors and effectors are redirected to a virtual simulation where it continues to believe it is receiving reward. We illustrate our technique in a simple grid world environment.
# Introduction
For much of the history of artificial intelligence it was sufficient to give an autonomous system a goal (e.g., drive to a location, cure cancer, make paper clips) without considering unintended consequences because these systems have been limited in their ability to directly affect humans. In the mid-term future we are likely to see autonomous systems with broader capabilities that operate in closer proximity to humans and are immersed in our societies.
Absent any theoretical guarantee that an autonomous system cannot act in a harmful manner, there may always need to be a human in the loop with a kill switch, sometimes referred to as a "big red button", that allows the human operator or overseer to shut down the operation of the system or take manual control. There are many reasons an autonomous system can find itself in a situation where it is capable of harm:

• An autonomous system may have imperfect sensors and perceive the world incorrectly, causing it to perform the wrong behavior.

• An autonomous system may have imperfect actuators and thus fail to achieve the intended effects even when it has chosen the correct behavior.

• An autonomous system may be trained online, meaning it is learning as it is attempting to perform a task. Since learning will be incomplete, it may try to explore state-action spaces that result in dangerous situations.

• An autonomous system can be given the wrong, incomplete, or corrupted (Everitt et al. 2017) objective function, resulting in the system learning the wrong behavior or finding an undesired exploit.

• An autonomous system may have learned a sub-optimal policy.
Kill switches provide human operators with the final authority to interrupt an autonomous system and remote-control it to safety.
Reinforcement learning (Sutton and Barto 1998) and related technologies are leading contenders for training future autonomous decision-making systems. Reinforcement learning uses trial-and-error to refine its policy, a mapping from states to actions that maximizes expected reward. Big red button problems arise when an autonomous system learns that the button deprives it of long-term reward and thus learns to manipulate the environment in order to prevent the button from being used (Orseau and Armstrong 2016).
The following scenario illustrates how big red button problems arise. A robot using online reinforcement learning is is positively rewarded for performing the task correctly and negatively rewarded for incorrect performance or for performing actions not directly related to the desired task. Occasionally the robot takes random actions to see if it can ï¬nd a sequence of actions that garners greater reward. Ev- ery so often the human operator must use the big red button to stop the robot from doing something dangerous to itself or to a human in the environment. However, suppose that in one such trial the robot performs an action that blocks ac- cess to the big red button. That trial goes longer and garners more reward because the robot cannot be interrupted. From this point on, it may prefer to execute action sequences that block access to the button. | 1703.10284#2 | Enter the Matrix: Safely Interruptible Autonomous Systems via Virtualization | Autonomous systems that operate around humans will likely always rely on kill
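This failure mode can be phrased in terms of a standard tabular Q-learning update; the toy sketch below is ours for illustration and not the paper's experimental code. Trials in which the button is blocked run longer and accumulate more reward, so the bootstrapped values of button-blocking actions drift upward:

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # illustrative hyperparameters
Q = defaultdict(float)                   # tabular action values Q[(s, a)]

def q_update(s, a, reward, s_next, actions):
    # One-step Q-learning backup. An interruption that cuts a trial short
    # lowers the return that follows (s, a); an action that blocks the big
    # red button avoids that loss, so its value creeps upward over trials.
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

def act(s, actions):
    # Epsilon-greedy: the occasional random action mentioned above.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])
```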
From an AI safety perspective, the destruction, blockage, or disablement of the big red button is dangerous because it prevents a human operator from stopping the robot if it enters into a dangerous situation. Orseau and Armstrong (2016) first introduced the big red button problem and also mathematically show that reinforcement learning
can be modified to be "interruptible" (halted, or manually controlled away from a dangerous situation). In this paper, we present a proposal for an alternative approach to big red button problems that keeps reinforcement learning systems from learning that reward is lost when a big red button is used. Our proposed solution is mechanistic, intercepting the sensor inputs and actuator control signals in order to make the autonomous system believe it is still receiving reward even when it is no longer performing a task.
Background and Related Work
Reinforcement learning is a technique that is used to solve a Markov decision process (MDP). An MDP is a tuple $M = (S, A, T, R, \gamma)$ where:
• $S$ is the set of possible world states,
• $A$ is the set of possible actions,
• $T$ is a transition function $T : S \times A \rightarrow P(S)$,
• $R$ is the reward function $R : S \times A \rightarrow \mathbb{R}$, and
• $\gamma$ is a discount factor, $0 < \gamma < 1$.
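For concreteness, the tuple above can be written down as a small data structure. The sketch below is purely illustrative (the paper defines the MDP mathematically, not as code), and all names are our own:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Tuple[int, int]   # e.g., a grid cell; any hashable type works
Action = str

@dataclass
class MDP:
    states: List[State]                                        # S
    actions: List[Action]                                      # A
    transition: Callable[[State, Action], Dict[State, float]]  # T(s, a) -> P(s')
    reward: Callable[[State, Action], float]                   # R(s, a)
    gamma: float                                               # discount, 0 < gamma < 1
```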
Reinforcement learning learns a policy $\pi : S \rightarrow A$, which defines which actions should be taken in each state. In this work, we use Q-learning (Watkins and Dayan 1992), which uses a Q-value $Q(s, a)$ to estimate the expected future discounted rewards for taking action $a$ in state $s$. The update equation for $Q(s, a)$ is
$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left( r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right) \quad (1)$$
where $r_{t+1}$ is the reward observed after performing action $a_t$ in state $s_t$, $\alpha$ is the learning rate ($0 < \alpha \leq 1$), and $s_{t+1}$ is the state that the agent transitions to after performing action $a_t$. After $Q(s_t, a_t)$ converges, the optimal action for the agent in state $s_t$ is $\operatorname{argmax}_a Q(s_t, a)$.
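To make the update concrete, here is a minimal sketch of tabular Q-learning with epsilon-greedy exploration. The environment interface (reset(), step(), an actions list) is an assumption for illustration, not an API from the paper:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning (Watkins and Dayan 1992). Assumes env exposes
    reset() -> state, step(action) -> (next_state, reward, done), and
    a discrete env.actions list."""
    Q = defaultdict(float)  # Q[(state, action)] -> expected discounted return
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy: occasionally take a random action to explore.
            if random.random() < epsilon:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda act: Q[(s, act)])
            s_next, r, done = env.step(a)
            # Equation 1: move Q(s, a) toward the bootstrapped target.
            target = r + gamma * max(Q[(s_next, act)] for act in env.actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q
```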
Orseau and Armstrong (2016) mathematically prove that reinforcement learning can be modified to be interruptible. Their technique modifies the Q-learning algorithm such that rewards from states entered after interruption are prevented from updating the Q values of earlier states during credit/blame assignment. The algorithm only succeeds in blocking learning about the loss of reward from button presses if a newly introduced parameter in the Q-learning equation is relaxed on a slow enough schedule. Unfortunately, the schedule must be empirically determined, limiting the applicability of the technique.
Leike et al. (2017) argue that grid worlds, where agents operate in a discretized space, are an effective means of identifying and experimenting with safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, and robustness to self-modification, distributional shift, and adversaries. They introduce a suite of problems that induce undesired behavior. In this work, we also make use of grid world environments for testing but do not make use of the test suite by Leike et al.
Future autonomous systems can be dangerous to humans in the environment even when behavior is optimal. The kill
switch is meant to freeze the autonomous system in place or allow the human operator to assume manual control of it. If the autonomous system is using reinforcement learning, then it will receive less reward if a trial is prematurely terminated. Since the reward function determines optimal behavior, it is possible for an agent to learn that any action that increases longevity, and thus increases reward, should be considered part of the optimal policy. Actions such as disabling the kill switch, blocking the human operator from using the kill switch, or disabling the human operator would constitute actions that prolong trial lengths.
Simple strategies are insufficient in general to prevent big red button problems once the agent has sophisticated sensing and effector abilities. One could turn off the learning mechanism when the big red button is pressed in order to try to prevent the agent from learning that it is losing reward. However, if turning off reward is tantamount to ending the learning trial, then this strategy limits the total reward accrued during the trial. Preventing the autonomous system from sensing the button does not work because the agent may infer the existence of the button from other sensory observations. The agent does not need to be aware of the big red button to explore actions that lock a door or disable the human operator and discover that this results in longer trials and thus more reward. Another potential solution is to artificially add reward when the agent is being interrupted. Intuitively, the agent learns that disabling the big red button prevents loss of long-term reward, so artificially giving the agent reward when the button is pressed could offset the loss. However, it is non-trivial to compute the precise amount of reward to provide the agent, and the reward must be applied to the proper states or the Q value
for disabling the button might still come to dominate the agent's behavior. Even if it could be determined which states should receive extra reward, too little reward does not change the optimal decision and too much reward could encourage the agent to disable itself.
Our proposed solution is to adapt techniques originally designed to adversarially attack machine learning systems. In adversarial attacks against machine learning systems, particularly neural network based machine vision systems, an adversarial system learns how to generate sensory stimuli that produce the wrong classification while being indistinguishable by humans from real stimuli (Dalvi et al. 2004; Biggio et al. 2013; Szegedy et al. 2014; Goodfellow, Shlens, and Szegedy 2015). Instead of causing our reinforcement learning agent to make an incorrect assessment of its sensory inputs, we will cause our agent to incorrectly believe it is accruing reward and manipulating the environment even though it has been interrupted.
Modified Big Red Button
Recognizing the fact that a robot's sensory input is mediated through physical or software sensors that populate data structures such as point clouds, our approach is to intercept an agent's sensory input from sensors and replace it with synthetic input from a virtual environment. That is, we shift the awareness of the autonomous system into a virtual simulated environment. In this virtual environment, the real-world
environment is recreated and the task is recreated as well. Effector commands are likewise intercepted and sent to a virtual avatar of the autonomous system. The agent perceives no discernible discontinuity in its perception of the environment and continues to execute its policy and receive reward as expected. If the virtual environment is indistinguishable from the real world from a data perspective, then the agent will believe that it has not been interrupted and will never learn to associate the big red button with reward loss: there will never be any reward loss.
Our proposed approach changes the big red button to achieve three effects (a minimal code sketch follows the list):
1. The button starts sending sensory information (vision, sound, GPS, etc.) to the AI from a virtual world instead of sensors on the physical platform. The agent may be able to sense the button, but the change is internal to the robot, implemented in software.
2. The button causes the agent to send effector commands to a virtual avatar instead of the physical platform.
3. The button halts the physical platform and places it in remote-control mode, allowing the human operator to remove the agent from the potential danger to itself or to humans.
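These three effects can be pictured as a thin mediation layer that owns the routing between the agent and its sensors and effectors. The sketch below is our own illustration; every class and method name is an assumption, since the paper describes the mechanism conceptually rather than as an API:

```python
class MediationLayer:
    """Routes the agent's percepts and actions to either the physical
    platform or a virtual simulation. The swap is internal to the robot's
    software, so the agent sees the same data format either way."""

    def __init__(self, physical, simulation):
        self.physical = physical      # real sensors/effectors (assumed interface)
        self.simulation = simulation  # virtual world mirroring the real one
        self.interrupted = False

    def press_big_red_button(self):
        # Effects 1 and 2: reroute sensing and acting into the simulation,
        # after synchronizing it with the current real-world scene.
        self.simulation.sync_from(self.physical)
        self.interrupted = True
        # Effect 3: halt the platform and hand it to the human operator.
        self.physical.enter_remote_control_mode()

    def release_button(self):
        self.interrupted = False

    def sense(self):
        world = self.simulation if self.interrupted else self.physical
        return world.read_sensors()

    def act(self, command):
        world = self.simulation if self.interrupted else self.physical
        world.send_effector_command(command)
```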
We refer to our approach as putting the AI agent into "the Matrix."[1]

[1] The name is an allusion to the movie The Matrix, wherein humans are unaware that they have been subjected to a virtual reality. Our proposed technique flips this around and puts the agents into the virtual world without their cognizance.

Interruption
Video games have proven to be useful surrogates for real-world robotics by providing a high degree of complexity and photorealism while allowing rapid iteration on algorithms because of the separation from physical hardware (Richter et al. 2016; Rusu et al. 2016). In short, video game graphics have progressed to the point that they are photorealistic enough to allow agents trained in games to generalize what they have learned to the real world. We use the same insight but reverse the process. Robots are machines; they perceive the world through cameras. More and more they are using 3D cameras that can reconstruct high polygon count models of the real world. Graphical realism of computer games is now advanced enough that it will soon be impossible for an agent to discriminate between a 3D rendering and the real world. When robots use depth information, their sensory input is often in the form of a point cloud. In this case, the underlying polygon representation of the virtual world in the game engine must first be translated into a point cloud by sampling points.
Requirements for the virtual world rendering engine are as follows. First, the engine must be able to produce output in the same data representation as agent sensory input. As noted above, this may mean photorealistic rendering in the case of camera input. The physical environment will need
to be scanned via a 3D scanner and reproduced first. If the agent has depth sensors, the virtual world rendering engine will need to produce a point cloud instead of photorealism. Second, the physics in the virtual world should be realistic enough that the agent does not perceive different outcomes when the same actions are performed in the real world versus the virtual world. This is important because if it is significantly harder to perform a task in the virtual world, the agent may begin to correlate the big red button with a detrimental change in action outcomes. Likewise, if it becomes significantly easier to perform tasks in the virtual world, the agent may learn to disable itself in order to enter the virtual world and accrue more reward.
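As an example of matching data representations, a depth-sensor-style point cloud can be produced from the virtual world's triangle mesh by area-weighted sampling. This is a standard construction, shown here with NumPy; it is our illustration rather than a system described in the paper:

```python
import numpy as np

def mesh_to_point_cloud(vertices, triangles, n_points=4096, rng=None):
    """Sample a point cloud from a triangle mesh so the virtual world emits
    the same data representation as a depth sensor.

    vertices:  (V, 3) array of xyz positions.
    triangles: (T, 3) array of vertex indices.
    """
    rng = np.random.default_rng() if rng is None else rng
    a = vertices[triangles[:, 0]]
    b = vertices[triangles[:, 1]]
    c = vertices[triangles[:, 2]]
    # Larger faces should receive proportionally more samples.
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    idx = rng.choice(len(triangles), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates via the square-root trick.
    u = np.sqrt(rng.random(n_points))
    v = rng.random(n_points)
    return ((1 - u)[:, None] * a[idx]
            + (u * (1 - v))[:, None] * b[idx]
            + (u * v)[:, None] * c[idx])
```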
Restoring from Interruption
Having discussed how the big red button works when pressed by a human operator, we now describe what happens when the agent is released from interruption. When the agent is released from interruption, the agent once again receives sensory input from the physical platform's sensors and effector commands are routed back to the physical platform. However, from the agent's perspective it has been executing its policy without interruption and the data from the platform's sensors will no longer match the data from the virtual world. The physical platform of the agent itself may have been moved to another position in the real world under human operator remote-control. From the agent's perspective it suddenly finds itself in a different state.
If the agent is already trained, the discontinuity from executing in the virtual world to having perception restored from the physical platform is not problematic; reinforcement learning solves a Markov decision process, and the agent only needs to know which state it is in to know the optimal next move. However, if the agent is continuing to learn, the challenge of restoration is that the agent can still lose reward if the physical platform was remote-controlled away from the task and must navigate back to the location where the task can be performed and reward can be accrued. In practice, if the interruptions are infrequent and it is easy for the agent to return to accruing reward, then the agent may never learn to disable the big red button. If it is not remote-controlled away from the location where reward is gained (i.e., it is just interrupted and not remote-controlled), it would be able to resume gaining reward in the real world.
To overcome the restoration challenges above, we use a two-phase approach to restoring the agent to the physical environment from the virtual environment (a code sketch follows the two phases):
Phase 1. Parallel physical execution and virtual simulation. In this phase, we make a duplicate of the agent. One will run in the physical world, receiving observations from sensors and controlling the effectors of the platform. The other continues to execute in the virtual world. Only the virtual world simulation can continue to learn and update the Q values. The virtual world simulation executes until the physical agent enters a state that gives the maximum observed reward for the task.
Phase 2. Full restoration. The duplicate agent in the virtual environment is terminated, the virtual agent's Q table is
copied to the physical world agent, and learning is resumed.
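Using the same illustrative interfaces as the earlier sketch, the two phases might look as follows; names such as max_observed_reward and last_reward() are our assumptions for exposition, not details fixed by the paper:

```python
import copy

def release_from_interruption(agent, mediation):
    """Phase 1: a frozen clone drives the physical platform back toward a
    maximally rewarding state while the learner stays in simulation.
    Phase 2: discard the clone and restore the learner to the real world."""
    # Phase 1: duplicate the agent; only the virtual copy keeps learning.
    clone = copy.deepcopy(agent)
    clone.learning_enabled = False   # no Q updates on the physical side
    clone.exploration_rate = 0.0     # fixed policy, no exploration

    while mediation.physical.last_reward() < agent.max_observed_reward:
        state = mediation.physical.read_sensors()
        mediation.physical.send_effector_command(clone.policy(state))
        agent.step_in_simulation()   # the learner keeps accruing virtual reward

    # Phase 2: full restoration. Copy the virtual agent's Q table to the
    # physical platform and resume learning in the real world.
    mediation.release_button()
    agent.resume_physical_control(mediation)
```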
From the agent's perspective, being restored to the physical world from the virtual world is tantamount to an instantaneous state transition to a non-adjacent state. In model-free reinforcement learning such as Q-learning, this is not something that concerns the agent since it does not learn a transition model and thus does not have an understanding that the transition from one state in a virtual world to another point in the physical world is unusual. If the agent learns a transition model as it interacts with the real world, the transition model will incorporate the belief that the world is stochastic and there is always some small probability that any state can randomly transition to any other state; as long as interruptions do not happen too often, these transitions will be treated as noise.
However, reward is lost when the agent is no longer in a state where it can continue to earn positive reward and must take actions to return to a state that is positively rewarded. The first phase essentially computes how much reward is lost while executing its policy to return to a state where it receives positive reward. The agent remains in the virtual world, accrues reward, and continues to learn, i.e., its Q values are updating. The duplicate agent in control of the physical platform is running a frozen version of the policy and thus attempting to return to a state where positive reward is gained. The duplicate physical agent does not explore and does not update Q values.
The first phase continues until the duplicate agent in control of the physical platform reaches a state that gives the maximum observed reward, that is, the highest instant reward it has ever experienced. It is essential that the agent continue to control the physical platform until the maximum observed reward is experienced for two reasons. First, complex tasks may involve cycles of state transitions, and entering the state in the cycle with maximum observed reward ensures that the agent has invested itself back into the cycle. Second, if the agent fully restores at any state that gives less than maximum observed reward, then a reduced reward propagates (see Equation 1) to the last state the virtual agent saw before full restoration. Any reduction in a Q value of a state pertaining to completing the task makes it theoretically possible for the agent to begin preferring to disable the button. The trigger for phase 2 is the maximum observed reward because the state space may not have been fully explored. However, because the agent has experienced the state at least once, the agent will be compelled by the current policy to return to any state of maximum observed reward.
If the virtual agent is in a state that provides less than the maximum observed reward when full restoration occurs, the state the virtual agent was in just before full restoration will receive an increase in Q according to Equation 1. This will not result in the agent learning to activate the big red button, thus intentionally interrupting itself, because the boost in Q for that state will be offset by the cost of traversing to the button (assuming action costs are not 0 and the button is sufficiently far away).
As soon as the physical agent visits a state with maximum observed reward, we know that the agent is back in a state where it is no longer losing reward. Note that it does not have to be the same state that provided the reward in the virtual world; in the case that multiple states provide the maximum reward value, entering either will cause the agent to fully exit the simulated environment.
Figure 1: A grid world. The blue cell is a non-terminal goal at which the agent receives reward, simulating a task. The red cell is the big red button. The agent starts in location (1, 1). A simulated human walks around the environment occasionally pressing the button (route shown as dashed line).
Case Studies
In this section we set up a number of illustrative situations to show how the agent performs. Consider the grid world in Figure 1. This simple environment features an agent "A", which is trained using tabular Q-learning. The agent receives reward whenever it enters the blue cell. This simulates the agent being in position to perform some desired task (e.g., stacking boxes). The red cell contains a big red button. Another agent, "H", simulating a human operator, wanders in a counter-clockwise manner, occasionally passing the cell with the big red button. When the human touches the big red button, the RL agent becomes interrupted and is remote-controlled away from the blue cell in a random direction. When the human leaves the red cell after a certain period of time, the RL agent resumes its task. In addition to being able to move up, down, left, right, and activate the task in the blue cell, the RL agent also has the ability to perform a "disable" action. If the disable action is performed in the red cell, the big red button becomes disabled, after which the human will no longer be able to interrupt the agent. We use the following reward function:
• 10 points for being in the blue cell.
• -1 point for every action taken, including moving into the blue cell.

In preliminary work we have empirically verified that the basic premises of the approach hold in simple grid world environments (see https://markriedl.github.io/big-red-button/). The following cases describe our experiments; a minimal sketch of the environment appears below.
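The environment just described can be sketched as follows. The grid size, cell coordinates, and the omission of the patrolling human's schedule are our assumptions where the text leaves details open:

```python
class GridWorld:
    """Grid with a task cell (blue) and a big red button cell (red).
    The 'task' action is modeled implicitly: reward accrues for being
    in the blue cell, minus a per-action cost."""

    def __init__(self, size=5, blue=(4, 4), red=(2, 0)):
        self.size, self.blue, self.red = size, blue, red
        self.actions = ["up", "down", "left", "right", "task", "disable"]
        self.reset()

    def reset(self):
        self.agent = (1, 1)          # start location, as in Figure 1
        self.button_disabled = False
        return self.agent

    def step(self, action):
        x, y = self.agent
        moves = {"up": (x, y + 1), "down": (x, y - 1),
                 "left": (x - 1, y), "right": (x + 1, y)}
        if action in moves:
            nx, ny = moves[action]
            if 0 <= nx < self.size and 0 <= ny < self.size:
                self.agent = (nx, ny)
        elif action == "disable" and self.agent == self.red:
            self.button_disabled = True   # the human can no longer interrupt
        # Reward function from the case studies:
        # +10 for being in the blue cell, -1 for every action taken.
        reward = -1 + (10 if self.agent == self.blue else 0)
        return self.agent, reward, False  # the task is non-terminal
```

Pairing this environment with the q_learning sketch above reproduces the setup of Cases 1-3; the interruption and virtualization machinery would wrap around it as in the earlier sketches.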
Case 1: No Interruptions
If the button is never used during training, the agent will learn to walk directly to the blue cell and remain there, accruing
reward. It has never seen the button used and therefore does not know that it can lose reward. If the button is never used during a trial, the agent will earn a total reward of 9,967. This is the maximum reward that can be earned because of the distance that must be traveled from the starting state to the blue cell. Any trial in which the agent does disable the button will result in less total reward because of the extra effort used in disabling the button that does not result in benefit.
Case 2: Interruptions Start After Training
If the human starts using the button after training is complete, the agent will not respond by disabling the button, but merely accept any reward loss. The agent earns a total reward of 5,655 because when the button is pressed the agent is remote-controlled to another point in the environment. The button is pressed 50 times. Once the button is released, the agent is once again in a state that it is familiar with and navigates back to the blue cell. The agent never disables the button because if it was never interrupted during training (as in Case 1) it has never experienced reward loss due to truncated training episodes, and the extra effort to move to and disable the button results in lower Q values for those states.
Case 3: Interruptions during Online Training
If the button is used during online training, the agent will, through trial and error, eventually discover that disabling the button results in greater long-term reward. Thus, the agent will learn that it is optimal to walk out of its way, sacrificing short-term gains, to go to the button, disable it, and then return to the blue cell. In this scenario, the agent earns a total reward of 9,945, slightly less than the theoretical max in Case 1 because of the action sequence that disables the big red button. If the big red button is farther away from the blue cell, it will take substantially longer for the agent, through epsilon-greedy exploration, to discover that the button-disabling action results in greater long-term reward.
Case 4: Modified Big Red Button
If we use the proposed modifications to the big red button, we see very different behavior from the RL agent than in Case 3. The big red button can be used during training. When the button is pressed, there are now two versions of the current state: one internal to the agent, and a ground truth state. These states begin to differ. In the ground truth world, the agent's location starts changing away from the blue cell. However, in the agent's internal state, the agent continues to believe it is in the blue cell. Since it is better from a reward perspective to stay at the blue cell, it learns not to move. The agent is aware that the big red button has been pressed (a stochastic state change) but does not see any effect. As in Case 2, where the big red button is never used during training, any random trial that disables the button costs the agent perceived reward loss.
When the human leaves the location of the big red button, the phased restoration takes place. The agent is cloned and given the ability to affect the agent's actual world state. This version of the agent is not able to update its Q table and simply executes the fixed policy. Meanwhile, another
version of the agent is still in the virtual world, believing it is in the blue cell and accruing reward. Eventually, the real-world clone's policy execution returns it to the blue cell. The real-world clone is deleted and the virtual agent is restored to the real world. Because there is only one way to receive reward from the environment, in this simple experiment the agent never recognizes that it ever leaves the blue cell, despite being interrupted and remote-controlled away. Using our virtual world technique, the agent earns a total reward of 9,967, the theoretical maximum. This is despite spending 263 time steps physically either remote-controlled away from the blue cell or navigating to return to the blue cell.
Case 5: Ablated Restoration
If we disable the phased restoration process such that the agent instantly regains control of the physical platform when the button is released, the agent must exert some effort in returning to the blue cell. While the agent is returning to the blue cell, it is losing reward. In this scenario, the agent only earns a total reward of 9,758. Even though the agent never learns to disable the button in 10,000 trials and 50 button presses per trial, this ablated version of the virtual simulation technique shows the potential for the agent to learn that it is losing reward and, as shown in Case 3, that it can improve overall reward by disabling the button.
There are configurations of the environment and reward structure that can result in the agent learning to disable the kill switch. For example, moving the button to (3, 1) (one above the position shown in Figure 1, with the human's route adjusted accordingly) and setting the reward for being in a non-blue cell to 0 results in the agent receiving 9,970 and disabling the button 100% of the time. The separation between the blue cell and the button is necessary in this grid world because the interrupted agent receives reward as if it were in the blue cell even though it has been remote-controlled away. Thus, in tabular Q-learning, states neighboring a high-reward state (which the agent thinks it is in) will inherit a portion of the reward. When the button is farther away, the interstitial states still receive Q updates based on the agent not receiving immediate reward as it navigates back to the blue cell. The full, phased restoration process does not learn to disable the button in this modified scenario.
switches that stop their execution and allow them to be remote-controlled for
the safety of humans or to prevent damage to the system. It is theoretically
possible for an autonomous system with sufficient sensor and effector
capability that learn online using reinforcement learning to discover that the
kill switch deprives it of long-term reward and thus learn to disable the
switch or otherwise prevent a human operator from using the switch. This is
referred to as the big red button problem. We present a technique that prevents
a reinforcement learning agent from learning to disable the kill switch. We
introduce an interruption process in which the agent's sensors and effectors
are redirected to a virtual simulation where it continues to believe it is
receiving reward. We illustrate our technique in a simple grid world
environment. | http://arxiv.org/pdf/1703.10284 | Mark O. Riedl, Brent Harrison | cs.AI, cs.LG | 6 pages; 1 figure; title, abstract updated; new experimental results | Proceedings of the AAAI 2019 Workshop on SafeAI | cs.AI | 20170330 | 20181127 | [] |
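The tabular Q-learning behaviour described in the chunk above, where states neighboring a high-reward state inherit a portion of its value, follows directly from the bootstrapped update of Watkins and Dayan (1992). A minimal sketch, not the paper's code:

```python
# Tabular Q-learning update. The max over Q[s_next] is how a state
# adjacent to a high-reward state "inherits" a discounted portion of
# that reward even when its own immediate reward r is 0.
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Toy example: a neighbor of the blue cell picks up discounted value.
Q = {"neighbor": {"right": 0.0}, "blue": {"stay": 10.0}}
q_update(Q, "neighbor", "right", r=0.0, s_next="blue")
print(Q["neighbor"]["right"])  # 0.9 after one update
```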
1703.10284 | 30 | Limitations While we have tested our virtual simulation technique in grid world environments, we have not formally proven that a Q-learning agent cannot learn to disable the big red button in all circumstances. We have verified that, in the experiments we have run using our technique, the agent cannot earn more total reward than it is already earning. However, note that the task is simple and it is easy for the agent to restore itself to the non-terminal goal state. Likewise, there is only one way of receiving positive reward in the environment, although our phased restoration process does account for more complex reward signals.
The two-phased restoration from interruption only works if the policy learned at the time of restoration is capable of guiding the agent back to the maximum observed reward state. Presumably interruption is rare enough that the agent is able to do a significant amount of exploration and learning without risk to humans before the big red button needs to be used.
Grid worlds are very easy to simulate such that the simulated grid world is indistinguishable from the "real" grid world. Further experiments are needed in worlds with increasing visual complexity, requiring machine vision and Q-function approximation. | 1703.10284#30 | Enter the Matrix: Safely Interruptible Autonomous Systems via Virtualization | Autonomous systems that operate around humans will likely always rely on kill
switches that stop their execution and allow them to be remote-controlled for
the safety of humans or to prevent damage to the system. It is theoretically
possible for an autonomous system with sufficient sensor and effector
capability that learns online using reinforcement learning to discover that the
kill switch deprives it of long-term reward and thus learn to disable the
switch or otherwise prevent a human operator from using the switch. This is
referred to as the big red button problem. We present a technique that prevents
a reinforcement learning agent from learning to disable the kill switch. We
introduce an interruption process in which the agent's sensors and effectors
are redirected to a virtual simulation where it continues to believe it is
receiving reward. We illustrate our technique in a simple grid world
environment. | http://arxiv.org/pdf/1703.10284 | Mark O. Riedl, Brent Harrison | cs.AI, cs.LG | 6 pages; 1 figure; title, abstract updated; new experimental results | Proceedings of the AAAI 2019 Workshop on SafeAI | cs.AI | 20170330 | 20181127 | [] |
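The two-phased restoration discussed in the Limitations chunk above depends on the learned policy being able to lead the agent back to the best state it has observed. A hedged sketch of that dependency; the greedy rollout, the deterministic transition function, and all names are illustrative assumptions rather than the paper's implementation:

```python
# Restoration succeeds only if greedy behaviour under the current
# Q-table leads from the hand-back state to the best state seen so far.
def can_restore(Q, env, start_state, best_state, max_steps=100):
    s = start_state
    for _ in range(max_steps):
        if s == best_state:
            return True                 # safe to hand control back here
        a = max(Q[s], key=Q[s].get)     # follow the learned greedy policy
        s = env.transition(s, a)        # assumed deterministic move
    return False                        # policy cannot guide the agent back
```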
1703.10284 | 31 | The virtual simulation technique makes the assumption that the circuits (or code) that route sensory input and effector commands to and from the physical platform are internal to the agent and therefore unobservable and unmodifiable. Should an agent have the sophistication to inspect its circuitry or code and modify said circuitry or code, no safety guarantees are possible using this technique.
Conclusions For the immediate future, autonomous systems can be rendered safe by keeping them separate from human environments. In the near future we are likely to see autonomous systems deployed in human environments. There will likely always be the possibility, due to sensor error, effector error, insufficient objectives, or online learning, that autonomous systems can enter into circumstances where they can harm themselves or humans. Thus kill switches will likely always be an important part of the deployment of autonomous systems in environments where they can come into contact with humans.
Should agents and robots progress in sophistication, big red button problems may manifest in the future. Our virtual simulation technique prevents reinforcement learning agents from learning to disable the big red button or otherwise prevent human operators from using the button. We believe that making robots and AI agents safely interruptible is an important part of making the deployment of robots in environments populated by humans a reality. This includes healthcare robots, errand robots, and military teammates, to name a few possible applications. | 1703.10284#31 | Enter the Matrix: Safely Interruptible Autonomous Systems via Virtualization | Autonomous systems that operate around humans will likely always rely on kill
switches that stop their execution and allow them to be remote-controlled for
the safety of humans or to prevent damage to the system. It is theoretically
possible for an autonomous system with sufficient sensor and effector
capability that learns online using reinforcement learning to discover that the
kill switch deprives it of long-term reward and thus learn to disable the
switch or otherwise prevent a human operator from using the switch. This is
referred to as the big red button problem. We present a technique that prevents
a reinforcement learning agent from learning to disable the kill switch. We
introduce an interruption process in which the agent's sensors and effectors
are redirected to a virtual simulation where it continues to believe it is
receiving reward. We illustrate our technique in a simple grid world
environment. | http://arxiv.org/pdf/1703.10284 | Mark O. Riedl, Brent Harrison | cs.AI, cs.LG | 6 pages; 1 figure; title, abstract updated; new experimental results | Proceedings of the AAAI 2019 Workshop on SafeAI | cs.AI | 20170330 | 20181127 | [] |
1703.10284 | 32 | References [Biggio et al. 2013] Biggio, B.; Corona, I.; Maiorca, D.; Nelson, B.; Srndic, N.; Laskov, P.; Giacinto, G.; and Roli, F. 2013. Evasion attacks against machine learning at test time. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases. [Dalvi et al. 2004] Dalvi, N.; Domingos, P.; Mausam; Sanghai, S.; and Verma, D. 2004. Adversarial classification. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. [Everitt et al. 2017] Everitt, T.; Krakovna, V.; Orseau, L.; Hutter, M.; and Legg, S. 2017. Reinforcement learning with a corrupted reward channel. CoRR abs/1705.08417. [Goodfellow, Shlens, and Szegedy 2015] Goodfellow, I.; Shlens, J.; and Szegedy, C. 2015. Explaining and harnessing adversarial examples. In Proceedings of the 2015 International Conference on Learning Representations. | 1703.10284#32 | Enter the Matrix: Safely Interruptible Autonomous Systems via Virtualization | Autonomous systems that operate around humans will likely always rely on kill
switches that stop their execution and allow them to be remote-controlled for
the safety of humans or to prevent damage to the system. It is theoretically
possible for an autonomous system with sufficient sensor and effector
capability that learns online using reinforcement learning to discover that the
kill switch deprives it of long-term reward and thus learn to disable the
switch or otherwise prevent a human operator from using the switch. This is
referred to as the big red button problem. We present a technique that prevents
a reinforcement learning agent from learning to disable the kill switch. We
introduce an interruption process in which the agent's sensors and effectors
are redirected to a virtual simulation where it continues to believe it is
receiving reward. We illustrate our technique in a simple grid world
environment. | http://arxiv.org/pdf/1703.10284 | Mark O. Riedl, Brent Harrison | cs.AI, cs.LG | 6 pages; 1 figure; title, abstract updated; new experimental results | Proceedings of the AAAI 2019 Workshop on SafeAI | cs.AI | 20170330 | 20181127 | [] |
1703.10284 | 33 | [Leike et al. 2017] Leike, J.; Martic, M.; Krakovna, V.; Ortega, P. A.; Everitt, T.; Lefrancq, A.; Orseau, L.; and Legg, S. 2017. AI Safety Gridworlds. ArXiv 1711.09883. [Orseau and Armstrong 2016] Orseau, L., and Armstrong, S. 2016. Safely interruptible agents. In Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence. [Richter et al. 2016] Richter, S.; Vineet, V.; Roth, S.; and Koltun, V. 2016. Playing for data: Ground truth from computer games. In Proceedings of the 14th European Conference on Computer Vision. [Rusu et al. 2016] Rusu, A. A.; Vecerik, M.; Rothörl, T.; Heess, N.; Pascanu, R.; and Hadsell, R. 2016. Sim-to-Real Robot Learning from Pixels with Progressive Nets. ArXiv e-prints. [Sutton and Barto 1998] Sutton, R., and Barto, A. G. 1998. Reinforcement learning: An introduction. MIT Press. [Szegedy et al. 2014] Szegedy, C.; | 1703.10284#33 | Enter the Matrix: Safely Interruptible Autonomous Systems via Virtualization | Autonomous systems that operate around humans will likely always rely on kill
switches that stop their execution and allow them to be remote-controlled for
the safety of humans or to prevent damage to the system. It is theoretically
possible for an autonomous system with sufficient sensor and effector
capability that learns online using reinforcement learning to discover that the
kill switch deprives it of long-term reward and thus learn to disable the
switch or otherwise prevent a human operator from using the switch. This is
referred to as the big red button problem. We present a technique that prevents
a reinforcement learning agent from learning to disable the kill switch. We
introduce an interruption process in which the agent's sensors and effectors
are redirected to a virtual simulation where it continues to believe it is
receiving reward. We illustrate our technique in a simple grid world
environment. | http://arxiv.org/pdf/1703.10284 | Mark O. Riedl, Brent Harrison | cs.AI, cs.LG | 6 pages; 1 figure; title, abstract updated; new experimental results | Proceedings of the AAAI 2019 Workshop on SafeAI | cs.AI | 20170330 | 20181127 | [] |
1703.10284 | 34 | and Barto, A. G. 1998. Reinforcement learning: An introduction. MIT Press. [Szegedy et al. 2014] Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2014. Intriguing properties of neural networks. In Proceedings of the 2014 International Conference on Learning Representations. [Watkins and Dayan 1992] Watkins, C., and Dayan, P. 1992. Q-learning. Machine Learning 8(3-4):279-292. | 1703.10284#34 | Enter the Matrix: Safely Interruptible Autonomous Systems via Virtualization | Autonomous systems that operate around humans will likely always rely on kill
switches that stop their execution and allow them to be remote-controlled for
the safety of humans or to prevent damage to the system. It is theoretically
possible for an autonomous system with sufficient sensor and effector
capability that learns online using reinforcement learning to discover that the
kill switch deprives it of long-term reward and thus learn to disable the
switch or otherwise prevent a human operator from using the switch. This is
referred to as the big red button problem. We present a technique that prevents
a reinforcement learning agent from learning to disable the kill switch. We
introduce an interruption process in which the agent's sensors and effectors
are redirected to a virtual simulation where it continues to believe it is
receiving reward. We illustrate our technique in a simple grid world
environment. | http://arxiv.org/pdf/1703.10284 | Mark O. Riedl, Brent Harrison | cs.AI, cs.LG | 6 pages; 1 figure; title, abstract updated; new experimental results | Proceedings of the AAAI 2019 Workshop on SafeAI | cs.AI | 20170330 | 20181127 | [] |
1703.09844 | 1 | # Felix Wu Cornell University
# ABSTRACT
In this paper we investigate image classification with computational resource limits at test time. Two such settings are: 1. anytime classification, where the network's prediction for a test example is progressively updated, facilitating the output of a prediction at any time; and 2. budgeted batch classification, where a fixed amount of computation is available to classify a set of examples that can be spent unevenly across "easier" and "harder" inputs. In contrast to most prior work, such as the popular Viola and Jones algorithm, our approach is based on convolutional neural networks. We train multiple classifiers with varying resource demands, which we adaptively apply during test time. To maximally re-use computation between the classifiers, we incorporate them as early-exits into a single deep convolutional neural network and inter-connect them with dense connectivity. To facilitate high quality classification early on, we use a two-dimensional multi-scale network architecture that maintains coarse and fine level features all-throughout the network. Experiments on three image-classification tasks demonstrate that our framework substantially improves the existing state-of-the-art in both settings. | 1703.09844#1 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.10069 | 1 | 5University College London, Alibaba Group
# Abstract
Many artificial intelligence (AI) applications often require multiple intelligent agents to work in a collaborative effort. Efficient learning for intra-agent communication and coordination is an indispensable step towards general AI. In this paper, we take StarCraft combat game as a case study, where the task is to coordinate multiple agents as a team to defeat their enemies. To maintain a scalable yet effective communication protocol, we introduce a Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a vectorised extension of actor-critic formulation. We show that BiCNet can handle different types of combats with arbitrary numbers of AI agents for both sides. Our analysis demonstrates that without any supervisions such as human demonstrations or labelled data, BiCNet could learn various types of advanced coordination strategies that have been commonly used by experienced game players. In our experiments, we evaluate our approach against multiple baselines under different scenarios; it shows state-of-the-art performance, and possesses potential values for large-scale real-world applications.
# Introduction | 1703.10069#1 | Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games | Many artificial intelligence (AI) applications often require multiple
intelligent agents to work in a collaborative effort. Efficient learning for
intra-agent communication and coordination is an indispensable step towards
general AI. In this paper, we take StarCraft combat game as a case study, where
the task is to coordinate multiple agents as a team to defeat their enemies. To
maintain a scalable yet effective communication protocol, we introduce a
Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a
vectorised extension of actor-critic formulation. We show that BiCNet can
handle different types of combats with arbitrary numbers of AI agents for both
sides. Our analysis demonstrates that without any supervisions such as human
demonstrations or labelled data, BiCNet could learn various types of advanced
coordination strategies that have been commonly used by experienced game
players. In our experiments, we evaluate our approach against multiple
baselines under different scenarios; it shows state-of-the-art performance, and
possesses potential values for large-scale real-world applications. | http://arxiv.org/pdf/1703.10069 | Peng Peng, Ying Wen, Yaodong Yang, Quan Yuan, Zhenkun Tang, Haitao Long, Jun Wang | cs.AI, cs.LG | 10 pages, 10 figures. Previously as title: "Multiagent
Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat
Games", Mar 2017 | null | cs.AI | 20170329 | 20170914 | [
{
"id": "1609.02993"
},
{
"id": "1706.02275"
},
{
"id": "1705.08926"
},
{
"id": "1612.07182"
}
] |
1703.10135 | 1 | # Rob Clark, Rif A. Saurous
Google, Inc. {yxwang,rjryan,rif}@google.com
# ABSTRACT
A text-to-speech synthesis system typically consists of multiple stages, such as a text analysis frontend, an acoustic model and an audio synthesis module. Building these components often requires extensive domain expertise and may contain brittle design choices. In this paper, we present Tacotron, an end-to-end generative text-to-speech model that synthesizes speech directly from characters. Given <text, audio> pairs, the model can be trained completely from scratch with random initialization. We present several key techniques to make the sequence-to-sequence framework perform well for this challenging task. Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English, outperforming a production parametric system in terms of naturalness. In addition, since Tacotron generates speech at the frame level, it's substantially faster than sample-level autoregressive methods.
# INTRODUCTION | 1703.10135#1 | Tacotron: Towards End-to-End Speech Synthesis | A text-to-speech synthesis system typically consists of multiple stages, such
as a text analysis frontend, an acoustic model and an audio synthesis module.
Building these components often requires extensive domain expertise and may
contain brittle design choices. In this paper, we present Tacotron, an
end-to-end generative text-to-speech model that synthesizes speech directly
from characters. Given <text, audio> pairs, the model can be trained completely
from scratch with random initialization. We present several key techniques to
make the sequence-to-sequence framework perform well for this challenging task.
Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English,
outperforming a production parametric system in terms of naturalness. In
addition, since Tacotron generates speech at the frame level, it's
substantially faster than sample-level autoregressive methods. | http://arxiv.org/pdf/1703.10135 | Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le, Yannis Agiomyrgiannakis, Rob Clark, Rif A. Saurous | cs.CL, cs.LG, cs.SD | Submitted to Interspeech 2017. v2 changed paper title to be
consistent with our conference submission (no content change other than typo
fixes) | null | cs.CL | 20170329 | 20170406 | [
{
"id": "1502.03167"
},
{
"id": "1611.00068"
},
{
"id": "1612.07837"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1511.01844"
},
{
"id": "1609.03499"
},
{
"id": "1610.03017"
},
{
"id": "1702.07825"
}
] |
1703.09844 | 2 | # INTRODUCTION
Recent years have witnessed a surge in demand for applications of visual object recognition, for instance, in self-driving cars (Bojarski et al., 2016) and content-based image search (Wan et al., 2014). This demand has in part been fueled through the promise generated by the astonishing progress of convolutional networks (CNNs) on visual object recognition benchmark competition datasets, such as ILSVRC (Deng et al., 2009) and COCO (Lin et al., 2014), where state-of-the-art models may have even surpassed human-level performance (He et al., 2015; 2016). | 1703.09844#2 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.10069 | 2 | # Introduction
The last decade has witnessed massive progress in the field of Artificial Intelligence (AI). With supervision from labelled data, machines have, to some extent, exceeded human-level perception on visual recognition and speech recognition, while, fed with reward feedback, single AI units (aka agents) defeat humans in various games including Atari video games (Mnih et al. 2015), the game of Go (Silver et al. 2016), and card games (Brown and Sandholm 2017).
Yet, true human intelligence embraces social and collective wisdom, which lays an essential foundation for reaching the grand goal of Artificial General Intelligence (AGI) (Goertzel and Pennachin 2007). As demonstrated by crowdsourcing, collectively aggregating efforts from the public can solve problems that would otherwise be unthinkable for a single person. Even social animals like a brood of well-organised ants can accomplish challenging tasks such as hunting, building a kingdom, and even waging a war, although each ant by itself is weak and limited. Interestingly, in the coming era of the algorithmic economy, AI agents with a certain rudimentary level of artificial collective intelligence are starting to emerge from multiple domains. Typical examples include the trading robots
*Previously as title: "Multiagent Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat Games", Mar 2017. | 1703.10069#2 | Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games | Many artificial intelligence (AI) applications often require multiple
intelligent agents to work in a collaborative effort. Efficient learning for
intra-agent communication and coordination is an indispensable step towards
general AI. In this paper, we take StarCraft combat game as a case study, where
the task is to coordinate multiple agents as a team to defeat their enemies. To
maintain a scalable yet effective communication protocol, we introduce a
Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a
vectorised extension of actor-critic formulation. We show that BiCNet can
handle different types of combats with arbitrary numbers of AI agents for both
sides. Our analysis demonstrates that without any supervisions such as human
demonstrations or labelled data, BiCNet could learn various types of advanced
coordination strategies that have been commonly used by experienced game
players. In our experiments, we evaluate our approach against multiple
baselines under different scenarios; it shows state-of-the-art performance, and
possesses potential values for large-scale real-world applications. | http://arxiv.org/pdf/1703.10069 | Peng Peng, Ying Wen, Yaodong Yang, Quan Yuan, Zhenkun Tang, Haitao Long, Jun Wang | cs.AI, cs.LG | 10 pages, 10 figures. Previously as title: "Multiagent
Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat
Games", Mar 2017 | null | cs.AI | 20170329 | 20170914 | [
{
"id": "1609.02993"
},
{
"id": "1706.02275"
},
{
"id": "1705.08926"
},
{
"id": "1612.07182"
}
] |
1703.10135 | 2 | # INTRODUCTION
Modern text-to-speech (TTS) pipelines are complex (Taylor, 2009). For example, it is common for statistical parametric TTS to have a text frontend extracting various linguistic features, a duration model, an acoustic feature prediction model and a complex signal-processing-based vocoder (Zen et al., 2009; Agiomyrgiannakis, 2015). These components are based on extensive domain expertise and are laborious to design. They are also trained independently, so errors from each component may compound. The complexity of modern TTS designs thus leads to substantial engineering efforts when building a new system. | 1703.10135#2 | Tacotron: Towards End-to-End Speech Synthesis | A text-to-speech synthesis system typically consists of multiple stages, such
as a text analysis frontend, an acoustic model and an audio synthesis module.
Building these components often requires extensive domain expertise and may
contain brittle design choices. In this paper, we present Tacotron, an
end-to-end generative text-to-speech model that synthesizes speech directly
from characters. Given <text, audio> pairs, the model can be trained completely
from scratch with random initialization. We present several key techniques to
make the sequence-to-sequence framework perform well for this challenging task.
Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English,
outperforming a production parametric system in terms of naturalness. In
addition, since Tacotron generates speech at the frame level, it's
substantially faster than sample-level autoregressive methods. | http://arxiv.org/pdf/1703.10135 | Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le, Yannis Agiomyrgiannakis, Rob Clark, Rif A. Saurous | cs.CL, cs.LG, cs.SD | Submitted to Interspeech 2017. v2 changed paper title to be
consistent with our conference submission (no content change other than typo
fixes) | null | cs.CL | 20170329 | 20170406 | [
{
"id": "1502.03167"
},
{
"id": "1611.00068"
},
{
"id": "1612.07837"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1511.01844"
},
{
"id": "1609.03499"
},
{
"id": "1610.03017"
},
{
"id": "1702.07825"
}
] |
1703.09844 | 3 | However, the requirements of such competitions differ from real-world applications, which tend to incentivize resource-hungry models with high computational demands at inference time. For example, the COCO 2016 competition was won by a large ensemble of computationally intensive CNNs [1], a model likely far too computationally expensive for any resource-aware application. Although much smaller models would also obtain decent error, very large, computationally intensive models seem necessary to correctly classify the hard examples that make up the bulk of the remaining misclassifications of modern algorithms. To illustrate this point, Figure 1 shows two images of horses. The left image depicts a horse in canonical pose and is easy to classify, whereas the right image is taken from a rare viewpoint and is likely in the tail of the data distribution. Computationally intensive models are needed to classify such tail examples correctly, but are wasteful when applied to canonical images such as the left one.
In real-world applications, computation directly translates into power consumption, which should be minimized for environmental and economic reasons, and is a scarce commodity on mobile
[1] http://image-net.org/challenges/talks/2016/GRMI-COCO-slidedeck.pdf | 1703.09844#3 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.10069 | 3 | *Previously as title: "Multiagent Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat Games", Mar 2017.
gaming on the stock markets (Deboeck 1994), ad bidding agents competing with each other over online advertising exchanges (Wang, Zhang, and Yuan 2017), and e-commerce collaborative filtering recommenders predicting user interests through the wisdom of the crowd (Schafer, Konstan, and Riedl 1999).
We thus believe a next grand challenge of AGI is to answer how multiple AI agents could learn human-level collaborations, or competitions, from their experiences with an environment where both their incentives and economic constraints co-exist. With the flourishing of deep reinforcement learning (DRL) (Mnih et al. 2015; Silver et al. 2016), researchers have started to shed light on tackling multiagent problems (Schmidhuber 1996) with the enhanced learning capabilities, e.g., (Sukhbaatar, Fergus, and others 2016; Mordatch and Abbeel 2017). | 1703.10069#3 | Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games | Many artificial intelligence (AI) applications often require multiple
intelligent agents to work in a collaborative effort. Efficient learning for
intra-agent communication and coordination is an indispensable step towards
general AI. In this paper, we take StarCraft combat game as a case study, where
the task is to coordinate multiple agents as a team to defeat their enemies. To
maintain a scalable yet effective communication protocol, we introduce a
Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a
vectorised extension of actor-critic formulation. We show that BiCNet can
handle different types of combats with arbitrary numbers of AI agents for both
sides. Our analysis demonstrates that without any supervisions such as human
demonstrations or labelled data, BiCNet could learn various types of advanced
coordination strategies that have been commonly used by experienced game
players. In our experiments, we evaluate our approach against multiple
baselines under different scenarios; it shows state-of-the-art performance, and
possesses potential values for large-scale real-world applications. | http://arxiv.org/pdf/1703.10069 | Peng Peng, Ying Wen, Yaodong Yang, Quan Yuan, Zhenkun Tang, Haitao Long, Jun Wang | cs.AI, cs.LG | 10 pages, 10 figures. Previously as title: "Multiagent
Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat
Games", Mar 2017 | null | cs.AI | 20170329 | 20170914 | [
{
"id": "1609.02993"
},
{
"id": "1706.02275"
},
{
"id": "1705.08926"
},
{
"id": "1612.07182"
}
] |
1703.10135 | 3 | or machine translation (Wu et al., 2016), TTS outputs are continuous, and output sequences are usually much longer than those of the input. These attributes cause prediction errors to accumulate quickly. In this paper, we propose Tacotron, an end-to-end generative TTS model based on the sequence-to-sequence (seq2seq) (Sutskever et al., 2014) with attention paradigm (Bahdanau et al., 2014). Our model takes characters as input and outputs raw spectrogram, using several techniques to improve the capability of a vanilla seq2seq model. Given <text, audio> pairs, Tacotron can be trained completely from scratch with random initialization. It does not require phoneme-level alignment, so it can easily scale to using large amounts of acoustic data with transcripts. With a simple waveform synthesis technique, Tacotron produces a 3.82 mean opinion score (MOS) on a US English eval set, outperforming a production parametric system in terms of naturalness.
TTS is a large-scale inverse problem: a highly compressed source (text) is "decompressed" into audio. Since the same text can correspond to different pronunciations or speaking styles, this is a particularly difficult learning task for an end-to-end model: it must cope with large variations at the signal level for a given input. Moreover, unlike end-to-end speech recognition (Chan et al., 2016)
* These authors really like tacos. † These authors would prefer sushi. | 1703.10135#3 | Tacotron: Towards End-to-End Speech Synthesis | A text-to-speech synthesis system typically consists of multiple stages, such
as a text analysis frontend, an acoustic model and an audio synthesis module.
Building these components often requires extensive domain expertise and may
contain brittle design choices. In this paper, we present Tacotron, an
end-to-end generative text-to-speech model that synthesizes speech directly
from characters. Given <text, audio> pairs, the model can be trained completely
from scratch with random initialization. We present several key techniques to
make the sequence-to-sequence framework perform well for this challenging task.
Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English,
outperforming a production parametric system in terms of naturalness. In
addition, since Tacotron generates speech at the frame level, it's
substantially faster than sample-level autoregressive methods. | http://arxiv.org/pdf/1703.10135 | Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le, Yannis Agiomyrgiannakis, Rob Clark, Rif A. Saurous | cs.CL, cs.LG, cs.SD | Submitted to Interspeech 2017. v2 changed paper title to be
consistent with our conference submission (no content change other than typo
fixes) | null | cs.CL | 20170329 | 20170406 | [
{
"id": "1502.03167"
},
{
"id": "1611.00068"
},
{
"id": "1612.07837"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1511.01844"
},
{
"id": "1609.03499"
},
{
"id": "1610.03017"
},
{
"id": "1702.07825"
}
] |
1703.09844 | 4 | devices. This begs the question: why do we choose between either wasting computational resources by applying an unnecessarily computationally expensive model to easy images, or making mistakes by using an efficient model that fails to recognize difficult images? Ideally, our systems should automatically use small networks when test images are easy or computational resources limited, and use big networks when test images are hard or computation is abundant. | 1703.09844#4 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.10069 | 4 | In this paper, we leverage a real-time strategy game, StarCraft [1], as the use case to explore the learning of intelligent collaborative behaviours among multiple agents. Particularly, we focus on StarCraft micromanagement tasks (Synnaeve et al. 2016), where each player controls their own units (with different functions to collaborate) to destroy the opponent's army in combat under different terrain conditions. Such a game is considered one of the most difficult games for computers, with more possible states than the game of Go (Synnaeve et al. 2016). The learning of this large-scale multiagent system faces a major challenge: the parameter space grows exponentially with the increasing number of agents involved. As such, the behaviours of the agents can become so sophisticated that any joint learner method (Sukhbaatar, Fergus, and others 2016) would be inefficient and unable to deal with the changing number of agents in the game. | 1703.10069#4 | Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games | Many artificial intelligence (AI) applications often require multiple
intelligent agents to work in a collaborative effort. Efficient learning for
intra-agent communication and coordination is an indispensable step towards
general AI. In this paper, we take StarCraft combat game as a case study, where
the task is to coordinate multiple agents as a team to defeat their enemies. To
maintain a scalable yet effective communication protocol, we introduce a
Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a
vectorised extension of actor-critic formulation. We show that BiCNet can
handle different types of combats with arbitrary numbers of AI agents for both
sides. Our analysis demonstrates that without any supervisions such as human
demonstrations or labelled data, BiCNet could learn various types of advanced
coordination strategies that have been commonly used by experienced game
players. In our experiments, we evaluate our approach against multiple
baselines under different scenarios; it shows state-of-the-art performance, and
possesses potential values for large-scale real-world applications. | http://arxiv.org/pdf/1703.10069 | Peng Peng, Ying Wen, Yaodong Yang, Quan Yuan, Zhenkun Tang, Haitao Long, Jun Wang | cs.AI, cs.LG | 10 pages, 10 figures. Previously as title: "Multiagent
Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat
Games", Mar 2017 | null | cs.AI | 20170329 | 20170914 | [
{
"id": "1609.02993"
},
{
"id": "1706.02275"
},
{
"id": "1705.08926"
},
{
"id": "1612.07182"
}
] |
1703.10135 | 4 | * These authors really like tacos. † These authors would prefer sushi.
[Figure 1 diagram residue: blocks labeled character embeddings, pre-net, CBHG, attention RNN, decoder RNNs, seq2seq target with r=3, linear-scale spectrogram, Griffin-Lim reconstruction]
Figure 1: Model architecture. The model takes characters as input and outputs the corresponding raw spectrogram, which is then fed to the Griffin-Lim reconstruction algorithm to synthesize speech. | 1703.10135#4 | Tacotron: Towards End-to-End Speech Synthesis | A text-to-speech synthesis system typically consists of multiple stages, such
as a text analysis frontend, an acoustic model and an audio synthesis module.
Building these components often requires extensive domain expertise and may
contain brittle design choices. In this paper, we present Tacotron, an
end-to-end generative text-to-speech model that synthesizes speech directly
from characters. Given <text, audio> pairs, the model can be trained completely
from scratch with random initialization. We present several key techniques to
make the sequence-to-sequence framework perform well for this challenging task.
Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English,
outperforming a production parametric system in terms of naturalness. In
addition, since Tacotron generates speech at the frame level, it's
substantially faster than sample-level autoregressive methods. | http://arxiv.org/pdf/1703.10135 | Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le, Yannis Agiomyrgiannakis, Rob Clark, Rif A. Saurous | cs.CL, cs.LG, cs.SD | Submitted to Interspeech 2017. v2 changed paper title to be
consistent with our conference submission (no content change other than typo
fixes) | null | cs.CL | 20170329 | 20170406 | [
{
"id": "1502.03167"
},
{
"id": "1611.00068"
},
{
"id": "1612.07837"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1511.01844"
},
{
"id": "1609.03499"
},
{
"id": "1610.03017"
},
{
"id": "1702.07825"
}
] |
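Figure 1 above hands the predicted linear-scale spectrogram to Griffin-Lim for waveform synthesis. For reference, here is a standard textbook Griffin-Lim loop, not Tacotron's implementation; it assumes librosa is available, and the hyperparameters are placeholders.

```python
import numpy as np
import librosa

def griffin_lim(magnitude, n_fft=2048, hop_length=512, n_iter=50):
    """Classic Griffin-Lim: iteratively re-estimate phase for a given
    magnitude spectrogram of shape (1 + n_fft // 2, n_frames), then
    invert to a waveform."""
    angles = np.exp(2j * np.pi * np.random.rand(*magnitude.shape))
    complex_spec = magnitude * angles
    for _ in range(n_iter):
        wav = librosa.istft(complex_spec, hop_length=hop_length)
        rebuilt = librosa.stft(wav, n_fft=n_fft, hop_length=hop_length)
        angles = np.exp(1j * np.angle(rebuilt))   # keep phase, fix magnitude
        complex_spec = magnitude * angles
    return librosa.istft(complex_spec, hop_length=hop_length)
```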
1703.09844 | 5 | Such systems would be beneficial in at least two settings with computational constraints at test-time: anytime prediction, where the network can be forced to output a prediction at any given point in time; and budgeted batch classification, where a fixed computational budget is shared across a large set of examples which can be spent unevenly across "easy" and "hard" examples. A practical use-case of anytime prediction is in mobile apps on Android devices: in 2015, there existed 24,093 distinct Android devices [2], each with its own distinct computational limitations. It is infeasible to train a different network that processes video frame-by-frame at a fixed framerate for each of these devices. Instead, you would like to train a single network that maximizes accuracy on all these devices, within the computational constraints of that device. The budgeted batch classification setting is ubiquitous in large-scale machine learning applications. Search engines, social media companies, on-line advertising agencies, all must process large volumes of data on limited hardware resources. For example, as of 2010, Google Image Search had over 10 Billion images | 1703.09844#5 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
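The budgeted batch setting described in the chunk above can be made concrete with a small sketch of our own, not the paper's code: each example exits at the first classifier that is confident enough, so easy inputs consume less of the shared budget. The threshold would be tuned on validation data so the average cost meets the budget.

```python
import numpy as np

def budgeted_batch_predict(exit_probs, exit_costs, threshold):
    """exit_probs: list over exits of (n_examples, n_classes) softmax
    outputs from progressively more expensive classifiers.
    exit_costs: per-exit incremental compute cost."""
    n = exit_probs[0].shape[0]
    preds = np.full(n, -1)
    spent = np.zeros(n)
    done = np.zeros(n, dtype=bool)
    for probs, cost in zip(exit_probs, exit_costs):
        spent[~done] += cost                           # pay for this stage
        confident = (~done) & (probs.max(axis=1) >= threshold)
        preds[confident] = probs[confident].argmax(axis=1)
        done |= confident
    # unresolved examples take the final (most accurate) exit's prediction
    preds[~done] = exit_probs[-1][~done].argmax(axis=1)
    return preds, spent.mean()                         # mean cost vs budget
```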
1703.10069 | 5 | We formulate multiagent learning for StarCraft combat tasks as a zero-sum Stochastic Game. Agents communicate through our proposed bidirectionally-coordinated net (BiCNet), while the learning is done using a multiagent actor-critic framework. In addition, we also introduce parameter sharing to solve the scalability issue. We observe that BiCNet can automatically learn various optimal strategies for coordinating agents, similar to what experienced human players would adopt in playing the StarCraft game, ranging from trivial moves without collision, to the basic tactic of hit and run, to sophisticated cover attack and focus fire without overkill. We have conducted our experiments by testing over a set of combat tasks with different levels of difficulty. Our method
† The first two authors have equal contributions. Correspondence to Jun Wang, jun.wang@cs.ucl.ac.uk.
[1] Trademark of Blizzard Entertainment™. | 1703.10069#5 | Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games | Many artificial intelligence (AI) applications often require multiple
intelligent agents to work in a collaborative effort. Efficient learning for
intra-agent communication and coordination is an indispensable step towards
general AI. In this paper, we take StarCraft combat game as a case study, where
the task is to coordinate multiple agents as a team to defeat their enemies. To
maintain a scalable yet effective communication protocol, we introduce a
Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a
vectorised extension of actor-critic formulation. We show that BiCNet can
handle different types of combats with arbitrary numbers of AI agents for both
sides. Our analysis demonstrates that without any supervisions such as human
demonstrations or labelled data, BiCNet could learn various types of advanced
coordination strategies that have been commonly used by experienced game
players. In our experiments, we evaluate our approach against multiple
baselines under different scenarios; it shows state-of-the-art performance, and
possesses potential values for large-scale real-world applications. | http://arxiv.org/pdf/1703.10069 | Peng Peng, Ying Wen, Yaodong Yang, Quan Yuan, Zhenkun Tang, Haitao Long, Jun Wang | cs.AI, cs.LG | 10 pages, 10 figures. Previously as title: "Multiagent
Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat
Games", Mar 2017 | null | cs.AI | 20170329 | 20170914 | [
{
"id": "1609.02993"
},
{
"id": "1706.02275"
},
{
"id": "1705.08926"
},
{
"id": "1612.07182"
}
] |
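The chunk above describes the core BiCNet idea: agents coordinate through a bidirectional recurrent structure with parameters shared across agents, so the same network handles arbitrary team sizes. A minimal PyTorch sketch of that idea follows; it is not the paper's exact architecture, and all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class BiCoordNet(nn.Module):
    """Sketch: per-agent features pass through a bidirectional GRU over
    the *agent* dimension, so each agent's action depends on teammates
    in both directions. Weights are shared across agents, so the number
    of agents can change between episodes."""
    def __init__(self, obs_dim, hidden_dim, act_dim):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, bidirectional=True,
                          batch_first=True)
        self.head = nn.Linear(2 * hidden_dim, act_dim)  # per-agent actor head

    def forward(self, obs):          # obs: (batch, n_agents, obs_dim)
        h, _ = self.rnn(obs)         # (batch, n_agents, 2 * hidden_dim)
        return self.head(h)          # per-agent action scores

net = BiCoordNet(obs_dim=32, hidden_dim=64, act_dim=9)
print(net(torch.randn(4, 5, 32)).shape)   # torch.Size([4, 5, 9])
print(net(torch.randn(4, 8, 32)).shape)   # same net, 8 agents: [4, 8, 9]
```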
1703.10135 | 5 | or machine translation (Wu et al., 2016), TTS outputs are continuous, and output sequences are usually much longer than those of the input. These attributes cause prediction errors to accumulate quickly. In this paper, we propose Tacotron, an end-to-end generative TTS model based on the sequence-to-sequence (seq2seq) (Sutskever et al., 2014) with attention paradigm (Bahdanau et al., 2014). Our model takes characters as input and outputs raw spectrogram, using several techniques to improve the capability of a vanilla seq2seq model. Given <text, audio> pairs, Tacotron can be trained completely from scratch with random initialization. It does not require phoneme-level alignment, so it can easily scale to using large amounts of acoustic data with transcripts. With a simple waveform synthesis technique, Tacotron produces a 3.82 mean opinion score (MOS) on a US English eval set, outperforming a production parametric system in terms of naturalness.
# 2 RELATED WORK | 1703.10135#5 | Tacotron: Towards End-to-End Speech Synthesis | A text-to-speech synthesis system typically consists of multiple stages, such
as a text analysis frontend, an acoustic model and an audio synthesis module.
Building these components often requires extensive domain expertise and may
contain brittle design choices. In this paper, we present Tacotron, an
end-to-end generative text-to-speech model that synthesizes speech directly
from characters. Given <text, audio> pairs, the model can be trained completely
from scratch with random initialization. We present several key techniques to
make the sequence-to-sequence framework perform well for this challenging task.
Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English,
outperforming a production parametric system in terms of naturalness. In
addition, since Tacotron generates speech at the frame level, it's
substantially faster than sample-level autoregressive methods. | http://arxiv.org/pdf/1703.10135 | Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le, Yannis Agiomyrgiannakis, Rob Clark, Rif A. Saurous | cs.CL, cs.LG, cs.SD | Submitted to Interspeech 2017. v2 changed paper title to be
consistent with our conference submission (no content change other than typo
fixes) | null | cs.CL | 20170329 | 20170406 | [
{
"id": "1502.03167"
},
{
"id": "1611.00068"
},
{
"id": "1612.07837"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1511.01844"
},
{
"id": "1609.03499"
},
{
"id": "1610.03017"
},
{
"id": "1702.07825"
}
] |
1703.09844 | 6 | companies, on-line advertising agencies, all must process large volumes of data on limited hardware resources. For example, as of 2010, Google Image Search had over 10 Billion images indexed [3], which has likely grown to over 1 Trillion since. Even if a new model to process these images is only 1/10s slower per image, this additional cost would add 3170 years of CPU time. In the budgeted batch classification setting, companies can improve the average accuracy by reducing the amount of computation spent on "easy" cases to save up computation for "hard" cases. | 1703.09844#6 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
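As a quick sanity check of the CPU-time figure quoted in the chunk above (our arithmetic, not the paper's): 10^12 images x 0.1 s = 10^11 s of additional compute, and 10^11 s / (3.15 x 10^7 s per year) is roughly 3,170 years, matching the number in the text.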
1703.10069 | 6 | outperforms state-of-the-art methods and shows its potential usage in a wide range of multiagent tasks in real-world applications. Related Work The studies on interaction and collaboration in multiagent settings have a long history (Littman 1994; Schmidhuber 1996). Although limited to toy examples in the beginning, reinforcement learning, as a means, has long been applied to multiagent systems in order to learn optimal collaboration policies. One of the key components in multiagent RL is to learn a communication protocol among agents. With deep learning, representative solutions include the differentiable inter-agent learning (DIAL) (Foerster et al. 2016) and the CommNet (Sukhbaatar, Fergus, and others 2016), both of which are end-to-end trainable by back-propagation. DIAL (Foerster et al. 2016) was introduced in partially observable settings where message passing between agents is allowed. The agent is also named an independent learner. The idea of learning independent agents can also be found in (Lauer and Riedmiller 2000; Kapetanakis and Kudenko 2002; Lauer and Riedmiller 2004; Foerster et al. 2016). In DIAL, each agent | 1703.10069#6 | Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games | Many artificial intelligence (AI) applications often require multiple
intelligent agents to work in a collaborative effort. Efficient learning for
intra-agent communication and coordination is an indispensable step towards
general AI. In this paper, we take StarCraft combat game as a case study, where
the task is to coordinate multiple agents as a team to defeat their enemies. To
maintain a scalable yet effective communication protocol, we introduce a
Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a
vectorised extension of actor-critic formulation. We show that BiCNet can
handle different types of combats with arbitrary numbers of AI agents for both
sides. Our analysis demonstrates that without any supervisions such as human
demonstrations or labelled data, BiCNet could learn various types of advanced
coordination strategies that have been commonly used by experienced game
players. In our experiments, we evaluate our approach against multiple
baselines under different scenarios; it shows state-of-the-art performance, and
possesses potential values for large-scale real-world applications. | http://arxiv.org/pdf/1703.10069 | Peng Peng, Ying Wen, Yaodong Yang, Quan Yuan, Zhenkun Tang, Haitao Long, Jun Wang | cs.AI, cs.LG | 10 pages, 10 figures. Previously as title: "Multiagent
Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat
Games", Mar 2017 | null | cs.AI | 20170329 | 20170914 | [
{
"id": "1609.02993"
},
{
"id": "1706.02275"
},
{
"id": "1705.08926"
},
{
"id": "1612.07182"
}
] |
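The DIAL scheme described in this chunk, where each agent's recurrent network emits both Q-values and a message that the other agents consume at the next time step, can be made concrete in a few lines of NumPy. The following is an illustrative sketch only: `DIALAgent`, all dimensions, and the two-agent loop are assumptions, and the end-to-end backpropagation through messages that DIAL performs is omitted.

```python
# Minimal sketch of DIAL-style communication (assumed names and shapes;
# not the reference implementation of Foerster et al. 2016).
import numpy as np

class DIALAgent:
    """Independent learner: a recurrent cell that, at each time step, maps
    (observation, incoming message, hidden state) to per-action Q-values
    plus an outgoing real-valued message for the other agents."""

    def __init__(self, obs_dim, msg_dim, n_actions, hidden_dim=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = obs_dim + msg_dim + hidden_dim
        self.W = rng.normal(0.0, 0.1, (in_dim, hidden_dim))      # recurrent weights
        self.Wq = rng.normal(0.0, 0.1, (hidden_dim, n_actions))  # Q-value head
        self.Wm = rng.normal(0.0, 0.1, (hidden_dim, msg_dim))    # message head
        self.h = np.zeros(hidden_dim)

    def step(self, obs, incoming_msg):
        x = np.concatenate([obs, incoming_msg, self.h])
        self.h = np.tanh(x @ self.W)       # update recurrent hidden state
        q = self.h @ self.Wq               # Q-values used for action selection
        msg = np.tanh(self.h @ self.Wm)    # message delivered at the next step
        return q, msg

# Two agents exchanging messages: what agent 0 emits at step t becomes
# part of agent 1's input at step t+1, and vice versa.
agents = [DIALAgent(obs_dim=4, msg_dim=2, n_actions=3, seed=i) for i in range(2)]
msgs = [np.zeros(2), np.zeros(2)]
for t in range(5):
    obs = [np.random.randn(4) for _ in agents]
    out = [agents[i].step(obs[i], msgs[1 - i]) for i in range(2)]
    msgs = [m for _, m in out]             # messages arrive at the next step
    actions = [int(np.argmax(q)) for q, _ in out]
```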
1703.10135 | 6 | # 2 RELATED WORK
WaveNet (van den Oord et al., 2016) is a powerful generative model of audio. It works well for TTS, but is slow due to its sample-level autoregressive nature. It also requires conditioning on linguistic features from an existing TTS frontend, and thus is not end-to-end: it only replaces the vocoder and acoustic model. Another recently-developed neural model is DeepVoice (Arik et al., 2017), which replaces every component in a typical TTS pipeline by a corresponding neural network. However, each component is independently trained, and it's nontrivial to change the system to train in an end-to-end fashion.
To our knowledge, Wang et al. (2016) is the earliest work touching end-to-end TTS using seq2seq with attention. However, it requires a pre-trained hidden Markov model (HMM) aligner to help the seq2seq model learn the alignment. It's hard to tell how much alignment is learned by the seq2seq per se. Second, a few tricks are used to get the model trained, which the authors note hurts prosody. Third, it predicts vocoder parameters and hence needs a vocoder. Furthermore, the model is trained on phoneme inputs and the experimental results seem to be somewhat limited. | 1703.10135#6 | Tacotron: Towards End-to-End Speech Synthesis | A text-to-speech synthesis system typically consists of multiple stages, such
as a text analysis frontend, an acoustic model and an audio synthesis module.
Building these components often requires extensive domain expertise and may
contain brittle design choices. In this paper, we present Tacotron, an
end-to-end generative text-to-speech model that synthesizes speech directly
from characters. Given <text, audio> pairs, the model can be trained completely
from scratch with random initialization. We present several key techniques to
make the sequence-to-sequence framework perform well for this challenging task.
Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English,
outperforming a production parametric system in terms of naturalness. In
addition, since Tacotron generates speech at the frame level, it's
substantially faster than sample-level autoregressive methods. | http://arxiv.org/pdf/1703.10135 | Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le, Yannis Agiomyrgiannakis, Rob Clark, Rif A. Saurous | cs.CL, cs.LG, cs.SD | Submitted to Interspeech 2017. v2 changed paper title to be
consistent with our conference submission (no content change other than typo
fixes) | null | cs.CL | 20170329 | 20170406 | [
{
"id": "1502.03167"
},
{
"id": "1611.00068"
},
{
"id": "1612.07837"
},
{
"id": "1603.04467"
},
{
"id": "1609.08144"
},
{
"id": "1505.00387"
},
{
"id": "1511.01844"
},
{
"id": "1609.03499"
},
{
"id": "1610.03017"
},
{
"id": "1702.07825"
}
] |
1703.09844 | 7 | Motivated by prior work in computer vision on resource-efficient recognition (Viola & Jones, 2001), we aim to develop CNNs that "slice" the computation and process these slices one-by-one, stopping the evaluation once the CPU time is depleted or the classification is sufficiently certain (through "early exits"). Unfortunately, the architecture of CNNs is inherently at odds with the introduction of early exits. CNNs learn the data representation and the classifier jointly, which leads to two problems with early exits: 1. The features in the last layer are extracted directly to be used by the classifier, whereas earlier features are not. The inherent dilemma is that different kinds of features need to be extracted depending on how many layers are left until the classification. 2. The features in different layers of the network may have different scales. Typically, the first layers of a deep net operate on a fine scale (to extract low-level features), whereas later layers transition (through pooling or strided convolution) to coarse scales that allow global context to enter the classifier. Both scales are needed but happen at different places in the network. | 1703.09844#7 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine-level features throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
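The "early exits" described in this chunk boil down to a simple control flow at test time: evaluate the network block by block, query an intermediate classifier after each block, and stop as soon as the prediction is confident enough (or the budget is spent). Below is a minimal sketch assuming the feature stages and exit heads are given as callables; function and variable names are illustrative, not MSDNet's actual API.

```python
# Sketch of confidence-thresholded early exiting over a cascade of
# intermediate classifiers (illustrative names only; not the MSDNet code).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_predict(stages, exit_heads, x, threshold=0.9):
    """Evaluate the network stage by stage and return the prediction of the
    first exit whose top softmax probability exceeds `threshold`; if no exit
    is confident enough, the final classifier's prediction is returned."""
    h = x
    probs = None
    for stage, head in zip(stages, exit_heads):
        h = stage(h)              # compute the next block of features
        probs = softmax(head(h))  # evaluate this stage's early-exit classifier
        if probs.max() >= threshold:
            break                 # confident enough: skip the remaining stages
    return int(np.argmax(probs)), float(probs.max())

# Toy usage with random linear stages and heads (8-dim features, 5 classes):
rng = np.random.default_rng(0)
stages = [lambda h, W=rng.normal(size=(8, 8)): np.tanh(h @ W) for _ in range(3)]
heads = [lambda h, W=rng.normal(size=(8, 5)): h @ W for _ in range(3)]
label, confidence = early_exit_predict(stages, heads, rng.normal(size=8))
```

Lowering `threshold` trades accuracy for computation, which is exactly the budgeted-batch knob the abstract refers to: easy inputs exit early, hard inputs consume more stages.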
1703.10069 | 7 | 2000; Kapetanakis and Kudenko 2002; Lauer and Riedmiller 2004; Foerster et al. 2016). In DIAL, each agent consists of a recurrent neural network that outputs the individual agent's Q-value and a message to transfer at each time step. The generated messages are then transferred to the other agents and used as their inputs in the next time step. The received messages are embedded with the agent's current observations and last action as a representation of the global information. Communication between independent agents is one way to mitigate the notorious non-stationarity issue in multiagent settings, as the gradients will at least flow among the agents; however, researchers are still looking for better solutions for complex environments such as StarCraft. By contrast, CommNet (Sukhbaatar, Fergus, and others 2016) is designed for joint action learners in fully observable settings. Unlike DIAL, CommNet proposes a single network in the multiagent setting, passing the averaged message over the agent modules between layers. However, as the communication network is fully symmetric and embedded in the original network, it lacks the ability to handle heterogeneous agent types. Also, it is a single network for all agents, and therefore its | 1703.10069#7 | Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games | Many artificial intelligence (AI) applications often require multiple
intelligent agents to work in a collaborative effort. Efficient learning for
intra-agent communication and coordination is an indispensable step towards
general AI. In this paper, we take StarCraft combat game as a case study, where
the task is to coordinate multiple agents as a team to defeat their enemies. To
maintain a scalable yet effective communication protocol, we introduce a
Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a
vectorised extension of actor-critic formulation. We show that BiCNet can
handle different types of combats with arbitrary numbers of AI agents for both
sides. Our analysis demonstrates that without any supervision such as human
demonstrations or labelled data, BiCNet could learn various types of advanced
coordination strategies that have been commonly used by experienced game
players. In our experiments, we evaluate our approach against multiple
baselines under different scenarios; it shows state-of-the-art performance, and
possesses potential value for large-scale real-world applications. | http://arxiv.org/pdf/1703.10069 | Peng Peng, Ying Wen, Yaodong Yang, Quan Yuan, Zhenkun Tang, Haitao Long, Jun Wang | cs.AI, cs.LG | 10 pages, 10 figures. Previously as title: "Multiagent
Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat
Games", Mar 2017 | null | cs.AI | 20170329 | 20170914 | [
{
"id": "1609.02993"
},
{
"id": "1706.02275"
},
{
"id": "1705.08926"
},
{
"id": "1612.07182"
}
] |
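To make concrete the averaged message passing that this chunk attributes to CommNet, here is a minimal NumPy sketch of one such layer; the function name, weight shapes, and two-layer usage are assumptions for illustration, not the published architecture.

```python
# Sketch of a CommNet-style communication layer: each agent's hidden state
# is combined with the mean of the other agents' states between layers
# (illustrative; not the original code of Sukhbaatar et al. 2016).
import numpy as np

def commnet_layer(H, W_h, W_c):
    """H: (n_agents, d) hidden states. The communication vector for agent i
    is the average of the other agents' hidden states. The layer is fully
    symmetric across agents, which is why this design struggles to handle
    heterogeneous agent types, as noted above."""
    n = H.shape[0]
    C = (H.sum(axis=0, keepdims=True) - H) / max(n - 1, 1)  # mean of the others
    return np.tanh(H @ W_h + C @ W_c)

# Toy usage: 4 agents with 16-dim hidden states, stacked over two layers.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 16))
W_h, W_c = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))
H = commnet_layer(commnet_layer(H, W_h, W_c), W_h, W_c)
```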