doi (string, len 10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, ⌀ allowed) | journal_ref (string, len 8–194, ⌀ allowed) | primary_category (string, len 5–17) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
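The rows that follow conform to this schema. A minimal sketch of loading and iterating such an export with pandas; the file name is hypothetical, not the dataset's real name:

```python
# Hedged sketch: load the chunked-papers table and walk one row.
# "arxiv_chunks.parquet" is a placeholder path for the actual export.
import pandas as pd

df = pd.read_parquet("arxiv_chunks.parquet")

row = df.iloc[0]
print(row["doi"], row["chunk-id"], row["title"])   # e.g. 1703.10069  51  Multiagent ...
print(row["chunk"][:80])                           # the extracted paper text
for ref in row["references"]:                      # list of {"id": "<arXiv id>"} dicts
    print("cites:", ref["id"])
```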
1703.10069 | 51 | $\mathbb{E}_{s\sim\rho^{a_\theta}}\Big[\sum_{i=1}^{N}\sum_{j=1}^{N}\nabla_{\theta}\, a_{j,\theta}(s)\cdot\nabla_{a_j} Q_i^{a_\theta}\big(s,\, a_\theta(s)\big)\Big]$
where in Eq.(10) the Leibniz integral rule is used to exchange derivative and integral, since $Q^{a_\theta}(s, a_\theta(s))$ is continuous. For Eq.(11), we used the definition of the Q-value. Then, we take derivatives for each term in Eq.(11) to get Eq.(12). Afterwards, we combine the first and the second term in Eq.(12) to get the first term in Eq.(13), while we notice that we can iterate Eq.(10) and Eq.(11) to expand the second term in Eq.(13). By summing up the iterated terms, we get Eq.(14), which implies Eq.(15) by using Fubini's theorem to exchange the order of integration. Writing Eq.(15) as an expectation, we derive Eq.(16). Finally, we get Eq.(17) and the proof is done.
# Pseudocode
# Algorithm 1 BiCNet algorithm | 1703.10069#51 | Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games | Many artificial intelligence (AI) applications often require multiple
intelligent agents to work in a collaborative effort. Efficient learning for
intra-agent communication and coordination is an indispensable step towards
general AI. In this paper, we take StarCraft combat game as a case study, where
the task is to coordinate multiple agents as a team to defeat their enemies. To
maintain a scalable yet effective communication protocol, we introduce a
Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a
vectorised extension of actor-critic formulation. We show that BiCNet can
handle different types of combats with arbitrary numbers of AI agents for both
sides. Our analysis demonstrates that without any supervision such as human
demonstrations or labelled data, BiCNet could learn various types of advanced
coordination strategies that have been commonly used by experienced game
players. In our experiments, we evaluate our approach against multiple
baselines under different scenarios; it shows state-of-the-art performance, and
possesses potential values for large-scale real-world applications. | http://arxiv.org/pdf/1703.10069 | Peng Peng, Ying Wen, Yaodong Yang, Quan Yuan, Zhenkun Tang, Haitao Long, Jun Wang | cs.AI, cs.LG | 10 pages, 10 figures. Previously as title: "Multiagent
Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat
Games", Mar 2017 | null | cs.AI | 20170329 | 20170914 | [
{
"id": "1609.02993"
},
{
"id": "1706.02275"
},
{
"id": "1705.08926"
},
{
"id": "1612.07182"
}
] |
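The proof sketch above walks from Eq.(10) to Eq.(17) without reproducing the equations themselves. As a hedged reconstruction from the surrounding notation (not a verbatim copy of the paper's appendix), the identity being derived is the multiagent deterministic policy gradient:

```latex
% J(\theta): expected return of the joint deterministic policy a_\theta;
% \rho^{a_\theta}: the (discounted) state distribution it induces;
% Q_i^{a_\theta}: the action-value function of agent i.
\nabla_\theta J(\theta)
  = \mathbb{E}_{s \sim \rho^{a_\theta}}
    \left[ \sum_{i=1}^{N} \sum_{j=1}^{N}
      \nabla_\theta\, a_{j,\theta}(s)\,
      \nabla_{a_j} Q_i^{a_\theta}\!\left(s,\, a_\theta(s)\right) \right]
```

Leibniz's rule is what licenses moving the derivative inside the integral, and Fubini's theorem is what licenses swapping the order of integration once the iterated terms are summed.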
1703.09844 | 52 | Alexander Grubb and Drew Bagnell. Speedboost: Anytime prediction with uniform near-optimality. In AISTATS, volume 15, pp. 458–466, 2012.
Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2015.
Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In IJCNN, pp. 293–299, 1993.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, pp. 1026–1034, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pp. 770–778, 2016.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning Workshop, 2014. | 1703.09844#52 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.10069 | 52 |
# Pseudocode
# Algorithm 1 BiCNet algorithm
Initialise actor network and critic network with parameters θ and ξ
Initialise target actor and target critic with θ' ← θ and ξ' ← ξ
Initialise replay buffer R
for episode = 1, E do
    Initialise a random process U for action exploration
    Receive initial observation state s^1
    for t = 1, T do
        for each agent i, select and execute action a_i^t = a_{i,θ}(s^t) + U_t
        receive reward [r_i^t]_{i=1}^N and observe new state s^{t+1}
        store transition {s^t, [a_i^t, r_i^t]_{i=1}^N, s^{t+1}} in R
        sample a random minibatch of M transitions {s_m, [a_{m,i}, r_{m,i}]_{i=1}^N, s'_m} from R
        compute the target value for each agent in each transition using the Bi-RNN:
        for m = 1, M do
            Q̂_{m,i} = r_{m,i} + γ Q_i^{ξ'}(s'_m, a_{θ'}(s'_m)) for each agent i
# end for
compute critic gradient estimation according to Eq.(8):
Δξ = (1/M) Σ_{m=1}^{M} Σ_{i=1}^{N} ( Q̂_{m,i} − Q_i^{ξ}(s_m, a_θ(s_m)) ) · ∇_ξ Q_i^{ξ}(s_m, a_θ(s_m))
compute actor gradient estimation according to Eq.(7) and replace Q-value with the critic estimation: | 1703.10069#52 | Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games | Many artificial intelligence (AI) applications often require multiple
intelligent agents to work in a collaborative effort. Efficient learning for
intra-agent communication and coordination is an indispensable step towards
general AI. In this paper, we take StarCraft combat game as a case study, where
the task is to coordinate multiple agents as a team to defeat their enemies. To
maintain a scalable yet effective communication protocol, we introduce a
Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a
vectorised extension of actor-critic formulation. We show that BiCNet can
handle different types of combats with arbitrary numbers of AI agents for both
sides. Our analysis demonstrates that without any supervision such as human
demonstrations or labelled data, BiCNet could learn various types of advanced
coordination strategies that have been commonly used by experienced game
players. In our experiments, we evaluate our approach against multiple
baselines under different scenarios; it shows state-of-the-art performance, and
possesses potential values for large-scale real-world applications. | http://arxiv.org/pdf/1703.10069 | Peng Peng, Ying Wen, Yaodong Yang, Quan Yuan, Zhenkun Tang, Haitao Long, Jun Wang | cs.AI, cs.LG | 10 pages, 10 figures. Previously as title: "Multiagent
Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat
Games", Mar 2017 | null | cs.AI | 20170329 | 20170914 | [
{
"id": "1609.02993"
},
{
"id": "1706.02275"
},
{
"id": "1705.08926"
},
{
"id": "1612.07182"
}
] |
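A minimal PyTorch-style sketch of one update step of Algorithm 1. The `Actor` and `Critic` below are plain MLP stand-ins for BiCNet's bidirectional-RNN networks, all dimensions are placeholders, and the soft target update with rate `TAU` is a standard DDPG convention rather than a detail stated in the excerpt:

```python
# Hedged sketch of one Algorithm 1 update; shapes and sizes are assumptions.
import copy
import torch
import torch.nn as nn

N_AGENTS, STATE_DIM, ACT_DIM, GAMMA, TAU = 4, 32, 2, 0.99, 0.01

class Actor(nn.Module):                       # stand-in for the Bi-RNN actor a_theta
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_AGENTS * ACT_DIM), nn.Tanh())
    def forward(self, s):                     # s: (B, STATE_DIM)
        return self.net(s).view(-1, N_AGENTS, ACT_DIM)

class Critic(nn.Module):                      # stand-in for the Bi-RNN critic Q_i^xi
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + N_AGENTS * ACT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_AGENTS))
    def forward(self, s, a):                  # one Q value per agent: (B, N_AGENTS)
        return self.net(torch.cat([s, a.flatten(1)], dim=1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)  # theta' <- theta, xi' <- xi
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(s, a, r, s_next):
    """One minibatch step; s: (M, STATE_DIM), a: (M, N, ACT), r: (M, N)."""
    with torch.no_grad():  # Q_hat_{m,i} = r_{m,i} + gamma * Q_i^{xi'}(s', a_{theta'}(s'))
        q_hat = r + GAMMA * critic_tgt(s_next, actor_tgt(s_next))
    # Standard DDPG evaluates the critic at the replayed action; the excerpt's
    # Delta-xi estimator instead plugs in a_theta(s_m) -- both are squared-error fits.
    critic_loss = ((q_hat - critic(s, a)) ** 2).mean()
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(s, actor(s)).mean()  # deterministic policy gradient via the critic
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):  # soft target update
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)

update(torch.randn(8, STATE_DIM), torch.randn(8, N_AGENTS, ACT_DIM),
       torch.randn(8, N_AGENTS), torch.randn(8, STATE_DIM))
```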
1703.09844 | 53 | Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning Workshop, 2014.
Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In ECCV, pp. 646–661. Springer, 2016.
Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. In CVPR, 2017.
Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In NIPS, pp. 4107–4115, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 770–778, 2015. | 1703.09844#53 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.09844 | 54 | Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 770–778, 2015.
Jörn-Henrik Jacobsen, Edouard Oyallon, Stéphane Mallat, and Arnold WM Smeulders. Multiscale hierarchical convolutional networks. arXiv preprint arXiv:1703.04140, 2017.
Sergey Karayev, Mario Fritz, and Trevor Darrell. Anytime recognition of objects and scenes. In CVPR, pp. 572–579, 2014.
Tsung-Wei Ke, Michael Maire, and Stella X. Yu. Neural multigrid. CoRR, abs/1611.07661, 2016. URL http://arxiv.org/abs/1611.07661.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Tech Report, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105, 2012. | 1703.09844#54 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.09844 | 55 | Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. In ICLR, 2017.
Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPS, volume 2, pp. 598–605, 1989.
Chen-Yu Lee, Saining Xie, Patrick W Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In AISTATS, volume 2, pp. 5, 2015.
Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. In ICLR, 2017.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, pp. 740–755. Springer, 2014.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pp. 3431–3440, 2015. | 1703.09844#55 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.09844 | 56 | Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pp. 3431–3440, 2015.
Feng Nan, Joseph Wang, and Venkatesh Saligrama. Feature-budgeted random forest. In ICML, pp. 1983–1991, 2015.
Augustus Odena, Dieterich Lawson, and Christopher Olah. Changing model behavior at test-time using reinforcement learning. arXiv preprint arXiv:1702.07780, 2017.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: ImageNet classification using binary convolutional neural networks. In ECCV, pp. 525–542. Springer, 2016.
Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In NIPS, pp. 4053–4061, 2016.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, pp. 1–9, 2015. | 1703.09844#56 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.09844 | 57 | Kirill Trapeznikov and Venkatesh Saligrama. Supervised sequential classification under budget constraints. In AI-STATS, pp. 581–589, 2013.
Paul Viola and Michael Jones. Robust real-time object detection. International Journal of Computer Vision, 4(34–47), 2001.
Ji Wan, Dayong Wang, Steven Chu Hong Hoi, Pengcheng Wu, Jianke Zhu, Yongdong Zhang, and Jintao Li. Deep learning for content-based image retrieval: A comprehensive study. In ACM Multimedia, pp. 157–166, 2014.
Joseph Wang, Kirill Trapeznikov, and Venkatesh Saligrama. Efficient learning by directed acyclic graph for resource constrained prediction. In NIPS, pp. 2152–2160. 2015.
Zhixiang Xu, Olivier Chapelle, and Kilian Q. Weinberger. The greedy miser: Learning under test-time budgets. In ICML, pp. 1175–1182, 2012.
Zhixiang Xu, Matt Kusner, Minmin Chen, and Kilian Q. Weinberger. Cost-sensitive tree of classifiers. In ICML, volume 28, pp. 133–141, 2013. | 1703.09844#57 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.09844 | 58 | Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.
A. R. Zamir, T.-L. Wu, L. Sun, W. Shen, B. E. Shi, J. Malik, and S. Savarese. Feedback Networks. ArXiv e-prints, December 2016.
Yisu Zhou, Xiaolin Hu, and Bo Zhang. Interlinked convolutional neural networks for face parsing. In International Symposium on Neural Networks, pp. 222–231. Springer, 2015.
# A DETAILS OF MSDNET ARCHITECTURE AND BASELINE NETWORKS | 1703.09844#58 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.09844 | 59 |
# A DETAILS OF MSDNET ARCHITECTURE AND BASELINE NETWORKS
We use MSDNet with three scales on the CIFAR datasets, and the network reduction method introduced in Section 4.1 is applied. Figure 9 gives an illustration of the reduced network. The convolutional layer functions in the first layer, $h_1$, denote a sequence of 3×3 convolutions (Conv), batch normalization (BN; Ioffe & Szegedy (2015)), and rectified linear unit (ReLU) activation. In the computation of $\tilde{h}_1$, down-sampling is performed by applying convolutions using strides that are powers of two. For subsequent feature layers, the transformations $h_\ell$ and $\tilde{h}_\ell$ are defined following the design in DenseNets (Huang et al., 2017): Conv(1×1)-BN-ReLU-Conv(3×3)-BN-ReLU. We set the number of output channels of the three scales to 6, 12, and 24, respectively. Each classifier has two down-sampling convolutional layers with 128 dimensional 3×3 filters, followed by a 2×2 average pooling layer and a linear layer. | 1703.09844#59 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
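A hedged PyTorch sketch of the feature transformations just described: the DenseNet-style Conv(1×1)-BN-ReLU-Conv(3×3)-BN-ReLU block, in a same-resolution variant ($h$) and a strided down-sampling variant ($\tilde{h}$). The bottleneck width (4× the output channels) is an assumption, not a value stated in the excerpt:

```python
# Sketch of the MSDNet feature transformations (channel widths are placeholders).
import torch
import torch.nn as nn

def h(in_ch: int, out_ch: int) -> nn.Sequential:
    # same-resolution transformation: Conv(1x1)-BN-ReLU-Conv(3x3)-BN-ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, 4 * out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(4 * out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(4 * out_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

def h_down(in_ch: int, out_ch: int) -> nn.Sequential:
    # down-sampling counterpart: stride 2 on the 3x3 convolution halves the resolution
    return nn.Sequential(
        nn.Conv2d(in_ch, 4 * out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(4 * out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(4 * out_ch, out_ch, kernel_size=3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

x = torch.randn(1, 6, 32, 32)      # e.g. the 6-channel finest scale on CIFAR
same = h(6, 6)(x)                  # -> (1, 6, 32, 32)
down = h_down(6, 12)(x)            # -> (1, 12, 16, 16)
```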
1703.09844 | 60 | The MSDNet used for ImageNet has four scales, respectively producing 16, 32, 64, and 64 feature maps at each layer. The network reduction is also applied to reduce computational cost. The original images are first transformed by a 7×7 convolution and a 3×3 max pooling (both with stride 2) before entering the first layer of MSDNets. The classifiers have the same structure as those used for the CIFAR datasets, except that the number of output channels of each convolutional layer is set to be equal to the number of its input channels. | 1703.09844#60 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
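A sketch of the intermediate classifier described for the CIFAR networks: two down-sampling 3×3 convolutional layers with 128 filters, a 2×2 average pooling layer, and a linear layer. The stride-2 down-sampling and the BN/ReLU placement are assumptions:

```python
# Hedged sketch of an MSDNet-style early-exit classifier head.
import torch
import torch.nn as nn

class ExitHead(nn.Module):
    def __init__(self, in_ch: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, stride=2, padding=1, bias=False),  # down-sample 1
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, stride=2, padding=1, bias=False),    # down-sample 2
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.AvgPool2d(2),                                            # 2x2 average pooling
        )
        self.linear = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.linear(self.features(x).flatten(1))

logits = ExitHead(24, 100)(torch.randn(1, 24, 8, 8))  # coarsest CIFAR scale -> (1, 100)
```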
1703.09844 | 61 | Figure 9: Illustration of an MSDNet with network reduction. The network has S = 3 scales, and it is divided into three blocks, which maintain a decreasing number of scales. A transition layer is placed between two contiguous blocks. Network architecture for anytime prediction. The MSDNet used in our anytime-prediction experiments has 24 layers (each layer corresponds to a column in Fig. 1 of the main paper), using the reduced network with transition layers as described in Section 4. The classifiers operate on the output of the (2×i+1)th layers, with i = 1, . . . , 11. On ImageNet, we use MSDNets with four scales, and the ith classifier operates on the (k×i+3)th layer (with i = 1, . . . , 5), where k = 4, 6 and 7. For simplicity, the losses of all the classifiers are weighted equally during training. | 1703.09844#61 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
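Since the losses of all the classifiers are weighted equally during training, the anytime objective reduces to an unweighted average of per-exit cross-entropy losses. A minimal sketch, assuming a hypothetical `model(x)` interface that returns one logits tensor per classifier:

```python
# Sketch of the equally weighted anytime training objective.
import torch
import torch.nn.functional as F

def anytime_loss(model, x, y):
    logits_per_exit = model(x)                          # list of (B, num_classes) tensors
    losses = [F.cross_entropy(logits, y) for logits in logits_per_exit]
    return sum(losses) / len(losses)                    # equal weight for every exit
```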
1703.09844 | 62 | Network architecture for budgeted batch setting. The MSDNets used here for the two CIFAR datasets have depths ranging from 10 to 36 layers, using the reduced network with transition layers as described in Section 4. The kth classifier is attached to the $(\sum_{i=1}^{k} i)$th layer. The MSDNets used for ImageNet are the same as those described for the anytime learning setting. ResNetMC and DenseNetMC. The ResNetMC has 62 layers, with 10 residual blocks at each spatial resolution (for three resolutions): we train early-exit classifiers on the output of the 4th and 8th residual blocks at each resolution, producing a total of 6 intermediate classifiers (plus the final classification layer). The DenseNetMC consists of 52 layers with three dense blocks and each of them has 16 layers. The six intermediate classifiers are attached to the 6th and 12th layer in each block, also with dense connections to all previous layers in that block.
# B ADDITIONAL RESULTS
B.1 ABLATION STUDY
We perform additional experiments to shed light on the contributions of the three main components of MSDNet, viz., multi-scale feature maps, dense connectivity, and intermediate classifiers.
| 1703.09844#62 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
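For networks with intermediate classifiers such as MSDNet, ResNetMC, or DenseNetMC, budgeted batch inference amounts to evaluating the exits in order of cost and stopping once a prediction is confident enough. A hedged sketch for a single example; in practice the confidence threshold would be calibrated on validation data so the batch as a whole stays within its computational budget:

```python
# Sketch of confidence-thresholded early-exit inference for one example.
import torch

@torch.no_grad()
def early_exit_predict(model, x, threshold: float = 0.9):
    # `model(x)` is assumed to *lazily* yield per-exit logits (e.g. a generator),
    # cheapest exit first, with at least one exit -- so stopping early saves compute.
    pred, k = None, -1
    for k, logits in enumerate(model(x)):
        probs = torch.softmax(logits, dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:        # confident enough: stop paying for depth
            break
    return int(pred.item()), k              # predicted class and the exit index used
```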
1703.09844 | 63 | We start from an MSDNet with six intermediate classifiers and remove the three main components one at a time. To make our comparisons fair, we keep the computational costs of the full networks similar, at around 3.0×10^8 FLOPs, by adapting the network width, i.e., the number of output channels at each layer. After removing all the three components in an MSDNet, we obtain a regular VGG-like convolutional network. We show the classification accuracy of all classifiers in a model in the left panel of Figure 10. Several observations can be made: 1. the dense connectivity is crucial for the performance of MSDNet and removing it hurts the overall accuracy drastically (orange vs. black curve); 2. removing multi-scale convolution hurts the accuracy only in the lower budget regions, which is consistent with our motivation that the multi-scale design introduces discriminative features early on; 3. the final canonical CNN (star) performs similarly to MSDNet under the specific budget that matches its evaluation cost exactly, but it is unsuited for varying budget constraints. The | 1703.09844#63 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.09844 | 65 | B.2 RESULTS ON CIFAR-10
For the CIFAR-10 dataset, we use the same MSDNets and baseline models as we used for CIFAR- 100, except that the networks used here have a 10-way fully connected layer at the end. The results under the anytime learning setting and the batch computational budget setting are shown in the left and right panel of Figure 11, respectively. Similar to what we have observed from the results on CIFAR-100 and ImageNet, MSDNets outperform all the baselines by a signiï¬cant margin in both settings. As in the experiments presented in the main paper, ResNet and DenseNet models with multiple intermediate classiï¬ers perform relatively poorly. | 1703.09844#65 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.09844 | 66 | [Figure 11 plot residue; recoverable information: panel titles "Anytime prediction on CIFAR-10" (left) and "Batch computational learning on CIFAR-10" (right); legend entries include MC with MSDNet, MC with early exits, ResNets (He et al., 2015), Ensemble of ResNets (all shallow), ensembles of DenseNets (varying depth), Stochastic Depth-110 (Huang et al., 2016), and WideResNet-40 (Zagoruyko et al., 2016); x-axes: budget / average budget (in MUL-ADD), ×10^8.]
Figure 11: Classification accuracies on the CIFAR-10 dataset in the anytime-prediction setting (left) and the budgeted batch setting (right).
| 1703.09844#66 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 | [
{
"id": "1702.07780"
},
{
"id": "1702.07811"
},
{
"id": "1703.04140"
},
{
"id": "1603.08983"
},
{
"id": "1612.02297"
},
{
"id": "1604.07316"
},
{
"id": "1704.04861"
},
{
"id": "1610.02357"
}
] |
1703.06585 | 1 | We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative "image guessing" game between two agents -- Q-BOT and A-BOT -- who communicate in natural language dialog so that Q-BOT can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end -- from pixels to multi-agent multi-round dialog to game reward. We demonstrate two experimental results. First, as a "sanity check" demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol and start using certain symbols to ask/answer about certain visual attributes (shape/color/style). Thus, we demonstrate the emergence of grounded language and communication among "visual" dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset | 1703.06585#1 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 2 | of grounded language and communication among "visual" dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset [4], where we pretrain with supervised dialog data and show that the RL "fine-tuned" agents significantly outperform SL agents. Interestingly, the RL Q-BOT learns to ask questions that A-BOT is good at, ultimately resulting in more informative dialog and a better team. | 1703.06585#2 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 3 | [Figure 1 contents: a Questioner and an Answerer exchange a dialog about an image captioned "Two zebra are walking around their pen at the zoo." Q1: "Any people in the shot?" A1: "No, there aren't any." ... Q10: "Are they facing each other?" A10: "They aren't." The Questioner then declares: "I think we were talking about this image!"]
Figure 1: We propose a cooperative image guessing game between two agents -- Q-BOT and A-BOT -- who communicate through a natural language dialog so that Q-BOT can select a particular unseen image from a lineup. We model these agents as deep neural networks and train them end-to-end with reinforcement learning.
# 1. Introduction | 1703.06585#3 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 4 | # 1. Introduction
The focus of this paper is visually-grounded conversational artificial intelligence (AI). Specifically, we would like to develop agents that can "see" (i.e., understand the contents of an image) and "communicate" that understanding in natural language (i.e., hold a dialog involving questions and answers about that image). We believe the next generation of intelligent systems will need to possess this ability to hold a dialog about visual content for a variety of applications: e.g., helping visually impaired users understand their surroundings [2] or social media content [36] ("Who is in the photo? Dave. What is he doing?"), enabling analysts to | 1703.06585#4 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 5 | sift through large quantities of surveillance data ("Did anyone enter the vault in the last month? Yes, there are 103 recorded instances. Did any of them pick something up?"), and enabling users to interact naturally with intelligent assistants (either embodied as a robot or not) ("Did I leave my phone on my desk? Yes, it's here. Did I miss any calls?"). Despite rapid progress at the intersection of vision and language, in particular, in image/video captioning [3, 12, 32–34, 37] and question answering [1, 21, 24, 30, 31], it is clear we are far from this grand goal of a visual dialog agent. Two recent works [4, 5] have proposed studying this task of visually-grounded dialog. Perhaps somewhat counter-intuitively, both these works treat dialog as a static supervised learning problem, rather than an interactive agent learning problem that it naturally is. Specifically, both
*The first two authors (AD, SK) contributed equally.
| 1703.06585#5 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 6 | works [4, 5] first collect a dataset of human-human dialog, i.e., a sequence of question-answer pairs about an image $(q_1, a_1), \ldots, (q_T, a_T)$. Next, a machine (a deep neural network) is provided with the image $I$, the human dialog recorded till round $t-1$, $(q_1, a_1), \ldots, (q_{t-1}, a_{t-1})$, the follow-up question $q_t$, and is supervised to generate the human response $a_t$. Essentially, at each round $t$, the machine is artificially "injected" into the conversation between two humans and asked to answer the question $q_t$; but the machine's answer $\hat{a}_t$ is thrown away, because at the next round $t+1$, the machine is again provided with the "ground-truth" human-human dialog that includes the human response $a_t$ and not the machine response $\hat{a}_t$. Thus, the machine is never allowed to steer the conversation because that would take the dialog out of the dataset, making it non-evaluable. In this paper, we generalize the task of Visual Dialog beyond the necessary first stage of supervised learning -- | 1703.06585#6 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
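A minimal sketch of the supervised stage described above: at each round the model is conditioned on the image, the ground-truth human history, and the follow-up question, and trained with cross-entropy to reproduce the human answer; its own sample $\hat{a}_t$ never enters the history. The `model` interface and tensor shapes are placeholders, not the architecture of [4] or [5]:

```python
# Sketch of one supervised ("injected") round of Visual Dialog training.
import torch
import torch.nn.functional as F

def supervised_round(model, image, history, question, gt_answer_tokens):
    # Teacher forcing: condition on (I, (q_1,a_1),...,(q_{t-1},a_{t-1}), q_t) and
    # score the *human* answer a_t token by token. The model's own answer is
    # discarded, so it can never steer the conversation during this stage.
    logits = model(image, history, question)          # (T, vocab_size) token logits
    return F.cross_entropy(logits, gt_answer_tokens)  # gt_answer_tokens: (T,) token ids
```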
1703.06585 | 7 | the dataset, making it non-evaluable. In this paper, we generalize the task of Visual Dialog beyond the necessary first stage of supervised learning -- by posing it as a cooperative "image guessing" game between two dialog agents. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end -- from pixels to multi-agent multi-round dialog to the game reward. Our setup is illustrated in Fig. 1. We formulate a game between a questioner bot (Q-BOT) and an answerer bot (A-BOT). Q-BOT is shown a 1-sentence description (a caption) of an unseen image, and is allowed to communicate in natural language (discrete symbols) with the answering bot (A-BOT), who is shown the image. The objective of this fully-cooperative game is for Q-BOT to build a mental model of the unseen image purely from the natural language dialog, and then retrieve that image from a lineup of images. Notice that this is a challenging game. Q-BOT must ground the words mentioned in the provided caption ("Two zebra are walking around their pen at the zoo."), estimate | 1703.06585#7 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 8 | challenging game. Q-BOT must ground the words mentioned in the provided caption (“Two zebra are walking around their pen at the zoo.”), estimate which images from the provided pool contain this content (there will typically be many such images since captions describe only the salient entities), and ask follow-up questions (“Any people in the shot? Are there clouds in the sky? Are they facing each other?”) that help it identify the correct image. Analogously, A-BOT must build a mental model of what Q-BOT understands, and answer questions (“No, there aren’t any. I can’t see the sky. They aren’t.”) in a precise enough way to allow discrimination between similar images from a pool (that A-BOT does not have access to) while being concise enough to not confuse the imperfect Q-BOT. At every round of dialog, Q-BOT listens to the answer provided by A-BOT, updates its beliefs, and makes a prediction about the visual representation of the unseen image (specifically, the fc7 vector of I), and receives a reward from the environment based on how close Q-BOT’s prediction is to | 1703.06585#8 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 10 | selves, may not stay consistent in their responses, A-BOT does not have access to an external knowledge-base so it cannot answer all questions, etc. Thus, to succeed at the task, they must learn to play to each other’s strengths. An important question to ask is — why force the two agents to communicate in discrete symbols (English words) as opposed to continuous vectors? The reason is twofold. First, discrete symbols and natural language are interpretable. By forcing the two agents to communicate in and understand natural language, we ensure that humans can not only inspect the conversation logs between two agents, but more importantly, communicate with them. After the two bots are trained, we can pair a human questioner with A-BOT to accomplish the goals of visual dialog (aiding visually/situationally impaired users), and pair a human answerer with Q-BOT to play a visual 20-questions game. The second reason to communicate in discrete symbols is to prevent cheating — if Q-BOT and A-BOT are allowed to exchange continuous vectors, then the trivial solution is for A-BOT to ignore Q-BOT’s question and | 1703.06585#10 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 11 | and A-BOT are allowed to exchange continuous vectors, then the trivial solution is for A-BOT to ignore Q-BOT’s question and directly convey the fc7 vector for I, allowing Q-BOT to make a perfect prediction. In essence, discrete natural language is an interpretable low-dimensional “bottleneck” layer between these two agents. Contributions. We introduce a novel goal-driven training for visual question answering and dialog agents. Despite significant popular interest in VQA (over 200 works citing [1] since 2015), all previous approaches have been based on supervised learning, making this the first instance of goal-driven training for visual question answering / dialog. We demonstrate two experimental results. First, as a “sanity check” demonstration of pure RL (from scratch), we show results on a diagnostic task where perception is perfect — a synthetic world with “images” containing a single object defined by three attributes (shape/color/style). In this synthetic world, for Q-BOT to identify an image, it must learn about these attributes. The two bots communicate via an ungrounded vocabulary, i.e., | 1703.06585#11 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 12 | synthetic world, for Q-BOT to identify an image, it must learn about these attributes. The two bots communicate via an ungrounded vocabulary, i.e., symbols with no pre-specified human-interpretable meanings (“X”, “Y”, “1”, “2”). When trained end-to-end with RL on this task, we find that the two bots invent their own communication protocol — Q-BOT starts using certain symbols to query for specific attributes (“X” for color), and A-BOT starts responding with specific symbols indicating the value of that attribute (“1” for red). Essentially, we demonstrate the automatic emergence of grounded language and communication among “visual” dialog agents with no human supervision! Second, we conduct large-scale real-image experiments on the VisDial dataset [4]. With imperfect perception on real images, discovering a human-interpretable language and communication strategy from scratch is both tremendously difficult and an unnecessary re-invention of English. Thus, we pretrain with supervised dialog | 1703.06585#12 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 14 | in making deep RL converge to something meaningful. We show that these RL fine-tuned bots significantly outperform the supervised bots. Most interestingly, while the supervised Q-BOT attempts to mimic how humans ask questions, the RL-trained Q-BOT shifts strategies and asks questions that the A-BOT is better at answering, ultimately resulting in more informative dialog and a better team.
# 2. Related Work | 1703.06585#14 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 15 | Vision and Language. A number of problems at the intersection of vision and language have recently gained prominence, e.g., image captioning [6, 7, 13, 34], and visual question answering (VQA) [1, 9, 20, 21, 24]. Most related to this paper are two recent works on visually-grounded dialog [4, 5]. Das et al. [4] proposed the task of Visual Dialog, collected the VisDial dataset by pairing two subjects on Amazon Mechanical Turk to chat about an image (with assigned roles of “Questioner” and “Answerer”), and trained neural visual dialog answering models. De Vries et al. [5] extended the Referit game [14] to a “GuessWhat” game, where one person asks questions about an image to guess which object has been “selected”, and the second person answers questions in “yes”/“no”/NA (natural language answers are disallowed). One disadvantage of GuessWhat is that it requires bounding box annotations for objects; our image guessing game does not need any such annotations and thus an unlimited number of game plays may be | 1703.06585#15 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 16 | of GuessWhat is that it requires bounding box annotations for objects; our image guessing game does not need any such annotations and thus an unlimited number of game plays may be simulated. Moreover, as described in Sec. 1, both these works unnaturally treat dialog as a static supervised learning problem. Although both datasets contain thousands of human dialogs, they still only represent an incredibly sparse sample of the vast space of visually-grounded questions and answers. Training robust, visually-grounded dialog agents via supervised techniques is still a challenging task. In our work, we take inspiration from the AlphaGo [27] approach of supervision from human-expert games and reinforcement learning from self-play. Similarly, we perform supervised pretraining on human dialog data and fine-tune in an end-to-end goal-driven manner with deep RL. 20 Questions and Lewis Signaling Game. Our proposed image-guessing game is naturally the visual analog of the popular 20-questions game. More formally, it is a generalization of the Lewis Signaling (LS) [17] game, widely studied in economics and game theory. LS is a cooperative game between two players — a sender | 1703.06585#16 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 17 | ization of the Lewis Signaling (LS) [17] game, widely studied in economics and game theory. LS is a cooperative game between two players — a sender and a receiver. In the classical setting, the world can be in a number of finite discrete states {1, 2, ..., N}, which is known to the sender but not the receiver. The sender can send one of N discrete symbols/signals to the receiver, who upon receiving the signal must take one of N discrete actions. The game is perfectly cooperative, and one simple (though not unique) Nash Equilibrium is the “identity mapping”, where the sender encodes each world state with a bijective signal, and similarly the | 1703.06585#17 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 19 | receiver has a bijective mapping from a signal to an action. Our proposed “image guessing” game is a generalization of LS with Q-BOT being the receiver and A-BOT the sender. However, in our proposed game, the receiver (Q-BOT) is not passive. It actively solicits information by asking questions. Moreover, the signaling process is not “single shot”, but proceeds over multiple rounds of conversation. Text-only or Classical Dialog. Li et al. [18] have proposed using RL for training dialog systems. However, they hand-define what a “good” utterance/dialog looks like (non-repetition, coherence, continuity, etc.). In contrast, taking a cue from adversarial learning [10, 19], we set up a cooperative game between two agents, such that we do not need to hand-define what a “good” dialog looks like — a “good” dialog is one that leads to a successful image-guessing play. Emergence of Language. There is a long history of work on language emergence in multi-agent systems [23]. The more recent resurgence has focused on deep RL | 1703.06585#19 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
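The Lewis Signaling game described in the two chunks above is simple enough to simulate directly. Below is a minimal sketch (our own illustration, not from the paper) of the bijective "identity mapping" equilibrium: the sender encodes each of N world states with a distinct signal, the receiver inverts the code, and the shared payoff is always 1. The names `sender_code` and `receiver_act` are hypothetical.

```python
# Minimal sketch of the Lewis Signaling (LS) game: N discrete world states,
# N signals, N actions. Any bijective code is a Nash equilibrium as long as
# the receiver applies its inverse. Illustrative only; not from the paper.
import random

N = 5
states = list(range(N))

sender_code = {s: (s + 2) % N for s in states}         # state -> signal (a bijection)
receiver_act = {v: k for k, v in sender_code.items()}  # signal -> action (its inverse)

def play_round():
    state = random.choice(states)      # known to the sender, hidden from the receiver
    signal = sender_code[state]        # sender communicates one discrete symbol
    action = receiver_act[signal]      # receiver acts on the received signal
    return int(action == state)        # cooperative payoff: 1 iff the state is matched

assert all(play_round() == 1 for _ in range(100))      # this protocol never fails
```

The proposed image-guessing game generalizes this setting: the receiver (Q-BOT) actively queries instead of listening passively, and signaling unfolds over multiple rounds rather than a single shot.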
1703.06585 | 20 | play. Emergence of Language. There is a long history of work on language emergence in multi-agent systems [23]. The more recent resurgence has focused on deep RL [8, 11, 16, 22]. The high-level ideas of these concurrent works are similar to our synthetic experiments. For our large-scale real-image results, we do not want our bots to invent their own uninterpretable language, and we use pretraining on VisDial [4] to achieve “alignment” with English. | 1703.06585#20 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 22 | Players and Roles. The game involves two collaborative agents — a questioner bot (Q-BOT) and an answerer bot (A-BOT) — with an information asymmetry. A-BOT sees an image I, Q-BOT does not. Q-BOT is primed with a 1-sentence description c of the unseen image and asks “questions” (sequences of discrete symbols over a vocabulary V), which A-BOT answers with another sequence of symbols. The communication occurs for a fixed number of rounds. Game Objective in General. At each round, in addition to communicating, Q-BOT must provide a “description” ŷt of the unknown image I based only on the dialog history, and both players receive a reward from the environment inversely proportional to the error in this description under some metric ℓ(ŷt, y^gt). We note that this is a general setting where the “description” ŷ can take on varying levels of specificity — from image embeddings (or fc7 vectors of I) to textual descriptions to pixel-level image generations. Specific Instantiation. In our experiments, we focus on the setting where Q-BOT is tasked with | 1703.06585#22 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 23 | of I) to textual descriptions to pixel-level image generations. Specific Instantiation. In our experiments, we focus on the setting where Q-BOT is tasked with estimating a vector embedding of the image I. Given some feature extractor (i.e., a pretrained CNN model, say VGG-16), no human annotation is required to produce the target “description” y^gt (simply forward-prop the image through the CNN). Reward/error can be measured by simple Euclidean distance, and any image may be used as the visual grounding for a dialog. Thus, an unlimited number of “game plays” may be simulated. | 1703.06585#23 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
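Since the target "description" is just a CNN embedding, the game's supervision can be generated automatically. The sketch below shows one plausible way to produce y^gt and the Euclidean error ℓ(·,·) with a pretrained VGG-16 in PyTorch; the fc7 slicing and the random-tensor "images" are our assumptions, not the authors' released code.

```python
# Hedged sketch: target embedding y_gt from a pretrained CNN, plus the
# Euclidean metric l(., .). Assumes torchvision's VGG-16; real use would
# feed normalized 224x224 RGB crops instead of random tensors.
import torch
import torchvision.models as models

vgg = models.vgg16(pretrained=True).eval()   # downloads weights on first use
fc7 = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])  # keep up to fc7 (4096-d)

def target_embedding(images):                # images: (B, 3, 224, 224)
    with torch.no_grad():
        feats = vgg.avgpool(vgg.features(images)).flatten(1)
        return fc7(feats)                    # (B, 4096) plays the role of y_gt

def error(y_pred, y_gt):                     # the distance metric l(y_pred, y_gt)
    return torch.norm(y_pred - y_gt, dim=-1)

y_gt = target_embedding(torch.randn(2, 3, 224, 224))  # any image can ground a game
print(error(torch.zeros_like(y_gt), y_gt))
```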
1703.06585 | 25 | In this section, we formalize the training of two visual dialog agents (Q-BOT and A-BOT) with Reinforcement Learning (RL) — describing formally the action, state, environment, reward, policy, and training procedure. We begin by noting that although there are two agents (Q-BOT, A-BOT), since the game is perfectly cooperative, we can without loss of generality view this as a single-agent RL setup where the single “meta-agent” comprises two “constituent agents” communicating via a natural language bottleneck layer. Action. Both agents share a common action space consisting of all possible output sequences under a token vocabulary V. This action space is discrete and, in principle, infinitely large since arbitrary-length sequences qt, at may be produced and the dialog may go on forever. In our synthetic experiment, the two agents are given different vocabularies to coax a certain behavior to emerge (details in Sec. 5). In our VisDial experiments, the two agents share a common vocabulary of English tokens. In addition, at each round of the dialog t, Q-BOT also predicts ŷt, its current guess about | 1703.06585#25 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 26 | two agents share a common vocabulary of English tokens. In addition, at each round of the dialog t, Q-BOT also predicts ŷt, its current guess about the visual representation of the unseen image. This component of Q-BOT’s action space is continuous. State. Since there is information asymmetry (A-BOT can see the image I, Q-BOT cannot), each agent has its own observed state. For a dialog grounded in image I with caption c, the state of Q-BOT at round t is the caption and dialog history so far, s_t^Q = [c, q1, a1, ..., q_{t−1}, a_{t−1}], and the state of A-BOT also includes the image: s_t^A = [I, c, q1, a1, ..., q_{t−1}, a_{t−1}]. Policy. We model Q-BOT and A-BOT operating under stochastic policies π_Q(q_t | s_t^Q; θ_Q) and π_A(a_t | s_t^A; θ_A), such that questions and answers may be sampled from these policies conditioned on the dialog/state history. These policies will be learned by two separate deep neural networks parameterized by θ_Q and θ_A. In addition, Q-BOT | 1703.06585#26 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
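The information asymmetry in the state definitions above is easy to see when spelled out. A tiny, purely illustrative rendering (plain tuples instead of encoded sequences) follows; the token handling is a stand-in, not the models' actual input format.

```python
# Illustrative only: Q-BOT's observed state is the caption plus the dialog
# so far; A-BOT's state additionally contains the image.
def qbot_state(caption, dialog):
    # s_t^Q = [c, q1, a1, ..., q_{t-1}, a_{t-1}]
    return (caption, *[item for qa in dialog for item in qa])

def abot_state(image, caption, dialog):
    # s_t^A = [I, c, q1, a1, ..., q_{t-1}, a_{t-1}]
    return (image, *qbot_state(caption, dialog))

dialog = [("any people ?", "no"), ("is it sunny ?", "yes")]
print(qbot_state("two zebras at the zoo", dialog))
print(abot_state("<image>", "two zebras at the zoo", dialog))
```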
1703.06585 | 27 | history. These policies will be learned by two separate deep neural networks parameterized by θ_Q and θ_A. In addition, Q-BOT includes a feature regression network f(·) that produces an image representation prediction after listening to the answer at round t, i.e., ŷ_t = f(s_t^Q, q_t, a_t; θ_f) = f(s_{t+1}^Q; θ_f). Thus, the goal of policy learning is to estimate the parameters θ_Q, θ_A, θ_f. Environment and Reward. The environment is the image I upon which the dialog is grounded. Since this is a purely cooperative setting, both agents receive the same reward. Let ℓ(·,·) be a distance metric on image representations (Euclidean distance in our experiments). At each round t, we define the reward for a state-action pair as: | 1703.06585#27 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 28 | r_t( s_t^Q , (q_t, a_t, y_t) ) = ℓ(ŷ_{t−1}, y^{gt}) − ℓ(ŷ_t, y^{gt})   (1)   [state; action; distance at t−1; distance at t]
i.e., the change in distance to the true representation before and after a round of dialog. In this way, we consider a question-answer pair to be low quality (i.e., have a negative reward) if it leads the questioner to make a worse estimate of
the target image representation than if the dialog had ended. Note that the total reward summed over all time steps of a dialog is a function of only the initial and final states due to the cancellation of intermediate terms, i.e.,
Σ_{t=1}^{T} r_t( s_t^Q , (q_t, a_t, y_t) ) = ℓ(ŷ_0, y^{gt}) − ℓ(ŷ_T, y^{gt})   (2)   [overall improvement due to dialog]
This is again intuitive — “How much do the feature predictions of Q-BOT improve due to the dialog?” The details of policy learning are described in Sec. 4.2, but before that, let us describe the inner working of the two agents.
# 4.1. Policy Networks for Q-BOT and A-BOT | 1703.06585#28 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
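Equation (2) is a pure telescoping identity, which a few lines of NumPy can confirm numerically; the toy vectors below are arbitrary and stand in for the image embeddings.

```python
# Numeric check of Eqs. (1)-(2): per-round rewards
# r_t = l(yhat_{t-1}, y_gt) - l(yhat_t, y_gt) sum to the overall improvement
# l(yhat_0, y_gt) - l(yhat_T, y_gt), since intermediate terms cancel.
import numpy as np

rng = np.random.default_rng(0)
y_gt = rng.normal(size=4096)                           # target embedding
guesses = [rng.normal(size=4096) for _ in range(11)]   # yhat_0 ... yhat_10

def l(a, b):                                           # Euclidean metric
    return float(np.linalg.norm(a - b))

rewards = [l(guesses[t - 1], y_gt) - l(guesses[t], y_gt) for t in range(1, 11)]
overall = l(guesses[0], y_gt) - l(guesses[-1], y_gt)
assert abs(sum(rewards) - overall) < 1e-9              # telescoping sum
```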
1703.06585 | 29 | # 4.1. Policy Networks for Q-BOT and A-BOT
Fig. 2 shows an overview of our policy networks for Q-BOT and A-BOT and their interaction within a single round of dialog. Both the agent policies are modeled via Hierarchical Recurrent Encoder-Decoder neural networks, which have recently been proposed for dialog modeling [4, 25, 26]. Q-BOT consists of the following four components:
- Fact Encoder: Q-BOT asks a question qt: “Are there any animals?” and receives an answer at: “Yes, there are two elephants.” Q-BOT treats this concatenated (qt, at)-pair as a “fact” it now knows about the unseen image. The fact encoder is an LSTM whose final hidden state F_t^Q ∈ R^512 is used as an embedding of (qt, at).
- State/History Encoder is an LSTM that takes the encoded fact F_t^Q at each time step to produce an encoding of the prior dialog including time t as S_t^Q ∈ R^512. Notice that this results in a two-level hierarchical encoding of the dialog (qt, at) → F_t^Q | 1703.06585#29 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
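The two-level hierarchy described above (tokens into a fact embedding, facts into a dialog state) can be sketched with two stacked LSTMs. The snippet below is a schematic reimplementation under assumed sizes (512-d states per the text; vocabulary and embedding sizes are our choices), not the authors' code.

```python
# Sketch of Q-BOT's hierarchical encoding: an LSTM embeds each (q_t, a_t)
# fact as F_t, and a second LSTM over the fact sequence yields state S_t.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 5000, 300, 512   # HID matches the 512-d encodings in the text

class FactEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)

    def forward(self, qa_tokens):              # (B, L): concatenated q_t ; a_t tokens
        _, (h, _) = self.lstm(self.embed(qa_tokens))
        return h[-1]                           # F_t^Q: (B, 512)

class HistoryEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(HID, HID, batch_first=True)

    def forward(self, facts):                  # (B, t, 512): F_1^Q ... F_t^Q
        _, (h, _) = self.lstm(facts)
        return h[-1]                           # S_t^Q: (B, 512)

fact_enc, hist_enc = FactEncoder(), HistoryEncoder()
F = torch.stack([fact_enc(torch.randint(0, VOCAB, (2, 12))) for _ in range(3)], dim=1)
print(hist_enc(F).shape)                       # torch.Size([2, 512]) after 3 rounds
```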
1703.06585 | 30 | - Question Decoder is an LSTM that takes the state/history encoding from the previous round S_{t−1}^Q and generates question qt by sequentially sampling words.
- Feature Regression Network f(·) is a single fully-connected layer that produces an image representation prediction ŷt from the current encoded state: ŷt = f(S_t^Q).
Each of these components and their relation to each other are shown on the left side of Fig. 2. We collectively refer to the parameters of the three LSTM models as θQ and those of the feature regression network as θf. A-BOT has a similar structure to Q-BOT with slight differences since it also models the image I via a CNN:
- Question Encoder: A-BOT receives a question qt from Q-BOT and encodes it via an LSTM into Q_t^A ∈ R^512.
- Fact Encoder: Similar to Q-BOT, A-BOT also encodes the (qt, at)-pairs via an LSTM to get F_t^A ∈ R^512. The purpose of this encoder is for A-BOT to remember what it has already told Q-BOT and be able to understand references to entities already mentioned. | 1703.06585#30 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 31 | [Figure 2 schematic: Q-BOT’s question decoder, history encoder, fact encoder, and feature regression network exchange rounds of dialog (“Are there any animals?” / “Yes, there are two elephants.”) with A-BOT’s question encoder, history encoder, answer decoder, and fact embedding; Q-BOT’s predicted feature vector feeds the reward function.]
Figure 2: Policy networks for Q-BOT and A-BOT. At each round t of dialog, (1) Q-BOT generates a question qt from its question decoder conditioned on its state encoding S_{t−1}^Q, (2) A-BOT encodes qt, updates its state encoding S_t^A, and generates an answer at, (3) both encode the completed exchange as F_t^Q and F_t^A, and (4) Q-BOT updates its state to S_t^Q, predicts an image representation ŷt, and receives a reward.
- State/History Encoder is an LSTM that takes as input at each round t — the encoded question Q_t^A, the image features from VGG [28] y, and the previous fact encoding F_{t−1}^A — to produce a state encoding S_t^A. This allows the model to contextualize the current question w.r.t. the history while looking at the image to seek an answer. | 1703.06585#31 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
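A-BOT's state update fuses three inputs per round. One plausible reading of the description above is concatenation followed by a one-step LSTM update, sketched below; the 4096-d image feature and the fusion-by-concatenation are assumptions consistent with the text, not a verified reproduction.

```python
# Sketch of A-BOT's state/history encoder: at round t, concatenate the
# encoded question Q_t^A, VGG image features y, and previous fact F_{t-1}^A,
# then advance a single-step LSTM to get S_t^A.
import torch
import torch.nn as nn

HID, IMG = 512, 4096

class AbotStateEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(HID + IMG + HID, HID, batch_first=True)

    def forward(self, q_enc, img_feat, prev_fact, state=None):
        x = torch.cat([q_enc, img_feat, prev_fact], dim=-1).unsqueeze(1)
        out, state = self.lstm(x, state)       # one LSTM step per dialog round
        return out.squeeze(1), state           # S_t^A: (B, 512), carried state

enc = AbotStateEncoder()
S, carry = enc(torch.randn(2, HID), torch.randn(2, IMG), torch.randn(2, HID))
print(S.shape)                                 # torch.Size([2, 512])
```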
1703.06585 | 32 | While the above is a natural objective, we find that considering the entire dialog as a single RL episode does not differentiate between individual good or bad exchanges within it. Thus, we update our model based on per-round rewards,
J(θ_A, θ_Q, θ_f) = E_{π_Q, π_A} [ r_t( s_t^Q , (q_t, a_t, y_t) ) ]   (5)
- Answer Decoder is an LSTM that takes the state encoding S_t^A and generates at by sequentially sampling words.
Following the REINFORCE algorithm, we can write the gradient of this expectation as an expectation of a quantity related to the gradient. For θQ, we derive this explicitly:
Our code will be publicly available. To recap, a dialog round at time t consists of 1) Q-BOT generating a question qt conditioned on its state encoding S_{t−1}^Q, 2) A-BOT encoding qt, updating its state encoding S_t^A, and generating an answer at, 3) Q-BOT and A-BOT both encoding the completed exchange as F_t^Q and F_t^A, and 4) Q-BOT updating its state to S_t^Q based on F_t^Q and making an image representation prediction ŷt for the unseen image. | 1703.06585#32 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
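The four-step recap above fixes the data flow of one round. A runnable schematic with dummy agents (fixed strings and a zero vector; only the ordering mirrors the text) may make it easier to follow:

```python
# Data-flow sketch of one dialog round: ask -> answer -> encode fact -> predict.
# The agents are stubs; real models would decode with LSTMs and use Eq. (1).
import numpy as np

class DummyQbot:
    def __init__(self): self.history = []                 # stands in for S_t^Q
    def ask(self): return "are there any animals ?"       # 1) decode q_t
    def observe(self, q, a): self.history.append((q, a))  # 3) encode fact F_t^Q
    def predict(self): return np.zeros(4096)              # 4) regress yhat_t

class DummyAbot:
    def __init__(self): self.history = []                 # stands in for S_t^A
    def answer(self, q):                                  # 2) update state, decode a_t
        self.history.append(q)
        return "yes , two elephants"
    def observe(self, q, a): self.history.append((q, a))  # 3) encode fact F_t^A

qbot, abot = DummyQbot(), DummyAbot()
for t in range(10):                                       # a fixed number of rounds
    q = qbot.ask()
    a = abot.answer(q)
    qbot.observe(q, a); abot.observe(q, a)
    y_hat = qbot.predict()                                # reward compares y_hat to y_gt
```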
1703.06585 | 33 | ∇_{θ_Q} J = ∇_{θ_Q} E_{π_Q, π_A}[ r_t(·) ]   (r_t inputs hidden to avoid clutter)   = ∇_{θ_Q} [ Σ_{q_t, a_t} π_Q(q_t | s_{t−1}^Q) π_A(a_t | s_t^A) r_t(·) ]   = Σ_{q_t, a_t} π_Q(q_t | s_{t−1}^Q) ∇_{θ_Q} log π_Q(q_t | s_{t−1}^Q) π_A(a_t | s_t^A) r_t(·)   = E_{π_Q, π_A} [ r_t(·) ∇_{θ_Q} log π_Q(q_t | s_{t−1}^Q) ]   (6)
# 4.2. Joint Training with Policy Gradients
Similarly, the gradient w.r.t. θ_A, i.e., ∇_{θ_A} J, can be derived as
In order to train these agents, we use the REINFORCE [35] algorithm that updates policy parameters (θ_Q, θ_A, θ_f) in response to experienced rewards. In this section, we derive the expressions for the parameter gradients for our setup. Recall that our agents take actions — communication (q_t, a_t) and feature prediction ŷ_t — and our objective is to maximize the expected reward under the agents’ policies, summed over the entire dialog:
max_{θ_A, θ_Q, θ_f} J(θ_A, θ_Q, θ_f), where   (3)
∇_{θ_A} J = E_{π_Q, π_A} [ r_t(·) ∇_{θ_A} log π_A(a_t | s_t^A) ]. | 1703.06585#33 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
As is standard practice, we estimate these expectations with sample averages. Specifically, we sample a question from Q-BOT (by sequentially sampling words from the question decoder LSTM till a stop token is produced), sample its answer from A-BOT, compute the scalar reward for this round, multiply that scalar reward by the gradient of the log-probability of this exchange, and propagate backward to compute gradients w.r.t. all parameters θQ, θA. This update has an intuitive interpretation – if a particular (q_t, a_t) is informative (i.e., leads to positive reward), its probability will be pushed up (positive gradient). Conversely, a poor exchange leading to negative reward will be pushed down (negative gradient).
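A minimal sketch of the resulting update for one round, in PyTorch-style pseudocode (ours; `qbot`, `abot`, and `reward_fn` are assumed interfaces, not the authors' released code):

```python
import torch

def reinforce_round(qbot, abot, reward_fn, optimizer):
    """One round's REINFORCE update (our sketch, with assumed interfaces).

    `sample()` is assumed to return a sampled token sequence together with
    the summed log-probability of that sequence under the decoder.
    """
    question, q_logprob = qbot.sample()        # q_t ~ pi_Q, with log pi_Q(q_t)
    answer, a_logprob = abot.sample(question)  # a_t ~ pi_A, with log pi_A(a_t)
    reward = reward_fn(question, answer)       # scalar r_t; no gradient through it

    # Policy-gradient surrogate: minimizing -r * log-prob pushes up the
    # probability of positively rewarded exchanges and down otherwise.
    loss = -(reward * (q_logprob + a_logprob))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```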
Finally, since the feature regression network f(·) forms a deterministic policy, its parameters θf receive 'supervised' gradient updates for differentiable ℓ(·, ·).
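Concretely, because the prediction ŷ_t = f(·) is deterministic and differentiable, the ℓ2 loss to the true feature backpropagates into θf directly, with no sampling. A minimal sketch (ours; the module and dimensions are stand-ins, with 4096 chosen to match a VGG-16 fc7 feature):

```python
import torch
import torch.nn as nn

# Minimal sketch (ours): the feature-regression head is an ordinary
# differentiable module, so it trains with plain supervised gradients.
f = nn.Linear(512, 4096)                 # maps Q-BOT state -> predicted image feature
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

state = torch.randn(1, 512)              # stand-in for Q-BOT's state encoding s_t^Q
y_true = torch.randn(1, 4096)            # stand-in for the target image feature y

y_pred = f(state)                        # deterministic prediction y_hat_t
loss = ((y_true - y_pred) ** 2).sum()    # differentiable l2 loss l(y, y_hat_t)

opt.zero_grad()
loss.backward()                          # gradients flow straight into theta_f
opt.step()
```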
Figure 3: Emergence of grounded dialog: (a) Each 'image' has three attributes, and there are six tasks for Q-BOT (ordered pairs of attributes). (b) Both agents interact for two rounds followed by attribute-pair prediction by Q-BOT. (c) Example 2-round dialog where grounding emerges: color, shape, and style have been encoded as X, Y, and Z respectively. (d) Improvement in reward while policy learning.
To succeed at our image guessing game, Q-BOT and A-BOT need to accomplish a number of challenging sub-tasks – they must learn a common language (do you understand what I mean when I say 'person'?) and develop mappings between symbols and image representations (what does 'person' look like?), i.e., A-BOT must learn to ground language in visual perception to answer questions and Q-BOT must learn to predict plausible image representations – all in an end-to-end manner from a distant reward function. Before diving in to the full task on real images, we conduct a 'sanity check' on a synthetic dataset with perfect perception to ask – is this even possible?

Setup. As shown in Fig. 3, we consider a synthetic world with 'images' represented as a triplet of attributes – 4 shapes, 4 colors, 4 styles – for a total of 64 unique images. A-BOT has perfect perception and is given direct access to this representation for an image.
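For concreteness, the synthetic world and task set can be enumerated as follows (our sketch; attribute values other than those named in the text are invented placeholders):

```python
from itertools import product, permutations

# Our sketch of the synthetic world: 4 values per attribute, 64 "images".
ATTRIBUTES = {
    "shape": ["triangle", "square", "circle", "star"],   # last two are assumed
    "color": ["purple", "green", "blue", "red"],
    "style": ["filled", "solid", "dashed", "dotted"],    # last two are assumed
}

images = list(product(*ATTRIBUTES.values()))             # 4 * 4 * 4 = 64 triplets
tasks = list(permutations(ATTRIBUTES.keys(), 2))         # 6 ordered attribute pairs

print(len(images), len(tasks))   # -> 64 6
print(tasks)                     # e.g. ('shape', 'color'), ('color', 'shape'), ...
```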
Q-BOT is tasked with deducing two attributes of the image in a particular order – e.g., if the task is (shape, color), Q-BOT would need to output (square, purple) for a (purple, square, filled) image seen by A-BOT (see Fig. 3b). We form all 6 such tasks per image.

Vocabulary. We conducted a series of pilot experiments and found the choice of the vocabulary size to be crucial for coaxing non-trivial 'non-cheating' behavior to emerge. For instance, we found that if the A-BOT vocabulary V_A is large enough, say |V_A| ≥ 64 (#images), the optimal policy learnt simply ignores what Q-BOT asks and A-BOT conveys the entire image in a single token (e.g. token 1 ≡ (red, square, filled)). As with human communication, an impoverished vocabulary that cannot possibly encode the richness of the visual sensor is necessary for non-trivial dialog to emerge. To ensure at least 2 rounds of dialog, we restrict each agent to only produce a single-symbol utterance per round from 'minimal' vocabularies V_A = {1, 2, 3, 4} and V_Q = {X, Y, Z} (see the counting argument below).
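To spell out the arithmetic behind 'at least 2 rounds' (our gloss, using the vocabularies just stated):

```latex
% One answer symbol distinguishes at most |V_A| = 4 outcomes, but a task
% requires identifying one of 4 x 4 = 16 ordered attribute-value pairs:
|V_A| = 4 \;<\; 16 = 4 \times 4 .
% Two rounds give 4^2 = 16 possible answer sequences, which is just enough
% when each round queries one attribute:
|V_A|^2 = 16 \;\geq\; 16 .
```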
With these impoverished vocabularies, a non-trivial dialog is necessary to succeed at the task.

Policy Learning. Since the action space is discrete and small, we instantiate Q-BOT and A-BOT as fully specified tables of Q-values (state, action, future reward estimate) and apply tabular Q-learning with Monte Carlo estimation over 10k episodes to learn the policies. Updates are done alternately, where one bot is frozen while the other is updated. During training, we use ε-greedy policies [29], ensuring an action probability of 0.6 for the greedy action and splitting the remaining probability uniformly across the other actions; see the sketch below. At test time, we default to the greedy, deterministic policy obtained from these ε-greedy policies. The task requires outputting the correct attribute-value pair based on the task and image. Since there are a total of 4 + 4 + 4 = 12 unique values across the 3 attributes, Q-BOT's final action selects one of 12 × 12 = 144 attribute pairs. We use +1 and −1 as rewards for right and wrong predictions.
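A sketch of the stated exploration scheme (ours):

```python
import random

GREEDY_P = 0.6   # probability mass on the greedy action during training

def epsilon_greedy(q_row, train=True):
    """Pick an action from one row of a tabular Q-function (our sketch).

    Greedy action with probability 0.6 during training, remaining 0.4 split
    uniformly over the other actions; purely greedy at test time.
    """
    actions = list(range(len(q_row)))
    greedy = max(actions, key=lambda a: q_row[a])
    if not train or random.random() < GREEDY_P:
        return greedy
    others = [a for a in actions if a != greedy]
    return random.choice(others)

print(epsilon_greedy([0.1, 0.9, 0.3]))          # usually 1 during training
print(epsilon_greedy([0.1, 0.9, 0.3], False))   # always 1 at test time
```

This keeps exactly 0.6 probability on the greedy action during training while still visiting every other action.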
Results. Fig. 3d shows the reward achieved by the agents' policies vs. the number of RL iterations (each with 10k episodes/dialogs). We can see that the two quickly learn the optimal policy. Fig. 3b,c show some example exchanges between the trained bots. We find that the two invent their own communication protocol – Q-BOT consistently uses specific symbols to query for specific attributes: X → color, Y → shape, Z → style. And A-BOT consistently responds with specific symbols to indicate the inquired attribute, e.g., if Q-BOT emits X (asks for color), A-BOT responds with: 1 → purple, 2 → green, 3 → blue, 4 → red. Similar mappings exist for responses to other attributes. Essentially, we find the automatic emergence of grounded language and a communication protocol among 'visual' dialog agents without any human supervision!
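Written out as lookup tables, the reported protocol is simply:

```python
# The emergent protocol reported above, rendered as lookup tables (ours).
QBOT_QUERY_MEANING = {"X": "color", "Y": "shape", "Z": "style"}
ABOT_COLOR_REPLY = {1: "purple", 2: "green", 3: "blue", 4: "red"}
# Analogous reply tables emerge for the shape and style queries.
```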
Image + Caption
Human-Human dialog [4]: 1 man 1 woman What are their genders? Yes Are they both adults? Looks like sunglasses Do they wear goggles? Man does Do they have hats on? No Are there any other people? Black What color is man's hat? Is it snowing now? No What is woman wearing? Blue jacket and black pants Yes Are they smiling? Yes Do you see trees?
SL-pretrained Q-BOT-A-BOT dialog: 2 How many people? Male or female? I can't tell I can't tell, they are far away What are they wearing? I can't tell What color are skis? Yes Are they wearing goggles? Yes Are they wearing goggles? Yes Are they wearing goggles? Yes Are they wearing goggles? Yes Are they wearing hats? Yes Are they wearing goggles?
RL-full-QAf Q-BOT-A-BOT dialog: Are people male or female or both? What are they wearing? Is it snowing? Can you see any trees? Can you see any buildings in background? Can you see any trees? Can you see any buildings in background? Can you see any trees? Can you see any buildings in background? Does it look like they are in park?
I think 1 is male, can't see other 1 ... They are all wearing snow pants and jackets It does not appear to be snowing at moment Yes, there are lot of trees in background No, I can not Yes, there are lot of trees in background No, I do not see any buildings Yes, there are lot of trees in background No, I do not see any buildings It does not appear to be

A couple of people in the snow on skis.

They look to be 30's How old are men? I can't tell, its close up Are they in restaurant? Maybe Do men look they might be related? Are they each holding slice of pizza in 1 hand and giving thumbs up with other? 1 holding slice, other holding box... cheese What kind of pizza do they have? No Does box have pizza company name on it? No Are there any pictures or graphics on box? Slender Are men slender or
heavy set? Yes Do either of men wear glasses? 1 1 or both? How old are men? Are they male or female? What are they wearing? What kind of pizza is it? Does it look good? Can you tell what kind of pizza it is? Are there any toppings on pizza? Are there any toppings on pizza? Are there any toppings on pizza? Are there any other toppings on pizza? They look like they are in their 20's 1 male and 1 female T shirt and shorts Looks like pepperoni Yes it does Looks like pepperoni Pepperoni Pepperoni Looks like cheese, but I can't tell... No

How old are men? Are they wearing chef's hat? Do they have long hair? What kind of pizza is it? Does it look good? Are there any other people around? Are they inside or outside? Can you tell what time of day it is? Are there any other people around? What color are walls? They look like they are in their 20's No No Looks like pepperoni Yes No Inside No No White

Two men giving thumbs up holding large slices of pizza.
How old is man? Is he wearing helmet? Is he wearing any protective gear? What color is his hair? What color is his shirt? Is he wearing shorts? What color are his shorts? Are there other people visible? What color are his shoes? Is this at skate park? In his teens No No Dirty blonde or dark brown Black with red letters Yes Denim No I can't see them Yes

How old is man? What is he wearing? What color is his shirt? What color are his pants? What color is his shirt? What color are his shoes? What color are his shoes? What color are his shoes? What color are his shoes? What color are his shoes? I can't see his face, I don't know shorts and t shirt white black blue Black Black Black Black Black

What color is skateboard? Is he wearing helmet? Is he wearing knee pads? Is it day or night? Is it sunny? Can you see any other people? Is it day or night? Is it sunny? What color is skateboard? Is he wearing helmet? It is hard to tell
but I think it's black Yes No, no knee pads It is daytime It is bright out, but I see no shadows, so I can not There are 2 other people in background It is daytime Yes, it is sunny It is hard to tell but I think it's black Yes, he is wearing helmet

A man riding a skateboard down the side of a ramp.

What color is plane? Is there writing on plane? What color is ground? Any people? Can you see airport? What color is it? Is there sunshine? Any other planes? Any grass? Any trees? White and blue Yes it says lufthansa Grey tarmac No people visible Yes Grey No it's kind of overcast No other planes No grass No tree

What color is plane? Is it large or small? Any people? Any writing on plane? Any writing on plane? Any writing on plane? Any writing on plane? What does it say? What color is plane? Any writing on it? White Large No Yes Yes Yes Yes UNK White Yes

What color is airplane? Can you tell what airline it is? What time of day is it? Can you see any people?
Can you see any buildings? Is it day or night time? Can you see sky? Are there clouds in sky? Does it look like it's going to rain? Does it look like it's going to rain? White with red and blue stripes No, it's too far away It is daytime No Yes, in background It is daytime Yes Yes, few Yes it does Yes it does

An airplane sitting on an asphalt landing strip.

What color is dog? What breed is dog? Is dog wearing collar? Can you tell what is cooking? Is oven stainless steel? Does it have see through glass? How many racks in oven? Can you see knobs? Does oven have wooden handle? Are beans in pot? Brown and white Look like brittany spaniel Not able to see Look like beans Yes Yes 2 No Not able to see Glass dish

What kind of dog is it? What color is it? Is it large or small? What kind of food is it? Is it large or small? What color is it? Is it large or small? What color are walls? What kind of floor? What kind of floor?
I can't tell White Large I can't tell Small White Large White Can't see floor Can't see floor

What kind of dog is it? What color is it? Can you tell what kind of food it is? Are there people around? Is it inside or outside? Can you tell what time of day it is? What color is room? Do you see any windows? Do you see any windows? Do you see any windows? Terrier of some kind White and grey No No people Inside No I can't I can't see walls No No No

A dog watching food cook in an oven.

Are there more people in this photo or just man? No Just tennis player man in this photo? Yes Can't tell, it's close up Is it outdoor court? White shirt and shorts What is man wearing? No Can you see net? Yes Can you see bottom of court? Yes Can you see ball in photo? Yes Is it close to man's racquet? Yes Is man about to take swing? Right Is he left handed or right handed?

How old is man? What color is his hair? What is he wearing?
What color is his shirt? What color is his shirt? What color is his shirt? What color is his shirt? What color is his shirt? What color is his shirt? What color is his shirt? I can't see his face, I don't know Brown Shorts and t shirt White White White White White White White

How old is man? What color is his hair? Is he wearing glasses? Is he wearing hat? Is he wearing glasses? Are there other people around? Can you see net? What color is court? Is it sunny? Do you see any trees? He looks to be in his 20's His hair is brown No he is not No he is not No he is not No it's just him No I can't Court is green Yes it is No I do not

A man prepares to swing at the tennis ball.

Is giraffe adult? Is grass at higher altitude than its feet? Is rock big? How is weather? What color is rock? Is grass green or dry? Does giraffe look healthy? Does giraffe look happy? Is giraffe laying down? Is giraffe's mouth closed? I
cannot tell, I only see face No Yes Sunny Tan color Green but looks little dry Yes It looks tired Yes Yes

Is there only 1 giraffe? Are there any other animals? Are there any people? Are there any other animals? Is it day or night? Is it sunny? What time of day it is? Are there any other animals? Are there any trees? Are there any other animals? Yes No No No Day Yes I can't tell No No No

Is there only 1 giraffe? Are there any other animals? Are there any people? Does this look like zoo? Is there fence? Is it day or night? Is it sunny? What time of day is it? Are there any trees? Can you see sky? Yes No No Can't tell No Day Yes Morning No No

A giraffe is resting its head in the grass near a rock.

Are there any people in picture? people Is cheese white? Are crackers white? Are there lots of greens? Was picture taken in kitchen? zoomed in How many crackers are there? Is there design on plate? Is there good lighting? Is plate on table?
background Does cracker look delicious? No, there are no Yes, cheese is white No, cracker is brownish No, only 1 sprig I cannot tell, it is too There is 1 No, plate is plane white Yes, lighting is decent I cannot tell I can only see plate as

Are there any people? What color is plate? Is plate on table? What color is plate? Is plate on table? Is there anything else on table? Is there anything else on table? Is there anything else on table? Is there anything else on table? Is there anything else on table? No White I can't tell White I can't tell No No No No No

Is plate large or small? Is it on table? What kind of donut? What kind of chips? Any people? Is this inside or outside? What color is table? Any chairs? What color is table? Is it large or small? Large I can't tell Plain with sugar on it Potato chips No I can't tell, it's close up White No I can't tell I can't see plate Absolutely

A cracker with shredded cheese and leafy greens on a white plate.
He looks like late teens How old is this man? Medium Does he have short hair or long hair? Brown What color is his hair? No Is he wearing glasses? No Does he have facial hair? How is he dressed? He's white dress shirt with red tie and matching red vest over it looks like school uniform Is he larger man or is he skinny? He's skinny Is this picture taken outside? Inside Can you tell if it's daytime? I'm not sure Can you tell what room he is standing in? He's in hallway

How old is man? What is he wearing? What color is his hair? What is he wearing? What color is his hair? What is he wearing? What color is his hair? Is it short or long? What color is his hair? Is it short or long? He looks like he's in his 20s T shirt and jeans Black T shirt and jeans Black T shirt and jeans Black Short Black Short

Where is man located? What is man wearing? Are there any other people in picture? What color is table? Is there anything else on table? What are other people doing? Are there any windows? What color are walls? What time of day do you think it is? What is man wearing?
Looks like classroom of some sort Black t shirt and jeans Yes, quite few people in background Picture is black and white, but it's wood table Not that I can see They are standing in front of him, but I don't know what Not that I can see I can't see walls I can't tell Black t shirt and jeans

A man making the live long and prosper sign from star trek.

Table 1: Selected examples of Q-BOT-A-BOT interactions for SL-pretrained and RL-full-QAf. RL-full-QAf interactions are diverse, less prone to repetitive and safe exchanges ('can't tell', 'don't know', etc.), and more image-discriminative.
strategy from scratch is both tremendously difficult and an unnecessary re-invention of English. We leverage the recently introduced VisDial dataset [4] that contains (as of the publicly released v0.5) human dialogs (10 rounds of question-answer pairs) on 68k images from the COCO dataset, for a total of 680k QA pairs. Example dialogs from the VisDial dataset are shown in Tab. 1.

Image Feature Regression. We consider a specific instantiation of the visual guessing game described in Sec. 3: at each round t, Q-BOT needs to regress to the vector embedding y^{gt} of image I, corresponding to the fc7 (penultimate fully-connected layer) output from VGG-16 [28]. The distance metric used in the reward computation is ℓ2, i.e.

$$r_t(\cdot) = \lVert y^{gt} - \hat{y}_{t-1} \rVert_2 - \lVert y^{gt} - \hat{y}_t \rVert_2 .$$
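A direct transcription of this reward (our sketch; the vectors below are stand-ins for actual fc7 features):

```python
import numpy as np

def reward(y_gt, y_prev, y_curr):
    """Round-t reward: reduction in l2 distance to the true image feature.

    Positive when the new prediction y_curr is closer to y_gt than the
    previous prediction y_prev was.
    """
    return np.linalg.norm(y_gt - y_prev) - np.linalg.norm(y_gt - y_curr)

y_gt = np.zeros(4096)                     # stand-in ground-truth fc7 embedding
y_prev = np.full(4096, 0.02)              # prediction before this round
y_curr = np.full(4096, 0.01)              # prediction after this round
print(reward(y_gt, y_prev, y_curr) > 0)   # True: the guess improved
```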
Training Strategies. We found two training strategies to be crucial to ensure/improve the convergence of the RL framework described in Sec. 4, to produce any meaningful dialog exchanges, and to ground the agents in natural language.

1) Supervised Pretraining. We first train both agents in a supervised manner on the train split of VisDial [4] v0.5 under an MLE objective. Thus, conditioned on human dialog history, Q-BOT is trained to generate the follow-up question by human1, A-BOT is trained to generate the response by human2, and the feature network f(·) is optimized to regress to y. The CNN in A-BOT is pretrained on ImageNet. This pretraining ensures that the agents can generally recognize some objects/scenes and emit English questions/answers. The space of possible (q_t, a_t) is tremendously large, and without pretraining most exchanges result in no information gain about the image.

2) Curriculum Learning. After supervised pretraining, we 'smoothly' transition the agents to RL training according to a curriculum. Specifically, we continue supervised training for the first K (say 9) rounds of dialog and transition to policy-gradient updates for the remaining 10 − K rounds. We start at K = 9 and gradually anneal to 0; the schedule is sketched below.
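A sketch of the schedule (ours):

```python
def round_modes(K, total_rounds=10):
    """Curriculum for one dialog (our sketch of the stated schedule).

    The first K rounds keep the supervised (MLE) loss; the remaining
    total_rounds - K rounds switch to REINFORCE policy-gradient updates.
    """
    return ["supervised"] * K + ["policy_gradient"] * (total_rounds - K)

# K starts at 9 and anneals toward 0 over training
# (per the text below, by 1 every epoch after pretraining).
for K in range(9, -1, -1):
    print(K, round_modes(K).count("policy_gradient"), "RL rounds")
```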
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
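A minimal sketch of the curriculum described in chunk 56 above, under stated assumptions (the update functions are illustrative stand-ins, not the paper's released code): the first K rounds of each 10-round dialog get supervised (MLE) updates, the rest get policy-gradient updates, and K anneals from 9 toward 0 over training.

```python
# Illustrative sketch of the SL-to-RL curriculum (assumed training-loop shape).
NUM_ROUNDS = 10

def supervised_update(round_idx):       # stand-in for an MLE step on human data
    print(f"round {round_idx}: supervised (MLE) update")

def policy_gradient_update(round_idx):  # stand-in for a REINFORCE step
    print(f"round {round_idx}: policy-gradient update")

def anneal_K(epoch, pretrain_epochs=15, K_start=9):
    """K starts at 9 when RL training begins and anneals down by 1 per epoch."""
    return max(0, K_start - max(0, epoch - pretrain_epochs))

def train_one_dialog(K):
    for t in range(NUM_ROUNDS):
        (supervised_update if t < K else policy_gradient_update)(t)

train_one_dialog(anneal_K(epoch=17))    # K = 7: rounds 0-6 SL, rounds 7-9 RL
```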
1703.06585 | 57 | rounds of dialog and transition to policy-gradient updates for the remaining 10 − K rounds. We start at K = 9 and gradually anneal to 0. This curriculum ensures that the agent team does not suddenly diverge off policy if one incorrect q or a is generated. Models are pretrained for 15 epochs on VisDial, after which we transition to policy-gradient training by annealing K down by 1 every epoch. All LSTMs are 2-layered with 512-d hidden states. We use Adam [15] with a learning rate of 10^-3, and clamp gradients to [−5, 5] to avoid explosion. All our code will be made publicly available. There is no explicit state-dependent baseline in our training as we initialize from supervised pretraining and have zero-centered reward, which ensures a good proportion of random samples are both positively and negatively reinforced. Model Ablations. We compare to a few natural ablations of our full model, denoted RL-full-QAf. First, we evaluate the purely supervised agents (denoted SL-pretrained), i.e., trained only on VisDial data (no RL). Comparison to these agents establishes how much RL helps over super- | 1703.06585#57 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
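A hedged PyTorch sketch of the optimization details listed in chunk 57 (Adam at 10^-3, gradients clamped to [−5, 5], and a zero-centered reward acting as an implicit baseline). The tiny linear `policy` is an assumption standing in for the actual 2-layer LSTM agents:

```python
import torch

policy = torch.nn.Linear(8, 4)                 # stand-in for the LSTM agents
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_step(states, actions, rewards):
    """REINFORCE with a zero-centered reward (no learned baseline)."""
    rewards = rewards - rewards.mean()         # zero-centering: some samples are
                                               # reinforced, others penalized
    logp = torch.log_softmax(policy(states), dim=-1)
    chosen = logp.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * rewards).mean()
    opt.zero_grad()
    loss.backward()
    for p in policy.parameters():              # clamp gradients to [-5, 5]
        p.grad.clamp_(-5.0, 5.0)
    opt.step()

reinforce_step(torch.randn(16, 8), torch.randint(0, 4, (16,)), torch.randn(16))
```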
1703.06585 | 58 | vised learning. Second, we fix one of Q-BOT or A-BOT to the supervised pretrained initialization and train the other agent (and the regression network f) with RL; we label these as Frozen-Q or Frozen-A respectively. Comparing to these partially frozen agents tells us the importance of coordinated communication. Finally, we freeze the regression network f to the supervised pretrained initialization while training Q-BOT and A-BOT with RL. This measures improvements from language adaptation alone. We quantify performance of these agents along two dimensions — how well they perform on the image guessing task (i.e. image retrieval) and how closely they emulate human dialogs (i.e. performance on VisDial dataset [4]). Evaluation: Guessing Game. To assess how well the agents have learned to cooperate at the image guessing task, we set up an image retrieval experiment based on the test split of VisDial v0.5 (~9.5k images), which were never seen by the agents in RL training. We present each image + an automatically generated caption [13] to the agents, and allow them to communicate over 10 rounds of dialog. After each round, | 1703.06585#58 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
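The Frozen-Q / Frozen-A ablations in chunk 58 amount to holding one pretrained agent's parameters fixed while its partner continues to train with RL. A generic PyTorch sketch (the module shapes are placeholders, not the paper's architecture):

```python
import torch

qbot = torch.nn.Linear(8, 8)  # stand-ins for the actual Q-BOT / A-BOT networks
abot = torch.nn.Linear(8, 8)

def freeze(module):
    """Hold a module at its (pretrained) initialization during RL training."""
    for p in module.parameters():
        p.requires_grad_(False)

freeze(qbot)  # "Frozen-Q": only A-BOT (and the regression net) get RL updates
trainable = [p for m in (qbot, abot) for p in m.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-3)
```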
1703.06585 | 59 | We present each image + an automatically generated caption [13] to the agents, and allow them to communicate over 10 rounds of dialog. After each round, Q-BOT predicts a feature representation ŷt. We sort the entire test set in ascending distance to this prediction and compute the rank of the source image. Fig. 4a shows the mean percentile rank of the source image for our method and the baselines across the rounds (shaded region indicates standard error). A percentile rank of 95% means that the source image is closer to the prediction than 95% of the images in the set. Tab. 1 shows example exchanges between two humans (from VisDial), the SL-pretrained and the RL-full-QAf agents. We make a few observations: | 1703.06585#59 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
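A small NumPy sketch of the percentile-rank metric described in chunk 59, assuming fc7-style feature vectors; the random "gallery" is an invented stand-in for the ~9.5k test images:

```python
import numpy as np

def percentile_rank(y_pred, gallery, gt_index):
    """Percentage of gallery images lying farther from the prediction
    than the ground-truth (source) image does."""
    dists = np.linalg.norm(gallery - y_pred, axis=1)
    beaten = (dists > dists[gt_index]).sum()
    return 100.0 * beaten / (len(gallery) - 1)

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))               # stand-in image features
gt = 42
y_pred = gallery[gt] + 0.05 * rng.normal(size=128)   # a good round-t prediction
print(percentile_rank(y_pred, gallery, gt))          # near 100 for good guesses
```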
1703.06585 | 60 | We see that RL-full-QAf outperforms SL-pretrained and all other ablations (e.g., improving percentile rank by over 3% at round 10), indicating that our training framework is indeed effective at training these agents for image guessing.
⢠All agents âforgetâ; RL agents forget less. One in- teresting trend we note in Fig. 4a is that all methods signiï¬cantly improve from round 0 (caption-based re- trieval) to rounds 2 or 3, but beyond that all methods with the exception of RL-full-QAf get worse, even though they have strictly more information. As shown in Tab. 1, agents will often get stuck in inï¬nite repeat- ing loops but this is much rarer for RL agents. More- over, even when RL agents repeat themselves, it is af- ter longer gaps (2-5 rounds). We conjecture that the goal of helping a partner over multiple rounds encour- ages longer term memory retention.
⢠RL leads to more informative dialog. SL A-BOT tends to produce âsafeâ generic responses (âI donât knowâ, âI canât seeâ) but RL A-BOT responses are | 1703.06585#60 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 62 | [Figure residue from Fig. 4c (qualitative image-retrieval examples): image captions ("Pizza slice sitting on top of white plate", "Group of people standing on top of lush green field", "Man in light-colored suit and tie standing next to woman in short purple dress", "Flowers in one of many ceramic vases", "People staring at man on fancy motorcycle"), sample dialog rounds, and ℓ2 distances to the ground-truth image in fc7 space.] | 1703.06585#62 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 65 | Figure 4: a) Guessing Game Evaluation. Plot shows the rank in percentile (higher is better) of the "ground truth" image (shown to A-BOT) as retrieved using fc7 predictions of Q-BOT vs. rounds of dialog. Round 0 corresponds to image guessing based on the caption alone. We can see that the RL-full-QAf bots significantly outperform the SL-pretrained bots (and other ablations). Error bars show standard error of means. (c) shows qualitative results on this predicted fc7-based image retrieval. Left column shows true image and caption, right column shows dialog exchange, and a list of images sorted by their distance to the ground-truth image. The image predicted by Q-BOT is highlighted in red. We can see that the predicted image is often semantically quite similar. b) VisDial Evaluation. Performance of A-BOT on VisDial v0.5 test, under mean reciprocal rank (MRR), recall@k for k = {5, 10} and mean rank metrics. Higher is better for MRR and recall@k, while lower is better for mean rank. We see that our proposed Frozen-Q-multi outperforms all other models on VisDial metrics by 3% relative gain. This improvement is entirely "for free" since no additional annotations were required for RL. | 1703.06585#65 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
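A sketch of the VisDial answer-retrieval metrics named in the Figure 4 caption above (MRR, recall@k, mean rank), given the 1-based rank of the human answer among 100 candidates; the input ranks are illustrative only:

```python
import numpy as np

def retrieval_metrics(gt_ranks):
    """gt_ranks: 1-based rank of the human answer among 100 candidates."""
    r = np.asarray(gt_ranks, dtype=float)
    return {
        "MRR": float((1.0 / r).mean()),    # higher is better
        "R@5": float((r <= 5).mean()),
        "R@10": float((r <= 10).mean()),
        "mean rank": float(r.mean()),      # lower is better
    }

print(retrieval_metrics([1, 3, 20, 7, 2]))
```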
1703.06585 | 66 | much more detailed ("It is hard to tell but I think it's black"). These observations are consistent with recent literature in text-only dialog [18]. Our hypothesis for this improvement is that human responses are diverse and SL trained agents tend to 'hedge their bets' and achieve a reasonable log-likelihood by being noncommittal. In contrast, such 'safe' responses do not help Q-BOT in picking the correct image, thus encouraging an informative RL A-BOT.
Evaluation: Emulating Human Dialogs. To quantify how well the agents emulate human dialog, we evaluate A-BOT on the retrieval metrics proposed by Das et al. [4]. Specifi- | 1703.06585#66 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 67 | cally, every question in VisDial is accompanied by 100 candidate responses. We use the log-likelihood assigned by the A-BOT answer decoder to sort these candidates and report the results in Tab. 4b. We find that despite the RL A-BOT's answer being more informative, the improvements on VisDial metrics are minor. We believe this is because while the answers are correct, they may not necessarily mimic human responses (which is what the answer retrieval metrics check for). In order to dig deeper, we train a variant of Frozen-Q with a multi-task objective — simultaneous (1) ground truth answer supervision and (2) image guessing reward, to keep A-BOT close to human-like responses. We use a weight of 1.0 for the SL loss and 10.0 for RL. This model, denoted Frozen-Q-multi, performs better than all other approaches on VisDial answering metrics, improving the best reported result on VisDial by 0.7 mean rank (relative improvement of 3%). Note that this gain is entirely 'free' since no additional annotations were required for RL. Human Study. We conducted a human interpretability study | 1703.06585#67 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
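Chunk 67 describes two mechanics that are easy to make concrete: ranking the 100 candidate answers by decoder log-likelihood, and the Frozen-Q-multi objective that mixes the SL and RL losses with weights 1.0 and 10.0. A hedged PyTorch sketch (the random scores stand in for actual decoder log-likelihoods):

```python
import torch

def rank_candidates(candidate_logps):
    """Sort 100 candidate answers by decoder log-likelihood (descending)."""
    return torch.argsort(candidate_logps, descending=True)

def multitask_loss(sl_loss, rl_loss, w_sl=1.0, w_rl=10.0):
    """Weighted sum used for the Frozen-Q-multi variant in chunk 67."""
    return w_sl * sl_loss + w_rl * rl_loss

scores = torch.randn(100)             # stand-in decoder log-likelihoods
print(rank_candidates(scores)[:5])    # top-5 ranked candidate indices
print(multitask_loss(torch.tensor(2.3), torch.tensor(-0.4)))
```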
1703.06585 | 68 | of 3%). Note that this gain is entirely 'free' since no additional annotations were required for RL. Human Study. We conducted a human interpretability study to measure (1) whether humans can easily understand the Q-BOT-A-BOT dialog, and (2) how image-discriminative the interactions are. We show human subjects a pool of 16 images, the agent dialog (10 rounds), and ask humans to pick their top-5 guesses for the image the two agents are talking about. We find that mean rank of the ground-truth image for SL-pretrained agent dialog is 3.70 vs. 2.73 for RL-full-QAf dialog. In terms of MRR, the comparison is 0.518 vs. 0.622 respectively. Thus, under both metrics, humans find it easier to guess the unseen image based on RL-full-QAf dialog exchanges, which shows that agents trained within our framework (1) successfully develop image-discriminative language, and (2) this language is interpretable; they do not deviate off English. | 1703.06585#68 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 69 | # 7. Conclusions
To summarize, we introduce a novel training framework for visually-grounded dialog agents by posing a cooperative 'image guessing' game between two agents. We use deep reinforcement learning to learn the policies of these agents end-to-end — from pixels to multi-agent multi-round dialog to game reward. We demonstrate the power of this framework in a completely ungrounded synthetic world, where the agents communicate via symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol without any human supervision. We go on to instantiate this game on the VisDial [4] dataset, where we pretrain with supervised dialog data. We find that the RL 'fine-tuned' agents not only significantly outperform SL agents, but learn to play to each other's strengths, all the while remaining interpretable to outside human observers.
Acknowledgements. We thank Devi Parikh for helpful discussions. This work was funded in part by the following | 1703.06585#69 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 70 | Acknowledgements. We thank Devi Parikh for helpful discussions. This work was funded in part by the following
awards to DB — NSF CAREER award, ONR YIP award, ONR Grant N00014-14-1-0679, ARO YIP award, ICTAS Junior Faculty award, Google Faculty Research Award, Amazon Academic Research Award, AWS Cloud Credits for Research, and NVIDIA GPU donations. SK was supported by ONR Grant N00014-12-1-0903, and SL was partially supported by the Bradley Postdoctoral Fellowship. Views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
# References
[1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual Question Answering. In ICCV, 2015. 1, 2, 3 | 1703.06585#70 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 71 | [2] J. P. Bigham, C. Jayant, H. Ji, G. Little, A. Miller, R. C. Miller, R. Miller, A. Tatarowicz, B. White, S. White, and T. Yeh. VizWiz: Nearly Real-time Answers to Visual Questions. In UIST, 2010. 1
[3] X. Chen and C. L. Zitnick. Mind's Eye: A Recurrent Visual Representation for Image Caption Generation. In CVPR, 2015. 1
[4] A. Das, S. Kottur, K. Gupta, A. Singh, D. Yadav, J. M. Moura, D. Parikh, and D. Batra. Visual Dialog. In CVPR, 2017. 1, 2, 3, 4, 7, 8, 9, 10 | 1703.06585#71 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 72 | [5] H. de Vries, F. Strub, S. Chandar, O. Pietquin, H. Larochelle, and A. Courville. GuessWhat?! visual object discovery through multi-modal dialogue. In CVPR, 2017. 1, 2, 3 [6] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term Recurrent Convolutional Networks for Visual Recognition and Description. In CVPR, 2015. 3
[7] H. Fang, S. Gupta, F. N. Iandola, R. K. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. C. Platt, C. L. Zitnick, and G. Zweig. From Captions to Visual Concepts and Back. In CVPR, 2015. 3
[8] J. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In Advances in Neural Information Processing Systems, 2016. 3 | 1703.06585#72 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 73 | [9] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering. In NIPS, 2015. 3 [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative Adversarial Nets. In NIPS, 2014. 3
[11] S. Havrylov and I. Titov. Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. In ICLR Workshop, 2017. 3
[12] J. Johnson, A. Karpathy, and L. Fei-Fei. DenseCap: Fully Convolutional Localization Networks for Dense Captioning. In CVPR, 2016. 1
[13] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015. 3, 8
[14] S. Kazemzadeh, V. Ordonez, M. Matten, and T. L. Berg. ReferItGame: Referring to Objects in Photographs of Natural Scenes. In EMNLP, 2014. 3 | 1703.06585#73 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 74 | [15] D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. In ICLR, 2015. 8
[16] A. Lazaridou, A. Peysakhovich, and M. Baroni. Multi-agent cooperation and the emergence of (natural) language. In ICLR, 2017. 3
[17] D. Lewis. Convention: A philosophical study. John Wiley & Sons, 2008. 3
[18] J. Li, W. Monroe, A. Ritter, M. Galley, J. Gao, and D. Jurafsky. Deep Reinforcement Learning for Dialogue Generation. In EMNLP, 2016. 3, 9
[19] J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. Adversarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547, 2017. 3
[20] M. Malinowski and M. Fritz. A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input. In NIPS, 2014. 3
[21] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: A neural-based approach to answering questions about images. In ICCV, 2015. 1, 3 | 1703.06585#74 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 75 | [22] I. Mordatch and P. Abbeel. Emergence of grounded compositional language in multi-agent populations. arXiv preprint arXiv:1703.04908, 2017. 3
[23] S. Nolfi and M. Mirolli. Evolution of Communication and Language in Embodied Agents. Springer Publishing Company, Incorporated, 1st edition, 2009. 3
[24] M. Ren, R. Kiros, and R. Zemel. Exploring Models and Data for Image Question Answering. In NIPS, 2015. 1, 3
[25] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In AAAI, 2016. 4
[26] I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. arXiv preprint arXiv:1605.06069, 2016. 4 | 1703.06585#75 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 76 | [27] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 2016. 3
[28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 5, 8
[29] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. 6 | 1703.06585#76 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 77 | [29] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. 6
[30] M. Tapaswi, Y. Zhu, R. Stiefelhagen, A. Torralba, R. Urtasun, and S. Fidler. MovieQA: Understanding Stories in Movies through Question-Answering. In CVPR, 2016. 1 [31] K. Tu, M. Meng, M. W. Lee, T. E. Choe, and S. C. Zhu. Joint Video and Text Parsing for Understanding Events and Answering Queries. IEEE MultiMedia, 2014. 1
[32] S. Venugopalan, M. Rohrbach, J. Donahue, R. J. Mooney, T. Darrell, and K. Saenko. Sequence to Sequence - Video to Text. In ICCV, 2015. 1
[33] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. J. Mooney, and K. Saenko. Translating Videos to Natural Language Using Deep Recurrent Neural Networks. In NAACL HLT, 2015. 1 | 1703.06585#77 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.06585 | 78 | [34] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, 2015. 1, 3 [35] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992. 5
[36] S. Wu, H. Pique, and J. Wieland. Using artificial intelligence to help blind people 'see' facebook. http://newsroom.fb.com/news/2016/04/using-artificial-intelligence-to-help-blind-people-see-facebook/, 1 2016.
[37] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In ICML, 2015. 1 | 1703.06585#78 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 | [
{
"id": "1605.06069"
},
{
"id": "1703.04908"
},
{
"id": "1701.06547"
}
] |
1703.04908 | 0 | arXiv:1703.04908v2 [cs.AI] 24 Jul 2018
# Emergence of Grounded Compositional Language in Multi-Agent Populations
# Igor Mordatch OpenAI San Francisco, California, USA
# Pieter Abbeel UC Berkeley Berkeley, California, USA
# Abstract
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
# Introduction | 1703.04908#0 | Emergence of Grounded Compositional Language in Multi-Agent Populations | By capturing statistical patterns in large corpora, machine learning has
enabled significant advances in natural language processing, including in
machine translation, question answering, and sentiment analysis. However, for
agents to intelligently interact with humans, simply capturing the statistical
patterns is insufficient. In this paper we investigate if, and how, grounded
compositional language can emerge as a means to achieve goals in multi-agent
populations. Towards this end, we propose a multi-agent learning environment
and learning methods that bring about emergence of a basic compositional
language. This language is represented as streams of abstract discrete symbols
uttered by agents over time, but nonetheless has a coherent structure that
possesses a defined vocabulary and syntax. We also observe emergence of
non-verbal communication such as pointing and guiding when language
communication is unavailable. | http://arxiv.org/pdf/1703.04908 | Igor Mordatch, Pieter Abbeel | cs.AI, cs.CL | null | null | cs.AI | 20170315 | 20180724 | [
{
"id": "1603.08887"
},
{
"id": "1611.01779"
},
{
"id": "1612.07182"
},
{
"id": "1609.00777"
},
{
"id": "1612.08810"
}
] |
1703.05175 | 0 | arXiv:1703.05175v2 [cs.LG] 19 Jun 2017
# Prototypical Networks for Few-shot Learning
# Jake Snell University of Toronto*
Kevin Swersky Twitter
# Richard S. Zemel University of Toronto, Vector Institute
# Abstract
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
# Introduction | 1703.05175#0 | Prototypical Networks for Few-shot Learning | We propose prototypical networks for the problem of few-shot classification,
where a classifier must generalize to new classes not seen in the training set,
given only a small number of examples of each new class. Prototypical networks
learn a metric space in which classification can be performed by computing
distances to prototype representations of each class. Compared to recent
approaches for few-shot learning, they reflect a simpler inductive bias that is
beneficial in this limited-data regime, and achieve excellent results. We
provide an analysis showing that some simple design decisions can yield
substantial improvements over recent approaches involving complicated
architectural choices and meta-learning. We further extend prototypical
networks to zero-shot learning and achieve state-of-the-art results on the
CU-Birds dataset. | http://arxiv.org/pdf/1703.05175 | Jake Snell, Kevin Swersky, Richard S. Zemel | cs.LG, stat.ML | null | null | cs.LG | 20170315 | 20170619 | [
{
"id": "1605.05395"
},
{
"id": "1502.03167"
}
] |
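The prototypical-networks abstract above reduces to a simple computation: average embedded support points per class to form prototypes, then classify queries by distance to the nearest prototype. A self-contained NumPy sketch, assuming squared Euclidean distance and random stand-in embeddings:

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    """Class prototype = mean of embedded support points for that class."""
    return np.stack([support_emb[support_labels == k].mean(axis=0)
                     for k in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query to the nearest prototype (squared Euclidean)."""
    d2 = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 16))           # 2-way 5-shot support embeddings
labels = np.repeat([0, 1], 5)
protos = prototypes(emb, labels, n_classes=2)
print(classify(rng.normal(size=(3, 16)), protos))
```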
1703.04908 | 1 | # Introduction
Recently there has been a surge of renewed interest in the pragmatic aspects of language use and it is also the focus of our work. We adopt a view of (Gauthier and Mordatch 2016) that an agent possesses an understanding of language when it can use language (along with other tools such as non-verbal communication or physical acts) to accomplish goals in its environment. This leads to evaluation criteria that can be measured precisely and without human involvement. In this paper, we propose a physically-situated multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. The agents utter communication symbols alongside performing actions in the physical environment to cooperatively accomplish goals defined by a joint reward function shared between all agents. There are no pre-designed meanings associated with the uttered symbols - the agents form concepts relevant to the task and environment and assign arbitrary symbols to communicate them. | 1703.04908#1 | Emergence of Grounded Compositional Language in Multi-Agent Populations | By capturing statistical patterns in large corpora, machine learning has
enabled significant advances in natural language processing, including in
machine translation, question answering, and sentiment analysis. However, for
agents to intelligently interact with humans, simply capturing the statistical
patterns is insufficient. In this paper we investigate if, and how, grounded
compositional language can emerge as a means to achieve goals in multi-agent
populations. Towards this end, we propose a multi-agent learning environment
and learning methods that bring about emergence of a basic compositional
language. This language is represented as streams of abstract discrete symbols
uttered by agents over time, but nonetheless has a coherent structure that
possesses a defined vocabulary and syntax. We also observe emergence of
non-verbal communication such as pointing and guiding when language
communication is unavailable. | http://arxiv.org/pdf/1703.04908 | Igor Mordatch, Pieter Abbeel | cs.AI, cs.CL | null | null | cs.AI | 20170315 | 20180724 | [
{
"id": "1603.08887"
},
{
"id": "1611.01779"
},
{
"id": "1612.07182"
},
{
"id": "1609.00777"
},
{
"id": "1612.08810"
}
] |
1703.04933 | 1 | # Sharp Minima Can Generalize For Deep Nets
# Laurent Dinh (1), Razvan Pascanu (2), Samy Bengio (3), Yoshua Bengio (1, 4)
Abstract: Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties. (A numerical sketch of this reparametrization symmetry follows this record.) | 1703.04933#1 | Sharp Minima Can Generalize For Deep Nets | Despite their overwhelming capacity to overfit
tend to generalize relatively well to unseen data, allowing them to be deployed
in practice. However, explaining why this is the case is still an open area of
research. One standing hypothesis that is gaining popularity, e.g. Hochreiter &
Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the
loss function found by stochastic gradient based methods results in good
generalization. This paper argues that most notions of flatness are problematic
for deep models and can not be directly applied to explain generalization.
Specifically, when focusing on deep networks with rectifier units, we can
exploit the particular geometry of parameter space induced by the inherent
symmetries that these architectures exhibit to build equivalent models
corresponding to arbitrarily sharper minima. Furthermore, if we allow to
reparametrize a function, the geometry of its parameters can change drastically
without affecting its generalization properties. | http://arxiv.org/pdf/1703.04933 | Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio | cs.LG | 8.5 pages of main content, 2.5 of bibliography and 1 page of appendix | null | cs.LG | 20170315 | 20170515 | [
{
"id": "1609.03193"
},
{
"id": "1701.04271"
},
{
"id": "1609.04836"
},
{
"id": "1606.04838"
},
{
"id": "1611.03530"
},
{
"id": "1605.08803"
},
{
"id": "1511.01029"
},
{
"id": "1609.08144"
},
{
"id": "1611.01838"
},
{
"id": "1606.05336"
},
{
"id": "1603.01431"
},
{
"id": "1511.01844"
},
{
"id": "1612.04010"
},
{
"id": "1611.07476"
},
{
"id": "1611.02344"
}
] |
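The following is a minimal NumPy sketch (not the authors' code) of the ReLU reparametrization symmetry described in the abstract above: for alpha > 0, relu(alpha * z) = alpha * relu(z), so scaling the first layer by alpha and the second by 1/alpha leaves the network's function exactly unchanged while making a naive perturbation-based flatness measure report an arbitrarily "sharper" minimum. The network shape, the perturbation radius, and the sharpness probe are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=(16, 1))
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=(1, 1))

def net(X, W1, b1, W2, b2):
    # One-hidden-layer ReLU network applied to a batch X of shape [8, n].
    return W2 @ np.maximum(W1 @ X + b1, 0.0) + b2

X = rng.normal(size=(8, 64))
Y = np.zeros((1, 64))

def mse(params):
    return float(np.mean((net(X, *params) - Y) ** 2))

alpha = 100.0
theta = (W1, b1, W2, b2)
theta_sharp = (alpha * W1, alpha * b1, W2 / alpha, b2)  # same function, rescaled params
print(np.max(np.abs(net(X, *theta) - net(X, *theta_sharp))))  # ~1e-12: identical outputs

def sharpness(params, eps=1e-2, trials=200):
    """Worst observed loss increase under random eps-sized parameter perturbations."""
    base = mse(params)
    worst = 0.0
    for _ in range(trials):
        perturbed = [p + eps * rng.normal(size=p.shape) for p in params]
        worst = max(worst, mse(perturbed) - base)
    return worst

print(sharpness(theta), sharpness(theta_sharp))  # second value is far larger
```

The two parameter vectors implement the same predictor, yet the probe assigns them very different "sharpness", which is the paper's point that such measures are not reparametrization-invariant.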
1703.05175 | 1 | # Introduction
Few-shot classification [20, 16, 13] is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely overfit. While the problem is quite difficult, it has been demonstrated that humans have the ability to perform even one-shot classification, where only a single example of each new class is given, with a high degree of accuracy [16]. | 1703.05175#1 | Prototypical Networks for Few-shot Learning | We propose prototypical networks for the problem of few-shot classification,
where a classifier must generalize to new classes not seen in the training set,
given only a small number of examples of each new class. Prototypical networks
learn a metric space in which classification can be performed by computing
distances to prototype representations of each class. Compared to recent
approaches for few-shot learning, they reflect a simpler inductive bias that is
beneficial in this limited-data regime, and achieve excellent results. We
provide an analysis showing that some simple design decisions can yield
substantial improvements over recent approaches involving complicated
architectural choices and meta-learning. We further extend prototypical
networks to zero-shot learning and achieve state-of-the-art results on the
CU-Birds dataset. | http://arxiv.org/pdf/1703.05175 | Jake Snell, Kevin Swersky, Richard S. Zemel | cs.LG, stat.ML | null | null | cs.LG | 20170315 | 20170619 | [
{
"id": "1605.05395"
},
{
"id": "1502.03167"
}
] |
1703.04908 | 2 | Development of agents that are capable of communication and flexible language use is one of the long-standing challenges facing the field of artificial intelligence. Agents need to develop communication if they are to successfully coordinate as a collective. Furthermore, agents will need some language capacity if they are to interact and productively collaborate with humans or make decisions that are interpretable by humans. If such a capacity were to arise artificially, it could also offer important insights into questions surrounding the development of human language and cognition. But if we wish to arrive at the formation of communication from first principles, it must form out of necessity. Approaches that learn to plausibly imitate language from examples of human language, while tremendously useful, do not learn why language exists. Such supervised approaches can capture structural and statistical relationships in language, but they do not capture its functional aspects, or the fact that language happens for purposes of successful coordination between humans. Evaluating the success of such imitation-based approaches on the basis of linguistic plausibility also presents challenges of ambiguity and the requirement of human involvement.
| 1703.04908#2 | Emergence of Grounded Compositional Language in Multi-Agent Populations | By capturing statistical patterns in large corpora, machine learning has
enabled significant advances in natural language processing, including in
machine translation, question answering, and sentiment analysis. However, for
agents to intelligently interact with humans, simply capturing the statistical
patterns is insufficient. In this paper we investigate if, and how, grounded
compositional language can emerge as a means to achieve goals in multi-agent
populations. Towards this end, we propose a multi-agent learning environment
and learning methods that bring about emergence of a basic compositional
language. This language is represented as streams of abstract discrete symbols
uttered by agents over time, but nonetheless has a coherent structure that
possesses a defined vocabulary and syntax. We also observe emergence of
non-verbal communication such as pointing and guiding when language
communication is unavailable. | http://arxiv.org/pdf/1703.04908 | Igor Mordatch, Pieter Abbeel | cs.AI, cs.CL | null | null | cs.AI | 20170315 | 20180724 | [
{
"id": "1603.08887"
},
{
"id": "1611.01779"
},
{
"id": "1612.07182"
},
{
"id": "1609.00777"
},
{
"id": "1612.08810"
}
] |
1703.04933 | 2 | approximate certain functions (e.g. Montufar et al., 2014; Raghu et al., 2016). Other works (e.g. Dauphin et al., 2014; Choromanska et al., 2015) have looked at the structure of the error surface to analyze how trainable these models are. Finally, another point of discussion is how well these models can generalize (Nesterov & Vial, 2008; Keskar et al., 2017; Zhang et al., 2017). These correspond, respectively, to low approximation, optimization and estimation error as described by Bottou (2010). | 1703.04933#2 | Sharp Minima Can Generalize For Deep Nets | Despite their overwhelming capacity to overfit
tend to generalize relatively well to unseen data, allowing them to be deployed
in practice. However, explaining why this is the case is still an open area of
research. One standing hypothesis that is gaining popularity, e.g. Hochreiter &
Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the
loss function found by stochastic gradient based methods results in good
generalization. This paper argues that most notions of flatness are problematic
for deep models and can not be directly applied to explain generalization.
Specifically, when focusing on deep networks with rectifier units, we can
exploit the particular geometry of parameter space induced by the inherent
symmetries that these architectures exhibit to build equivalent models
corresponding to arbitrarily sharper minima. Furthermore, if we allow to
reparametrize a function, the geometry of its parameters can change drastically
without affecting its generalization properties. | http://arxiv.org/pdf/1703.04933 | Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio | cs.LG | 8.5 pages of main content, 2.5 of bibliography and 1 page of appendix | null | cs.LG | 20170315 | 20170515 | [
{
"id": "1609.03193"
},
{
"id": "1701.04271"
},
{
"id": "1609.04836"
},
{
"id": "1606.04838"
},
{
"id": "1611.03530"
},
{
"id": "1605.08803"
},
{
"id": "1511.01029"
},
{
"id": "1609.08144"
},
{
"id": "1611.01838"
},
{
"id": "1606.05336"
},
{
"id": "1603.01431"
},
{
"id": "1511.01844"
},
{
"id": "1612.04010"
},
{
"id": "1611.07476"
},
{
"id": "1611.02344"
}
] |
1703.05175 | 2 | Two recent approaches have made significant progress in few-shot learning. Vinyals et al. [29] proposed matching networks, which use an attention mechanism over a learned embedding of the labeled set of examples (the support set) to predict classes for the unlabeled points (the query set). Matching networks can be interpreted as a weighted nearest-neighbor classifier applied within an embedding space. Notably, this model utilizes sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points; a minimal sketch of episode construction follows this record. The use of episodes makes the training problem more faithful to the test environment and thereby improves generalization. Ravi and Larochelle [22] take the episodic training idea further and propose a meta-learning approach to few-shot learning. Their approach involves training an LSTM [9] to produce the updates to a classifier, given an episode, such that it will generalize well to a test set. Here, rather than training a single model over multiple episodes, the LSTM meta-learner learns to train a custom model for each episode. | 1703.05175#2 | Prototypical Networks for Few-shot Learning | We propose prototypical networks for the problem of few-shot classification,
where a classifier must generalize to new classes not seen in the training set,
given only a small number of examples of each new class. Prototypical networks
learn a metric space in which classification can be performed by computing
distances to prototype representations of each class. Compared to recent
approaches for few-shot learning, they reflect a simpler inductive bias that is
beneficial in this limited-data regime, and achieve excellent results. We
provide an analysis showing that some simple design decisions can yield
substantial improvements over recent approaches involving complicated
architectural choices and meta-learning. We further extend prototypical
networks to zero-shot learning and achieve state-of-the-art results on the
CU-Birds dataset. | http://arxiv.org/pdf/1703.05175 | Jake Snell, Kevin Swersky, Richard S. Zemel | cs.LG, stat.ML | null | null | cs.LG | 20170315 | 20170619 | [
{
"id": "1605.05395"
},
{
"id": "1502.03167"
}
] |
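A hedged sketch (assumed details, not the authors' code) of constructing one N-way, K-shot episode as described in the record above: subsample N classes, then split each sampled class's examples into a support set and a query set. Names such as `sample_episode` and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(data_by_class, n_way=5, k_shot=5, q_queries=15):
    """data_by_class: dict mapping class id -> array of examples [n_i, ...]."""
    classes = rng.choice(list(data_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        idx = rng.permutation(len(data_by_class[c]))
        support.append((data_by_class[c][idx[:k_shot]], label))
        query.append((data_by_class[c][idx[k_shot:k_shot + q_queries]], label))
    return support, query  # fit within the episode on support, evaluate on query

# Toy usage: 20 classes with 20 random 4-dimensional "examples" each.
data = {c: rng.normal(size=(20, 4)) for c in range(20)}
support, query = sample_episode(data)
print(len(support), support[0][0].shape)  # 5 (5, 4)
```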
1703.04908 | 3 | There are similarly no explicit language usage goals, such as making correct utterances, and no explicit roles agents are assigned, such as speaker or listener, nor an explicit turn-taking dialogue structure as in traditional language games. There may be an arbitrary number of agents in a population communicating at the same time, and part of the difficulty is learning to refer to specific agents. A population of agents is situated as moving particles in a continuous two-dimensional environment, possessing properties such as color and shape. The goals of the population are based on non-linguistic objectives, such as moving to a location, and language arises from the need to coordinate on those goals. We do not rely on any supervision such as human demonstrations or text corpora.
Similar to recent work, we formulate the discovery of the action and communication protocols for our agents jointly as a reinforcement learning problem. Agents perform physical actions and communication utterances according to an identical policy that is instantiated for all agents and fully determines the action and communication protocols. The policies are based on neural network models with an architecture composed of dynamically-instantiated recurrent modules. This allows decentralized execution with a variable | 1703.04908#3 | Emergence of Grounded Compositional Language in Multi-Agent Populations | By capturing statistical patterns in large corpora, machine learning has
enabled significant advances in natural language processing, including in
machine translation, question answering, and sentiment analysis. However, for
agents to intelligently interact with humans, simply capturing the statistical
patterns is insufficient. In this paper we investigate if, and how, grounded
compositional language can emerge as a means to achieve goals in multi-agent
populations. Towards this end, we propose a multi-agent learning environment
and learning methods that bring about emergence of a basic compositional
language. This language is represented as streams of abstract discrete symbols
uttered by agents over time, but nonetheless has a coherent structure that
possesses a defined vocabulary and syntax. We also observe emergence of
non-verbal communication such as pointing and guiding when language
communication is unavailable. | http://arxiv.org/pdf/1703.04908 | Igor Mordatch, Pieter Abbeel | cs.AI, cs.CL | null | null | cs.AI | 20170315 | 20180724 | [
{
"id": "1603.08887"
},
{
"id": "1611.01779"
},
{
"id": "1612.07182"
},
{
"id": "1609.00777"
},
{
"id": "1612.08810"
}
] |
1703.04933 | 3 | Our work focuses on the analysis of the estimation error. In particular, different approaches have been used to look at the question of why stochastic gradient descent results in solutions that generalize well (Bottou & LeCun, 2005; Bottou & Bousquet, 2008). For example, Duchi et al. (2011); Nesterov & Vial (2008); Hardt et al. (2016); Bottou et al. (2016); Gonen & Shalev-Shwartz (2017) rely on the concept of stochastic approximation or uniform stability (Bousquet & Elisseeff, 2002). Another conjecture that was recently explored (Keskar et al., 2017), but that can be traced back to Hochreiter & Schmidhuber (1997), relies on the geometry of the loss function around a given solution. It argues that flat minima, for some definition of flatness, lead to better generalization. Our work focuses on this particular conjecture, arguing that there are critical issues when applying the concept of flat minima to deep neural networks, which require rethinking what flatness actually means.
# Introduction | 1703.04933#3 | Sharp Minima Can Generalize For Deep Nets | Despite their overwhelming capacity to overfit, deep learning architectures
tend to generalize relatively well to unseen data, allowing them to be deployed
in practice. However, explaining why this is the case is still an open area of
research. One standing hypothesis that is gaining popularity, e.g. Hochreiter &
Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the
loss function found by stochastic gradient based methods results in good
generalization. This paper argues that most notions of flatness are problematic
for deep models and can not be directly applied to explain generalization.
Specifically, when focusing on deep networks with rectifier units, we can
exploit the particular geometry of parameter space induced by the inherent
symmetries that these architectures exhibit to build equivalent models
corresponding to arbitrarily sharper minima. Furthermore, if we allow to
reparametrize a function, the geometry of its parameters can change drastically
without affecting its generalization properties. | http://arxiv.org/pdf/1703.04933 | Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio | cs.LG | 8.5 pages of main content, 2.5 of bibliography and 1 page of appendix | null | cs.LG | 20170315 | 20170515 | [
{
"id": "1609.03193"
},
{
"id": "1701.04271"
},
{
"id": "1609.04836"
},
{
"id": "1606.04838"
},
{
"id": "1611.03530"
},
{
"id": "1605.08803"
},
{
"id": "1511.01029"
},
{
"id": "1609.08144"
},
{
"id": "1611.01838"
},
{
"id": "1606.05336"
},
{
"id": "1603.01431"
},
{
"id": "1511.01844"
},
{
"id": "1612.04010"
},
{
"id": "1611.07476"
},
{
"id": "1611.02344"
}
] |
1703.05175 | 3 | We attack the problem of few-shot learning by addressing the key issue of overfitting. Since data is severely limited, we work under the assumption that a classifier should have a very simple inductive bias. Our approach, prototypical networks, is based on the idea that there exists an embedding in which points cluster around a single prototype representation for each class. In order to do this, we learn a non-linear mapping of the input into an embedding space using a neural network and take a class's prototype to be the mean of its support set in the embedding space. Classification is then performed for an embedded query point by simply finding the nearest class prototype (a minimal numerical sketch follows this record). We follow the same approach to tackle zero-shot learning; here each class comes with meta-data giving a high-level description of the class rather than a small number of labeled examples. We therefore learn an embedding of the meta-data into a shared space to serve as the prototype for each class.
*Initial work by first author done while at Twitter.
[Figure 1 panels: (a) Few-shot, (b) Zero-shot] | 1703.05175#3 | Prototypical Networks for Few-shot Learning | We propose prototypical networks for the problem of few-shot classification,
where a classifier must generalize to new classes not seen in the training set,
given only a small number of examples of each new class. Prototypical networks
learn a metric space in which classification can be performed by computing
distances to prototype representations of each class. Compared to recent
approaches for few-shot learning, they reflect a simpler inductive bias that is
beneficial in this limited-data regime, and achieve excellent results. We
provide an analysis showing that some simple design decisions can yield
substantial improvements over recent approaches involving complicated
architectural choices and meta-learning. We further extend prototypical
networks to zero-shot learning and achieve state-of-the-art results on the
CU-Birds dataset. | http://arxiv.org/pdf/1703.05175 | Jake Snell, Kevin Swersky, Richard S. Zemel | cs.LG, stat.ML | null | null | cs.LG | 20170315 | 20170619 | [
{
"id": "1605.05395"
},
{
"id": "1502.03167"
}
] |
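A minimal NumPy sketch of the approach described in the record above (the embedding network is assumed; here the inputs are taken to be already embedded): class prototypes are support-set means, and a query is classified by a softmax over negative squared Euclidean distances to the prototypes.

```python
import numpy as np

def prototypes(emb_support, labels, n_classes):
    """emb_support: [n, d] embedded support points; labels: [n] class ids."""
    return np.stack([emb_support[labels == k].mean(axis=0)
                     for k in range(n_classes)])

def predict(emb_query, protos):
    """Return p(y = k | x) proportional to exp(-||f(x) - c_k||^2)."""
    d2 = ((emb_query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # [q, K]
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 3))        # 10 embedded support points, d = 3
labels = np.repeat(np.arange(5), 2)   # a 5-way, 2-shot support set
protos = prototypes(emb, labels, 5)
print(predict(rng.normal(size=(4, 3)), protos).shape)  # (4, 5)
```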
1703.04908 | 4 | number of agents and communication streams. The joint dynamics of all agents and the environment, including the discrete communication streams, are fully differentiable, and the agents' policy is trained end-to-end with backpropagation through time (one common way to keep the discrete symbols differentiable is sketched after this record).
The languages formed exhibit interpretable compositional structure that in general assigns symbols to separately refer to environment landmarks, action verbs, and agents. However, environment variation leads to a number of specialized languages, omitting words that are clear from context. For example, when there is only one type of action to take or one landmark to go to, words for those concepts do not form in the language. Considerations of the physical environment also have an impact on language structure. For example, a symbol denoting the go action is typically uttered first because the listener can start moving before even hearing the destination. This effect only arises when linguistic and physical behaviors are treated jointly and not in isolation.
The presence of a physical environment also allows for alternative strategies aside from language use to accomplish goals. A visual sensory modality provides an alternative medium for communication, and we observe the emergence of non-verbal communication such as pointing and guiding when language communication is unavailable. When even non-verbal communication is unavailable, strategies such as direct pushing may be employed to succeed at the task. It is important to us to build an environment with a diverse set of capabilities alongside which language use develops. | 1703.04908#4 | Emergence of Grounded Compositional Language in Multi-Agent Populations | By capturing statistical patterns in large corpora, machine learning has
enabled significant advances in natural language processing, including in
machine translation, question answering, and sentiment analysis. However, for
agents to intelligently interact with humans, simply capturing the statistical
patterns is insufficient. In this paper we investigate if, and how, grounded
compositional language can emerge as a means to achieve goals in multi-agent
populations. Towards this end, we propose a multi-agent learning environment
and learning methods that bring about emergence of a basic compositional
language. This language is represented as streams of abstract discrete symbols
uttered by agents over time, but nonetheless has a coherent structure that
possesses a defined vocabulary and syntax. We also observe emergence of
non-verbal communication such as pointing and guiding when language
communication is unavailable. | http://arxiv.org/pdf/1703.04908 | Igor Mordatch, Pieter Abbeel | cs.AI, cs.CL | null | null | cs.AI | 20170315 | 20180724 | [
{
"id": "1603.08887"
},
{
"id": "1611.01779"
},
{
"id": "1612.07182"
},
{
"id": "1609.00777"
},
{
"id": "1612.08810"
}
] |
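One standard way to keep a stream of discrete communication symbols differentiable for end-to-end backpropagation, as the record above requires, is a Gumbel-Softmax relaxation. This is an illustrative assumption, not necessarily the exact estimator used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, temperature=1.0):
    """Sample a relaxed (soft) one-hot symbol from a categorical distribution."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / temperature
    y = np.exp(y - y.max())
    return y / y.sum()

vocab_logits = rng.normal(size=10)  # policy scores over a 10-symbol vocabulary
soft_symbol = gumbel_softmax(vocab_logits, temperature=0.5)
print(soft_symbol.round(3), soft_symbol.argmax())
# As temperature -> 0 the sample approaches a hard one-hot symbol, while
# gradients can still flow through the softmax during training.
```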
1703.04933 | 4 | # Introduction
Deep learning techniques have been very successful in several domains, like object recognition in images (e.g. Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Szegedy et al., 2015; He et al., 2016), machine translation (e.g. Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015; Wu et al., 2016; Gehring et al., 2016) and speech recognition (e.g. Graves et al., 2013; Hannun et al., 2014; Chorowski et al., 2015; Chan et al., 2016; Collobert et al., 2016). Several arguments have been brought forward to justify these empirical results. From a representational point of view, it has been argued that deep networks can efficiently
(1) Université de Montréal, Montréal, Canada; (2) DeepMind, London, United Kingdom; (3) Google Brain, Mountain View, United States; (4) CIFAR Senior Fellow. Correspondence to: Laurent Dinh <[email protected]>. | 1703.04933#4 | Sharp Minima Can Generalize For Deep Nets | Despite their overwhelming capacity to overfit
tend to generalize relatively well to unseen data, allowing them to be deployed
in practice. However, explaining why this is the case is still an open area of
research. One standing hypothesis that is gaining popularity, e.g. Hochreiter &
Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the
loss function found by stochastic gradient based methods results in good
generalization. This paper argues that most notions of flatness are problematic
for deep models and can not be directly applied to explain generalization.
Specifically, when focusing on deep networks with rectifier units, we can
exploit the particular geometry of parameter space induced by the inherent
symmetries that these architectures exhibit to build equivalent models
corresponding to arbitrarily sharper minima. Furthermore, if we allow to
reparametrize a function, the geometry of its parameters can change drastically
without affecting its generalization properties. | http://arxiv.org/pdf/1703.04933 | Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio | cs.LG | 8.5 pages of main content, 2.5 of bibliography and 1 page of appendix | null | cs.LG | 20170315 | 20170515 | [
{
"id": "1609.03193"
},
{
"id": "1701.04271"
},
{
"id": "1609.04836"
},
{
"id": "1606.04838"
},
{
"id": "1611.03530"
},
{
"id": "1605.08803"
},
{
"id": "1511.01029"
},
{
"id": "1609.08144"
},
{
"id": "1611.01838"
},
{
"id": "1606.05336"
},
{
"id": "1603.01431"
},
{
"id": "1511.01844"
},
{
"id": "1612.04010"
},
{
"id": "1611.07476"
},
{
"id": "1611.02344"
}
] |
1703.05175 | 4 | *Initial work by first author done while at Twitter.
Figure 1: Prototypical networks in the few-shot and zero-shot scenarios. (a) Few-shot: prototypes $\mathbf{c}_k$ are computed as the mean of embedded support examples for each class. (b) Zero-shot: prototypes $\mathbf{c}_k$ are produced by embedding class meta-data $\mathbf{v}_k$. In either case, embedded query points are classified via a softmax over distances to class prototypes: $p_\phi(y = k \mid \mathbf{x}) \propto \exp(-d(f_\phi(\mathbf{x}), \mathbf{c}_k))$.
Classification is performed, as in the few-shot scenario, by finding the nearest class prototype for an embedded query point. | 1703.05175#4 | Prototypical Networks for Few-shot Learning | We propose prototypical networks for the problem of few-shot classification,
where a classifier must generalize to new classes not seen in the training set,
given only a small number of examples of each new class. Prototypical networks
learn a metric space in which classification can be performed by computing
distances to prototype representations of each class. Compared to recent
approaches for few-shot learning, they reflect a simpler inductive bias that is
beneficial in this limited-data regime, and achieve excellent results. We
provide an analysis showing that some simple design decisions can yield
substantial improvements over recent approaches involving complicated
architectural choices and meta-learning. We further extend prototypical
networks to zero-shot learning and achieve state-of-the-art results on the
CU-Birds dataset. | http://arxiv.org/pdf/1703.05175 | Jake Snell, Kevin Swersky, Richard S. Zemel | cs.LG, stat.ML | null | null | cs.LG | 20170315 | 20170619 | [
{
"id": "1605.05395"
},
{
"id": "1502.03167"
}
] |
1703.04908 | 5 | By compositionality we mean the combination of multiple words to create meaning, as opposed to holistic languages that have a unique word for every possible meaning (Kirby 2001). Our work offers insights into why such compositional structure emerges. In part, we find it to emerge when we explicitly encourage active vocabulary sizes to be small through a soft penalty (a toy version of such a penalty is sketched after this record). This is consistent with analysis in evolutionary linguistics (Nowak, Plotkin, and Jansen 2000) that finds composition to emerge only when the number of concepts to be expressed becomes greater than a factor of the agent's symbol vocabulary capacity. Another important component leading to composition is training on a variety of tasks and environment configurations simultaneously. Training on cases where most information is clear from context (such as when there is only one landmark) leads to the formation of atomic concepts that are reused compositionally in more complicated cases. | 1703.04908#5 | Emergence of Grounded Compositional Language in Multi-Agent Populations | By capturing statistical patterns in large corpora, machine learning has
enabled significant advances in natural language processing, including in
machine translation, question answering, and sentiment analysis. However, for
agents to intelligently interact with humans, simply capturing the statistical
patterns is insufficient. In this paper we investigate if, and how, grounded
compositional language can emerge as a means to achieve goals in multi-agent
populations. Towards this end, we propose a multi-agent learning environment
and learning methods that bring about emergence of a basic compositional
language. This language is represented as streams of abstract discrete symbols
uttered by agents over time, but nonetheless has a coherent structure that
possesses a defined vocabulary and syntax. We also observe emergence of
non-verbal communication such as pointing and guiding when language
communication is unavailable. | http://arxiv.org/pdf/1703.04908 | Igor Mordatch, Pieter Abbeel | cs.AI, cs.CL | null | null | cs.AI | 20170315 | 20180724 | [
{
"id": "1603.08887"
},
{
"id": "1611.01779"
},
{
"id": "1612.07182"
},
{
"id": "1609.00777"
},
{
"id": "1612.08810"
}
] |
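One simple way to realize a soft vocabulary-size penalty of the kind mentioned above (an illustrative assumption; the paper's exact penalty may differ) is to tax how broadly the speaker policy spreads probability mass over symbols, for instance via the entropy of the average symbol distribution across a batch of utterances: low entropy means few actively used symbols.

```python
import numpy as np

def vocab_penalty(symbol_probs):
    """symbol_probs: [batch, vocab] per-utterance symbol distributions."""
    marginal = symbol_probs.mean(axis=0)  # average usage per symbol
    entropy = -(marginal * np.log(marginal + 1e-9)).sum()
    return entropy  # add lambda * entropy to the loss to discourage a large active vocabulary

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(10), size=32)  # 32 utterances over a 10-symbol vocabulary
print(float(vocab_penalty(p)))
```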
1703.04933 | 5 | While the concept of flat minima is not well defined, having slightly different meanings in different works, the intuition is relatively simple. If one imagines the error as a one-dimensional curve, a minimum is flat if there is a wide region around it with roughly the same error; otherwise the minimum is sharp. When moving to higher-dimensional spaces, defining flatness becomes more complicated. In Hochreiter & Schmidhuber (1997) it is defined as the size of the connected region around the minimum where the training loss is relatively similar. Chaudhari et al. (2017) relies, in contrast, on the curvature of the second-order structure around the minimum, while Keskar et al. (2017) looks at the maximum loss in a bounded neighbourhood of the minimum (written out after this record). All these works rely on the fact that flatness results in robustness to low-precision arithmetic or noise in the parameter space, which, using a minimum description length-based argument, suggests low expected overfitting.
| 1703.04933#5 | Sharp Minima Can Generalize For Deep Nets | Despite their overwhelming capacity to overfit
tend to generalize relatively well to unseen data, allowing them to be deployed
in practice. However, explaining why this is the case is still an open area of
research. One standing hypothesis that is gaining popularity, e.g. Hochreiter &
Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the
loss function found by stochastic gradient based methods results in good
generalization. This paper argues that most notions of flatness are problematic
for deep models and can not be directly applied to explain generalization.
Specifically, when focusing on deep networks with rectifier units, we can
exploit the particular geometry of parameter space induced by the inherent
symmetries that these architectures exhibit to build equivalent models
corresponding to arbitrarily sharper minima. Furthermore, if we allow to
reparametrize a function, the geometry of its parameters can change drastically
without affecting its generalization properties. | http://arxiv.org/pdf/1703.04933 | Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio | cs.LG | 8.5 pages of main content, 2.5 of bibliography and 1 page of appendix | null | cs.LG | 20170315 | 20170515 | [
{
"id": "1609.03193"
},
{
"id": "1701.04271"
},
{
"id": "1609.04836"
},
{
"id": "1606.04838"
},
{
"id": "1611.03530"
},
{
"id": "1605.08803"
},
{
"id": "1511.01029"
},
{
"id": "1609.08144"
},
{
"id": "1611.01838"
},
{
"id": "1606.05336"
},
{
"id": "1603.01431"
},
{
"id": "1511.01844"
},
{
"id": "1612.04010"
},
{
"id": "1611.07476"
},
{
"id": "1611.02344"
}
] |
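As a concrete instance of the last notion above, the sharpness of Keskar et al. (2017) measures the largest loss increase in a small neighbourhood of the minimum. Up to details of how the neighbourhood is constrained (their version uses a coordinate-wise box, optionally restricted to a random subspace; it is simplified here to a norm ball), it can be written as:

```latex
\phi_\epsilon(\theta) =
  \frac{\displaystyle\max_{\|\theta' - \theta\| \le \epsilon\,(\|\theta\| + 1)} L(\theta') - L(\theta)}
       {1 + L(\theta)} \times 100
```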
1703.05175 | 5 | Classification is performed, as in the few-shot scenario, by finding the nearest class prototype for an embedded query point.
In this paper, we formulate prototypical networks for both the few-shot and zero-shot settings. We draw connections to matching networks in the one-shot setting, and analyze the underlying distance function used in the model. In particular, we relate prototypical networks to clustering [4] in order to justify the use of class means as prototypes when distances are computed with a Bregman divergence, such as squared Euclidean distance (a worked note follows this record). We find empirically that the choice of distance is vital, as Euclidean distance greatly outperforms the more commonly used cosine similarity. On several benchmark tasks, we achieve state-of-the-art performance. Prototypical networks are simpler and more efficient than recent meta-learning algorithms, making them an appealing approach to few-shot and zero-shot learning.
# 2 Prototypical Networks
# 2.1 Notation | 1703.05175#5 | Prototypical Networks for Few-shot Learning | We propose prototypical networks for the problem of few-shot classification,
where a classifier must generalize to new classes not seen in the training set,
given only a small number of examples of each new class. Prototypical networks
learn a metric space in which classification can be performed by computing
distances to prototype representations of each class. Compared to recent
approaches for few-shot learning, they reflect a simpler inductive bias that is
beneficial in this limited-data regime, and achieve excellent results. We
provide an analysis showing that some simple design decisions can yield
substantial improvements over recent approaches involving complicated
architectural choices and meta-learning. We further extend prototypical
networks to zero-shot learning and achieve state-of-the-art results on the
CU-Birds dataset. | http://arxiv.org/pdf/1703.05175 | Jake Snell, Kevin Swersky, Richard S. Zemel | cs.LG, stat.ML | null | null | cs.LG | 20170315 | 20170619 | [
{
"id": "1605.05395"
},
{
"id": "1502.03167"
}
] |
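A brief worked note on the clustering connection mentioned above (standard Bregman-divergence theory, e.g. Banerjee et al. 2005, rather than a claim specific to this record): for any Bregman divergence, the representative minimizing the total divergence from a set of points is their arithmetic mean, which motivates class means as prototypes when the distance is squared Euclidean.

```latex
% For any Bregman divergence d_phi (squared Euclidean distance being the
% canonical example), minimizing over the second argument yields the mean:
\mathbf{c}^* = \arg\min_{\mathbf{c}} \sum_{i=1}^{n} d_\varphi(\mathbf{x}_i, \mathbf{c})
             = \frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_i
```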