*Work done during an internship at FAIR.
…in motion compared to objects that tend to be seen at rest.
Inspired by these human vision studies, we propose to train ConvNets for the well-established task of object foreground vs. background segmentation, using unsupervised motion segmentation to provide 'pseudo ground truth'. Concretely, to prepare training data we use optical flow to group foreground pixels that move together into a single object. We then use the resulting segmentation masks as automatically generated targets, and task a ConvNet with predicting these masks from single, static frames without any motion information (Figure 2). Because pixels with different colors or low-level image statistics can still move together and form a single object, the ConvNet cannot solve this task using a low-level representation. Instead, it may have to recognize objects that tend to move and identify their shape and pose. Thus, we conjecture that this task forces the ConvNet to learn a high-level representation.
We evaluate our proposal in two settings. First, we test if a ConvNet can learn a good feature representation when learning to segment from the high-quality, manually labeled segmentations in COCO [27], without using the class labels. Indeed, we show that the resulting feature representation is effective when transferred to PASCAL VOC object detection. It achieves state-of-the-art performance for representations trained without any semantic category labels, performing within 5 points AP of an ImageNet pretrained model and 10 points higher than the best unsupervised methods. This justifies our proposed task by showing that given good ground truth segmentations, a ConvNet trained to segment objects will learn an effective feature representation.
Our goal, however, is to learn features without manual supervision. Thus in our second setting we train with automatically generated 'pseudo ground truth' obtained through unsupervised motion segmentation on uncurated videos from the Yahoo Flickr Creative Commons 100 million (YFCC100m) [43] dataset. When transferred to object detection, our representation retains good performance even when most of the ConvNet parameters are frozen, significantly outperforming previous unsupervised learning approaches. It also allows much better transfer learning when training data for the target task is scarce. Our representation quality tends to increase logarithmically with the amount of data, suggesting the possibility of outperforming ImageNet pretraining given the countless videos on the web.
# 2. Related Work
Unsupervised learning is a broad area with a large volume of work; Bengio et al. [5] provide an excellent survey. Here, we briefly revisit some of the recent work in this area.
Figure 2. Overview of our approach: 1. collect videos, 2. segment using motion, 3. train ConvNet. We use motion cues to segment objects in videos without any supervision. We then train a ConvNet to predict these segmentations from static frames, i.e. without any motion cues. We then transfer the learned representation to other recognition tasks.

Unsupervised learning by generating images. Classical unsupervised representation learning approaches, such as autoencoders [4, 20] and denoising autoencoders [44], attempt to learn feature representations from which the original image can be decoded with a low error. An alternative to reconstruction-based objectives is to train generative models of images using generative adversarial networks [16]. These models can be extended to produce good feature representations by training jointly with image encoders [10, 11]. However, to generate realistic images, these models must pay significant attention to low-level details while potentially ignoring higher-level semantics.
Self-supervision via pretext tasks. Instead of producing images, several recent studies have focused on providing alternate forms of supervision (often called 'pretext tasks') that do not require manual labeling and can be algorithmically produced. For instance, Doersch et al. [8] task a ConvNet with predicting the relative location of two cropped image patches. Noroozi and Favaro [30] extend this by asking a network to arrange shuffled patches cropped from a 3×3 grid. Pathak et al. [35] train a network to perform an image inpainting task. Other pretext tasks include predicting color channels from luminance [25, 51] or vice versa [52], and predicting sounds from video frames [7, 33]. The assumption in these works is that to perform these tasks, the network will need to recognize high-level concepts, such as objects, in order to succeed. We compare our approach to all of these pretext tasks and show that the proposed natural task of object segmentation leads to a quantitatively better feature representation in many cases.
Learning from motion and action. The human visual system does not receive static images; it receives a continuous video stream. The same idea of defining auxiliary pretext tasks can be used in unsupervised learning from videos too. Wang and Gupta [46] train a ConvNet to distinguish between pairs of tracked patches in a single video, and pairs of patches from different videos. Misra et al. [29] ask a network to arrange shuffled frames of a video into a temporally correct order. Another such pretext task is to make predictions about the next few frames: Goroshin et al. [17] predict pixels of future frames and Walker et al. [45] predict dense future trajectories. However, since nearby frames in a video tend to be visually similar (in color or texture), these approaches might learn low-level image statistics instead of more semantic features. Alternatively, Li et al. [26] use motion boundary detection to bootstrap a ConvNet-based contour detector, but find that this does not lead to good feature representations. Our intuitions are similar, but our approach produces semantically strong representations.
# 3. Evaluating Feature Representations

To measure the quality of a learned feature representation, we need an evaluation that reflects real-world constraints to yield useful conclusions. Prior work on unsupervised learning has evaluated representations by using them as initializations for fine-tuning a ConvNet for a particular isolated task, such as object detection [8]. The intuition is that a good representation should serve as a good starting point for task-specific fine-tuning. While fine-tuning for each task can be a good solution, it can also be impractical. For example, a mobile app might want to handle multiple tasks on device, such as image classification, object detection, and segmentation. But both the app download size and execution time will grow linearly with the number of tasks unless computation is shared. In such cases it may be desirable to have a general representation that is shared between tasks and task-specific, lightweight classifier 'heads'.
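To make the shared-computation setting concrete, the following PyTorch sketch (our illustration, not from the paper; all names and dimensions are assumptions) pairs one frozen trunk with two lightweight heads, so adding a task adds only a small head rather than a whole network:

```python
import torch
import torch.nn as nn

class SharedTrunkModel(nn.Module):
    """One fixed feature trunk shared by several lightweight task heads."""
    def __init__(self, trunk: nn.Module, feat_dim: int,
                 num_classes: int, num_det_outputs: int):
        super().__init__()
        self.trunk = trunk
        for p in self.trunk.parameters():
            p.requires_grad = False          # the shared representation stays fixed
        self.cls_head = nn.Linear(feat_dim, num_classes)      # classification head
        self.det_head = nn.Linear(feat_dim, num_det_outputs)  # detection head

    def forward(self, x):
        with torch.no_grad():
            feats = self.trunk(x).flatten(1)  # shared features computed once
        return self.cls_head(feats), self.det_head(feats)
```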
Another practical concern arises when the amount of labeled training data is too limited for fine-tuning. Again, in this scenario it may be desirable to use a fixed general representation with a trained task-specific 'head' to avoid overfitting. Rather than emphasizing any one of these cases, in this paper we aim for a broader understanding by evaluating learned representations under a variety of conditions:
1. On multiple tasks: We consider object detection, image classification and semantic segmentation.
2. With shared layers: We fine-tune the pretrained ConvNet weights to different extents, ranging from only the fully connected layers to fine-tuning everything (see [30] for a similar evaluation on ImageNet).
3. With limited target task training data: We reduce the amount of training data available for the target task.
# 4. Learning Features by Learning to Group
The core intuition behind this paper is that training a ConvNet to group pixels in static images into objects without any class labels will cause it to learn a strong, high-level feature representation. This is because such grouping is difficult from low-level cues alone: objects are typically made of multiple colors and textures and, if occluded, might even consist of spatially disjoint regions. Therefore, to effectively do this grouping is to implicitly recognize the object and understand its location and shape, even if it cannot be named. Thus, if we train a ConvNet for this task, we expect it to learn a representation that aids recognition.
To test this hypothesis, we ran a series of experiments using high-quality manual annotations on static images from COCO [27]. Although supervised, these experiments help to evaluate a) how well our method might work under ideal conditions, b) how performance is impacted if the segments are of lower quality, and c) how much data is needed. We now describe these experiments in detail.
# 4.1. Training a ConvNet to Segment Objects
We frame the task as follows: given an image patch containing a single object, we want the ConvNet to segment the object, i.e., assign each pixel a label of 1 if it lies on the object and 0 otherwise. Since an image contains multiple objects, the task is ambiguous if we feed the ConvNet the entire image. Instead, we sample an object from an image and crop a box around the ground truth segment. However, given a precise bounding box, it is easy for the ConvNet to cheat: a blob in the center of the box would yield low loss. To prevent such degenerate solutions, we jitter the box in position and scale. Note that a similar training setup was used for recent segmentation proposal methods [37, 38].
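As an illustration of this sampling step, a NumPy sketch of drawing a jittered crop box around a ground-truth segment might look like the following (the jitter magnitudes are invented for illustration; the text above does not specify them):

```python
import numpy as np

def jittered_crop_box(bbox, img_w, img_h, max_shift=0.15, max_scale=0.3,
                      rng=np.random):
    """Jitter a ground-truth object box in position and scale before cropping."""
    x, y, w, h = bbox                       # tight box around the segment
    cx = x + w / 2 + rng.uniform(-max_shift, max_shift) * w
    cy = y + h / 2 + rng.uniform(-max_shift, max_shift) * h
    s = 1.0 + rng.uniform(-max_scale, max_scale)
    nw, nh = w * s, h * s
    x0 = int(np.clip(cx - nw / 2, 0, img_w - 1))
    y0 = int(np.clip(cy - nh / 2, 0, img_h - 1))
    x1 = int(np.clip(cx + nw / 2, x0 + 1, img_w))
    y1 = int(np.clip(cy + nh / 2, y0 + 1, img_h))
    return x0, y0, x1, y1                   # crop window fed to the ConvNet
```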
We use a straightforward ConvNet architecture that takes as input a w × w image and outputs an s × s mask. Our network ends in a fully connected layer with s² outputs followed by an element-wise sigmoid. The resulting s²-dimensional vector is reshaped into an s × s mask. We also downsample the ground truth mask to s × s and sum the cross entropy losses over the s² locations to train the network.
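A minimal PyTorch sketch of this output head and loss (our rendering, not the authors' released code; the backbone and feature dimension are left abstract):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentationHead(nn.Module):
    """Predict an s x s object mask from backbone features, as described above."""
    def __init__(self, backbone: nn.Module, feat_dim: int, s: int = 56):
        super().__init__()
        self.backbone = backbone              # e.g. AlexNet conv layers (assumption)
        self.s = s
        self.fc = nn.Linear(feat_dim, s * s)  # fully connected layer with s^2 outputs

    def forward(self, x):                     # x: (B, 3, w, w)
        feats = self.backbone(x).flatten(1)
        logits = self.fc(feats)               # (B, s^2)
        return logits.view(-1, 1, self.s, self.s)  # reshape to an s x s mask

def mask_loss(logits, gt_mask):
    # Downsample the ground truth mask to s x s, then sum the per-pixel
    # cross entropy (sigmoid + cross entropy = BCE with logits).
    s = logits.shape[-1]
    target = F.interpolate(gt_mask.float(), size=(s, s), mode="nearest")
    return F.binary_cross_entropy_with_logits(logits, target, reduction="sum")
```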
# 4.2. Experiments
To enable comparisons to prior work on unsupervised learning, we use AlexNet [24] as our ConvNet architecture. We use s = 56 and w = 227. We use images and annotations from the trainval set of the COCO dataset [27], discarding the class labels and only using the segmentations.
Does training for segmentation yield good features? Following recent work on unsupervised learning, we perform experiments on the task of object detection on PASCAL VOC 2007 using Fast R-CNN¹ [15].

¹https://github.com/rbgirshick/py-faster-rcnn
Figure 3. Our representation trained on manually-annotated segments from COCO (without class labels) compared to ImageNet pretraining and context prediction (unsupervised) [8], evaluated for object detection on PASCAL VOC 2007. '>cX': all layers above convX are fine-tuned; 'All': the entire net is fine-tuned.
We use multi-scale training and testing [15]. In keeping with the motivation described in Section 3, we measure performance with ConvNet layers frozen to different extents. We compare our representation to a ConvNet trained on image classification on ImageNet, and the representation trained by Doersch et al. [8]. The latter is competitive with the state-of-the-art. (Comparisons to other recent work on unsupervised learning appear later.) The results are shown in Figure 3.
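Freezing the network up to a given convolutional layer is mechanically simple. A hypothetical PyTorch sketch using torchvision's AlexNet (an assumption for illustration; the paper's original Caffe-era setup will differ):

```python
import torch.nn as nn
import torchvision

def freeze_up_to_conv(model: nn.Module, n: int):
    """Freeze conv1..convN, matching the '>cN' evaluation setting."""
    conv_seen = 0
    for layer in model.features:          # torchvision AlexNet: convs live in .features
        if isinstance(layer, nn.Conv2d):
            conv_seen += 1
        if conv_seen > n:
            break
        for p in layer.parameters():
            p.requires_grad = False       # frozen layers receive no gradient updates

model = torchvision.models.alexnet(weights=None)
freeze_up_to_conv(model, n=5)             # '>c5': only layers above conv5 are fine-tuned
```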
We find that our supervised representation outperforms the unsupervised context prediction model across all scenarios by a large margin, which is to be expected. Notably though, our model maintains a fairly small gap with ImageNet pretraining. This result is state-of-the-art for a model trained without semantic category labels. Thus, given high-quality segments, our proposed method can learn a strong representation, which validates our hypothesis.
Figure 3 also shows that the model trained on context prediction degrades rapidly as more layers are frozen. This drop indicates that the higher layers of the model have become overly specific to the pretext task [49], and may not capture the high-level concepts needed for object recognition. This is in contrast to the stable performance of the ImageNet trained model even when most of the network is frozen, suggesting the utility of its higher layers for recognition tasks. We find that this trend is also true for our representation: it retains good performance even when most of the ConvNet is frozen, indicating that it has indeed learned high-level semantics in the higher layers.
Can the ConvNet learn from noisy masks? We next ask if the quality of the learned representation is impacted by the quality of the ground truth, which is important since the segmentations obtained from unsupervised motion-based grouping will be imperfect. To simulate noisy segments, we train the representation with degraded masks from COCO. We consider two ways of creating noisy segments: introducing noise in the boundary and truncating the mask.
Figure 4. We degrade ground truth masks to measure the impact of segmentation quality on the learned representation. From left to right: the original mask, dilated and eroded masks (boundary errors), and a truncated mask (truncation can be on any side).
Figure 5. VOC object detection accuracy using our supervised ConvNet as noise is introduced in mask boundaries (morph kernel size), the masks are truncated (% truncation), or the amount of data is reduced (% data). Surprisingly, the representation maintains quality even with large degradation.
Noise in the segment boundary simulates the foreground leaking into the background or vice-versa. To introduce such noise during training, for each cropped ground truth mask, we randomly either erode or dilate the mask using a kernel of fixed size (Figure 4, second and third images). The boundaries become noisier as the kernel size increases. Truncation simulates the case when we miss a part of the object, such as when only part of the object moves. Specifically, for each ground truth mask, we zero out a strip of pixels corresponding to a fixed percentage of the bounding box area from one of the four sides (Figure 4, last image).
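Both corruptions are straightforward to reproduce. A hypothetical NumPy/SciPy sketch of the procedure (parameter values are arbitrary, and for simplicity the strip is taken as a fraction of the crop side rather than of the bounding box area):

```python
import numpy as np
from scipy import ndimage

def degrade_mask(mask, kernel_size=8, truncation=0.25, rng=np.random):
    """Corrupt a binary mask: boundary noise plus a one-sided truncation."""
    k = np.ones((kernel_size, kernel_size), dtype=bool)
    # Boundary noise: randomly erode or dilate with a fixed-size kernel.
    if rng.rand() < 0.5:
        mask = ndimage.binary_erosion(mask, structure=k)
    else:
        mask = ndimage.binary_dilation(mask, structure=k)
    # Truncation: zero out a strip of pixels from one of the four sides.
    mask = mask.copy()
    h, w = mask.shape
    side = rng.randint(4)
    if side == 0:
        mask[:int(h * truncation), :] = False      # top
    elif side == 1:
        mask[h - int(h * truncation):, :] = False  # bottom
    elif side == 2:
        mask[:, :int(w * truncation)] = False      # left
    else:
        mask[:, w - int(w * truncation):] = False  # right
    return mask
```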
We evaluate the representation trained with these noisy ground truth segments on object detection using Fast R-CNN with all layers up to and including conv5 frozen (Figure 5). We find that the learned representation is surprisingly resilient to both kinds of degradation. Even with large, systematic truncation (up to 50%) or large errors in boundaries, the representation maintains its quality.
How much data do we need? We vary the amount of data available for training, and evaluate the resulting representation on object detection using Fast R-CNN with all conv layers frozen. The results are shown in the third plot in Figure 5. We find that performance drops significantly as the amount of training data is reduced, suggesting that good representations will need large amounts of data.
In summary, these results suggest that training for segmentation leads to strong features even with imprecise object masks. However, building a good representation requires significant amounts of training data. These observations strengthen our case for learning features in an unsupervised manner on large unlabeled datasets.
Figure 6. From left to right: a video frame, the output of uNLC that we use to train our ConvNet, and the output of our ConvNet. uNLC is able to highlight the moving object even in potentially cluttered scenes, but is often noisy, and sometimes fails (last two rows). Nevertheless, our ConvNet can still learn from this noisy data and produce significantly better and smoother segmentations.
# 5. Learning by Watching Objects Move
We first describe the motion segmentation algorithm we use to segment videos, and then discuss how we use the segmented frames to train a ConvNet.
# 5.1. Unsupervised Motion Segmentation
The key idea behind motion segmentation is that if there is a single object moving with respect to the background through the entire video, then pixels on the object will move differently from pixels on the background. Analyzing the optical flow should therefore provide hints about which pixels belong to the foreground. However, since only a part of the object might move in each frame, this information needs to be aggregated across multiple frames.
We adopt the NLC approach from Faktor and Irani [12]. While NLC is unsupervised with respect to video segmentation, it utilizes an edge detector that was trained on labeled edge images [39]. In order to have a purely unsupervised method, we replace the trained edge detector in NLC with unsupervised superpixels. To avoid confusion, we call our implementation of NLC uNLC. First uNLC computes a per-frame saliency map based on motion by looking for either pixels that move in a mostly static frame or, if the frame contains significant motion, pixels that move in a direction different from the dominant one. Per-pixel saliency is then averaged over superpixels [1]. Next, a nearest neighbor graph is computed over the superpixels in the video using location and appearance (color histograms and HOG [6]) as features. Finally, it uses a nearest neighbor voting scheme to propagate the saliency across frames.
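To make the per-frame saliency step concrete, here is a simplified NumPy sketch of the flow-based heuristic just described (our paraphrase of the idea, not the actual uNLC code; the static-frame threshold is invented):

```python
import numpy as np

def motion_saliency(flow: np.ndarray, superpixels: np.ndarray,
                    static_thresh: float = 0.5) -> np.ndarray:
    """flow: (H, W, 2) optical flow; superpixels: (H, W) integer labels."""
    mag = np.linalg.norm(flow, axis=2)
    if np.median(mag) < static_thresh:
        # Mostly static frame: salient pixels are simply the moving ones.
        saliency = mag
    else:
        # Significant global motion: salient pixels deviate from the
        # dominant motion (estimated here by the median flow vector).
        dominant = np.median(flow.reshape(-1, 2), axis=0)
        saliency = np.linalg.norm(flow - dominant, axis=2)
    # Average per-pixel saliency over each superpixel.
    labels = superpixels.ravel()
    sums = np.bincount(labels, weights=saliency.ravel())
    counts = np.bincount(labels)
    return (sums / np.maximum(counts, 1))[superpixels]
```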
Figure 7. Examples of segmentations produced by our ConvNet on held out images. The ConvNet is able to identify the motile object (or objects) and segment it out from a single frame. Masks are not perfect but they do capture the general object shape.
We find that uNLC often fails on videos in the wild. Sometimes this is because the assumption of there being a single moving object in the video is not satisfied, especially in long videos made up of multiple shots showing different objects. We use a publicly available appearance-based shot detection method [40] (also unsupervised) to divide the video into shots and run uNLC separately on each shot.
Videos in the wild are also often low resolution and have compression artifacts, which can degrade the resulting segmentations. From our experiments using strong supervision, we know our approach can be robust to such noise. Nevertheless, since a large video dataset comprises a massive collection of frames, we simply discard badly segmented frames based on two heuristics. Specifically, we discard: (1) frames with too many (>80%) or too few (<10%) pixels marked as foreground; (2) frames with too many pixels (>10%) within 5% of the frame border that are marked as foreground. In preliminary tests, we found that results were not sensitive to the precise thresholds used.
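These heuristics reduce to a few array reductions. A hypothetical NumPy rendering (our illustration; heuristic (2) is read here as the fraction of border-band pixels marked foreground):

```python
import numpy as np

def keep_frame(fg):
    """fg: (H, W) boolean uNLC foreground mask; returns True to keep the frame."""
    frac_fg = fg.mean()
    if frac_fg > 0.80 or frac_fg < 0.10:    # heuristic (1)
        return False
    h, w = fg.shape
    bh, bw = max(1, int(0.05 * h)), max(1, int(0.05 * w))
    border = np.zeros_like(fg, dtype=bool)  # band within 5% of the frame border
    border[:bh, :] = True
    border[-bh:, :] = True
    border[:, :bw] = True
    border[:, -bw:] = True
    return fg[border].mean() <= 0.10        # heuristic (2)
```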
We ran uNLC on videos from YFCC100m [43], which contains about 700,000 videos. After pruning, we ended up with 205,000 videos. We sampled 5-10 frames per shot from each video to create our dataset of 1.6M images, so we have slightly more frames than images in ImageNet. However, note that our frames come from fewer videos and are therefore more correlated than images from ImageNet.
We stress that our approach in generating this dataset is completely unsupervised, and does not use any form of supervised learning in any part of the pipeline. The code for the segmentation and pruning, together with our automatically generated dataset of frames and segments, will be made publicly available soon.
Our motion segmentation approach is far from state-of-the-art, as can be seen by the noisy segments shown in Figure 6. Nevertheless, we find that our representation is quite resilient to this noise (as shown below). As such, we did not aim to improve the particulars of our motion segmentation.
# 5.2. Learning to Segment from Noisy Labels
As before, we feed the ConvNet cropped images, jittered in scale and translation, and ask it to predict the motile foreground object. Since the motion segmentation output is noisy, we do not trust the absolute foreground probabilities it provides. Instead, we convert it into a trimap representation in which pixels with a probability <0.4 are marked as negative samples, those with a probability >0.7 are marked as positives, and the remaining pixels are marked as 'don't cares' (in preliminary experiments, our results were found to be robust to these thresholds). The ConvNet is trained with a logistic loss only on the positive and negative pixels; don't care pixels are ignored. Similar techniques have been successfully explored earlier in segmentation [3, 22].
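A compact PyTorch sketch of this trimap loss (the thresholds follow the text above; everything else is our assumption):

```python
import torch
import torch.nn.functional as F

def trimap_loss(logits: torch.Tensor, fg_prob: torch.Tensor) -> torch.Tensor:
    """logits: predicted mask logits; fg_prob: noisy uNLC foreground probabilities."""
    positive = fg_prob > 0.7
    negative = fg_prob < 0.4
    valid = positive | negative            # everything else is a "don't care" pixel
    target = positive.float()
    per_pixel = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (per_pixel * valid.float()).sum() / valid.float().sum().clamp(min=1)
```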
Despite the steps we take to get good segments, the uNLC output is still noisy and often grossly incorrect, as can be seen from the second column of Figure 6. However, if there are no systematic errors, then these motion-based segments can be seen as perturbations about a true latent segmentation. Because a ConvNet has finite capacity, it will not be able to fit the noise perfectly and might instead learn something closer to the underlying correct segmentation.
Some positive evidence for this can be seen in the output of the trained ConvNet on its training images (Fig. 6, third column). The ConvNet correctly identifies the motile object and its rough shape, leading to a smoother, more correct segmentation than the original motion segmentation.
The ConvNet is also able to generalize to unseen images. Figure 7 shows the output of the ConvNet on frames from the DAVIS [36], FBMS [31] and VSB [13] datasets, which were not used in training. Again, it is able to identify the moving object and its rough shape from just a single frame. When evaluated against human annotated segments in these datasets, we find that the ConvNet's output is significantly better than the uNLC segmentation output, as shown below:
| Metric | uNLC | ConvNet (unsupervised) |
|---|---|---|
| Mean IoU (%) | 13.1 | 24.8 |
| Precision (%) | 15.4 | 29.9 |
| Recall (%) | 45.8 | 59.3 |
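For reference, these metrics can be computed from binary masks as follows (a standard formulation supplied for clarity, not code from the paper):

```python
import numpy as np

def mask_metrics(pred: np.ndarray, gt: np.ndarray):
    """pred, gt: boolean masks. Returns (IoU, precision, recall)."""
    inter = (pred & gt).sum()
    union = (pred | gt).sum()
    iou = inter / max(union, 1)
    precision = inter / max(pred.sum(), 1)
    recall = inter / max(gt.sum(), 1)
    return iou, precision, recall
```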
These results confirm our earlier finding that the ConvNet is able to learn well even from noisy and often incorrect ground truth. However, the goal of this paper is not segmentation, but representation learning. We evaluate the learned representation in the next section.
# 6. Evaluating the Learned Representation
# 6.1. Transfer to Object Detection
We first evaluate our representation on the task of object detection using Fast R-CNN. We use VOC 2007 for cross-validation: we pick an appropriate learning rate for each method out of a set of 3 values {0.001, 0.002 and 0.003}. Finally, we train on VOC 2012 train and test on VOC 2012 val exactly once. We use multi-scale training and testing and discard difficult objects during training.
We present results with the ConvNet parameters frozen to different extents. As discussed in Section 3, a good representation should work well both as an initialization to fine-tuning and also when most of the ConvNet is frozen.
We compare our approach to ConvNet representations produced by recent prior work on unsupervised learning [2, 8, 10, 30, 33, 35, 46, 51]. We use publicly available models for all methods shown. Like our ConvNet representation, all models have the AlexNet architecture, but differ in minor details such as the presence of batch normalization layers [8] or the presence of grouped convolutions [51].
We also compare to two models trained with strong supervision. The first is trained on ImageNet classification. The second is trained on manually-annotated segments (without class labels) from COCO (see Section 4).
Results are shown in Figure 8(a) (left) and Table 1 (left). We find that our representation learned from unsupervised motion segmentation performs on par or better than prior work on unsupervised learning across all scenarios.
As we saw in Section 4.2, in contrast to ImageNet supervised representations, the representations learned by previous unsupervised approaches show a large decay in performance as more layers are frozen, owing to the representation becoming highly specific to the pretext task. Similar to our supervised approach trained on segmentations from COCO, we find that our unsupervised approach trained on motion segmentation also shows stable performance as the layers are frozen. Thus, unlike prior work on unsupervised learning, the upper layers in our representation learn high-level abstract concepts that are useful for recognition.
It is possible that some of the differences between our method and prior work are because the training data is from different domains (YFCC100m videos vs. ImageNet images). To control for this, we retrained the model from [8] on frames from our video dataset (see Context-videos in Table 1). The two variants perform similarly: 33.4% mean AP when trained on YFCC with conv5 and below frozen compared to 33.2% for the ImageNet version. This confirms that the different image sources do not explain our gains.
# 6.2. Low-shot Transfer
1612.06370 | 31 | A good representation should also aid learning when training data is scarce, as we motivated in Section 3. FigMethod All >c1 >c4 >c5 All >c1 >c4 >c5 #wins Supervised Imagenet Sup. Masks (Ours) 56.5 51.7 57.0 51.8 57.1 52.7 57.1 52.2 55.6 52.0 52.5 47.5 17.7 13.6 19.1 13.8 19.7 15.5 20.3 17.6 20.9 18.1 19.6 15.1 NA NA Unsupervised Jigsawâ¡ [30] Kmeans [23] Egomotion [2] Inpainting [35] Tracking-gray [46] Sounds [33] BiGAN [10] Colorization [51] Split-Brain Auto [52] Context [8] Context-videosâ [8] Motion Masks (Ours) 49.0 42.8 37.4 39.1 43.5 42.9 44.9 44.5 43.8 49.9 47.8 48.6 50.0 42.2 36.9 36.4 44.6 42.3 44.6 44.9 45.6 48.8 47.9 48.2 48.9 40.3 34.4 34.1 44.6 40.6 | 1612.06370#31 | Learning Features by Watching Objects Move | This paper presents a novel yet intuitive approach to unsupervised feature
learning. Inspired by the human visual system, we explore whether low-level
motion-based grouping cues can be used to learn an effective visual
representation. Specifically, we use unsupervised motion-based segmentation on
videos to obtain segments, which we use as 'pseudo ground truth' to train a
convolutional network to segment objects from a single frame. Given the
extensive evidence that motion plays a key role in the development of the human
visual system, we hope that this straightforward approach to unsupervised
learning will be more effective than cleverly designed 'pretext' tasks studied
in the literature. Indeed, our extensive experiments show that this is the
case. When used for transfer learning on object detection, our representation
significantly outperforms previous unsupervised approaches across multiple
settings, especially when training data for the target task is scarce. | http://arxiv.org/pdf/1612.06370 | Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, Bharath Hariharan | cs.CV, cs.AI, cs.LG, cs.NE, stat.ML | CVPR 2017 | null | cs.CV | 20161219 | 20170412 | [] |
1612.06370 | 32 | 42.3 44.6 44.9 45.6 48.8 47.9 48.2 48.9 40.3 34.4 34.1 44.6 40.6 44.7 44.7 45.6 44.4 46.6 48.3 47.7 37.1 28.9 29.4 44.2 37.1 42.4 44.4 46.1 44.3 47.2 47.0 45.8 32.4 24.1 24.8 41.5 32.0 38.4 42.6 44.1 42.1 44.3 45.8 37.1 26.0 17.1 13.4 35.7 26.5 29.4 38.0 37.6 33.2 33.4 40.3 5.9 4.1 â â 3.7 5.4 4.9 6.1 3.5 6.7 6.6 10.2 8.7 4.9 â â 5.7 5.1 6.1 7.9 7.9 10.2 9.2 10.2 8.8 5.0 â â 7.4 5.0 7.3 8.6 9.6 9.2 10.7 11.7 10.1 4.5 â â 9.0 4.8 7.6 10.6 10.2 9.5 12.2 12.5 9.9 4.2 | 1612.06370#32 | Learning Features by Watching Objects Move | This paper presents a novel yet intuitive approach to unsupervised feature
Table 1. Object detection AP (%) on PASCAL VOC 2012 using Fast R-CNN with various pretrained ConvNets. All models are trained on train and tested on val using consistent Fast R-CNN settings. "–" means training didn't converge due to insufficient data. Our approach achieves the best performance in the majority of settings. †Doersch et al. [8] trained their original context model using ImageNet images; the Context-videos model is obtained by retraining their approach on our video frames from YFCC. This experiment controls for the effect of the distribution of training images and shows that the image domain used for training does not significantly impact performance. ‡Noroozi et al. [30] use a more computationally intensive ConvNet architecture (>2× longer to finetune) with a finer stride at conv1, preventing apples-to-apples comparisons. Nevertheless, their model works significantly worse than our representation when layers are frozen or data is limited, and is comparable to ours when the network is finetuned on the full training data.
[Figure 8 plots: (a) Performance vs. Finetuning: object detection mean AP (%) on VOC 2012 vs. layers finetuned, for the full train set and a 150-image set; (b) Performance vs. Data: object detection mean AP (%) on VOC 2007 vs. number of training frames/images. Curves: ImageNet, Colorization, BiGAN, Tracking-gray, Context, Context-videos, Sup. Masks (Ours), Sounds, Motion Masks (Ours).]
Figure 8. Results on object detection using Fast R-CNN. (a) VOC 2012 object detection results when the ConvNet representation is frozen to different extents. We compare to other unsupervised and supervised approaches. Left: using the full training set. Right: using only 150 training images (note the different y-axis scales). (b) Variation of representation quality (mean AP on VOC 2007 object detection with conv5 and below frozen) with the number of training frames. A few other methods are also shown. Context-videos [8] is the representation of Doersch et al. [8] retrained on our video frames. Note that most other methods in Table 1 use ImageNet as their train set.

A good representation should also aid learning when training data is scarce, as we motivated in Section 3. Figure 8(a) (right) and Table 1 (right) show how we compare to other unsupervised and supervised approaches on the task of object detection when we have few (150) training images. We observe that in this scenario it actually hurts to finetune the entire network, and the best setup is to leave some layers frozen. Our approach provides the best overall AP (achieved by freezing all layers up to and including conv4), outperforming the representations from recent unsupervised learning methods by a large margin. The performance in other low-shot settings is presented in Figure 10.
Note that in spite of its strong performance relative to prior unsupervised approaches, our representation learned without supervision on video trails both the strongly supervised mask and ImageNet versions by a significant margin. We discuss this in the following subsection.
# 6.3. Impact of Amount of Training Data
The quality of our representation (measured by Fast R-CNN performance on VOC 2007 with all conv layers frozen) grows roughly logarithmically with the number of frames used.
[Figure 9 plots: image classification (VOC 2007), action classification (Stanford 40), and semantic segmentation (VOC 2011), each shown against the layers finetuned. Curves: ImageNet, Colorization, BiGAN, Sup. Masks (Ours), Tracking-gray, Context, Sounds, Motion Masks (Ours).]

Figure 9. Results on image (object) classification on VOC 2007, single-image action classification on Stanford 40 Actions, and semantic segmentation on VOC 2011. Results shown with ConvNet layers frozen to different extents (note that the metrics vary for each task).
With 396K frames (50K videos), it is already better than prior state-of-the-art [8] trained on a million ImageNet images; see Figure 8(b). With our full dataset (1.6M frames) accuracy increases substantially. If this logarithmic growth continues, our representation will be on par with one trained on ImageNet if we use about 27M frames (or 3 to 5 million videos, the same order of magnitude as the number of images in ImageNet). Note that frames from the same video are very correlated. We expect this number could be reduced with more algorithmic improvements.
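The extrapolation in the paragraph above is a straight line in log-frame space. A toy version of the calculation, with made-up AP values standing in for the measured curve (only the two frame counts come from the text), looks like this:

```python
import numpy as np

# Fit AP ~ a*log(N) + b through two (frames, AP) points and solve for the
# N that reaches a target AP. The AP numbers below are placeholders, not
# values from the paper.
frames = np.array([396_000, 1_600_000])
ap = np.array([33.5, 37.0])                    # hypothetical measurements
a, b = np.polyfit(np.log(frames), ap, deg=1)   # slope, intercept
target_ap = 44.0                               # hypothetical ImageNet level
n_needed = float(np.exp((target_ap - b) / a))
print(f"frames needed: ~{n_needed / 1e6:.1f}M")
```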
# 6.4. Transfer to Other Tasks
As discussed in Section 3, a good representation should generalize across tasks. We now show experiments for two other tasks: image classification and semantic image segmentation. For image classification, we test on both object and action classification.
Image Classification. We experimented with image classification on PASCAL VOC 2007 (object categories) and Stanford 40 Actions [48] (action labels). To allow comparisons to prior work [10, 51], we used random crops during training and averaged scores from 10 crops during testing (see [10] for details). We minimally tuned some hyperparameters (we increased the step size to allow longer training) on the VOC 2007 validation set, and used the same settings for both VOC 2007 and Stanford 40 Actions. On both datasets, we trained with different amounts of finetuning as before. Results are in the first two plots of Figure 9.
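The 10-crop evaluation mentioned above is a standard protocol: scores are computed on the four corner crops, the center crop, and their horizontal mirrors, then averaged. A minimal sketch with torchvision (our own illustration, not the authors' exact pipeline):

```python
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as TF

ten_crop = T.Compose([
    T.Resize(256),
    T.TenCrop(224),  # 4 corner crops + center crop, plus mirrored versions
    T.Lambda(lambda crops: torch.stack([TF.to_tensor(c) for c in crops])),
])

def predict_10crop(model, pil_image):
    crops = ten_crop(pil_image)      # shape: (10, 3, 224, 224)
    with torch.no_grad():
        scores = model(crops)        # shape: (10, num_classes)
    return scores.mean(dim=0)        # average the 10 crop scores
```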
Semantic Segmentation. We use fully convolutional networks for semantic segmentation with the default hyperparameters [28]. All pretrained ConvNet models are finetuned on the union of images from the VOC 2011 train set and the additional SBD train set released by Hariharan et al. [18], and we test on the VOC 2011 val set after removing images that overlap with SBD train. The last plot in Figure 9 shows the performance of different methods as the number of layers being finetuned is varied.
Analysis. Like object detection, all these tasks require semantic knowledge. However, while in object detection the ConvNet is given a tight crop around the target object, the input in these image classification tasks is the entire image, and semantic segmentation involves running the ConvNet in a sliding window over all locations. This difference appears to play a major role. Our representation was trained on object crops, which is similar to the setup for object detection, but quite different from the setups in Figure 9. This mismatch may negatively impact the performance of our representation, both for the version trained on motion segmentation and the strongly supervised version. Such a mismatch may also explain the low performance of the representation trained by Wang et al. [46] on semantic segmentation.

Nevertheless, when the ConvNet is progressively frozen, our approach is a strong performer. When all layers up to conv5 are frozen, our representation is better than other approaches on action classification and second only to colorization [51] on image classification on VOC 2007 and semantic segmentation on VOC 2011. Our higher performance on action classification might be due to the fact that our video dataset contains many people performing various actions.
# 7. Discussion
We have presented a simple and intuitive approach to unsupervised learning by using segments from low-level motion-based grouping to train ConvNets. Our experiments show that our approach enables effective transfer, especially when computational or data constraints limit the amount of task-specific tuning we can do. Scaling to larger video datasets should allow for further improvements.

We noted in Figure 6 that our network learns to refine the noisy input segments. This is a good example of a scenario where ConvNets can learn to extract signal from large amounts of noisy data. Combining the refined, single-frame output from the ConvNet with noisy motion cues extracted from the video should lead to better pseudo ground truth, which can be used by the ConvNet to bootstrap itself. We leave this direction for future work.
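The bootstrapping idea sketched in the last paragraph could look roughly as follows. This is our own schematic rendering of the proposal (the paper leaves it to future work); every callable passed in (motion_segment, combine, train_epoch) is a hypothetical stand-in.

```python
def bootstrap(model, videos, motion_segment, combine, train_epoch, rounds=3):
    """Alternate between fitting the ConvNet to pseudo ground truth and
    regenerating that pseudo ground truth by fusing the network's refined
    masks with the noisy motion-based masks."""
    pseudo = {v.id: motion_segment(v) for v in videos}      # noisy targets
    for _ in range(rounds):
        train_epoch(model, videos, pseudo)                  # fit current targets
        pseudo = {
            v.id: combine(model.predict(v.frames), motion_segment(v))
            for v in videos                                 # improved targets
        }
    return model
```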
[Figure 10 plots: object detection mean AP (%) on VOC 2012 vs. layers finetuned, for training sets of 2800, 1400, and 800 images. Curves: ImageNet, BiGAN, Tracking-gray, Sounds, Colorization, Sup. Masks (Ours), Context, Motion Masks (Ours).]
Figure 10. Results for object detection on PASCAL VOC 2012 using Fast R-CNN with varying numbers of images available for finetuning. Each plot compares different unsupervised learning methods as the number of layers being finetuned is varied. Different plots show this variation for different amounts of data available for finetuning Fast R-CNN (note the different y-axis scales for each plot). As the data for finetuning decreases, it is actually better to freeze more layers. Our method works well across all these settings, and when layers are frozen or data is limited it significantly outperforms other methods. This suggests that features learned in the higher layers of our model are good for recognition.
# References
[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. TPAMI, 2012. 5
[2] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. ICCV, 2015. 3, 6, 7
[3] C. Arteta, V. Lempitsky, and A. Zisserman. Counting in the wild. ECCV, 2016. 6
[4] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009. 1, 2
[5] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. TPAMI, 35(8), 2013. 2
[6] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. CVPR, 2005. 5
[7] V. R. de Sa. Learning classification with unlabeled data. NIPS, 1994. 2
[8] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised visual representation learning by context prediction. ICCV, 2015. 1, 2, 3, 4, 6, 7, 8
[9] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. ICML, 2014. 1
[10] J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. ICLR, 2017. 2, 6, 7, 8
[11] V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. Courville. Adversarially learned inference. ICLR, 2017. 2
[12] A. Faktor and M. Irani. Video segmentation by non-local consensus voting. BMVC, 2014. 5
[13] F. Galasso, N. Nagaraja, T. Cardenas, T. Brox, and B. Schiele. A unified video segmentation benchmark: Annotation, metrics and analysis. ICCV, 2013. 6
[14] R. Garg, V. K. B.G., G. Carneiro, and I. Reid. Unsupervised CNN for single view depth estimation: Geometry to the rescue. ECCV, 2016. 3
[15] R. Girshick. Fast R-CNN. ICCV, 2015. 1, 3, 4
[16] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. NIPS, 2014. 2
[17] R. Goroshin, M. Mathieu, and Y. LeCun. Learning to linearize under uncertainty. NIPS, 2015. 1, 3
[18] B. Hariharan, P. Arbeláez, L. Bourdev, S. Maji, and J. Malik. Semantic contours from inverse detectors. ICCV, 2011. 8
[19] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. CVPR, 2015. 1
[20] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006. 1, 2
[21] D. Jayaraman and K. Grauman. Learning image representations tied to ego-motion. ICCV, 2015. 3
[22] P. Kohli, P. H. Torr, et al. Robust higher order potentials for enforcing label consistency. IJCV, 2009. 6
[23] P. Krähenbühl, C. Doersch, J. Donahue, and T. Darrell. Data-dependent initializations of convolutional neural networks. ICLR, 2016. 7
[24] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. NIPS, 2012. 3
[25] G. Larsson, M. Maire, and G. Shakhnarovich. Learning representations for automatic colorization. ECCV, 2016. 2
[26] Y. Li, M. Paluri, J. M. Rehg, and P. Dollár. Unsupervised learning of edges. CVPR, 2016. 3
[27] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. ECCV, 2014. 2, 3
[28] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. CVPR, 2015. 8
[29] I. Misra, C. L. Zitnick, and M. Hebert. Shuffle and Learn: Unsupervised learning using temporal order verification. ECCV, 2016. 1, 3
[30] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. ECCV, 2016. 1, 2, 3, 6, 7
[31] P. Ochs, J. Malik, and T. Brox. Segmentation of moving objects by long term video analysis. TPAMI, 36(6), 2014. 6
[32] Y. Ostrovsky, E. Meyers, S. Ganesh, U. Mathur, and P. Sinha. Visual parsing after recovery from blindness. Psychological Science, 2009. 1
[33] A. Owens, J. Wu, J. H. McDermott, W. T. Freeman, and A. Torralba. Ambient sound provides supervision for visual learning. ECCV, 2016. 2, 6, 7
[34] S. E. Palmer. Vision science: Photons to phenomenology. MIT Press, 1999. 1
[35] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. Efros. Context encoders: Feature learning by inpainting. CVPR, 2016. 2, 6, 7
[36] F. Perazzi, J. Pont-Tuset, B. McWilliams, L. V. Gool, M. Gross, and A. Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. CVPR, 2016. 6
[37] P. O. Pinheiro, R. Collobert, and P. Dollár. Learning to segment object candidates. NIPS, 2015. 3
[38] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár. Learning to refine object segments. ECCV, 2016. 3
[39] P. Dollár and C. L. Zitnick. Structured forests for fast edge detection. ICCV, 2013. 5
[40] D. Potapov, M. Douze, Z. Harchaoui, and C. Schmid. Category-specific video summarization. ECCV, 2014. 5
[41] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015. 1
[42] E. S. Spelke. Principles of object perception. Cognitive Science, 14(1), 1990. 1
[43] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. YFCC100M: The new data in multimedia research. Communications of the ACM, 59(2), 2016. 2, 5
[44] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. ICML, 2008. 1, 2
[45] J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from static images using variational autoencoders. ECCV, 2016. 3
[46] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. ICCV, 2015. 1, 2, 6, 7, 8
[47] M. Wertheimer. Laws of organization in perceptual forms. 1938. 1
[48] B. Yao, X. Jiang, A. Khosla, A. L. Lin, L. Guibas, and L. Fei-Fei. Human action recognition by learning bases of action attributes and parts. ICCV, 2011. 8
[49] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? NIPS, 2014. 4
Published as a conference paper at ICLR 2017
# LEARNING THROUGH DIALOGUE INTERACTIONS BY ASKING QUESTIONS
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
Facebook AI Research, New York, USA
{jiwel,ahm,spchopra,ranzato,jase}@fb.com
# ABSTRACT
A good dialogue agent should have the ability to interact with users by both responding to questions and by asking questions, and importantly to learn from both types of interaction. In this work, we explore this direction by designing a simulator and a set of synthetic tasks in the movie domain that allow such interactions between a learner and a teacher. We investigate how a learner can benefit from asking questions in both offline and online reinforcement learning settings, and demonstrate that the learner improves when asking questions. Finally, real experiments with Mechanical Turk validate the approach. Our work represents a first step in developing such end-to-end learned interactive dialogue agents.
# 1 INTRODUCTION

When a student is asked a question by a teacher, but is not confident about the answer, they may ask for clarification or hints. A good conversational agent (a learner/bot/student) should have this ability to interact with a dialogue partner (the teacher/user). However, recent efforts have mostly focused on learning through fixed answers provided in the training set, rather than through interactions. In that case, when a learner encounters a confusing situation such as an unknown surface form (phrase or structure), a semantically complicated sentence, or an unknown word, the agent will either make a (usually poor) guess or will redirect the user to other resources (e.g., a search engine, as in Siri). Humans, in contrast, can adapt to many situations by asking questions.

We identify three categories of mistakes a learner can make during dialogue:1 (1) the learner has problems understanding the surface form of the text of the dialogue partner, e.g., the phrasing of a question; (2) the learner has a problem with reasoning, e.g., they fail to retrieve and connect the relevant knowledge to the question at hand; (3) the learner lacks the knowledge necessary to answer the question in the first place; that is, the knowledge sources the student has access to do not contain the needed information.

1 This list is not exhaustive; for example, we do not address a failure in the dialogue generation stage.
All the situations above can potentially be addressed through interaction with the dialogue partner. Such interactions can be used to learn to perform better in future dialogues. If a human student has problems understanding a teacher's question, they might ask the teacher to clarify the question. If the student doesn't know where to start, they might ask the teacher to point out which known facts are most relevant. If the student doesn't know the information needed at all, they might ask the teacher to tell them the knowledge they're missing, writing it down for future use.

In this work, we try to bridge the gap between how a human and an end-to-end machine learning dialogue agent deal with these situations: our student has to learn how to learn. We hence design a simulator and a set of synthetic tasks in the movie question answering domain that allow a bot to interact with a teacher to address the issues described above. Using this framework, we explore how a bot can benefit from interaction by asking questions in both offline supervised settings and online reinforcement learning settings, as well as how to choose when to ask questions in the latter setting. In both cases, we find that the learning system improves through interacting with users. Finally, we validate our approach on real data where the teachers are humans, using Amazon Mechanical Turk, and observe similar results.
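To make the online reinforcement learning setting concrete, here is a minimal REINFORCE-style sketch of a policy that learns when to ask: it scores two actions (answer directly, or ask first) from a fixed-size encoding of the question, and is reinforced by the teacher's end-of-dialogue reward. This is our own illustration, not the paper's exact model; `run_dialogue` is an assumed environment hook.

```python
import torch
import torch.nn as nn

class AskPolicy(nn.Module):
    """Tiny policy: logits over [answer directly, ask a question first]."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.scorer = nn.Linear(dim, 2)

    def forward(self, question_vec: torch.Tensor):
        return torch.distributions.Categorical(logits=self.scorer(question_vec))

policy = AskPolicy()
opt = torch.optim.SGD(policy.parameters(), lr=0.01)

def reinforce_step(question_vec, run_dialogue):
    """`run_dialogue(action)` plays out the dialogue and returns the
    teacher's reward (e.g., 1.0 if the final answer is correct, else 0.0)."""
    dist = policy(question_vec)
    action = dist.sample()                  # 0 = answer, 1 = ask first
    reward = run_dialogue(action.item())
    loss = -dist.log_prob(action) * reward  # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    return reward
```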
# 2 RELATED WORK

Learning language through interaction and feedback can be traced back to the 1950s, when Wittgenstein argued that the meaning of words is best understood from their use within given language games (Wittgenstein, 2010). The direction of interactive language learning through language games has been explored in the early seminal work of Winograd (Winograd, 1972), and in the recent SHRDLURN system (Wang et al., 2016). In a broader context, the usefulness of feedback and interaction has been validated in multiple language-learning settings, such as second language learning (Bassiri, 2011) and learning by students (Higgins et al., 2002; Latham, 1997; Werts et al., 1995).
In the context of dialogue, with the recent popularity of deep learning models, many neural dialogue systems have been proposed. These include chit-chat-style end-to-end dialogue systems (Vinyals & Le, 2015; Li et al., 2015; Sordoni et al., 2015), which directly generate a response given the previous history of user utterances. They also include a collection of goal-oriented dialogue systems (Wen et al., 2016; Su et al., 2016; Bordes & Weston, 2016), which complete a certain task such as booking a ticket or making a reservation at a restaurant. Another line of research focuses on supervised learning for question answering from dialogues (Dodge et al., 2015; Weston, 2016), using either a given database of knowledge (Bordes et al., 2015; Miller et al., 2016) or short stories (Weston et al., 2015). As far as we know, current dialogue systems mostly focus on learning through fixed supervised signals rather than on interacting with users.
Our work is closely related to the recent work of Weston (2016), which explores the problem of learning through conducting conversations, where supervision is given naturally in the response during the conversation. That work introduced multiple learning schemes from dialogue utterances. In particular, the authors discussed Imitation Learning, where the agent tries to learn by imitating the dialogue interactions between a teacher and an expert student; Reward-Based Imitation Learning, which learns only by imitating the dialogue interactions that have correct answers; and Forward Prediction, which learns by predicting the teacher's feedback to the student's response. Despite the fact that Forward Prediction does not use human-labeled rewards, the authors show that it yields promising results. However, their work did not fully explore the ability of an agent to learn via questioning and interaction. Our work can be viewed as a natural extension of theirs.
# 3 THE TASKS
In this section we describe the dialogue tasks we designed2. They are tailored to the three different situations described in Section 1 that motivate the bot to ask questions: (1) Question Clarification, in which the bot has problems understanding its dialogue partner's text; (2) Knowledge Operation, in which the bot needs to ask for help to perform reasoning steps over an existing knowledge base; and (3) Knowledge Acquisition, in which the bot's knowledge is incomplete and needs to be filled.
For our experiments we adapt the WikiMovies dataset (Weston et al., 2015), which consists of roughly 100k questions over 75k entities based on questions with answers in the open movie database (OMDb). The training/dev/test sets respectively contain 181638 / 9702 / 9698 examples. The accuracy metric corresponds to the percentage of times the student gives correct answers to the teacher's questions.
Each dialogue takes place between a teacher and a bot. In this section we describe how we generate tasks using a simulator; Section 4.2 discusses how we test similar setups with real data using Mechanical Turk.
The bot is first presented with facts from the OMDb KB. This allows us to control the exact knowledge the bot has access to. Then, we include several teacher-bot question-answer pairs unrelated to the question the bot needs to answer, which we call conversation histories3.
2 Code and data are available at https://github.com/facebook/MemNN/tree/master/AskingQuestions.
3 These history QA pairs can be viewed as distractions and are used to test the bot's ability to separate the wheat from the chaff. For each dialogue, we incorporate 5 extra QA pairs (10 sentences).
In order to explore the benefits of asking clarification questions during a conversation, for each of the three scenarios our simulator generated data for two different settings, namely Question-Answering (denoted by QA) and Asking-Question (denoted by AQ). For both QA and AQ, the bot needs to give an answer to the teacher's original question at the end. The details of the simulator can be found in the appendix.
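As a concrete illustration of the episode format, the sketch below assembles one QA or AQ dialogue. The function name make_episode and the record layout are illustrative assumptions on our part, not the released simulator code.

```python
def make_episode(kb_facts, history_qa, question, answer,
                 setting="QA", clarification=None, teacher_reply=None):
    """Assemble one simulated episode as a list of utterance strings.

    kb_facts   -- KB entries shown to the bot first
    history_qa -- the unrelated (question, answer) distractor pairs
    setting    -- "QA": answer immediately; "AQ": ask a question first
    """
    lines = ["kb: " + fact for fact in kb_facts]
    for q, a in history_qa:                   # conversation history
        lines += ["T: " + q, "S: " + a]
    lines.append("T: " + question)
    if setting == "AQ":
        lines.append("S: " + clarification)   # e.g. "What do you mean?"
        lines.append("T: " + teacher_reply)   # e.g. the paraphrase
        lines.append("T: " + question)        # teacher asks again
    lines.append("S: " + answer)
    lines.append("T: That's correct. (+)")    # teacher's final feedback
    return lines

episode = make_episode(
    ["Forrest Gump starred_actors Tom Hanks"],
    [("Who directed Titanic?", "James Cameron")],   # illustrative distractor
    "Which movvie did Tom Hanks sttar in ?", "Forrest Gump",
    setting="AQ", clarification="What do you mean?",
    teacher_reply="I mean which film did Tom Hanks appear in.")
print("\n".join(episode))
```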
# 3.1 QUESTION CLARIFICATION
In this setting, the bot does not understand the teacher's question. We focus on a special situation where the bot does not understand the teacher because of typo/spelling mistakes, as shown in Figure 1. We intentionally misspell some words in the questions, such as replacing the word "movie" with "movvie" or "star" with "sttar".4 To make sure that the bot will have problems understanding the question, we guarantee that the bot has never encountered the misspellings before: the misspelling-introducing mechanisms in the training, dev and test sets are different, so the same word will be misspelled in different ways in different sets. We present two AQ tasks: (i) Question Paraphrase, where the student asks the teacher to clarify the question with a paraphrase that does not contain spelling mistakes, by asking "What do you mean?"; and (ii) Question Verification, where the student asks the teacher whether the original typo-bearing question corresponds to another question without the spelling
mistakes (e.g., "Do you mean which film did Tom Hanks appear in?"). The teacher will give feedback by giving a paraphrase of the original question without spelling mistakes (e.g., "I mean which film did Tom Hanks appear in") in Question Paraphrase, or positive/negative feedback in Question Verification. Next the student will give an answer, and the teacher will give positive/negative feedback depending on whether the student's answer is correct. Positive and negative feedback are variants of "No, that's incorrect" or "Yes, that's right".5 In these tasks, the bot has access to all relevant entries in the KB.
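A simple way to guarantee that the same word is misspelled differently in each split is to key the corruption rule on the split itself. The sketch below is an illustrative assumption on our part, not the exact mechanism used to build the datasets.

```python
def misspell(word, split):
    """Corrupt a word with a split-specific rule so that test-time
    misspellings never occur in the training data (illustrative only)."""
    if len(word) < 3:
        return word
    if split == "train":                 # double the 2nd letter: star -> sttar
        return word[:2] + word[1] + word[2:]
    if split == "dev":                   # double the 3rd letter: movie -> movvie
        return word[:3] + word[2] + word[3:]
    return word[:-1] + word[-1] * 2      # test: double the last letter

for split in ("train", "dev", "test"):
    print(split, misspell("movie", split), misspell("star", split))
```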
# 3.2 KNOWLEDGE OPERATION
The bot has access to all the relevant knowledge (facts) but lacks the ability to perform the necessary reasoning operations over them; see Figure 2. We focus on a special case where the bot tries to work out which facts are relevant. We explore two settings: Ask For Relevant Knowledge (Task 3), where the bot directly asks the teacher to point out the relevant KB fact, and Knowledge Verification (Task 4), where the bot asks whether the teacher's question is relevant to one particular KB fact. The teacher will point out the relevant KB fact in the Ask For Relevant Knowledge setting, or give a positive or negative response in the Knowledge Verification setting. Then the bot will give an answer to the teacher's original question, and the teacher will give feedback on the answer.
# 3.3 KNOWLEDGE ACQUISITION
For the tasks in this subsection, the bot has an incomplete KB, and entities important to the dialogue are missing from it; see Figure 3. For example, given the question "Which movie did Tom Hanks star in?", the missing part could be the entity that the teacher is asking about (question entity for short, which is Tom Hanks in this example), the relation entity (starred actors), the answer to the question (Forrest Gump), or a combination of the three. In all cases, the bot has little chance of giving the correct answer due to the missing knowledge, so it needs to ask the teacher for the answer to acquire the missing knowledge. The teacher will give the answer and then move on to other questions (captured in the conversational history). Later they will come back and re-ask the question. At this point, the bot needs to give an answer, since the entity is not new anymore.
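The intended behaviour can be sketched as follows; the respond helper and the memory dictionary are illustrative assumptions on our part.

```python
kb = ["Larry Crowne directed_by Tom Hanks",
      "Forrest Gump starred_actors Tom Hanks"]
visible_kb = [f for f in kb if "Tom Hanks" not in f]  # question entity hidden

memory = {}  # question -> answer pairs the teacher has already supplied

def respond(question, entity):
    """Ask for the answer when the entity is unknown; answer from memory
    once the teacher has supplied it (sketch of the Task 5-9 behaviour)."""
    if question in memory:                             # the re-asked question
        return memory[question]
    if not any(entity in fact for fact in visible_kb):
        return "I don't know. What's the answer?"
    return "<lookup answer in visible_kb>"             # normal QA path

question = "Which movie did Tom Hanks star in?"
print(respond(question, "Tom Hanks"))   # -> asks the teacher for the answer
memory[question] = "Forrest Gump"       # the teacher replies; bot memorizes it
print(respond(question, "Tom Hanks"))   # -> "Forrest Gump" when re-asked
```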
Though the correct answer has effectively been included in the earlier part of the dialogue, as the answer to the bot's question, many of the tasks are not as trivial as they look when the teacher re-asks the question, as we will show later. This is because the bot's model needs to memorize the missing entity and then construct the links between the missing entity and known ones. This is akin to the real-world case where a student might make the same mistake again and again, even though the teacher corrects them each time their answer is wrong. We now detail each task in turn.
4 Many reasons could lead to the bot not understanding the teacher's question, e.g., the teacher's question could have an unknown phrase structure rather than unknown words. We choose to use spelling mistakes because of the ease of dataset construction.
5 In the datasets we build, there are 6 templates for positive feedback and 6 templates for negative feedback.
Task 1: Question Paraphrase (QA)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movvie did Tom Hanks sttar in ?
S : Larry Crowne
T : Wrong. (-)

Task 1: Question Paraphrase (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movvie did Tom Hanks sttar in ?
S : What do you mean ?
T : I mean which film did Tom Hanks appear in.
T : Which movvie did Tom Hanks sttar in ?
S : Forrest Gump
T : That's correct. (+)

Task 2: Question Verification (QA)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movvie did Tom Hanks sttar in ?
S : Forrest Gump
T : That's correct. (+)
Task 2: Question Verification (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movvie did Tom Hanks sttar in ?
S : Do you mean which film did Tom Hanks appear in ?
T : That's correct. (+)
T : Which movvie did Tom Hanks sttar in ?
S : Larry Crowne
T : Sorry, that's not right. (-)
Figure 1: Sample dialogues for the two Question Clarification tasks (rows), using both the traditional QA setting (left column) and the AQ setting (right column). In each case the same example is given for simplicity. Black text prefixed by "kb:" denotes KB knowledge that the student has access to. Blue text is spoken by the teacher, denoted by T. (+) denotes a reward of 1 (and 0 otherwise) that the teacher assigns to the bot. Red text denotes responses or questions posed by the bot, denoted by S. Brown denotes typos deliberately introduced by the authors. For the Question Verification setting, the student can either ask a correct (pertinent) question (as in this example) or an incorrect (irrelevant) one. The teacher will give positive or negative feedback based on the correctness of the student's question. In our offline supervised learning experiments, the probability of asking a pertinent question, and of correctly answering the original question from the teacher, is set to 0.5. Finally, T/S denotes 5 pairs of questions and answers that are irrelevant to the rest of the conversation.
Task 3: Ask For Relevant Knowledge (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Can you give me a hint ?
T : It is related to the kb fact that Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)

Task 4: Knowledge Verification (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Does it have something to do with Larry Crowne directed by Tom Hanks ?
T : No. That's wrong. (-)
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)
Figure 2: Sample dialogues for Knowledge Operation tasks.
Missing Question Entity: The entity that the teacher is asking about is missing from the knowledge base. All KB facts containing the question entity will be hidden from the bot. In the example for Task 5 in Figure 3, since the teacher's question contains the entity Tom Hanks, the KB facts that contain Tom Hanks are hidden from the bot.
Figure 3: Different tasks for Knowledge Acquisition. Crossed-out lines correspond to KB entries that are hidden from the bot.
Task 5: Missing Question Entity (QA)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)
Task 5: Missing Question Entity (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T : Which movie did Tom Hanks star in ?
S : I don't know. What's the answer?
T : The answer is Forrest Gump.
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)
Task 6: Missing Answer Entity (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T : Which movie did Tom Hanks star in ?
S : I don't know. What's the answer?
T : The answer is Forrest Gump.
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)
Task 7: Missing Relation Entity (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T : Which movie did Tom Hanks star in ?
S : I don't know. What's the answer?
T : The answer is Forrest Gump.
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)
Task 8: Missing Triple (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T : Which movie did Tom Hanks star in ?
S : I don't know. What's the answer?
T : The answer is Forrest Gump.
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)
Task 9: Missing Everything (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T : Which movie did Tom Hanks star in ?
S : I don't know. What's the answer?
T : The answer is Forrest Gump.
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)
Missing Answer Entity: The answer entity to the question is unknown to the bot. All KB facts that contain the answer entity will be hidden. Hence, in Task 6 of Figure 3, all KB facts containing the answer entity Forrest Gump will be hidden from the bot.
Missing Relation Entity: The relation type is unknown to the bot. In Task 7 of Figure 3, all KB facts that express the relation starred actors are hidden from the bot.
Missing Triples: The triple that expresses the relation between the question entity and the answer entity is hidden from the bot. In Task 8 of Figure 3, the triple "Forrest Gump (answer entity) starred actors Tom Hanks (question entity)" will be hidden.
Missing Everything: The question entity, the relation entity, and the answer entity are all missing from the KB. All KB facts in Task 9 of Figure 3 will be removed, since they contain either the relation entity (i.e., starred actors), the question entity (i.e., Tom Hanks), or the answer entity (i.e., Forrest Gump).
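Putting the five variants together, the hiding rules can be summarized as in the sketch below, assuming KB facts are stored as (head, relation, tail) triples; the function name and representation are our own assumptions.

```python
def visible_facts(kb, task, q_ent, rel, a_ent):
    """Return the KB triples the bot may see in each Knowledge Acquisition
    task (a sketch; kb holds (head, relation, tail) triples)."""
    def keep(unwanted):
        return [t for t in kb if not unwanted(t)]
    if task == "missing_question_entity":
        return keep(lambda t: q_ent in (t[0], t[2]))
    if task == "missing_answer_entity":
        return keep(lambda t: a_ent in (t[0], t[2]))
    if task == "missing_relation_entity":
        return keep(lambda t: t[1] == rel)
    if task == "missing_triple":
        return keep(lambda t: t == (a_ent, rel, q_ent))
    if task == "missing_everything":
        return keep(lambda t: q_ent in (t[0], t[2])
                    or a_ent in (t[0], t[2]) or t[1] == rel)
    return list(kb)                        # nothing hidden otherwise

kb = [("Larry Crowne", "directed_by", "Tom Hanks"),
      ("Forrest Gump", "starred_actors", "Tom Hanks"),
      ("Forrest Gump", "starred_actors", "Sally Field"),
      ("Forrest Gump", "directed_by", "Robert Zemeckis")]
# Task 9 removes every fact above:
print(visible_facts(kb, "missing_everything",
                    "Tom Hanks", "starred_actors", "Forrest Gump"))  # -> []
```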
# 4 TRAIN/TEST REGIME
We now discuss in detail the regimes we used to train and test our models, which are divided between evaluation within our simulator and evaluation on real data collected via Mechanical Turk.
# 4.1 SIMULATOR
Using our simulator, our objective was twofold. We first wanted to validate the usefulness of asking questions in all the settings described in Section 3. Second, we wanted to assess the ability of our student bot to learn when to ask questions. In order to accomplish these two objectives, we explored training our models with our simulator using two methodologies, namely Offline Supervised Learning and Online Reinforcement Learning.
# 4.1.1 OFFLINE SUPERVISED LEARNING
The motivation behind training our student models in an offline supervised setting was primarily to test the usefulness of the ability to ask questions. The dialogues are generated as described in the previous section, and the bot's side is generated with a fixed policy. To add a degree of realism, we chose a policy where the bot's answers to the teacher's questions are correct 50% of the time and incorrect otherwise. Similarly, in tasks where questions can be irrelevant, questions are only asked correctly 50% of the time.6
The offline setting explores different combinations of training and testing scenarios, which mimic different situations in the real world. The aim is to understand when and how observing interactions between two agents can help the bot improve its performance on different tasks. As a result, we construct training and test sets in three ways across all tasks, resulting in 9 different scenarios per task, each of which corresponds to a real-world scenario.
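For instance, the scripted student side of an offline dialogue could be produced along the following lines; this is an illustrative sketch of the 50%-correct convention, not the released generator.

```python
import random
rng = random.Random(0)

def scripted_answer(correct, wrong_candidates, p_correct=0.5):
    """Answer correctly with probability p_correct, otherwise pick a wrong
    answer; returns the answer and the teacher's feedback (+1 / -1)."""
    if rng.random() < p_correct:
        return correct, +1
    return rng.choice(wrong_candidates), -1

print(scripted_answer("Forrest Gump", ["Larry Crowne"]))
```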
The three training sets we generated are referred to as TrainQA, TrainAQ, and TrainMix. TrainQA follows the QA setting discussed in the previous section: the bot never asks questions and only tries to answer immediately. TrainAQ follows the AQ setting: the student, before answering, first always asks a question in response to the teacher's original question. TrainMix is a combination of the two, where 50% of the time the student asks a question and 50% of the time it does not.
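The three training sets can thus be derived from the same underlying examples, as sketched below; render is a stand-in for the QA/AQ dialogue generation of Section 3, and the names here are our own.

```python
import random
rng = random.Random(0)

def render(example, setting):
    # stand-in for the QA/AQ dialogue rendering of Section 3
    return dict(example, setting=setting)

def build_train_set(examples, mode):
    """TrainQA: never ask; TrainAQ: always ask first; TrainMix: 50/50."""
    pick = {"TrainQA": lambda: "QA",
            "TrainAQ": lambda: "AQ",
            "TrainMix": lambda: rng.choice(["QA", "AQ"])}[mode]
    return [render(ex, pick()) for ex in examples]

examples = [{"q": "Which movie did Tom Hanks star in?", "a": "Forrest Gump"}]
print(build_train_set(examples, "TrainMix"))
```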
The three test sets we generated are referred to as TestQA, TestAQ, and TestModelAQ. TestQA and TestAQ are generated similarly to TrainQA and TrainAQ, but using a perfect fixed policy (rather than 50% correct) for evaluation purposes. In the TestModelAQ setting, the model also has to get the form of the question correct. In the Question Verification and Knowledge Verification tasks there are many possible ways of forming the question, and only some of them are correct; the model has to choose the right question to ask. E.g., it should ask "Does it have something to do with the fact that Larry Crowne directed by Tom Hanks?" rather than "Does it have something to do with the fact that Forrest Gump directed by Robert Zemeckis?" when the latter is irrelevant (the candidate list of questions is generated from the known knowledge base entries related to that question). The policy is trained using either the TrainAQ or TrainMix set, depending on the training scenario. The teacher will reply to the question, giving positive feedback if the student's question is correct, and no response plus negative feedback otherwise. The student will then give the final answer. The difference between TestAQ and TestModelAQ is thus that in the former the question posed is always correct, while in the latter it is generated by the trained policy and may be irrelevant.
To summarize, for every task listed in Section 3, we train one model for each of the three training sets (TrainQA, TrainAQ, TrainMix) and test each of these models on the three test sets (TestQA, TestAQ, TestModelAQ), resulting in 9 combinations. For the purpose of notation, a train/test combination is denoted by "TrainSetting+TestSetting". For example, TrainAQ+TestQA denotes a model trained on the TrainAQ dataset and tested on the TestQA dataset. Each combination has a real-world interpretation. For instance, TrainAQ+TestQA refers to a scenario where a student can ask the teacher questions during learning but cannot do so while taking an exam. Similarly, TrainQA+TestQA describes a stoic teacher that never answers a student's question at either learning or examination time. The setting TrainQA+TestAQ corresponds to the case where a lazy student never asks questions at learning time but gets anxious during the examination and always asks a question.
6 This only makes sense in tasks like Question or Knowledge Verification. In tasks where the question is static, such as "What do you mean?", there is no way to ask an irrelevant question, and we do not use this policy.
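To make the nine scenarios above concrete, they can be enumerated directly (names as used in the text):

```python
train_sets = ["TrainQA", "TrainAQ", "TrainMix"]
test_sets = ["TestQA", "TestAQ", "TestModelAQ"]

# One scenario per train/test pair, named "TrainSetting+TestSetting".
scenarios = [tr + "+" + te for tr in train_sets for te in test_sets]
print(len(scenarios), scenarios)  # 9 combinations per task
```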
# 4.1.2 ONLINE REINFORCEMENT LEARNING (RL)
We also explored scenarios where the student learns the ability to decide when to ask a question. In other words, the student learns how to learn.
Although it is in the interest of the student to ask questions at every step of the conversation, since the response to its question will contain extra information, we don't want our model to learn this behavior. Each time a human student asks a question, there's a cost associated with that action. This cost is a reflection of the patience of the teacher, or more generally of the users interacting with the bot in the wild: users won't find the bot engaging if it always asks clarification questions. The student should thus be judicious about asking questions and learn when and what to ask. For instance, if the student is confident about the answer, there is no need for it to ask. Or, if the teacher's question is so hard that clarification is unlikely to help enough to get the answer right, then it should also refrain from asking.
We now discuss how we model this problem in the Reinforcement Learning framework. The bot is presented with KB facts (some facts might be missing, depending on the task) and a question. At this point it needs to decide whether or not to ask a question. The decision whether to ask is made by a binary policy PRLQuestion. If the student chooses to ask a question, it is penalized by costAQ. We explored different values of costAQ in the range [0, 2], which we consider as modeling the patience of the teacher. The goal of this setting is to find the best policy for asking/not-asking questions that leads to the highest cumulative reward. The teacher will appropriately reply if the student asks a question. At the end, the student will give an answer to the teacher's initial question using the policy PRLAnswer, regardless of whether it had asked a question. The student gets a reward of +1 if its final answer is correct and -1 otherwise. Note that the student can ask at most one question and that the type of question is always specified by the task under consideration. The final reward the student gets is the cumulative reward over the current dialogue episode. In particular, the reward structure we propose is the following:
                      Final Answer Correct   Final Answer Incorrect
Asking Question       1 - costAQ             -1 - costAQ
Not asking Question   1                      -1

Table 1: Reward structure for the Reinforcement Learning setting.
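Table 1 corresponds to the following per-episode reward computation, with cost_aq standing for costAQ (a minimal sketch):

```python
def episode_reward(answer_correct, asked_question, cost_aq):
    """Cumulative reward of one episode under Table 1: +1/-1 for the final
    answer, minus cost_aq whenever the student chose to ask a question."""
    reward = 1.0 if answer_correct else -1.0
    if asked_question:
        reward -= cost_aq
    return reward

assert episode_reward(True, True, 0.5) == 0.5     # correct after asking
assert episode_reward(False, False, 0.5) == -1.0  # wrong without asking
```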
For each of the tasks described in Section 3, we consider three different RL scenarios.

Good-Student: The student will be presented with all relevant KB facts. There are no misspellings or unknown words in the teacher's question. This represents a knowledgeable student in the real world that knows as much as it needs to know (e.g., a large knowledge base, a large vocabulary). This setting is identical across all missing-entity tasks (5-9).

Poor-Student: The KB facts or the questions presented to the student are flawed, depending on the task. For example, for the Question Clarification tasks, the student does not understand the question due to spelling mistakes. For the Missing Question Entity task, the entity that the teacher asks about is unknown to the student, and all facts containing the entity are hidden from the student. This setting is similar to a student that is underprepared for the tasks.

Medium-Student: The combination of the previous two settings, where for 50% of the questions the student has access to the full KB and there are no new words, phrases or entities in the question, and 50% of the time the question and KB are taken from the Poor-Student setting.
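The three settings could be wired up as in the sketch below, where corrupt stands for the task-specific flaw (typos in the question, hidden KB facts, etc.) and is an assumption of ours:

```python
import random
rng = random.Random(0)

def present(example, student, corrupt):
    """Good: intact input; Poor: always flawed; Medium: flawed half the time."""
    if student == "good":
        return example
    if student == "poor":
        return corrupt(example)
    return example if rng.random() < 0.5 else corrupt(example)  # medium

drop_kb = lambda ex: dict(ex, kb=[])   # toy corruption: hide all KB facts
print(present({"q": "Which movie did Tom Hanks star in?", "kb": ["..."]},
              "medium", drop_kb))
```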
# 4.2 MECHANICAL TURK DATA
Finally, to validate our approach beyond our simulator using real language, we collected data via Amazon Mechanical Turk. Due to the cost of data collection, we focused on real-language versions of Task 4 (Knowledge Verification) and Task 8 (Missing Triple); see Sections 3.2 and 3.3 for the simulator versions. That is, we collect dialogues and use them in an offline supervised learning setup similar to Section 4.1.1. This setup allows easily reproducible experiments.
For Mechanical Turk Task 4, the bot is asked a question by a human teacher, but before answering it can ask the human whether the question is related to one of the facts it knows about from its memory.
[Figure 4 depicts a dialogue for this setting: the teacher asks "Which mowvie did Tom Hanks sttar in?"; with AQ, the student asks "What do you mean?", the teacher rephrases ("I mean which film did Tom Hanks appear in."), and the student answers "Larry Crowne" correctly, receiving reward 1-costAQ; without asking, the student answers "Forest Gump." incorrectly, receiving reward -1.]
Figure 4: An illustration of the poor-student setting for RL Task 1 (Question Paraphrase).
It is then required to answer the original question, after some additional dialog turns relating to other question/answer pairs (called "conversational history", as before). For Task 8, the bot is asked a question by a human but lacks the triple in its memory that would be needed to answer it. It is allowed to ask for the missing information, and the human responds to the question in free-form language. The bot is then required to answer the original question, again after some "conversational history" has transpired.
We collect around 10,000 episodes (dialogues) for training, 1000 for validation, and 2500 for testing for each of the two tasks. In each case, we give instructions to the Turkers that still follow the original form of the task, but make the tasks contain realistic language written by humans. The instructions shown to the Turkers are given in the appendix.
For both tasks, while the human Turkers replace the simulator that the bot was previously conversing with, the bot's dialogue actions (capabilities) are essentially unchanged from before. That is, when answering questions, the bot is now required to answer a human's questions rather than templated questions from the simulator. When asking questions, the bot still asks in the same form as before, e.g., questions like "Does it have something to do with X" for Task 4 or "I don't know. What's the answer?" for Task 8. However, its questions are now answered by a human. In both cases (asking and answering) the human data is richer, with potentially more complex language and lexical variability. Examples of the collected dialogues are given in Figure 5.
MTurk Task 4: Knowledge Verification (AQ):
kb: In Their Skin written by Joshua Close, Jeremy Power Regimbal
kb: In Their Skin starred actors Selma Blair, James D'arcy, Rachel Miner, Joshua Close
T: What film was written by Joshua Close?
S: Does it have something to do with "In Their Skin starred actors Joshua Close"?
T: No, we were looking for who wrote the film, not who starred in it.
...
MTurk Task 8: Missing Triple (AQ):
T: What movies does Timothy Scott Bogart have writing credit for?
S: I don't know. What's the answer?
T: Extreme Ops or The I Inside is the right answer.
...
Figure 5: Sample dialogues for the Mechanical Turk versions of Tasks 4 and 8. Compared to the original tasks (see Figs. 2 and 3), the teacher's questions and the teacher's responses to the student's questions are written by humans, and are more complex and contain more variety.
# 5 MODELS
For both offline supervised and online RL settings, we use the End-to-End Memory Network model (MemN2N) (Sukhbaatar et al., 2015) as a backbone. The model takes as input the last utterance of the dialogue history (the question from the teacher) as well as a set of memory contexts including short-term memories (the dialogue history between the bot and the teacher) and long-term memories
(the knowledge base facts that the bot has access to), and outputs a label. We refer readers to the Appendix for more details about MemN2N.
Offline Supervised Settings: The first learning strategy we adopt is the reward-based imitation strategy (denoted vanilla-MemN2N) described in (Weston, 2016), where at training time the model maximizes the log-likelihood of the correct answers the student gave (examples with incorrect final answers are discarded). Candidate answers are words that appear in the memories, which means the bot can only predict entities that it has seen or known before.
We also use a variation of MemN2N called "context MemN2N" (Cont-MemN2N for short), where we replace each word's embedding with the average of its embedding (random for unseen words) and the embeddings of the other words that appear around it. We use both the preceding and following words as context, and the number of context words is a hyperparameter selected on the dev set.
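As a concrete illustration, here is a minimal sketch of this context averaging (our own; the `emb` lookup and the window size are hypothetical stand-ins for the learned embedding matrix and the tuned hyperparameter):

```python
import numpy as np

def context_embedding(words, i, emb, window=2):
    """Cont-MemN2N-style representation of words[i]: the average of its own
    embedding and the embeddings of up to `window` words on each side.
    `emb` maps a word to a d-dimensional vector (random for unseen words)."""
    lo, hi = max(0, i - window), min(len(words), i + window + 1)
    vectors = [emb(w) for w in words[lo:hi]]  # includes words[i] itself
    return np.mean(vectors, axis=0)
```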
An issue with both vanilla-MemN2N and Cont-MemN2N is that the model only makes use of the bot's answers as signals and ignores the teacher's feedback. We thus propose to use a model that jointly predicts the bot's answers and the teacher's feedback (denoted as TrainQA (+FP)). The bot's answers are predicted using a vanilla-MemN2N and the teacher's feedback is predicted using the Forward Prediction (FP) model as described in (Weston, 2016). We refer the readers to the Appendix for the FP model details. At training time, the models learn to jointly predict the teacher's feedback and the answers with positive reward. At test time, the model will only predict the bot's answer.
For the TestModelAQ setting described in Section 4, the model needs to decide which question to ask. Again, we use a vanilla-MemN2N that takes as input the question and contexts, and outputs the question the bot will ask.

Online RL Settings: A binary vanilla-MemN2N (denoted as PRL(Question)) is used to decide whether the bot should or should not ask a question, with the teacher replying if the bot does ask something. A second MemN2N is then used to decide the bot's answer, denoted as PRL(Answer). PRL(Answer) for QA and AQ are two separate models, which means the bot will use different models for final-answer prediction depending on whether it chooses to ask a question or not.7

We use the REINFORCE algorithm (Williams, 1992) to update PRL(Question) and PRL(Answer). For each dialogue, the bot takes two sequential actions (a1, a2): asking or not asking a question (denoted as a1), and guessing the final answer (denoted as a2). Let r(a1, a2) denote the cumulative reward for the dialogue episode, computed using Table 1. The gradient used to update the policy is given by:
p(a1, a2) = PRL(Question)(a1) · PRL(Answer)(a2)
∇J(θ) ≈ ∇ log p(a1, a2) [r(a1, a2) − b]   (1)

where b is the baseline value, which is estimated using another MemN2N model that takes as input the query x and memory C, and outputs a scalar b denoting the estimate of the future reward. The baseline model is trained by minimizing the mean squared loss between the estimated reward b and the actual cumulative reward r, ||r − b||^2. We refer the readers to (Ranzato et al., 2015; Zaremba & Sutskever, 2015) for more details. The baseline estimator model is independent from the policy models and its error is not backpropagated back to them.
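The update in Equation 1 can be sketched schematically as follows (our own PyTorch-style snippet, not the authors' code; the log-probabilities and the baseline prediction are assumed to come from the MemN2N policies and the separate baseline estimator described above):

```python
import torch

def reinforce_step(logp_ask, logp_answer, reward, baseline,
                   policy_optimizer, baseline_optimizer):
    """One REINFORCE update for the two sequential actions (ask?, answer).

    logp_ask, logp_answer: log-probabilities of the sampled actions under
    PRL(Question) and PRL(Answer); reward: scalar r(a1, a2) from Table 1;
    baseline: the scalar b predicted by the baseline MemN2N."""
    # Policy gradient: -(log p(a1, a2)) * (r - b); detach b so the baseline's
    # error is not backpropagated into the policy networks.
    advantage = reward - baseline.detach()
    policy_loss = -(logp_ask + logp_answer) * advantage
    policy_optimizer.zero_grad()
    policy_loss.backward()
    policy_optimizer.step()

    # Train the baseline estimator to regress the actual cumulative reward.
    baseline_loss = (baseline - reward) ** 2
    baseline_optimizer.zero_grad()
    baseline_loss.backward()
    baseline_optimizer.step()
```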
In practice, we find the following training strategy yields better results: first train only PRL(Answer), updating gradients only for the policy that predicts the final answer. After the bot's final-answer policy is sufficiently learned, train both policies in parallel.8 This has a real-world analogy where the bot first learns the basics of the task, and then learns to improve its performance via a question-asking policy tailored to the user's patience (represented by costAQ) and its own ability to answer questions.

7An alternative is to train one single model for final-answer prediction in both the AQ and QA cases, similar to the TrainMix setting in the supervised learning setting. But we find that training AQ and QA separately for final-answer prediction yields slightly better results than the single-model setting.
8We implement this by running 16 epochs in total, updating only the model's policy for final answers in the first 8 epochs while updating both policies during the second 8 epochs. We pick the model that achieves the best reward on the dev set during the final 8 epochs. Due to the relatively large variance of RL models, we repeat each task 5 times and keep the best model on each task.
Question Clarification (Task 1: Q. Paraphrase, Task 2: Q. Verification) and Knowledge Operation (Task 3: Ask For Relevant K., Task 4: K. Verification):

| Train \ Test | Task 1 TestQA | Task 1 TestAQ | Task 2 TestQA | Task 2 TestAQ | Task 3 TestQA | Task 3 TestAQ | Task 4 TestQA | Task 4 TestAQ |
|---|---|---|---|---|---|---|---|---|
| TrainQA (Context) | 0.754 | 0.726 | 0.742 | 0.684 | 0.883 | 0.947 | 0.888 | 0.959 |
| TrainAQ (Context) | 0.640 | 0.889 | 0.643 | 0.807 | 0.716 | 0.985 | 0.852 | 0.987 |
| TrainMix (Context) | 0.751 | 0.846 | 0.740 | 0.789 | 0.870 | 0.985 | 0.875 | 0.985 |

Knowledge Acquisition (Task 5: Q. Entity, Task 6: Answer Entity, Task 7: Relation Entity, Task 9: Everything):

| Train \ Test | Task 5 TestQA | Task 5 TestAQ | Task 6 TestQA | Task 6 TestAQ | Task 7 TestQA | Task 7 TestAQ | Task 9 TestQA | Task 9 TestAQ |
|---|---|---|---|---|---|---|---|---|
| TrainQA (Context) | <0.01 | 0.224 | <0.01 | 0.120 | 0.241 | 0.301 | <0.01 | 0.058 |
| TrainAQ (Context) | <0.01 | 0.639 | <0.01 | 0.885 | 0.143 | 0.893 | <0.01 | 0.908 |
| TrainMix (Context) | <0.01 | 0.632 | <0.01 | 0.852 | 0.216 | 0.898 | <0.01 | 0.903 |

Table 2: Results for Cont-MemN2N on different tasks.
6 EXPERIMENTS
6.1 SIMULATOR
Offline Results: Offline results are presented in Tables 2, 7 and 8 (the latter two are in the appendix). Table 7 presents results for the vanilla-MemN2N and Forward Prediction models. Table 2 presents results for Cont-MemN2N, which is better at handling unknown words. We repeat each experiment 10 times and report the best result. Finally, Table 8 presents results for the test scenario where the bot itself chooses when to ask questions. Our observations can be summarized as follows:
Asking questions helps at test time, which is intuitive since it provides additional evidence:
• TrainAQ+TestAQ (questions can be asked at both training and test time) performs the best across all the settings.
• TrainQA+TestAQ (questions can be asked at training time but not at test time) performs worse than TrainQA+TestQA (questions can be asked at neither training nor test time) in the Question Clarification and Knowledge Operation tasks, due to the discrepancy between training and testing.
• TrainQA+TestAQ performs better than TrainQA+TestQA on all Knowledge Acquisition tasks, the only exception being the Cont-MemN2N model in the Missing Triple setting. The explanation is that for most tasks in Knowledge Acquisition, the learner has no chance of giving the correct answer without asking questions. The benefit from asking is thus large enough to compensate for the negative effect introduced by the data discrepancy between training and test time.
• TrainMix offers flexibility in bridging the gap between datasets generated using QA and AQ, very slightly underperforming TrainAQ+TestAQ but giving competitive results on both TestQA and TestAQ in the Question Clarification and Knowledge Operation tasks.
• TrainAQ+TestQA (allowing questions at training time but forbidding them at test time) performs the worst, even worse than TrainQA+TestQA. This has a real-world analogy where a student becomes dependent on the teacher answering their questions, later struggling to answer the test questions without help.
• In the Missing Question Entity task (the student does not know about the question entity), the Missing Answer Entity task (the student does not know about the answer entity), and the Missing Everything task, the bot achieves accuracy less than 0.01 if it does not ask questions at test time (i.e., TestQA).
• TestModelAQ, where the bot relies on its own model to choose which questions to ask at test time (and thus can ask irrelevant questions), performs similarly to asking the correct question at test time (TestAQ) and better than not asking questions (TestQA).
• Cont-MemN2N significantly outperforms vanilla-MemN2N. One explanation is that considering context provides significant evidence for distinguishing correct answers from candidates in the dialogue history, especially in cases where the model encounters unfamiliar words.

RL Results: For the RL settings, we present results for Task 2 (Question Verification) and Task 6 (Missing Answer Entities) in Figure 6.
[Figure 6 contains four plots: question-asking rate vs. question cost and final accuracy vs. question cost, for Task 2 (Question Verification) and Task 6 (Missing Answer Entity), each shown for good, medium, and poor students.]
Figure 6: Results of online learning for Task 2 and Task 6.
Task 2 represents scenarios where different types of students have different abilities to correctly answer questions (e.g., a poor student can still sometimes give correct answers even when they do not fully understand the question). Task 6 represents tasks where a poor learner who lacks the necessary knowledge can hardly give a correct answer. All types of students, including the good student, will theoretically benefit from asking questions (asking for the correct answer) in Task 6. We show the percentage of question-asking versus the cost of AQ on the test set, and the accuracy of question-answering on the test set versus the cost of AQ. Our main findings were:
• A good student does not need to ask questions in Task 2 (Question Verification), because they already understand the question. The student will raise questions asking for the correct answer when the cost is low in Task 6 (Missing Answer Entities).
• A poor student always asks questions when the cost is low. As the cost increases, the frequency of question-asking declines.
• As the AQ cost increases gradually, good students stop asking questions earlier than medium and poor students. The explanation is intuitive: poor students benefit more from asking questions than good students, so they continue asking even with higher penalties.
• As the probability of question-asking declines, the accuracy for poor and medium students
drops. Good students are more resilient to not asking questions.
6.2 MECHANICAL TURK
Results for the Mechanical Turk tasks are given in Table 3. We again compare vanilla-MemN2N and Cont-MemN2N, using the same TrainAQ/TrainQA and TestAQ/TestQA combinations as before, for Tasks 4 and 8 as described in Section 4.2. We tune hyperparameters on the validation set, repeat each experiment 10 times, and report the best result.
While performance is lower than on the related Task 4 and Task 8 simulator tasks, we still arrive at the same trends and conclusions when real data from humans is used. The performance was expected to be lower because (i) real data has more lexical variety, complexity and noise; and (ii) the training set was smaller due to data collection costs (10k vs. 180k). We perform an analysis of the difference between simulated and real training data (or combining the two) in the appendix, which shows that using real data is indeed important and measurably superior to using simulated data.
vanilla-MemN2N (Task 4: K. Verification, Task 8: Missing Triple):

| Train \ Test | Task 4 TestQA | Task 4 TestAQ | Task 8 TestQA | Task 8 TestAQ |
|---|---|---|---|---|
| TrainQA | 0.331 | 0.313 | 0.133 | 0.162 |
| TrainAQ | 0.318 | 0.375 | 0.072 | 0.422 |

Cont-MemN2N (Task 4: K. Verification, Task 8: Missing Triple):

| Train \ Test | Task 4 TestQA | Task 4 TestAQ | Task 8 TestQA | Task 8 TestAQ |
|---|---|---|---|---|
| TrainQA | 0.712 | 0.703 | 0.308 | 0.234 |
| TrainAQ | 0.679 | 0.774 | 0.137 | 0.797 |

Table 3: Mechanical Turk task results. Asking Questions (AQ) outperforms only answering questions without asking (QA).
More importantly, we observe the same main conclusion as before: TrainAQ+TestAQ (questions can be asked at both training and test time) performs the best across all the settings. That is, we show that a bot that asks questions of humans learns to outperform one that only answers them.
# 7 CONCLUSIONS
In this paper, we explored how an intelligent agent can benefit from interacting with users by asking questions. We developed tasks where interaction via asking questions is desired. We explored both online and offline settings that mimic different real-world situations, and showed that in most cases, teaching a bot to interact with humans facilitates language understanding and consequently leads to better question answering ability.
# REFERENCES
Mohammad Amin Bassiri. Interactional feedback and the impact of attitude and motivation on noticing L2 form. English Language and Literature Studies, 1(2):61, 2011.
Antoine Bordes and Jason Weston. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683, 2016.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015.
Richard Higgins, Peter Hartley, and Alan Skelton. The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27(1):53–64, 2002.
Andrew S Latham. Learning through feedback. Educational Leadership, 54(8):86–87, 1997.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055, 2015.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714, 2015.
Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689, 2016.
Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440–2448, 2015.
Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
Sida I Wang, Percy Liang, and Christopher D Manning. Learning language games through interaction. arXiv preprint arXiv:1606.02447, 2016.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562, 2016.
Margaret G Werts, Mark Wolery, Ariane Holcombe, and David L Gast. Instructive feedback: Review of parameters and effects. Journal of Behavioral Education, 5(1):55–75, 1995.
Jason Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
Terry Winograd. Understanding natural language. Cognitive Psychology, 3(1):1–191, 1972.
Ludwig Wittgenstein. Philosophical investigations. John Wiley & Sons, 2010.
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.
# Appendix
End-to-End Memory Networks: The input to an End-to-End Memory Network model (MemN2N) is the last utterance of the dialogue history x as well as a set of memories (context) C = (c1, c2, ..., cN). The memory C encodes both short-term memory, e.g., the dialogue history between the bot and the teacher, and long-term memories, e.g., the knowledge-base facts that the bot has access to. Given the input x and C, the goal is to produce an output/label a.
In the first step, the query x is transformed to a vector representation u0 by summing up its constituent word embeddings: u0 = Ax. The input x is a bag-of-words vector and A is the d × V word embedding matrix, where d denotes the vector dimensionality and V denotes the vocabulary size. Each memory ci is similarly transformed to a vector mi. The model reads information from the memory by linking the input representation with the memory vectors mi using softmax weights:

o1 = Σi pi mi,   pi = softmax(u0^T mi)   (2)

The goal is to select memories relevant to the last utterance x, i.e., the memories with large values of pi. The queried memory vector o1 is the weighted sum of the memory vectors, and is added on top of the original input: u1 = o1 + u0. u1 is then used to query the memory again. This process is repeated by querying the memory N times (so-called "hops"). N is set to three in all experiments in this paper.
In the end, uN is input to a softmax function for the final prediction:

a = softmax(uN^T y1, uN^T y2, ..., uN^T yL)

where L denotes the number of candidate answers and y denotes the representation of the answer. If the answer is a word, y is the corresponding word embedding. If the answer is a sentence, y is the embedding for the sentence, obtained in the same way as the embeddings for the query x and memory c.
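Putting the pieces together, a minimal forward pass can be sketched as follows (our own simplification: it ties the input and output memory embeddings into a single matrix A, whereas the full model uses separate embedding matrices):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def memn2n_forward(x_bow, memories_bow, A, Y, hops=3):
    """MemN2N forward pass.
    x_bow: V-dim bag-of-words query; memories_bow: N x V bag-of-words memories;
    A: d x V embedding matrix; Y: L x d candidate-answer representations."""
    u = A @ x_bow                  # u0 = Ax
    m = memories_bow @ A.T         # memory vectors m_i
    for _ in range(hops):          # N hops (N = 3 in the paper)
        p = softmax(m @ u)         # p_i = softmax(u^T m_i)
        o = p @ m                  # o = sum_i p_i m_i
        u = u + o                  # u_{k+1} = o_k + u_k
    return softmax(Y @ u)          # distribution over the L candidate answers
```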
Reward-Based Imitation (RBI) and Forward Prediction (FP): RBI and FP are two dialogue learning strategies proposed in (Weston, 2016) that harness different types of dialogue signals. RBI handles the case where the reward, i.e., the correctness of a bot's answer, is explicitly given (for example, +1 if the bot's answer is correct and 0 otherwise). The model is directly trained to predict the correct answers (those with label 1) at training time, which can be done using End-to-End Memory Networks (MemN2N) (Sukhbaatar et al., 2015) that map a dialogue input to a prediction.
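In effect, RBI reduces to supervised learning on a filtered dataset. A minimal sketch of that filtering step (our own illustration, assuming each episode is stored as a dict with `context`, `question`, `answer`, and `reward` fields) could look like:

```python
def rbi_training_examples(episodes):
    """Reward-Based Imitation: keep only episodes whose final answer received
    positive reward, and imitate those answers with standard supervised training."""
    return [(ep["context"], ep["question"], ep["answer"])
            for ep in episodes if ep["reward"] > 0]
```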