Dataset schema: id (string, 12–15 chars) · title (string, 8–162 chars) · content (string, 1–17.6k chars) · prechunk_id (string, 0–15 chars) · postchunk_id (string, 0–15 chars) · arxiv_id (string, 10 chars) · references (sequence, length 1)
1506.08909#22
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
by feeding the word embeddings one at a time into its respective RNN. Word embeddings are initialized using the pre-trained vectors (Common Crawl, 840B tokens from [19]), and fine-tuned during training. The hidden state of the RNN is updated at each step, and the final hidden state represents a summary of the input utterance. Using the final hidden states from both RNNs, we then calculate the probability that this is a valid pair: p(flag = 1 | c, r, M) = σ(c^T M r + b), where the bias b and the matrix M ∈ R^{d×d} are learned model parameters. This can be thought of as a generative approach; given some input response, we generate a context with the product c' = Mr, and measure the similarity to the actual context using the dot product. This is converted to a probability with the sigmoid function. The model is trained by minimizing the cross entropy of all labeled (context, response) pairs [33]:
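To make the scoring function concrete, here is a minimal PyTorch sketch of a dual-encoder scorer of this form. It is an illustration under assumptions (a single shared RNN encoder and illustrative sizes), not the authors' released implementation; in practice the embedding layer would be initialized from the pre-trained GloVe vectors and fine-tuned as described above.

```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    """Minimal sketch of the dual-encoder scorer described above (not the paper's code).
    Hyper-parameters (vocab_size, embed_dim, hidden_size) are illustrative assumptions."""
    def __init__(self, vocab_size=10000, embed_dim=300, hidden_size=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # would be loaded from GloVe
        self.rnn = nn.RNN(embed_dim, hidden_size, batch_first=True)
        self.M = nn.Parameter(torch.eye(hidden_size))     # learned d x d matrix
        self.b = nn.Parameter(torch.zeros(1))             # learned bias

    def forward(self, context_ids, response_ids):
        # Final hidden states summarize the context c and the response r.
        _, c = self.rnn(self.embed(context_ids))          # (1, batch, d)
        _, r = self.rnn(self.embed(response_ids))
        c, r = c.squeeze(0), r.squeeze(0)
        # p(flag = 1 | c, r, M) = sigmoid(c^T M r + b)
        score = (c @ self.M * r).sum(dim=1) + self.b
        return torch.sigmoid(score)
```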
1506.08909#21
1506.08909#23
1506.08909
[ "1503.02364" ]
1506.08909#23
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
L = -Σ_n log p(flag_n | c_n, r_n, M) + (λ/2) ||θ||²_F, where ||θ||²_F is the Frobenius norm of θ = {M, b}. In our experiments, we use λ = 0 for computational simplicity. For training, we used a 1:1 ratio between true responses (flag = 1) and negative responses (flag = 0) drawn randomly from elsewhere in the training set. The RNN architecture is set to 1 hidden layer with 50 neurons. The Wh matrix is initialized using orthogonal weights [23], while Wx is initialized using a uniform distribution with values between -0.01 and 0.01. We use Adam as our optimizer [15], with gradients clipped to 10. We found that weight initialization as well as the choice of optimizer were critical for training the RNNs. # 4.3 LSTM In addition to the RNN model, we consider the same architecture but change the hidden units to long short-term memory (LSTM) units [12]. LSTMs were introduced in order to model longer-term dependencies. This is accomplished using a series of gates that determine whether a new input should be remembered, forgotten (and the old value retained), or used as output. The error signal can now be fed back indefinitely into the gates of the LSTM unit. This helps overcome the vanishing and exploding gradient problems in standard RNNs, where the error gradients would otherwise decrease or increase at an exponential rate. In training, we used 1 hidden layer with 200 neurons. The hyper-parameter configuration (including number of neurons) was optimized independently for RNNs and LSTMs using a validation set extracted from the training data. # 5 Empirical Results The results for the TF-IDF, RNN, and LSTM models are shown in Table 4. The models were evaluated using both 1 (1 in 2) and 9 (1 in 10) false examples. Of course, the Recall@2 and Recall@5 are not relevant in the binary classification case (footnote 9).
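A minimal sketch of the corresponding training step is shown below, assuming the DualEncoder sketch above, the 1:1 positive/negative sampling scheme, and norm-based gradient clipping (the text does not specify whether clipping is by norm or by value, so this is an assumption).

```python
import torch
from torch.optim import Adam

# model = DualEncoder(...); ctx, pos_resp, neg_resp are LongTensor batches of token ids.
def train_step(model, ctx, pos_resp, neg_resp, optimizer, clip=10.0):
    """One update with a 1:1 ratio of true (flag=1) and random negative (flag=0) responses."""
    p_pos = model(ctx, pos_resp)
    p_neg = model(ctx, neg_resp)
    probs = torch.cat([p_pos, p_neg])
    labels = torch.cat([torch.ones_like(p_pos), torch.zeros_like(p_neg)])
    loss = torch.nn.functional.binary_cross_entropy(probs, labels)  # cross entropy over pairs
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip)  # "gradients clipped to 10"
    optimizer.step()
    return loss.item()
```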
1506.08909#22
1506.08909#24
1506.08909
[ "1503.02364" ]
1506.08909#24
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Method | 1 in 2 R@1 | 1 in 10 R@1 | 1 in 10 R@2 | 1 in 10 R@5
TF-IDF | 65.9% | 41.0% | 54.5% | 70.8%
RNN | 76.8% | 40.3% | 54.7% | 81.9%
LSTM | 87.8% | 60.4% | 74.5% | 92.6%
Table 4: Results for the three algorithms using various recall measures for binary (1 in 2) and 1 in 10 next utterance classification (%). We observe that the LSTM outperforms both the RNN and TF-IDF on all evaluation metrics. It is interesting to note that TF-IDF actually outperforms the RNN on the Recall@1 case for the 1 in 10 classifi
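The Recall@k evaluation used in Table 4 can be written as a short routine. This is a hedged sketch with a hypothetical model_score interface, not the evaluation script used in the paper.

```python
import random

def recall_at_k(model_score, context, true_response, candidate_pool, k, n_false):
    """Next-utterance classification: rank the true response among n_false random
    distractors and count a hit if it lands in the top k. model_score is any callable
    returning a scalar relevance score for (context, response)."""
    distractors = random.sample(candidate_pool, n_false)
    candidates = [true_response] + distractors
    ranked = sorted(candidates, key=lambda r: model_score(context, r), reverse=True)
    return true_response in ranked[:k]

# 1-in-2 R@1 -> n_false=1, k=1;  1-in-10 R@1/R@2/R@5 -> n_false=9, k in {1, 2, 5}
```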
1506.08909#23
1506.08909#25
1506.08909
[ "1503.02364" ]
1506.08909#25
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
cation. This is most likely due to the limited ability of the RNN to take into account long contexts, which can be overcome by using the LSTM. An example output of the LSTM where the response is correctly classified is shown in Table 5. We also show, in Figure 3, the increase in performance of the LSTM as the amount of data used for training increases. This confirms the importance of having a large training set. Context: "any apache hax around ? i just deleted all of __path__ - which package provides it ?", "reconfiguring apache do n't solve it ?" Ranked Responses: 1. "does n't seem to, no" (Flag 1); 2. "you can log in but not transfer files ?" (Flag 0)
1506.08909#24
1506.08909#26
1506.08909
[ "1503.02364" ]
1506.08909#26
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Table 5: Example showing the ranked responses from the LSTM. Each utterance is shown after pre-processing steps. # 6 Discussion This paper presents the Ubuntu Dialogue Corpus, a large dataset for research in unstructured multi-turn dialogue systems. We describe the construction of the dataset and its properties. The availability of a dataset of this size opens up several interesting possibilities for research into dialogue systems based on rich neural-network architectures. We present preliminary results demonstrating use of this dataset to train an RNN and an LSTM for the task of selecting the next best response in a (Footnote 9: Note that these results are on the original dataset. Results on the new dataset should not be compared to the old dataset; baselines on the new dataset will be released shortly.) Figure 3: The LSTM (with 200 hidden units), showing Recall@1 for the 1 in 10 classification, with increasing dataset sizes (the plotted Recall@1 rises from roughly 0.35 to 0.65 as the number of training dialogues grows from 0 to 120,000).
1506.08909#25
1506.08909#27
1506.08909
[ "1503.02364" ]
1506.08909#27
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
conversation; we obtain significantly better results with the LSTM architecture. There are several interesting directions for future work. # 6.1 Conversation Disentanglement Our approach to conversation disentanglement consists of a small set of rules. More sophisticated techniques have been proposed, such as training a maximum-entropy classifier to cluster utterances into separate dialogues [6]. However, since we are not trying to replicate the exact conversation between two users, but only to retrieve plausible natural dialogues, the heuristic method presented in this paper may be suffi
1506.08909#26
1506.08909#28
1506.08909
[ "1503.02364" ]
1506.08909#28
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
cient. This seems supported through qualitative examination of the data, but could be the subject of more formal evaluation. # 6.2 Altering Test Set Difficulty One of the interesting properties of the response selection task is the ability to alter the task difficulty in a controlled manner. We demonstrated this by moving from 1 to 9 false responses, and by varying the Recall@k parameter. In the future, instead of choosing false responses randomly, we will consider selecting false responses that are similar to the actual response (e.g. as measured by cosine similarity). A dialogue model that performs well on this more difficult task should also manage to capture a more fine-grained semantic meaning of sentences, as compared to a model that naively picks replies with the most words in common with the context, such as TF-IDF. # 6.3 State Tracking and Utterance Generation The work described here focuses on the task of response selection. This can be seen as an intermediate step between slot filling and utterance generation. In slot filling, the set of candidate outputs (states) is identified a priori through knowledge engineering, and is typically smaller than the set of responses considered in our work. When the set of candidate responses is close to the size of the dataset (e.g. all utterances ever recorded), then we are quite close to the response generation case. There are several reasons not to proceed directly to response generation. First, it is likely that current algorithms are not yet able to generate good results for this task, and it is preferable to tackle metrics for which we can make progress. Second, we do not yet have a suitable metric for evaluating performance in the response generation case. One option is to use the BLEU [18] or METEOR [16] scores from machine translation. However, using BLEU to evaluate dialogue systems has been shown to give extremely low scores [28], due to the large space of potential sensible responses [7]. Further, since the BLEU score is calculated using N-grams [18], it would provide a very low score for reasonable responses that do not have any words in common with the ground-truth next utterance.
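As an illustration of the harder test sets proposed in Section 6.2, the following sketch selects distractor responses by TF-IDF cosine similarity to the true response. scikit-learn is assumed, and the function name and interface are ours, not part of the paper's released code.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def hard_negatives(true_response, candidate_pool, n_false=9):
    """Pick the n_false pool utterances most similar (TF-IDF cosine) to the true response."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform([true_response] + candidate_pool)
    # Rows are L2-normalized by default, so the dot product equals cosine similarity.
    sims = (vectors[1:] @ vectors[0].T).toarray().ravel()
    top = np.argsort(-sims)[:n_false]
    return [candidate_pool[i] for i in top]
```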
1506.08909#27
1506.08909#29
1506.08909
[ "1503.02364" ]
1506.08909#29
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Alternatively, one could measure the difference between the generated utterance and the actual sentence by comparing their representations in some embedding (or semantic) space. However, different models inevitably use different embeddings, necessitating a standardized embedding for evaluation purposes. Such a standardized embedding has yet to be created. Another possibility is to use human subjects to score automatically generated responses, but time and expense make this a highly impractical option. In summary, while it is possible that current language models have outgrown the use of slot filling as a metric, we are currently unable to measure their ability in next utterance generation in a standardized, meaningful and inexpensive way. This motivates our choice of response selection as a useful metric for the time being.
1506.08909#28
1506.08909#30
1506.08909
[ "1503.02364" ]
1506.08909#30
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
# Acknowledgments The authors gratefully acknowledge financial support for this work by the Samsung Advanced Institute of Technology (SAIT) and the Natural Sciences and Engineering Research Council of Canada (NSERC). We would like to thank Laurent Charlin for his input into this paper, as well as Gabriel Forgues and Eric Crawford for interesting discussions. # References [1] Y. Bengio, A. Courville, and P. Vincent. Representation learning:
1506.08909#29
1506.08909#31
1506.08909
[ "1503.02364" ]
1506.08909#31
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
A review and new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1798–1828, 2013. [2] A. Bordes, J. Weston, and N. Usunier. Open question answering with weakly supervised embedding models. In MLKDD, pages 165–180. Springer, 2014. [3] J. Boyd-Graber, B. Satinoff, H. He, and H.
1506.08909#30
1506.08909#32
1506.08909
[ "1503.02364" ]
1506.08909#32
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Daume. Besting the quiz master: Crowdsourcing incremental classification games. In EMNLP, 2012. [4] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014. [5] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. [6] M. Elsner and E. Charniak. You talking to me? A corpus and algorithm for conversation disentanglement. In ACL, pages 834–842, 2008. [7] M. Galley, C. Brockett, A. Sordoni, Y. Ji, M. Auli, C. Quirk, M. Mitchell, J. Gao, and B. Dolan. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. arXiv preprint arXiv:1506.06863, 2015. [8] J.J. Godfrey, E.C. Holliman, and J. McDaniel. Switchboard: Telephone speech corpus for research and development. In ICASSP, 1992. [9] M. Henderson, B. Thomson, and J. Williams. Dialog state tracking challenge 2 & 3, 2014. [10] M. Henderson, B. Thomson, and J. Williams.
1506.08909#31
1506.08909#33
1506.08909
[ "1503.02364" ]
1506.08909#33
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
The second dialog state tracking challenge. In SIGDIAL, page 263, 2014. [11] M. Henderson, B. Thomson, and S. Young. Word-based dialog state tracking with recurrent neural networks. In SIGDIAL, page 292, 2014. [12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. [13] Dialog state tracking challenge 4. [14] S. Jafarpour, C. Burges, and A.
1506.08909#32
1506.08909#34
1506.08909
[ "1503.02364" ]
1506.08909#34
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Ritter. Filter, rank, and transfer the knowledge: Learning to chat. Advances in Ranking, 10, 2010. [15] D. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. [16] A. Lavie and M.J. Denkowski. The METEOR metric for automatic evaluation of Machine Translation. Machine Translation, 23(2-3):105–115, 2009. [17] L.R. Medsker and L.C. Jain. Recurrent neural networks. Design and Applications, 2001. [18] K. Papineni, S. Roukos, T. Ward, and W.J. Zhu.
1506.08909#33
1506.08909#35
1506.08909
[ "1503.02364" ]
1506.08909#35
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
BLEU: a method for automatic evaluation of machine translation. In ACL, 2002. [19] J. Pennington, R. Socher, and C.D. Manning. GloVe: Global Vectors for Word Representation. In EMNLP, 2014. [20] J. Ramos. Using tf-idf to determine word relevance in document queries. In ICML, 2003. [21] A.
1506.08909#34
1506.08909#36
1506.08909
[ "1503.02364" ]
1506.08909#36
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Ritter, C. Cherry, and W. Dolan. Unsupervised modeling of twitter conversations. 2010. [22] A. Ritter, C. Cherry, and W. Dolan. Data-driven response generation in social media. In EMNLP, pages 583–593, 2011. [23] A.M. Saxe, J.L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013. [24] J. Schatzmann, K. Georgila, and S.
1506.08909#35
1506.08909#37
1506.08909
[ "1503.02364" ]
1506.08909#37
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Young. Quantitative evaluation of user simulation techniques for spoken dialogue systems. In SIGDIAL, 2005. [25] L. Shang, Z. Lu, and H. Li. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364, 2015. [26] B. A. Shawar and E. Atwell. Chatbots: are they really useful? In LDV Forum, volume 22, pages 29–49, 2007. [27] S. Singh, D. Litman, M. Kearns, and M. Walker.
1506.08909#36
1506.08909#38
1506.08909
[ "1503.02364" ]
1506.08909#38
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, 16:105–133, 2002. [28] A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.Y. Nie, J. Gao, and W. Dolan. A neural network approach to context-sensitive generation of conversational responses. 2015. [29] D.C. Uthus and D.W. Aha. Extending word highlighting in multiparticipant chat.
1506.08909#37
1506.08909#39
1506.08909
[ "1503.02364" ]
1506.08909#39
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Technical report, DTIC Document, 2013. [30] D.C. Uthus and D.W. Aha. The Ubuntu chat corpus for multiparticipant chat analysis. In AAAI Spring Symposium on Analyzing Microtext, pages 99–102, 2013. [31] H. Wang, Z. Lu, H. Li, and E. Chen. A dataset for research on short-text conversations. In EMNLP, 2013. [32] J. Williams, A. Raux, D. Ramachandran, and A.
1506.08909#38
1506.08909#40
1506.08909
[ "1503.02364" ]
1506.08909#40
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Black. The dialog state tracking challenge. In SIGDIAL, pages 404–413, 2013. [33] L. Yu, K. M. Hermann, P. Blunsom, and S. Pulman. Deep learning for answer sentence selection. arXiv preprint arXiv:1412.1632, 2014. [34] M.D. Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012. # Appendix A:
1506.08909#39
1506.08909#41
1506.08909
[ "1503.02364" ]
1506.08909#41
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Dialogue excerpts Time User Utterance 03:44 03:45 03:45 03:45 03:45 03:45 03:45 03:45 03:46 03:46 Sender Old kuja Taru bur[n]er kuja Taru LiveCD kuja _pm Taru Recipient I dont run graphical ubuntu, I run ubuntu server. Taru: Haha sucker. Kuja: ? Old: you can use "ps ax" and "kill (PID#)" Taru: Anyways, you made the changes right? Kuja: Yes. or killall speedlink Taru:
1506.08909#40
1506.08909#42
1506.08909
[ "1503.02364" ]
1506.08909#42
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Then from the terminal type: sudo apt-get update if i install the beta version, how can i update it when the ï¬ nal version comes out? Kuja: I did. Utterance Old bur[n]er Old I dont run graphical ubuntu, I run ubuntu server. you can use "ps ax" and "kill (PID#)" kuja Taru kuja Taru kuja Taru Taru Kuja Taru Kuja Taru Kuja Haha sucker. ? Anyways, you made the changes right? Yes. Then from the terminal type: sudo apt-get update I did.
1506.08909#41
1506.08909#43
1506.08909
[ "1503.02364" ]
1506.08909#43
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Figure 4: Example chat room conversation from the #ubuntu channel of the Ubuntu Chat Logs (top), with the disentangled conversations for the Ubuntu Dialogue Corpus (bottom). Time User Utterance [12:21] [12:21] [12:21] [12:21] [12:21] [12:21] [12:21] [12:21] [12:22] [12:22] dell cucho RC RC dell dell RC dell dell cucho well, can I move the drives? dell: ah not like that dell: you canâ t move the drives dell: deï¬ nitely not ok lol this is the problem with RAID:) RC haha yeah cucho, I guess I could just get an enclosure and copy via USB... dell: i would advise you to get the disk Sender Recipient Utterance dell cucho dell cucho dell cucho dell well, can I move the drives? ah not like that I guess I could just get an enclosure and copy via USB i would advise you to get the disk dell RC dell dell RC well, can I move the drives? you canâ t move the drives. deï¬ nitely not. this is the problem with RAID :) haha yeah Figure 5:
1506.08909#42
1506.08909#44
1506.08909
[ "1503.02364" ]
1506.08909#44
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Example of before (top box) and after (bottom box) the algorithm adds and concatenates utterances in dialogue extraction. Since RC only addresses dell, all of his utterances are added, however this is not done for dell as he addresses both RC and cucho.
1506.08909#43
1506.08909
[ "1503.02364" ]
1506.06724#0
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
arXiv:1506.06724v1 [cs.CV] 22 Jun 2015 # Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books Yukun Zhu*,1 Ryan Kiros*,1 Richard Zemel1 Ruslan Salakhutdinov1 Raquel Urtasun1 Antonio Torralba2 Sanja Fidler1 1University of Toronto 2Massachusetts Institute of Technology {yukun,rkiros,zemel,rsalakhu,urtasun,fidler}@cs.toronto.edu, [email protected] # Abstract
1506.06724#1
1506.06724
[ "1502.03044" ]
1506.06724#1
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets. To align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie/book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for.
1506.06724#0
1506.06724#2
1506.06724
[ "1502.03044" ]
1506.06724#2
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
# 1. Introduction Figure 1: Shot from the movie Gone Girl, along with the subtitle, aligned with the book. We reason about the visual and dialog (text) alignment between the movie and a book. Books provide us with very rich, descriptive text that conveys both fine-grained visual details (how people or scenes look) as well as high-level semantics (what people think and feel, and how their states evolve through a story). This source of knowledge, however, does not come with associated visual information that would enable us to ground it with descriptions. Grounding descriptions in books to vision would allow us to get textual explanations or stories behind visual information rather than the simplistic captions available in current datasets. It can also provide us with an extremely large amount of data (with tens of thousands of books available online). A truly intelligent machine needs to not only parse the surrounding 3D environment, but also understand why people take certain actions, what they will do next, what they could possibly be thinking, and even try to empathize with them. In this quest, language will play a crucial role in grounding visual information to high-level semantic concepts. Only a few words in a sentence may convey really rich semantic information. Language also represents a natural means of interaction between a naive user and our vision algorithms, which is particularly important for applications such as social robotics or assistive driving. Combining images or videos with language has gotten significant attention in the past year, partly due to the creation of CoCo [18], Microsoft's large-scale captioned image dataset. The field has tackled a diverse set of tasks such as captioning [13, 11, 36, 35, 21], alignment [11, 15, 34], Q&A [20, 19], visual model learning from textual descriptions [8, 26], and semantic visual search with natural multi-sentence queries [17].
1506.06724#1
1506.06724#3
1506.06724
[ "1502.03044" ]
1506.06724#3
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
In this paper, we exploit the fact that many books have been turned into movies. Books and their movie releases have a lot of common knowledge as well as they are com- plementary in many ways. For instance, books provide de- tailed descriptions about the intentions and mental states of the characters, while movies are better at capturing visual aspects of the settings. The ï¬ rst challenge we need to address, and the focus of this paper, is to align books with their movie releases in order to obtain rich descriptions for the visual content. We aim to align the two sources with two types of in- formation: visual, where the goal is to link a movie shot to a book paragraph, and dialog, where we want to ï¬ nd correspondences between sentences in the movieâ s subtitle and sentences in the book. We formulate the problem of movie/book alignment as ï¬ nding correspondences between shots in the movie as well as dialog sentences in the sub- titles and sentences in the book (Fig. 1). We introduce a novel sentence similarity measure based on a neural sen-
1506.06724#2
1506.06724#4
1506.06724
[ "1502.03044" ]
1506.06724#4
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
# â Denotes equal contribution 1 tence embedding trained on millions of sentences from a large corpus of books. On the visual side, we extend the neural image-sentence embeddings to the video domain and train the model on DVS descriptions of movie clips. Our approach combines different similarity measures and takes into account contextual information contained in the nearby shots and book sentences. Our ï¬ nal alignment model is for- mulated as an energy minimization problem that encourages the alignment to follow a similar timeline. To evaluate the book-movie alignment model we collected a dataset with 11 movie/book pairs annotated with 2,070 shot-to-sentence correspondences. We demonstrate good quantitative perfor- mance and show several qualitative examples that showcase the diversity of tasks our model can be used for. The alignment model can have multiple applications. Imagine an app which allows the user to browse the book as the scenes unroll in the movie: perhaps its ending or act- ing are ambiguous, and one would like to query the book for answers. Vice-versa, while reading the book one might want to switch from text to video, particularly for the juicy scenes. We also show other applications of learning from movies and books such as book retrieval (ï¬ nding the book that goes with a movie and ï¬ nding other similar books), and captioning CoCo images with story-like descriptions. # 2. Related Work
1506.06724#3
1506.06724#5
1506.06724
[ "1502.03044" ]
1506.06724#5
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Most effort in the domain of vision and language has been devoted to the problem of image captioning. Older work made use of ï¬ xed visual representations and translated them into textual descriptions [6, 16]. Recently, several approaches based on RNNs emerged, generating captions via a learned joint image-text embedding [13, 11, 36, 21]. These approaches have also been extended to generate de- scriptions of short video clips [35]. In [24], the authors go beyond describing what is happening in an image and pro- vide explanations about why something is happening. For text-to-image alignment, [15, 7] ï¬ nd correspon- dences between nouns and pronouns in a caption and visual objects using several visual and textual potentials. Lin et al. [17] does so for videos. In [11], the authors use RNN embeddings to ï¬ nd the correspondences. [37] combines neural embeddings with soft attention in order to align the words to image regions. Early work on movie-to-text alignment include dynamic time warping for aligning movies to scripts with the help of subtitles [5, 4]. Sankar et al. [28] further developed a system which identiï¬ ed sets of visual and audio features to align movies and scripts without making use of the subtitles. Such alignment has been exploited to provide weak labels for person naming tasks [5, 30, 25]. Closest to our work is [34], which aligns plot synopses to shots in the TV series for story-based content retrieval. This work adopts a similarity function between sentences in plot synopses and shots based on person identities and keywords in subtitles. Our work differs with theirs in several impor- tant aspects. First, we tackle a more challenging problem of movie/book alignment. Unlike plot synopsis, which closely follow the storyline of movies, books are more verbose and might vary in the storyline from their movie release.
1506.06724#4
1506.06724#6
1506.06724
[ "1502.03044" ]
1506.06724#6
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Fur- thermore, we use learned neural embeddings to compute the similarities rather than hand-designed similarity functions. Parallel to our work, [33] aims to align scenes in movies to chapters in the book. However, their approach operates on a very coarse level (chapters), while ours does so on the sentence/paragraph level. Their dataset thus evaluates on 90 scene-chapter correspondences, while our dataset draws 2,070 shot-to-sentences alignments. Furthermore, the ap- proaches are inherently different. [33] matches the pres- ence of characters in a scene to those in a chapter, as well as uses hand-crafted similarity measures between sentences in the subtitles and dialogs in the books, similarly to [34]. Rohrbach et al. [27] recently released the Movie De- scription dataset which contains clips from movies, each time-stamped with a sentence from DVS (Descriptive Video Service). The dataset contains clips from over a 100 movies, and provides a great resource for the captioning techniques. Our effort here is to align movies with books in order to ob- tain longer, richer and more high-level video descriptions. We start by describing our new dataset, and then explain our proposed approach.
1506.06724#5
1506.06724#7
1506.06724
[ "1502.03044" ]
1506.06724#7
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
# 3. The MovieBook and BookCorpus Datasets We collected two large datasets, one for movie/book alignment and one with a large number of books. The MovieBook Dataset. Since no prior work or data ex- ist on the problem of movie/book alignment, we collected a new dataset with 11 movies along with the books on which they were based on. For each movie we also have a sub- title ï¬ le, which we parse into a set of time-stamped sen- tences. Note that no speaker information is provided in the subtitles. We automatically parse each book into sentences, paragraphs (based on indentation in the book), and chapters (we assume a chapter title has indentation, starts on a new page, and does not end with an end symbol). Our annotators had the movie and a book opened side by side. They were asked to iterate between browsing the book and watching a few shots/scenes of the movie, and trying to ï¬ nd correspondences between them. In particular, they marked the exact time (in seconds) of correspondence in the movie and the matching line number in the book ï¬ le, indicating the beginning of the matched sentence. On the video side, we assume that the match spans across a shot (a video unit with smooth camera motion). If the match was longer in duration, the annotator also indicated the ending time of the match. Similarly for the book, if more sentences
1506.06724#6
1506.06724#8
1506.06724
[ "1502.03044" ]
1506.06724#8
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Title | # sent. | # words | # unique words | avg. words/sent. | max words/sent. | # paragraphs | # shots | # sent. in subtitles | # dialog align. | # visual align.
Gone Girl | 12,603 | 148,340 | 3,849 | 15 | 153 | 3,927 | 2,604 | 2,555 | 76 | 106
Fight Club | 4,229 | 48,946 | 1,833 | 14 | 90 | 2,082 | 2,365 | 1,864 | 104 | 42
No Country for Old Men | 8,050 | 69,824 | 1,704 | 10 | 68 | 3,189 | 1,348 | 889 | 223 | 47
Harry Potter and the Sorcerers Stone | 6,458 | 78,596 | 2,363 | 15 | 227 | 2,925 | 2,647 | 1,227 | 164 | 73
Shawshank Redemption | 2,562 | 40,140 | 1,360 | 18 | 115 | 637 | 1,252 | 1,879 | 44 | 12
The Green Mile | 9,467 | 133,241 | 3,043 | 17 | 119 | 2,760 | 2,350 | 1,846 | 208 | 102
American Psycho | 11,992 | 143,631 | 4,632 | 16 | 422 | 3,945 | 1,012 | 1,311 | 278 | 85
One Flew Over the Cuckoo Nest | 7,103 | 112,978 | 2,949 | 19 | 192 | 2,236 | 1,671 | 1,553 | 64 | 25
The Firm | 15,498 | 135,529 | 3,685 | 11 | 85 | 5,223 | 2,423 | 1,775 | 82 | 60
Brokeback Mountain | 638 | 10,640 | 470 | 20 | 173 | 167 | 1,205 | 1,228 | 80 | 20
The Road | 6,638 | 58,793 | 1,580 | 10 | 74 | 2,345 | 1,108 | 782 | 126 | 49
All (totals): 85,238 sentences, 980,658 words, 29,436 paragraphs, 19,985 shots, 16,909 subtitle sentences, 1,449 dialog alignments, 621 visual alignments (the remaining cells of the original "All" row — 9,032, 156, 15 — cannot be reliably assigned to columns from the extraction).
Table 1: Statistics for our MovieBook Dataset with ground-truth for alignment between books and their movie releases.
# of books: 11,038 | # of sentences: 74,004,228 | # of words: 984,846,357 | # of unique words: 1,316,420 | mean # of words per sentence: 13 | median # of words per sentence: 11
Table 2: Summary statistics of our BookCorpus dataset. We use this corpus to train the sentence embedding model.
matched, the annotator indicated from which to which line a match occurred. Each alignment was also tagged, indicating whether it was a visual, dialogue, or an audio match. Note that even for dialogs, the movie and book versions are semantically similar but not exactly the same.
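As a concrete, purely illustrative way to represent the ground-truth correspondences collected with the annotation protocol above, one could use a record like the following; the field names are ours and are not the dataset's released annotation format.

```python
from dataclasses import dataclass
from typing import Optional, Literal

@dataclass
class Alignment:
    """One ground-truth correspondence as described above (hypothetical schema)."""
    movie_start_sec: float              # exact time of the match in the movie
    movie_end_sec: Optional[float]      # set when the match spans more than one shot
    book_start_line: int                # line number of the first matched sentence
    book_end_line: Optional[int]        # set when several book sentences match
    kind: Literal["visual", "dialog", "audio"]

example = Alignment(movie_start_sec=3621.0, movie_end_sec=None,
                    book_start_line=1042, book_end_line=1044, kind="dialog")
```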
1506.06724#7
1506.06724#9
1506.06724
[ "1502.03044" ]
1506.06724#9
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
# of books 11,038 # of sentences 74,004,228 # of words 984,846,357 # of unique words mean # of words per sentence median # of words per sentence 1,316,420 13 11 Table 2: Summary statistics of our BookCorpus dataset. We use this corpus to train the sentence embedding model. matched, the annotator indicated from which to which line a match occurred. Each alignment was also tagged, indicating whether it was a visual, dialogue, or an audio match. Note that even for dialogs, the movie and book versions are se- mantically similar but not exactly the same.
1506.06724#8
1506.06724#10
1506.06724
[ "1502.03044" ]
1506.06724#10
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Thus deciding on what deï¬ nes a match or not is also somewhat subjective and may slightly vary across our annotators. Altogether, the annotators spent 90 hours labeling 11 movie/book pairs, locating 2,070 correspondences. lished authors. We only included books that had more than 20K words in order to ï¬ lter out perhaps noisier shorter sto- ries. The dataset has books in 16 different genres, e.g., Romance (2,865 books), Fantasy (1,479), Science ï¬ ction (786), Teen (430), etc. Table 2 highlights the summary statistics of our book corpus. # 4. Aligning Books and Movies Table 1 presents our dataset, while Fig. 8 shows a few ground-truth alignments. One can see the complexity and diversity of the data: the number of sentences per book vary from 638 to 15,498, even though the movies are similar in duration. This indicates a huge diversity in descriptiveness across literature, and presents a challenge for matching. The sentences also vary in length, with the sentences in Broke- back Mountain being twice as long as those in The Road. The longest sentence in American Psycho has 422 words and spans over a page in the book.
1506.06724#9
1506.06724#11
1506.06724
[ "1502.03044" ]
1506.06724#11
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Aligning movies with books is challenging even for hu- mans, mostly due to the scale of the data. Each movie is on average 2h long and has 1,800 shots, while a book has on average 7,750 sentences. Books also have different styles of writing, formatting, different and challenging language, slang (going vs goinâ , or even was vs â us), etc. As one can see from Table 1, ï¬ nding visual matches turned out to be particularly challenging. This is because the visual descrip- tions in books can be either very short and hidden within longer paragraphs or even within a longer sentence, or very verbose â in which case they get obscured with the sur- rounding text â and are hard to spot. Of course, how close the movie follows the book is also up to the director, which can be seen through the number of alignments that our an- notators found across different movie/books. Our approach aims to align a movie with a book by ex- ploiting visual information as well as dialogs. We take shots as video units and sentences from subtitles to represent di- alogs. Our goal is to match these to the sentences in the book. We propose several measures to compute similari- ties between pairs of sentences as well as shots and sen- tences. We use our novel deep neural embedding trained on our large corpus of books to predict similarities between sentences. Note that an extended version of the sentence embedding is described in detail in [14] showing how to deal with million-word vocabularies, and demonstrating its performance on a large variety of NLP benchmarks. For comparing shots with sentences we extend the neural em- bedding of images and text [13] to operate in the video do- main. We next develop a novel contextual alignment model that combines information from various similarity measures and a larger time-scale in order to make better local align- ment predictions. Finally, we propose a simple pairwise Conditional Random Field (CRF) that smooths the align- ments by encouraging them to follow a linear timeline, both in the video and book domain.
1506.06724#10
1506.06724#12
1506.06724
[ "1502.03044" ]
1506.06724#12
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
We first explain our sentence embedding, followed by our joint video-to-text embedding. We next propose our contextual model that combines similarities and discuss the CRF in more detail. # 4.1. Skip-Thought Vectors The BookCorpus Dataset. In order to train our sentence similarity model we collected a corpus of 11,038 books from the web. These are free books written by yet unpub- In order to score the similarity between two sentences, we exploit our architecture for learning unsupervised representations of text [14]. The model is loosely inspired by
1506.06724#11
1506.06724#13
1506.06724
[ "1502.03044" ]
1506.06724#13
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
[Figure 2 diagram omitted: an encoder-decoder over sentence triples; the example text visible in the figure encodes "a door confronted her" and reconstructs the neighbouring sentences "she stopped and tried to pull it open" and "it didnt budge".] Figure 2: Sentence neural embedding [14]. Given a tuple (s_{i-1}, s_i, s_{i+1}) of consecutive sentences in text, where s_i is the i-th sentence, we encode s_i and aim to reconstruct the previous sentence s_{i-1} and the following sentence s_{i+1}. Unattached arrows are connected to the encoder output. Colors depict which components share parameters. <eos> is the end-of-sentence token.
1506.06724#12
1506.06724#14
1506.06724
[ "1502.03044" ]
1506.06724#14
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
he drove down the street off into the distance . the most effective way to end the battle . he started the car , left the parking lot and merged onto the highway a few miles down the road . he shut the door and watched the taxi drive off . she watched the lights flicker through the trees as the men drove toward the road . he jogged down the stairs , through the small lobby , through the door and into the street . a messy business to be sure , but necessary to achieve a fine and noble end . they saw their only goal as survival and logically planned a strategy to achieve it . there would be far fewer casualties and far less destruction . the outcome was the lisbon treaty .
1506.06724#13
1506.06724#15
1506.06724
[ "1502.03044" ]
1506.06724#15
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Table 3: Qualitative results from the sentence embedding model. For each query sentence on the left, we retrieve the 4 nearest neighbor sentences (by inner product) chosen from books the model has not seen before. the skip-gram [22] architecture for learning representations of words. In the word skip-gram model, a word wi is cho- sen and must predict its surrounding context (e.g. wi+1 and wiâ 1 for a context window of size 1). Our model works in a similar way but at the sentence level. That is, given a sen- tence tuple (siâ 1, si, si+1) our model ï¬ rst encodes the sen- tence si into a ï¬ xed vector, then conditioned on this vector tries to reconstruct the sentences siâ 1 and si+1, as shown in Fig. 2. The motivation for this architecture is inspired by the distributional hypothesis: sentences that have similar surrounding context are likely to be both semantically and syntactically similar. Thus, two sentences that have similar syntax and semantics are likely to be encoded to a similar vector. Once the model is trained, we can map any sentence through the encoder to obtain vector representations, then score their similarity through an inner product. The learning signal of the model depends on having con- tiguous text, where sentences follow one another in se- quence. A natural corpus for training our model is thus a large collection of books. Given the size and diversity of genres, our BookCorpus allows us to learn very general representations of text. For instance, Table 3 illustrates the nearest neighbours of query sentences, taken from held out books that the model was not trained on. These qualitative results demonstrate that our intuition is correct, with result- ing nearest neighbors corresponds largely to syntactically and semantically similar sentences. Note that the sentence embedding is general and can be applied to other domains not considered in this paper, which is explored in [14].
1506.06724#14
1506.06724#16
1506.06724
[ "1502.03044" ]
1506.06724#16
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
vanishing gradient problem, through the use of gates to con- trol the ï¬ ow of information. The LSTM unit explicity em- ploys a cell that acts as a carousel with an identity weight. The ï¬ ow of information through a cell is controlled by in- put, output and forget gates which control what goes into a cell, what leaves a cell and whether to reset the contents of the cell. The GRU does not use a cell but employs two gates: an update and a reset gate. In a GRU, the hidden state is a linear combination of the previous hidden state and the proposed hidden state, where the combination weights are controlled by the update gate. GRUs have been shown to perform just as well as LSTM on several sequence predic- tion tasks [3] while being simpler. Thus, we use GRU as the activation function for our encoder and decoder RNNs. are (siâ 1, si, si+1), and let wt and let xt description into three parts: objective function. Encoder. Let w1 i denote words in sentence si with N the number of words in the sentence. The encoder pro- duces a hidden state ht i at each time step which forms the representation of the sequence w1 i , . . . , wt i. Thus, the hid- den state hN is the representation of the whole sentence. i The GRU produces the next hidden state as a linear combi- nation of the previous hidden state and the proposed state update (we drop subscript i): h'=(1-z')oh 142â on! (1)
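A minimal NumPy sketch of the GRU update in Eq. (1), together with the update and reset gates defined in the surrounding text, is given below. It illustrates the recurrence only and is not the authors' implementation; parameter shapes and initialization are left to the caller.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W, U, Wz, Uz, Wr, Ur):
    """One GRU step: h_t = (1 - z_t) * h_{t-1} + z_t * h_tilde (Eq. 1),
    with update gate z_t, reset gate r_t and proposed state h_tilde."""
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev)              # update gate
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(W @ x_t + U @ (r_t * h_prev))    # proposed state update
    return (1.0 - z_t) * h_prev + z_t * h_tilde

def encode_sentence(word_embeddings, params, hidden_dim):
    """Run the GRU over a sentence; the final hidden state is the sentence vector."""
    h = np.zeros(hidden_dim)
    for x_t in word_embeddings:
        h = gru_step(x_t, h, *params)
    return h
```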
1506.06724#15
1506.06724#17
1506.06724
[ "1502.03044" ]
1506.06724#17
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
To construct an encoder, we use a recurrent neural net- work, inspired by the success of encoder-decoder models for neural machine translation [10, 2, 1, 31]. Two kinds of activation functions have recently gained traction: long short-term memory (LSTM) [9] and the gated recurrent unit (GRU) [3]. Both types of activation successfully solve the where hâ is the proposed state update at time t, zâ is the up- date gate and (©) denotes a component-wise product. The update gate takes values between zero and one. In the ex- treme cases, if the update gate is the vector of ones, the previous hidden state is completely forgotten and hâ
1506.06724#16
1506.06724#18
1506.06724
[ "1502.03044" ]
1506.06724#18
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
= hâ . Alternatively, if the update gate is the zero vector, than the hidden state from the previous time step is simply copied over, that is ht = htâ 1. The update gate is computed as zt = Ï (Wzxt + Uzhtâ 1) (2) where Wz and Uz are the update gate parameters. The proposed state update is given by h! = tanh(Wx' + U(r! © hâ ~â )) (6))
1506.06724#17
1506.06724#19
1506.06724
[ "1502.03044" ]
1506.06724#19
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
where rt is the reset gate, which is computed as rt = Ï (Wrxt + Urhtâ 1) (4) If the reset gate is the zero vector, than the proposed state update is computed only as a function of the current word. Thus after iterating this equation sequence for each word, we obtain a sentence vector hN Decoder. The decoder computation is analogous to the en- coder, except that the computation is conditioned on the sentence vector hi. Two separate decoders are used, one for the previous sentence siâ 1 and one for the next sentence si+1. These decoders use different parameters to compute their hidden states but both share the same vocabulary ma- trix V that takes a hidden state and computes a distribution over words. Thus, the decoders are analogous to an RNN language model but conditioned on the encoder sequence. Alternatively, in the context of image caption generation, the encoded sentence hi plays a similar role as the image. We describe the decoder for the next sentence si+1 (com- putation for siâ 1 is identical). Let ht i+1 denote the hidden state of the decoder at time t. The update and reset gates for the decoder are given as follows (we drop i + 1): zhtâ 1 + Czhi) rhtâ 1 + Crhi) zt = Ï (Wd rt = Ï (Wd z xtâ 1 + Ud r xtâ 1 + Ud i+1 is then computed as: the hidden state ht hâ = tanh(W?%x'~! + U4(r! © h'â +) + Ch,) (7) hi, =(1â 2') oh! +2! oh! (8) â i Given hj,,, the probability of word w/,, given the previ- ous t â 1 words and the encoder vector is P(wi, whi) x exp(Vng, Wha) Q) where vwt word of wt the previous sentence siâ
1506.06724#18
1506.06724#20
1506.06724
[ "1502.03044" ]
1506.06724#20
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
1. Objective. Given (siâ 1, si, si+1), the objective optimized is the sum of log-probabilities for the next and previous sen- tences conditioned on the representation of the encoder: logP (wt i+1|w<t i+1, hi) + logP (wt iâ 1|w<t iâ 1, hi) t t (10) The total objective is the above summed over all such train- ing tuples. Adam algorithm [12] is used for optimization. # 4.2.
1506.06724#19
1506.06724#21
1506.06724
[ "1502.03044" ]
1506.06724#21
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Visual-semantic embeddings of clips and DVS The model above describes how to obtain a similarity score between two sentences, whose representations are learned from millions of sentences in books. We now dis- cuss how to obtain similarities between shots and sentences. Our approach closely follows the image-sentence rank- ing model proposed by [13]. In their model, an LSTM is used for encoding a sentence into a ï¬ xed vector. A linear mapping is applied to image features from a convolutional network. A score is computed based on the inner product between the normalized sentence and image vectors. Cor- rect image-sentence pairs are trained to have high score, while incorrect pairs are assigned low scores. In our case, we learn a visual-semantic embedding be- tween movie clips and their DVS description. DVS (â De- scriptive Video Serviceâ ) is a service that inserts audio de- scriptions of the movie between the dialogs in order to en- able the visually impaired to follow the movie like anyone else. We used the movie description dataset of [27] for learning our embedding. This dataset has 94 movies, and 54,000 described clips. We represent each movie clip as a vector corresponding to mean-pooled features across each frame in the clip. We used the GoogLeNet architecture [32] as well as hybrid-CNN [38] for extracting frame features. For DVS, we pre-processed the descriptions by removing names and replacing these with a someone token. The LSTM architecture in this work is implemented us- ing the following equations. As before, we represent a word embedding at time t of a sentence as xt: i = o(Waix! + Warm t+ Wee) UD ff = o(Wayx' + Wapmâ | + Wereâ ) (12) aâ = tanh(Weex! + W),.mâ ~') (13) ec = foc t+i' oat (14) of = o(Woox' + Wrom'! + Weoe') (15) mâ = o' @tanh(câ ) (16) where (o) denotes the sigmoid activation function and (©) indicates component-wise multiplication. The states (i', f¢, câ , o', m*) correspond to the input, forget, cell, out- put and memory vectors, respectively. If the sentence is of length N, then the vector mâ
1506.06724#20
1506.06724#22
1506.06724
[ "1502.03044" ]
1506.06724#22
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
¢ = m is the vector represen- tation of the sentence. Let q denote a movie clip vector, and let v = WI q be the embedding of the movie clip. We deï¬ ne a scoring function s(m, v) = m · v, where m and v are ï¬ rst scaled to have unit norm (making s equivalent to cosine similarity). We then optimize the following pairwise ranking loss: min > So max{0,a â s(m,v) +s(m,vz)} (17) mk +52 S° max{0,0 = s(v,m) + 5(v,ma)}, with mk a contrastive (non-descriptive) sentence vector for a clip embedding v, and vice-versa with vk.
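A minimal PyTorch sketch of this pairwise ranking objective is shown below, using the other pairs in a batch as the contrastive examples; the margin value and the batch-as-negatives scheme are assumptions for illustration, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(m, v, alpha=0.2):
    """m: (B, D) sentence vectors, v: (B, D) clip vectors; matching pairs share a row.
    Other rows in the batch act as the contrastive m_k / v_k."""
    m = F.normalize(m, dim=1)
    v = F.normalize(v, dim=1)
    scores = m @ v.t()                              # cosine similarities s(m_i, v_j)
    pos = scores.diag().unsqueeze(1)                # s(m, v) for matching pairs
    cost_v = (alpha - pos + scores).clamp(min=0)    # margin over contrastive clips
    cost_m = (alpha - pos.t() + scores).clamp(min=0)  # margin over contrastive sentences
    eye = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    return cost_v.masked_fill(eye, 0).sum() + cost_m.masked_fill(eye, 0).sum()
```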
1506.06724#21
1506.06724#23
1506.06724
[ "1502.03044" ]
1506.06724#23
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
We train our model with stochastic gradient descent without momentum. # 4.3. Context aware similarity We employ the clip-sentence embedding to compute similarities between each shot in the movie and each sen- tence in the book. For dialogs, we use several similarity measures each capturing a different level of semantic sim- ilarity. We compute BLEU [23] between each subtitle and book sentence to identify nearly identical matches. Simi- larly to [34], we use a tf-idf measure to ï¬ nd near duplicates but weighing down the inï¬ uence of the less frequent words. Finally, we use our sentence embedding learned from books to score pairs of sentences that are semantically similar but may have a very different wording (i.e., paraphrasing). These similarity measures indicate the alignment be- tween the two modalities. However, at the local, sentence level, alignment can be rather ambiguous. For example, de- spite being a rather dark book, Gone Girl contains 15 occur- rences of the sentence â
1506.06724#22
1506.06724#24
1506.06724
[ "1502.03044" ]
1506.06724#24
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
I love youâ . We exploit the fact that a match is not completely isolated but that the sentences (or shots) around it are also to some extent similar. We design a context aware similarity measure that takes into account all individual similarity measures as well as a ï¬ xed context window in both, the movie and book do- main, and predicts a new similarity score. We stack a set of M similarity measures into a tensor S(i, j, m), where i, j, and m are the indices of sentences in the subtitle, in the book, and individual similarity measures, respectively. In particular, we use M = 9 similarities: visual and sentence embedding, BLEU1-5, tf-idf, and a uniform prior. We want to predict a combined score score(i, j) = f (S(I, J, M)) at each location (i, j) based on all measurements in a ï¬
1506.06724#23
1506.06724#25
1506.06724
[ "1502.03044" ]
1506.06724#25
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
xed volume deï¬ ned by I around i, J around j, and 1, . . . , M . Evaluating the function f (·) at each location (i, j) on a 3-D tensor S is very similar to applying a convolution using a kernel of appropriate size. This motivates us to formulate the function f (·) as a deep convolutional neural network (CNN). In this paper, we adopt a 3-layer CNN as illustrated in Figure 3. We adopt the ReLU non-linearity with dropout to regularize our model. We optimize the cross-entropy loss over the training set using Adam algorithm.
1506.06724#24
1506.06724#26
1506.06724
[ "1502.03044" ]
1506.06724#26
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
# 4.4. Global Movie/Book Alignment So far, each shot/sentence was matched independently. However, most shots in movies and passages in the books follow a similar timeline. We would like to incorporate this prior into our alignment. In [34], the authors use dynamic time warping by enforcing that the shots in the movie can only match forward in time (to plot synopses in their case). However, the storyline of the movie and book can have crossings in time (Fig. 8), and the alignment might contain
1506.06724#25
1506.06724#27
1506.06724
[ "1502.03044" ]
1506.06724#27
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
mW Figure 3: Our CNN for context-aware similarity computa- tion. It has 3 conv. layers and a sigmoid layer on top. giant leaps forwards or backwards. Therefore, we formu- late a movie/book alignment problem as inference in a Con- ditional Random Field that encourages nearby shots/dialog alignments to be consistent. Each node yi in our CRF rep- resents an alignment of the shot in the movie with its cor- responding subtitle sentence to a sentence in the book. Its state space is thus the set of all sentences in the book. The CRF energy of a conï¬ guration y is formulated as: = Sondu yi) Ss > wp (Yi. Â¥;) i=1 JEN (i) â log p(x,y;w where K is the number of nodes (shots), and N (i) the left and right neighbor of yi. Here, Ï u(·) and Ï p(·) are unary and pairwise potentials, respectively, and Ï = (Ï u, Ï p). We directly use the output of the CNN from 4.3 as the unary potential Ï u(·). For the pairwise potential, we measure the time span ds(yi, yj) between two neighbouring sentences in the subtitle and the distance db(yi, yj) of their state space in the book. One pairwise potential is deï¬ ned as: Ï p(yi, yj) = (ds(yi, yj) â db(yi, yj))2 (ds(yi, yj) â db(yi, yj))2 + Ï 2 (18) Here Ï 2 is a robustness parameter to avoid punishing gi- ant leaps too harsh. Both ds and db are normalized to [0, 1]. In addition, we also employ another pairwise poten- tial Ï q(yi, yj) = (db(yi,yj ))2 (db(yi,yj ))2+Ï 2 to encourage state consis- tency between nearby nodes.
1506.06724#26
1506.06724#28
1506.06724
[ "1502.03044" ]
1506.06724#28
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
This potential is helpful when there is a long silence (no dialog) in the movie. Inference. Our CRF is a chain, thus exact inference is possible using dynamic programming. We also prune some states that are very far from the uniform alignment (over 1/3 length of the book) to further speed up computation. Learning. Since ground-truth is only available for a sparse set of shots, we regard the states of unobserved nodes as hidden variables and learn the CRF weights with [29]. # 5. Experimental Evaluation We evaluate our model on our dataset of 11 movie/book pairs. We train the parameters in our model (CNN and CRF) on Gone Girl, and test our performance on the remaining 10 movies. In terms of training speed, our video-text model â watchesâ 1,440 movies per day and our sentence model reads 870 books per day.
1506.06724#27
1506.06724#29
1506.06724
[ "1502.03044" ]
1506.06724#29
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
We also show various qualitative results demonstrating the power of our approach. We pro- vide more results in the Appendix of the paper. # 5.1. Movie/Book Alignment Evaluating the performance of movie/book alignment is an interesting problem on its own. This is because our ground-truth is far from exhaustive â around 200 correspon- dences were typically found between a movie and its book, and likely a number of them got missed. Thus, evaluating the precision is rather tricky. We thus focus our evaluation on recall, similar to existing work on retrieval. For each shot that has a GT correspondence in book, we check whether our prediction is close to the annotated one. We evaluate recall at the paragraph level, i.e., we say that the GT para- graph was recalled, if our match was at most 3 paragraphs away, and the shot was at most 5 subtitle sentences away. As a noisier measure, we also compute recall and precision at multiple alignment thresholds and report AP (avg. prec.). The results are presented in Table 4. Columns show dif- ferent instantiations of our model: we show the leave-one- feature-out setting (â indicates that all features were used), compare how different depths of the context-aware CNN in- ï¬ uence the performance, and compare it to our full model (CRF) in the last column. We get the highest boost by adding more layers to the CNN â recall improves by 14%, and AP doubles. Generally, each feature helps performance. Our sentence embedding (BOOK) helps by 4%, while nois- ier video-text embedding helps by 2% in recall. CRF which encourages temporal smoothness generally helps (but not for all movies), bringing additional 2%. We also show how a uniform timeline performs on its own. That is, for each shot (measured in seconds) in the movie, we ï¬ nd the sen- tence at the same location (measured in lines) in the book. We add another baseline to evaluate the role of context in our model. Instead of using our CNN that considers con- textual information, we build a linear SVM to combine dif- ferent similarity measures in a single node (shot) â
1506.06724#28
1506.06724#30
1506.06724
[ "1502.03044" ]
1506.06724#30
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Table 4 shows that our CNN contextual model outperforms the SVM baseline by 30% in recall and doubles the AP. We plot alignment for a few movies in Fig. 8. Running Times. We show the typical running time of each component in our model in Table 5. For each movie-book pair, calculating the BLEU score takes most of the time. Note that BLEU does not contribute significantly to the performance and is of optional use. With respect to the rest, extracting visual features VIS (mean pooling GoogleNet features over the shot frames) and SCENE features (mean pooling hybrid-CNN features [38] over the shot frames)
1506.06724#29
1506.06724#31
1506.06724
[ "1502.03044" ]
1506.06724#31
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
[Table 6 data: for each movie (row), the alignment similarities to the candidate books are listed in decreasing order, normalized so the top-scoring book is 100.0; in every row the correct book scores 100.0. Fight Club: 100.0, 45.4, 45.2, 45.1, 43.6, 43.0, 42.7. Green Mile: 100.0, 42.5, 40.1, 39.6, 38.9, 38.0, 36.7. Harry Potter: 100.0, 40.5, 39.7, 39.5, 39.1, 39.0, 38.7. American Psycho: 100.0, 55.5, 54.9, 53.5, 53.1, 52.6, 51.3. One Flew Over the Cuckoo's Nest: 100.0, 84.0, 80.8, 79.1, 79.0, 77.8, 76.9. Shawshank Redemption: 100.0, 66.0, 62.0, 61.4, 60.9, 59.1, 58.0. The Firm: 100.0, 75.0, 73.9, 73.7, 71.5, 71.4, 68.5. Brokeback Mountain: 100.0, 54.8, 52.2, 51.9, 50.9, 50.7, 50.6. The Road: 100.0, 56.0, 55.9, 54.8, 54.1, 53.9, 53.4. No Country for Old Men: 100.0, 49.7, 49.5, 46.8, 46.4, 45.8, 45.8.]
1506.06724#30
1506.06724#32
1506.06724
[ "1502.03044" ]
1506.06724#32
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Table 6: Book "retrieval". For a movie (left), we rank books with respect to their alignment similarity with the movie. We normalize similarity to be 100 for the highest scoring book. takes most of the time (about 80% of the total time). We also report training times for our contextual model (CNN) and the CRF alignment model. Note that the times are reported for one movie/book pair since we used only one such pair to train all our CNN and CRF parameters. We chose Gone Girl for training since it had the best balance between the dialog and visual correspondences.
1506.06724#31
1506.06724#33
1506.06724
[ "1502.03044" ]
1506.06724#33
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
# 5.2. Describing Movies via the Book We next show qualitative results of our alignment. In particular, we run our model on each movie/book pair, and visualize the passage in the book that a particular shot in the movie aligns to. We show the best matching paragraphs as well as a paragraph before and after. The results are shown in Fig. 8. One can see that our model is able to retrieve a semantically meaningful match despite large dialog deviations from those in the book, and the challenge of matching a visual representation to the verbose text in the book. [00:43:16:00:43:19] Okay, I wanna see the hands. Come on. "Certainly, Mr. Cheswick.
1506.06724#32
1506.06724#34
1506.06724
[ "1502.03044" ]
1506.06724#34
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
A vote is now before the group. Will a show of hands be adequate, Mr. McMurphy, or are you going to insist on a secret ballot?""| want to see the hands. | want to see the hands that don't go up, too." â Everyone in favor of changing the television time to the afternoon, raise his hand." ((f7 [02:14:29:02:14:32] Good afternoon, Harry. ... He realized he must be in the hospital wing, He was lying in a bed with white linen sheets, and next to him was a table piled high with what looked like half the candy shop. "Tokens from your friends and admirers," said Dumbledore, beaming. "What happened down in the dungeons between you and Professor Quirrell is a complete secret, so, naturally, the whole school knows. | believe your friends Misters Fred and George Weasley were responsible for trying to send you a toilet seat. No doubt they thought it would amuse you. Madam Pomfrey, however, felt it might not be very hygienic, and confiscated it."
1506.06724#33
1506.06724#35
1506.06724
[ "1502.03044" ]
1506.06724#35
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
[00:43:16:00:43:19] Okay, | wanna see the hands. Come on. [01:00:02:01:00:04] Are you saying my life is in danger? [01:13:05:01:13:06] Right, Caleb? group. Will a show of hands be adequate, Mr. McMurphy, or are you going to insist on a secret ballot?""| want to see the hands. | want to see the hands that don't go up, too." â Everyone in favor of changing the television time to the afternoon, raise his hand." Mitch braced himself and waited. "Mitch, no lawyer has ever left your law firm alive. Three have tried, and they were killed. Two were about to leave, and they died last summer.
1506.06724#34
1506.06724#36
1506.06724
[ "1502.03044" ]
1506.06724#36
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Once a lawyer joins Bendini, Lambert & Locke, he never leaves, unless he retires and keeps his mouth shut. And by the time they retire, they are a part of the conspiracy and cannot talk. The Firm has an extensive surveillance operation on the fifth floor. Your house and car are bugged. Your phones are tapped. Your desk and office are wired, Virtually every word you utter is heard and recorded on the fifth ' â ou, and sometimes your wife. They are here in Washington as we speak.
1506.06724#35
1506.06724#37
1506.06724
[ "1502.03044" ]
1506.06724#37
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
You see, Mitch, The Firm is more than a firm, It is a division of a very large business, a very profitable business, A very illegal business. The Firm is not owned by the partners.â Mitch turned and watched him closely. The Director looked at the frozen pond as he spoke. A huge, circular scar ran out of his hair, down his forehead, through one dead and indifferently cocked eye, and to the comer of his mouth, which had been disfigured into the knowing leer of a gambler or perhaps a whoremaster. One cheek was smooth and pretty; the other was bunched up like the stump of a tree. | guessed there had been a hole in it, but that, at least, had healed. "He has the one eye," Hammersmith said, caressing the boy's bunched cheek with a lover's kind fingers. "I suppose he's lucky not to be blind. We get down on our knees and thank God for that much, at least, Eh, Caleb?" "Yes, sir," the boy said shyly - the boy who would be beaten mercilessly on the play-yard by laughing, jeering bullies for all his miserable years of education, the boy who would never be asked to play Spin the Bottle or Post Office and would probably never sleep with a woman not bought and paid for once he was grown to manhood's times and needs, the boy who would always stand outside the warm and lighted circle of his peers, the boy who would look at himself in his mirror for the next fifty or sixty or seventy years of his life and think ugly, ugly, ugly. ((f7 [02:14:29:02:14:32] Good afternoon, Harry. 1h Jee prim. aliienlt Patan » (02:15:24:02:15:26] <i>You remember the name of the town, don't you?</i> [01:26:19:01:26:22] You're not the one that has to worry about everything, was lying in a bed with white linen sheets, and next to him was a table piled high with what looked like half the candy shop. "Tokens from your friends and admirers," said Dumbledore, beaming.
1506.06724#36
1506.06724#38
1506.06724
[ "1502.03044" ]
1506.06724#38
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
"What happened down in the dungeons between you and Professor Quirrell is a complete secret, so, naturally, the whole school knows. | believe your friends Misters Fred and George Weasley were responsible for trying to send you a toilet seat. No doubt they thought it would amuse you. Madam Pomfrey, however, felt it might not be very hygienic, and confiscated it." | took the envelope and left the rock where Andy had left it, and Andy's friend before him.
1506.06724#37
1506.06724#39
1506.06724
[ "1502.03044" ]
1506.06724#39
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Dear Red, If you're reading this, then you're out, One way or another, you're out. And f you've followed along this far, you might be willing to come a little further. | think you remember the name of the town, don't you? | could use a good man to help me get my project on wheels. Meantime, have a drink on me-and do think it over. | will be keeping an eye out for you. Remember that hope is a good thing, Red maybe the best of things, and no good thing ever dies. | will be hoping that this letter finds you, and finds you well. Your friend, Peter Stevens| didn't read that letter in the field The man squatted and looked at him. I'm scared, he said. Do you understand? I'm scared, The boy didn't answer. He just sat there with his head bowed, sobbing.
1506.06724#38
1506.06724#40
1506.06724
[ "1502.03044" ]
1506.06724#40
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
You're not the one who has to worry about everything. [01:00:02:01:00:04] Are you saying my life is in danger? Mitch braced himself and waited. "Mitch, no lawyer has ever left your law firm alive. Three have tried, and they were killed. Two were about to leave, and they died last summer. Once a lawyer joins Bendini, Lambert & Locke, he never leaves, unless he retires and keeps his mouth shut. And by the time they retire, they are a part of the conspiracy and cannot talk. The Firm has an extensive surveillance operation on the fifth floor. Your house and car are bugged. Your phones are tapped. Your desk and office are wired, Virtually every word you utter is heard and recorded on the fifth ' â ou, and sometimes your wife. They are here in Washington as we speak.
1506.06724#39
1506.06724#41
1506.06724
[ "1502.03044" ]
1506.06724#41
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
You see, Mitch, The Firm is more than a firm, It is a division of a very large business, a very profitable business, A very illegal business. The Firm is not owned by the partners.â Mitch turned and watched him closely. The Director looked at the frozen pond as he spoke. 1h Jee prim. aliienlt Patan » (02:15:24:02:15:26] <i>You remember the name of the town, don't you?</i> | took the envelope and left the rock where Andy had left it, and Andy's friend before him. Dear Red, If you're reading this, then you're out, One way or another, you're out. And f you've followed along this far, you might be willing to come a little further. | think you remember the name of the town, don't you? | could use a good man to help me get my project on wheels. Meantime, have a drink on me-and do think it over. | will be keeping an eye out for you. Remember that hope is a good thing, Red maybe the best of things, and no good thing ever dies. | will be hoping that this letter finds you, and finds you well. Your friend, Peter Stevens| didn't read that letter in the field [01:13:05:01:13:06] Right, Caleb?
1506.06724#40
1506.06724#42
1506.06724
[ "1502.03044" ]
1506.06724#42
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
A huge, circular scar ran out of his hair, down his forehead, through one dead and indifferently cocked eye, and to the comer of his mouth, which had been disfigured into the knowing leer of a gambler or perhaps a whoremaster. One cheek was smooth and pretty; the other was bunched up like the stump of a tree. | guessed there had been a hole in it, but that, at least, had healed. "He has the one eye," Hammersmith said, caressing the boy's bunched cheek with a lover's kind fingers. "I suppose he's lucky not to be blind. We get down on our knees and thank God for that much, at least, Eh, Caleb?" "Yes, sir," the boy said shyly - the boy who would be beaten mercilessly on the play-yard by laughing, jeering bullies for all his miserable years of education, the boy who would never be asked to play Spin the Bottle or Post Office and would probably never sleep with a woman not bought and paid for once he was grown to manhood's times and needs, the boy who would always stand outside the warm and lighted circle of his peers, the boy who would look at himself in his mirror for the next fifty or sixty or seventy years of his life and think ugly, ugly, ugly. [01:26:19:01:26:22] You're not the one that has to worry about everything, The man squatted and looked at him. I'm scared, he said. Do you understand? I'm scared, The boy didn't answer. He just sat there with his head bowed, sobbing.
1506.06724#41
1506.06724#43
1506.06724
[ "1502.03044" ]
1506.06724#43
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
You're not the one who has to worry about everything. Figure 4: Describing movie clips via the book: we align the movie to the book, and show a shot from the movie and its corresponding paragraph (plus one before and after) from the book. American.Psycho r ¥ , Y [00:13:29:00:13:33] Lady, if you don't shut your fucking mouth, | will kill you. Batman.Begins «\ 2. (02:06:23:02:06:26] - I'm sorry | didn't tell you, Rachel. - No. No, Bruce... (00:30:16:00:30:19] Prolemuris.
1506.06724#42
1506.06724#44
1506.06724
[ "1502.03044" ]
1506.06724#44
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
They're aggressive. Fight.Club | have your license. | know who you are. | know where you live. I'm keeping your license, and I'm going to check on you, mister Raymond K. Hessel. In three months, and then in six months, and then in a year, and if you aren't back in school on your way to being a veterinarian, you will be dead. You didn't say anything. Harry.Potter.and.the.Sorcerers.Stone (00:05:46:00;:05:48] I'm warning you now, boy Bane Chronicles-2 "She has graciously allowed me into her confidence." Magnus could read between the lines. Axel didn't kiss and tell, which made him only more attractive. â The escape is to be made on Sunday," Alex went on. "The plan is simple, but exacting. We have arranged it so the guards have seen certain people leaving by certain exits at certain times. On ...
1506.06724#43
1506.06724#45
1506.06724
[ "1502.03044" ]
1506.06724#45
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Adventures of Tom Bombadil Of crystal was his habergeon, his scabbard of chalcedony; with silver tipped at plenilune his spear was hewn of ebony. His javelins were of malachite and stalactite - he brandished them, and went and fought the dragon-flies of Paradise, and vanquished them. He battled with the Dumbledors, the Hummerhorns, and Honeybees, and won the Golden Honeycomb; and running home on sunny seas in ship of leaves and gossamer with blossom for a canopy, he sat... ay! it Batman.Begins â
1506.06724#44
1506.06724#46
1506.06724
[ "1502.03044" ]
1506.06724#46
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
¢~ ~ A) (01:38:41:01:38:44] I'm gonna give you a sedative. You'll wake up back at home. Batman.Begins [01:09:31:01:09:34] I'm going to have to Fight.Club You didn't say anything. Get out of here, and do your little life, but remember I'm watching you, Raymond Hessel, and I'd rather kill you than see you working a shit job for just enough money to buy cheese and watch television. Now, I'm going to walk away so don't turn around.
1506.06724#45
1506.06724#47
1506.06724
[ "1502.03044" ]
1506.06724#47
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
A Captive s Submission â | believe you will enjoy your time here. | am not a harsh master but | am strict. When we are with others, | expect you to present yourself properly. What we do here in your room and in the dungeon is between you and |. It is a testament to the trust and respect we have for each other and no one else needs to Know about our arrangement. I'm sure the past few days have been overwhelming thus far but I have tried to give you as much information as possible.
1506.06724#46
1506.06724#48
1506.06724
[ "1502.03044" ]
1506.06724#48
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Do you have any questions?" A Dirty Job "This says 'Purveyor of Fine Vintage Clothing and Accessories." "Right! Exactly!" He knew he should have had a second set of business cards printed up. "And where do you think | get those things? From the dead. You see?" â Mr. Asher, I'm going to have to ask you to leave." American.Psycho r Â¥ , Y [00:13:29:00:13:33] Lady, if you don't shut your fucking mouth, | will kill you. Fight.Club | have your license. | know who you are. | know where you live. I'm keeping your license, and I'm going to check on you, mister Raymond K.
1506.06724#47
1506.06724#49
1506.06724
[ "1502.03044" ]
1506.06724#49
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Hessel. In three months, and then in six months, and then in a year, and if you aren't back in school on your way to being a veterinarian, you will be dead. You didn't say anything. Harry.Potter.and.the.Sorcerers.Stone (00:05:46:00;:05:48] I'm warning you now, boy Fight.Club You didn't say anything. Get out of here, and do your little life, but remember I'm watching you, Raymond Hessel, and I'd rather kill you than see you working a shit job for just enough money to buy cheese and watch television. Now, I'm going to walk away so don't turn around. Batman.Begins «\ 2. (02:06:23:02:06:26] - I'm sorry | didn't tell you, Rachel. - No. No, Bruce... Bane Chronicles-2 "She has graciously allowed me into her confidence." Magnus could read between the lines. Axel didn't kiss and tell, which made him only more attractive. â The escape is to be made on Sunday," Alex went on. "The plan is simple, but exacting. We have arranged it so the guards have seen certain people leaving by certain exits at certain times. On ... ay! it Batman.Begins â ¢~ ~ A) (01:38:41:01:38:44] I'm gonna give you a sedative. You'll wake up back at home. A Captive s Submission â | believe you will enjoy your time here. | am not a harsh master but | am strict. When we are with others, | expect you to present yourself properly. What we do here in your room and in the dungeon is between you and |. It is a testament to the trust and respect we have for each other and no one else needs to Know about our arrangement.
1506.06724#48
1506.06724#50
1506.06724
[ "1502.03044" ]
1506.06724#50
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
I'm sure the past few days have been overwhelming thus far but I have tried to give you as much information as possible. Do you have any questions?" (00:30:16:00:30:19] Prolemuris. They're not aggressive. Adventures of Tom Bombadil Of crystal was his habergeon, his scabbard of chalcedony; with silver tipped at plenilune his spear was hewn of ebony. His javelins were of malachite and stalactite - he brandished them, and went and fought the dragon-flies of Paradise, and vanquished them. He battled with the Dumbledors, the Hummerhorns, and Honeybees, and won the Golden Honeycomb; and running home on sunny seas in ship of leaves and gossamer with blossom for a canopy, he sat...
1506.06724#49
1506.06724#51
1506.06724
[ "1502.03044" ]
1506.06724#51
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Batman.Begins [01:09:31:01:09:34] I'm going to have to ask you to leave. A Dirty Job "This says 'Purveyor of Fine Vintage Clothing and Accessories." "Right! Exactly!" He knew he should have had a second set of business cards printed up. "And where do you think | get those things? From the dead. You see?" â Mr. Asher, I'm going to have to ask you to leave." Figure 5:
1506.06724#50
1506.06724#52
1506.06724
[ "1502.03044" ]
1506.06724#52
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
We can use our model to caption movies via a corpus of books. Top: A shot from American Psycho is captioned with paragraphs from Fight Club, and a shot from Harry Potter with paragraphs from Fight Club. Middle and Bottom: We match shots from Avatar and Batman Begins against 300 books from our BookCorpus, and show the best matched paragraph.
Table 4 data (each pair is AP/Recall in %; the table caption follows the data). Movie order: Fight Club, The Green Mile, Harry Potter and the Sorcerers Stone, American Psycho, One Flew Over the Cuckoo's Nest, Shawshank Redemption, The Firm, Brokeback Mountain, The Road, No Country for Old Men.
UNI: 1.22/2.36, 0.00/0.00, 0.00/0.00, 0.00/0.27, 0.00/1.01, 0.00/1.79, 0.05/1.38, 2.36/27.0, 0.00/1.12, 0.00/1.12.
SVM: 0.73/10.38, 14.05/51.42, 10.30/44.35, 14.78/34.25, 5.68/25.25, 8.94/46.43, 4.46/18.62, 24.91/74.00, 13.77/41.90, 12.11/33.46.
1506.06724#51
1506.06724#53
1506.06724
[ "1502.03044" ]
1506.06724#53
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
– (all features, 1-layer CNN): 0.45/12.26, 14.12/62.46, 8.09/51.05, 16.76/67.12, 8.14/41.41, 8.60/78.57, 7.91/33.79, 16.55/88.00, 6.58/43.02, 9.00/48.90.
1-layer CNN, leaving out one feature:
w/o BLEU: 0.41/12.74, 14.09/60.57, 8.18/52.30, 17.22/66.58, 6.27/34.34, 8.89/76.79, 8.66/36.55, 17.82/92.00, 7.83/48.04, 9.39/49.63.
w/o BOOK: 0.50/11.79, 10.12/57.10, 7.84/48.54, 14.88/64.66, 8.49/36.36, 7.99/73.21, 6.22/23.45, 15.16/86.00, 5.11/38.55, 9.40/47.79.
w/o TF-IDF: 0.40/11.79, 6.92/53.94, 5.66/46.03, 12.29/60.82, 1.93/32.32, 4.35/73.21, 2.02/26.90, 14.60/86.00, 3.04/32.96, 8.22/46.69.
w/o VIS: 0.64/12.74, 9.83/55.52, 7.95/48.54, 14.95/63.56, 8.51/37.37, 8.91/78.57, 7.15/26.90, 15.58/88.00, 5.47/37.99, 9.35/51.10.
w/o SCENE: 0.50/11.79, 13.00/60.57, 8.04/49.37, 15.68/66.58, 9.32/36.36, 9.22/75.00, 7.25/30.34, 15.41/86.00, 6.09/42.46, 8.63/49.26.
w/o PRIOR: 0.48/11.79, 14.42/62.78, 8.20/52.72, 16.54/67.67, 9.04/40.40, 7.86/78.57, 7.26/31.03, 16.21/87.00, 7.00/44.13, 9.40/48.53.
CNN-3 (3-layer CNN): 1.95/17.92, 28.80/74.13, 27.17/76.57, 34.32/81.92, 14.83/49.49, 19.33/94.64, 18.34/37.93, 31.80/98.00, 19.80/65.36, 28.75/71.69.
CRF (full model): 5.17/19.81, 27.60/78.23, 23.65/78.66, 32.87/80.27, 21.13/54.55, 19.96/96.79, 20.74/44.83, 30.58/100.00, 19.58/65.10, 30.45/72.79.
1506.06724#52
1506.06724#54
1506.06724
[ "1502.03044" ]
1506.06724#54
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Mean over the ten movies (AP/Recall): UNI 0.40/3.88; SVM 10.97/38.01; – (all features) 9.62/52.66; w/o BLEU 9.88/52.95; w/o BOOK 8.57/48.75; w/o TF-IDF 5.94/47.07; w/o VIS 8.83/50.03; w/o SCENE 9.31/50.77; w/o PRIOR 9.64/52.46; CNN-3 22.51/66.77; CRF 23.17/69.10.
Table 4: Performance of our model for the movies in our dataset under different settings and metrics.
Running time per movie/book pair: BLEU 6 h; TF-IDF 10 min; BOOK 3 min; VIS 2 h; SCENE 1 h; CNN (training) 3 min; CNN (inference) 0.2 min; CRF (training) 5 h; CRF (inference) 5 min.
Table 5: Running time for our model per one movie/book pair.
# 5.3. Book "Retrieval"
# 6. Conclusion
1506.06724#53
1506.06724#55
1506.06724
[ "1502.03044" ]
1506.06724#55
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
In this experiment, we compute alignment between a movie and all (test) 10 books, and check whether our model retrieves the correct book. Results are shown in Table 6. Under each book we show the computed similarity. In particular, we use the energy from the CRF, and scale all similarities relative to the highest one (100). Notice that our model retrieves the correct book for each movie. Describing a movie via other books. We can also caption movies by matching shots to paragraphs in a corpus of books. Here we do not encourage a linear timeline (CRF) since the stories are unrelated, and we only match at the local, shot-paragraph level. We show a description for American Psycho borrowed from the book Fight Club in Fig. 5. In this paper, we explored a new problem of aligning a book to its movie release. We proposed an approach that computes several similarities between shots and dialogs and the sentences in the book. We exploited our new sentence embedding in order to compute similarities between sentences. We further extended the image-text neural embeddings to video, and proposed a context-aware alignment model that takes into account all the available similarity information. We showed results on a new dataset of movie/book alignments as well as several quantitative results that showcase the power and potential of our approach. # Acknowledgments # 5.4. The CoCoBook: Writing Stories for CoCo
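For the book "retrieval" experiment in Sec. 5.3 above, the ranking can be sketched as follows; converting CRF energies into similarities and the particular rescaling to a 0-100 range are assumptions of this sketch (the paper only states that the top-scoring book is normalized to 100).

```python
def book_retrieval_scores(crf_energies):
    """crf_energies: dict mapping a book title to the CRF energy of its best alignment
    with the movie (lower energy = better alignment). Returns similarity scores with
    the best-matching book rescaled to 100; other books fall off towards 0."""
    worst = max(crf_energies.values())
    sim = {book: worst - e for book, e in crf_energies.items()}  # non-negative, higher = better
    top = max(sim.values()) or 1.0
    return {book: round(100.0 * s / top, 1) for book, s in sim.items()}

# Hypothetical energies, for illustration only:
print(book_retrieval_scores({"Fight Club": 120.0, "The Green Mile": 180.0, "The Road": 200.0}))
# -> {'Fight Club': 100.0, 'The Green Mile': 25.0, 'The Road': 0.0}
```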
1506.06724#54
1506.06724#56
1506.06724
[ "1502.03044" ]
1506.06724#56
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Our next experiment shows that our model is able to "generate" descriptive stories for (static) images. In particular, we used the image-text embedding from [13] and generated a simple caption for an image. We used this caption as a query, and used our sentence embedding trained on books to find the top 10 nearest sentences (sampled from a few hundred thousand from BookCorpus). We re-ranked these based on the 1-gram precision of non-stop words. Given the best result, we return the sentence as well as the 2 sentences before and after it in the book. The results are in Fig. 6. Our sentence embedding is able to retrieve semantically meaningful stories to explain the images. We acknowledge the support from NSERC, CIFAR, Samsung, Google, and ONR-N00014-14-1-0232. We also thank Lea Jensterle for helping us with elaborate annotation, and Relu Patrascu for his help with numerous infrastructure related problems. # Appendix In the Appendix we provide more qualitative results. # A. Qualitative Movie-Book Alignment Results We show a few qualitative examples of alignment in Fig. 8. In this experiment, we show results obtained with our full model (CRF). For a chosen shot (a node in the CRF) we show the corresponding paragraph in the book.
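A rough sketch of the CoCoBook pipeline described above follows; the stop-word list, the `embed` function, and the exact re-ranking rule are illustrative assumptions rather than the authors' released implementation.

```python
import numpy as np

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "was", "it", "on"}  # toy list

def unigram_precision(query, candidate):
    """Fraction of the candidate's non-stop words that also appear in the query caption."""
    q = {w for w in query.lower().split() if w not in STOP_WORDS}
    c = [w for w in candidate.lower().split() if w not in STOP_WORDS]
    return sum(w in q for w in c) / max(len(c), 1)

def cocobook_passage(caption, sentence_vecs, sentences, embed, k=10, context=2):
    """Return a short 'story' for an image caption: embed the caption, take the k nearest
    book sentences, re-rank them by 1-gram precision, and return the winner +/- 2 sentences.
    sentence_vecs: (N, d) unit-norm sentence embeddings; embed: str -> (d,) vector."""
    q = embed(caption)
    q = q / np.linalg.norm(q)
    top = np.argsort(-(sentence_vecs @ q))[:k]
    best = max(top, key=lambda i: unigram_precision(caption, sentences[i]))
    lo, hi = max(0, best - context), min(len(sentences), best + context + 1)
    return " ".join(sentences[lo:hi])
```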
1506.06724#55
1506.06724#57
1506.06724
[ "1502.03044" ]
1506.06724#57
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
the club was a little emptier than i would have expected for the late afternoon , and the bartender , in red waistcoat and bowtie , was busy wiping down his counter , replacing peanuts and putting out new coasters . a television with the latest la liga news was hung in an upper corner , and behind him , rows of bottles were reï¬ ected in a giant bar mirror . above the stools , a pergola-type overhead structure held rows of wine glasses . it was a classy place , with ferns in the corner , and not the kind of bar to which i was accustomed . my places usually had a more ... relaxed feel . he felt like an idiot for yelling at the child , but his frustration and trepidation was getting the better of him . he glanced toward the shadowed hall and quickly nodded toward melissa before making his way forward . he came across more children sitting upon a couch in the living room . they watched him , but did nâ
1506.06724#56
1506.06724#58
1506.06724
[ "1502.03044" ]
1506.06724#58
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
t move and did nâ t speak . his skin started to feel like hundreds of tiny spiders were running up and down it and he hurried on . a few miles before tioga road reached highway 395 and the town of lee vining , smith turned onto a narrow blacktop road . on either side were parched , grassy open slopes with barbed-wire fences marking property lines . cattle and horses grazed under trees whose black silhouettes stood stark against the gold-velvet mountains . marty burst into song : â home , home on the range , where the deer and the antelope play ! where seldom is heard a discouraging word and the skies are not cloudy all day ! â â number seventy-three , second to last from the corner . â adam slowed the porsche as he approached the quaint-he could think of no other word to use , even though â quaintâ was one he normally , manfully , avoided-townhouse , coming to a halt beside a sleek jaguar sedan . it was a quiet street , devoid of trafï¬ c at this hour on a monday night . in the bluish-tinted light of a corner street lamp , he developed a quick visual impression of wrought-iron railings on tidy front stoops , window boxes full of bright chrysanthemums , beveled glass in bay windows , and lace curtains . townhouses around here didnâ t rent cheaply , he could nâ t help but observe .
1506.06724#57
1506.06724#59
1506.06724
[ "1502.03044" ]
1506.06724#59
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Figure 6: CoCoBook: We generate a caption for a CoCo image via [13] and retrieve its best matched sentence (+ 2 before and after) from a large book corpus. One can see the semantic relevance of the retrieved passage to the image. Figure 7: Alignment results of our model (bottom) compared to ground-truth alignment (top). In ground-truth, blue lines indicate visual matches, and magenta are the dialog matches. Yellow lines indicate predicted alignments. We can see that some dialogs in the movies closely follow the book and thus help with the alignment. This is particularly important since the visual information is not as strong. Since the text around the dialogs typically describes the scene, the dialogs thus help us ground the visual information contained in the description and the video.
1506.06724#58
1506.06724#60
1506.06724
[ "1502.03044" ]
1506.06724#60
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
# B. Borrowing "Lines" from Other Books We show a few qualitative examples of top-scoring matches between a shot in a movie and a paragraph in another book (a book that does not correspond to this movie). In this experiment, we allow a clip in our 10-movie dataset (excluding the training movie) to match to paragraphs in the remaining 9 books (excluding the corresponding book). The results are in Fig. 12. Note that the top-scoring matches chosen from only a small set of books may not be too meaningful. 200 book experiment. We scale the experiment by randomly selecting 200 books from our BookCorpus. The results are in Fig. 15. One can see that using many more books results in increasingly better "stories". American Psycho American Psycho American Psycho Harry Potter
1506.06724#59
1506.06724#61
1506.06724
[ "1502.03044" ]
1506.06724#61
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Figure 8: Examples of movie-book alignment. We use our model to align a movie to a book. Then for a chosen shot (which is a node in our CRF) we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment. One Flew Over the Cuckooâ
1506.06724#60
1506.06724#62
1506.06724
[ "1502.03044" ]
1506.06724#62
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
s Nest One Flew Over the Cuckooâ s Nest Shawshank Redemption Figure 9: Examples of movie-book alignment. We use our model to align a movie to a book. Then for a chosen shot (which is a node in our CRF) we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment. The Firm The Firm The Firm
1506.06724#61
1506.06724#63
1506.06724
[ "1502.03044" ]
1506.06724#63
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Figure 10: Examples of movie-book alignment. We use our model to align a movie to a book. Then for a chosen shot (which is a node in our CRF) we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment. The Green Mile The Green Mile The Road Figure 11: Examples of movie-book alignment. We use our model to align a movie to a book. Then for a chosen shot (which is a node in our CRF) we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment.
1506.06724#62
1506.06724#64
1506.06724
[ "1502.03044" ]
1506.06724#64
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
| have your license. | know who you are. | know where you live. I'm keeping your license, and I'm going to check on you, mister Raymond K. Hessel. In three months, and then in six months, and then in a year, and if you aren't back in school on your way to being a veterinarian, you will be dead. You didn't say anything. [00:13:24:00:13:27] Two: | can only get these sheets in Santa Fe. Your head rolled up and away from the gun, and you said, yeah. You said, yes, you lived in a basement. You had some pictures in the wallet, too. There was your mother. This was a tough one for you, you'd have to open your eyes and see the picture of Mom and Dad smiling and see the gun at the same time, but you did, and then your eyes closed and you started to cry. You were going to cool, the amazing miracle of death. One minute, you're a person, the next minute, you're an ... [00:21:25:00:21:27] It's okay. | can tell.
1506.06724#63
1506.06724#65
1506.06724
[ "1502.03044" ]
1506.06724#65
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
I've never been in here before tonight. â If you say so, sir," the bartender says, â but Thursday night, you came in to ask how soon the police were planning to shut us down." Last Thursday night, | was awake all night with the insomnia, wondering was | awake, was | sleeping. | woke up late Friday morning, bone tired and feeling | hadn't ever had my eyes closed. "Yes, sir," the bartender says, "Thursday night, you were standing right where you are now and you were asking me about the police crackdown, and you were asking me how many guys we had to turn away from the Wednesday night fight club." [00:23:44:00:23:47] You're late, honey. Oh, yes, you are. | am not late.
1506.06724#64
1506.06724#66
1506.06724
[ "1502.03044" ]
1506.06724#66
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Figure 12: Examples of borrowing paragraphs from other books (10-book experiment). We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. Note that by forcing the model to choose from another book, the top-scoring correspondences may still have a relatively low similarity. In this experiment, we did not enforce a global alignment over the full book; we use the similarity output by our contextual CNN.
1506.06724#65
1506.06724#67
1506.06724
[ "1502.03044" ]
1506.06724#67
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
â My friends, thou protest too much to believe the protesting. You are all believing deep inside your stingy little hearts that our Miss Angel of Mercy Ratched is absolutely correct in every assumption she made today about McMurphy. You know she was, and so do I. But why deny it? Let's be honest and give this man his due instead of secretly criticizing his capitalistic talent. What's wrong with him making a little profit? We've all certainly got our money's worth every time he fleeced us, haven't we? He's a shrewd character with an eye out for a quick dollar. He doesn't make any pretense about his motives, does he? Why should we? He has a healthy and honest attitude about his chicanery, and I'm all for him, just as I'm for the dear old capitalistic system of free individual enterprise, comrades, for him and his downright bullheaded gall and the American flag, bless it, and the Lincoln Memorial and the whole bit. Remember the Maine, P. T. Barnum and the Fourth of July. | feel compelled to defend my friend's honor as a good old red, white, and blue hundred-per-cent American con man. Good guy, my [00:35:25:00:35:27] Do you have any witnesses or foot. McMurphy would ... fingerprints ? You didn't say anything. Get out of here, and do your little life, but remember I'm watching you, Raymond Hessel, and I'd rather kill you than see you working a shit job for just enough money to buy cheese and watch television. Now, I'm going to walk away so don't turn around. [00:05:46:00:05:48] I'm warning you now, boy.
1506.06724#66
1506.06724#68
1506.06724
[ "1502.03044" ]
1506.06724#68
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
». course. She wasn't quite dead. | have often thought it would have been better - for me, if not for her - if she had been killed instantly. It might have made it possible for me to let her go a little sooner, a little more naturally. Or perhaps I'm only kidding myself about that. All | know for sure is that | have never let her go, not really. She was trembling all over. One of her shoes had come off and | could see her foot jittering. Her ... [00:16:22:00:16:26] "We have a witch in the family. Isn't it wonderful?"
1506.06724#67
1506.06724#69
1506.06724
[ "1502.03044" ]
1506.06724#69
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Figure 13: Examples of borrowing paragraphs from other books (10-book experiment). We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. Note that by forcing the model to choose from another book, the top-scoring correspondences may still have a relatively low similarity. In this experiment, we did not enforce a global alignment over the full book; we use the similarity output by our contextual CNN.
1506.06724#68
1506.06724#70
1506.06724
[ "1502.03044" ]
1506.06724#70
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
. ya see, the thing is..." He scratched his beard. "See, | done heard yer little twitter feet up on my ceilin' there, so | come up to do some investigatin'. Yep, that's what | reckon, far as | recall." Tick exchanged a baffled look with Sofia and Paul. It didn't take a genius to realize they'd already caught Sally in his first lie. "Well," Tick said, "we need a minute to talk about what we're gonna do." [00:55:19:00:55:23] No, no. | may need to talk to you a little futher, so how about you just let me know if you're gonna leave town. . last night, or were the Tears still affecting me more than | realized? | didn't think about it again. | just turned and walked to the bathroom. A quick shower and we'd be on our way to the airport. Twenty minutes later | was ready, my hair still soaking wet. | was dressed in a pair of navy blue dress slacks, an emerald green silk blouse, and a navy suit jacket that matched the pants. Jeremy had also chosen a pair of black low-heeled pumps and included a pair of black thigh-highs. Since | didn't own any other kind of hose, that | didn't mind. But the rest of it...
1506.06724#69
1506.06724#71
1506.06724
[ "1502.03044" ]
1506.06724#71
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
"Next time you pick out clothes for me to run for my life in, include some jogging shoes. Pumps, no matter how low-heeled, just aren't made for it." [01:25:28:01:25:30] - Two pair of black pants? - Yes, sir. You, he wanted to say, I'm thinking of you. I'm thinking of your stink and how bad you smell and how | can't stop smelling you. I'm thinking of how you keep staring at me and how | never say anything about it and | don't know why. I'm thinking of you staring at me and why someone's screaming at me inside my head and how someone's screaming inside my head and why it seems odd that I'm not worried about that. [01:55:38:01:55:41] I'm thinking | don't know what | would do if you were gone.
1506.06724#70
1506.06724#72
1506.06724
[ "1502.03044" ]
1506.06724#72
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Figure 14: Examples of borrowing paragraphs from other books (200-book experiment). We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. By scaling up the experiment (more books to choose from), our model gets increasingly more relevant "stories". "A good bodyguard doesn't relax on the job," Ethan said. "You know we aren't a threat to Ms. Reed, Ethan. I don't know who you're supposed to be protecting her from, but it isn't us." "They may clean up for the press, but I know what they are, Meredith," Ethan said. [01:52:05:01:52:09] - How do you know? - Someone's going to try and steal it. I could use, he reflected, anything that'd help, anything at all. Any hint, like from that girl, any suggestion.
1506.06724#71
1506.06724#73
1506.06724
[ "1502.03044" ]
1506.06724#73
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
He felt dismal and afraid. Shit, he thought, what am | going to do? If I'm off everything, he thought, then I'll never see any of them again, any of my friends, the people | watched and knew. I'll be out of it; I'll be maybe retired the rest of my life-anyhow, I've seen the last of Arctor and Luckman and Jerry Fabin and Charles Freck and most of all Donna Hawthorne. I'll never see any of my friends again, for the rest of eternity. It's over. [00:37:32:00:37:35] ...and I'll never do it again, that's for sure.
1506.06724#72
1506.06724#74
1506.06724
[ "1502.03044" ]
1506.06724#74
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
He came to his knees and put his hands on my arms, and stared down into my face. "I will love you always. When this red hair is white, | will still love you. When the smooth softness of youth is replaced by the delicate softness of age, | will still want to touch your skin. When your face is full of the line of every smile you have ever smiled, of every surprise | have seen flash through your eyes, when every tear you have ever cried has left its mark upon your face, | will treasure you all the more, because | was there to see it all. | will share your life with you, Meredith, and |... [00:55:54:00:55:58] Now, once you've got hold of your broom, | want you to mount it.
1506.06724#73
1506.06724#75
1506.06724
[ "1502.03044" ]
1506.06724#75
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Figure 15: Examples of borrowing paragraphs from other books (200-book experiment). We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. By scaling up the experiment (more books to choose from), our model gets increasingly more relevant "stories". Bottom row: failed example. # C. The CoCoBook We show more results for captioning CoCo images [18] with passages from the books.
1506.06724#74
1506.06724#76
1506.06724
[ "1502.03044" ]
1506.06724#76
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
if never â somewhere you â ll never ï¬ nd it , â owens sneered . meant ï¬ ve seconds , his claim was true . the little shit â s gaze cut left , where a laptop sat on a coffee table . trey strode to it . owens â email program was open . seriously . wreck . just something like that . i try to convince her . everyone was allowed to rest for the next twenty-four hours . that following evening : the elect , not their entourages , were called to a dining hall for supper with lady dolorous . a table that curved inward was laden with food and drink . the wall behind the table was windows with a view of the planet . girls in pink stood about and at attention . he had simply ... healed . brian watched his fellow passengers come aboard . a young woman with blonde hair was walking with a little girl in dark glasses . the little girl â s hand was on the blonde â s elbow . the woman murmured to her charge , the girl looked immediately toward the sound of her voice , and brian understood she was blind - it was something in the gesture of the head . this was a beautiful miniature reproduction of a real london town house , and when jessamine touched it , tessa saw that the front of it swung open on tiny hinges . tessa caught her breath . there were beautiful tiny rooms perfectly decorated with miniature furniture , everything built to scale , from the little wooden chairs with needlepoint cushions to the cast-iron stove in the kitchen . there were small dolls , too , with china heads , and real little oil paintings on the walls . â this was my house . â if he had been nearby he would have dragged her out of the room by her hair and strangled her . during lunch break she went with a group back to the encampment . out of view of the house , under a stand of towering trees , several tents were sitting in a ï¬ eld of mud . the rain the night before had washed the world , but here it had made a mess of things . a few women ï¬ red up a camp stove and put on rice and lentils . Ta? ALL ALM then a frightened yell . â hang on ! â suddenly , jake was ï¬ ying through the air . nefertiti became airborne , too . he screamed , not knowing what was happening-then he splashed into a pool of water .
1506.06724#75
1506.06724#77
1506.06724
[ "1502.03044" ]