Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
Our setup is illustrated in Fig. 1. We formulate a game between a questioner bot (Q-BOT) and an answerer bot (A-BOT). Q-BOT is shown a 1-sentence description (a caption) of an unseen image and is allowed to communicate in natural language (discrete symbols) with A-BOT, who is shown the image. The objective of this fully-cooperative game is for Q-BOT to build a mental model of the unseen image purely from the natural language dialog, and then retrieve that image from a lineup of images. Notice that this is a challenging game. Q-BOT must ground the words mentioned in the provided caption ("Two zebra are walking around their pen at the zoo."), estimate which images from the provided pool contain this content (there will typically be many such images, since captions describe only the salient entities), and ask follow-up questions ("Any people in the shot? Are there clouds in the sky? Are they facing each other?"
) that help it identify the correct image. Analogously, A-BOT must build a mental model of what Q-BOT understands, and answer questions ("No, there aren't any. I can't see the sky. They aren't.") in a way precise enough to allow discrimination between similar images from a pool (which A-BOT does not have access to), while being concise enough not to confuse the imperfect Q-BOT. At every round of dialog, Q-BOT listens to the answer provided by A-BOT, updates its beliefs, makes a prediction about the visual representation of the unseen image (specifically, the fc7 vector of I), and receives a reward from the environment based on how close Q-BOT's prediction is to the true fc7 representation of I. The goal of Q-BOT and A-BOT is to communicate so as to maximize this reward. One critical issue is that both agents are imperfect and noisy: both "forget" things said in the past, sometimes repeat themselves, and may not stay consistent in their responses; A-BOT does not have access to an external knowledge base, so it cannot answer all questions; and so on. Thus, to succeed at the task, they must learn to play to each other's strengths. An important question to ask is: why force the two agents to communicate in discrete symbols (English words) as opposed to continuous vectors? The reason is twofold. First, discrete symbols and natural language are interpretable. By forcing the two agents to communicate in and understand natural language, we ensure that humans can not only inspect the conversation logs between two agents, but, more importantly, communicate with them. After the two bots are trained, we can pair a human questioner with A-BOT to accomplish the goals of visual dialog (aiding visually/situationally impaired users), and pair a human answerer with Q-BOT to play a visual 20-questions game. The second reason to communicate in discrete symbols is to prevent cheating: if Q-BOT and A-BOT are allowed to exchange continuous vectors, then the trivial solution is for A-BOT to ignore Q-BOT's question and directly convey the fc7 vector for I, allowing Q-BOT to make a perfect prediction. In essence, discrete natural language is an interpretable, low-dimensional
"bottleneck" layer between these two agents. Contributions. We introduce a novel goal-driven training for visual question answering and dialog agents. Despite significant popular interest in VQA (over 200 works citing [1] since 2015), all previous approaches have been based on supervised learning, making this the first instance of goal-driven training for visual question answering / dialog. We demonstrate two experimental results. First, as a "sanity check" demonstration of pure RL (from scratch), we show results on a diagnostic task where perception is perfect: a synthetic world with "images" containing a single object defined by three attributes (shape/color/style). In this synthetic world, for Q-BOT to identify an image, it must learn about these attributes. The two bots communicate via an ungrounded vocabulary, i.e., symbols with no pre-specified human-interpretable meanings ("X", "Y", "1", "2"). When trained end-to-end with RL on this task, we find that the two bots invent their own communication protocol: Q-BOT starts using certain symbols to query for specific attributes ("X" for color), and A-BOT starts responding with specific symbols indicating the value of that attribute ("1" for red). Essentially, we demonstrate the automatic emergence of grounded language and communication among "visual" dialog agents with no human supervision! Second, we conduct large-scale real-image experiments on the VisDial dataset [4]. With imperfect perception on real images, discovering a human-interpretable language and communication strategy from scratch is both tremendously difficult and an unnecessary re-invention of English. Thus, we pretrain with supervised dialog data in VisDial before "fine-tuning" with RL; this alleviates a number of challenges in making deep RL converge to something meaningful. We show that these RL fine-tuned bots significantly outperform the supervised bots. Most interestingly, while the supervised Q-BOT attempts to mimic how humans ask questions, the RL-trained Q-BOT shifts strategies and asks questions that the A-BOT is better at answering, ultimately resulting in more informative dialog and a better team.

# 2. Related Work
Vision and Language. A number of problems at the intersection of vision and language have recently gained prominence, e.g., image captioning [6, 7, 13, 34] and visual question answering (VQA) [1, 9, 20, 21, 24]. Most related to this paper are two recent works on visually-grounded dialog [4, 5]. Das et al. [4] proposed the task of Visual Dialog, collected the VisDial dataset by pairing two subjects on Amazon Mechanical Turk to chat about an image (with assigned roles of
"Questioner" and "Answerer"), and trained neural visual dialog answering models. De Vries et al. [5] extended the ReferIt game [14] to a "GuessWhat" game, where one person asks questions about an image to guess which object has been "selected", and the second person answers questions with "yes"/"no"/NA (natural language answers are disallowed). One disadvantage of GuessWhat is that it requires bounding box annotations for objects; our image guessing game does not need any such annotations, so an unlimited number of game plays may be simulated. Moreover, as described in Sec. 1, both these works unnaturally treat dialog as a static supervised learning problem. Although both datasets contain thousands of human dialogs, they still represent only an incredibly sparse sample of the vast space of visually-grounded questions and answers. Training robust, visually-grounded dialog agents via supervised techniques remains a challenging task. In our work, we take inspiration from the AlphaGo [27] approach of supervision from human-expert games and reinforcement learning from self-play. Similarly, we perform supervised pretraining on human dialog data and fine-tune in an end-to-end goal-driven manner with deep RL. 20 Questions and Lewis Signaling Game. Our proposed image-guessing game is naturally the visual analog of the popular 20-questions game. More formally, it is a generalization of the Lewis Signaling (LS) [17] game, widely studied in economics and game theory. LS is a cooperative game between two players: a sender and a receiver. In the classical setting, the world can be in one of a finite number of discrete states {1, 2, . . . , N}, which is known to the sender but not the receiver. The sender can send one of N discrete symbols/signals to the receiver, who upon receiving the signal must take one of N discrete actions. The game is perfectly cooperative, and one simple (though not unique) Nash equilibrium is the "identity mapping", where the sender encodes each world state with a bijective signal, and similarly the
receiver has a bijective mapping from a signal to an action. Our proposed "image guessing" game is a generalization of LS, with Q-BOT being the receiver and A-BOT the sender. However, in our proposed game, the receiver (Q-BOT) is not passive; it actively solicits information by asking questions. Moreover, the signaling process is not "single shot", but proceeds over multiple rounds of conversation. Text-only or Classical Dialog. Li et al. [18] have proposed using RL for training dialog systems.
However, they hand-define what a "good" utterance/dialog looks like (non-repetition, coherence, continuity, etc.). In contrast, taking a cue from adversarial learning [10, 19], we set up a cooperative game between two agents such that we do not need to hand-define what a "good" dialog looks like: a "good" dialog is one that leads to a successful image-guessing play. Emergence of Language. There is a long history of work on language emergence in multi-agent systems [23]. The more recent resurgence has focused on deep RL [8, 11, 16, 22]. The high-level ideas of these concurrent works are similar to our synthetic experiments. For our large-scale real-image results, we do not want our bots to invent their own uninterpretable language, and we use pretraining on VisDial [4] to achieve "alignment" with English.
# 3. Cooperative Image Guessing Game: In Full Generality and a Specific Instantiation

Players and Roles. The game involves two collaborative agents, a questioner bot (Q-BOT) and an answerer bot (A-BOT), with an information asymmetry. A-BOT sees an image $I$; Q-BOT does not. Q-BOT is primed with a 1-sentence description $c$ of the unseen image and asks "questions" (sequences of discrete symbols over a vocabulary $V$), which A-BOT answers with another sequence of symbols. The communication occurs for a fixed number of rounds. Game Objective in General. At each round, in addition to communicating, Q-BOT must provide a "description" $\hat{y}$ of the unknown image $I$ based only on the dialog history, and both players receive a reward from the environment inversely proportional to the error in this description under some metric $\ell(\hat{y}, y^{gt})$. We note that this is a general setting where the "description" $\hat{y}$ can take on varying levels of specificity, from image embeddings (or fc7 vectors of $I$) to textual descriptions to pixel-level image generations. Specific Instantiation. In our experiments, we focus on the setting where Q-BOT is tasked with estimating a vector embedding of the image $I$. Given some feature extractor (i.e., a pretrained CNN model, say VGG-16), no human annotation is required to produce the target "description" $y^{gt}$ (simply forward-prop the image through the CNN). Reward/error can be measured by simple Euclidean distance, and any image may be used as the visual grounding for a dialog.
Thus, an unlimited number of "game plays" may be simulated.

# 4. Reinforcement Learning for Dialog Agents

In this section, we formalize the training of two visual dialog agents (Q-BOT and A-BOT) with Reinforcement Learning (RL), describing formally the action, state, environment, reward, policy, and training procedure. We begin by noting that although there are two agents (Q-BOT, A-BOT), since the game is perfectly cooperative we can, without loss of generality, view this as a single-agent RL setup where a single "meta-agent" comprises two "constituent agents" communicating via a natural language bottleneck layer. Action. Both agents share a common action space consisting of all possible output sequences under a token vocabulary $V$. This action space is discrete and, in principle, infinitely large, since arbitrary-length sequences $q_t, a_t$ may be produced and the dialog may go on forever. In our synthetic experiment, the two agents are given different vocabularies to coax a certain behavior to emerge (details in Sec. 5). In our VisDial experiments, the two agents share a common vocabulary of English tokens. In addition, at each round $t$ of the dialog, Q-BOT also predicts $\hat{y}_t$, its current guess about the visual representation of the unseen image. This component of Q-BOT's action space is continuous. State. Since there is information asymmetry (A-BOT can see the image $I$, Q-BOT cannot), each agent has its own observed state. For a dialog grounded in image $I$ with caption $c$, the state of Q-BOT at round $t$ is the caption and dialog history so far, $s_t^Q = [c, q_1, a_1, \ldots, q_{t-1}, a_{t-1}]$, and the state of A-BOT also includes the image, $s_t^A = [I, c, q_1, a_1, \ldots, q_{t-1}, a_{t-1}, q_t]$. Policy. We model Q-BOT and A-BOT as operating under stochastic policies $\pi_Q(q_t \mid s_{t-1}^Q; \theta_Q)$ and $\pi_A(a_t \mid s_t^A; \theta_A)$, such that questions and answers may be sampled from these policies conditioned on the dialog/state history. These policies will be learned by two separate deep neural networks parameterized by $\theta_Q$ and $\theta_A$.
In addition, Q-BOT includes a feature regression network $f(\cdot)$ that produces an image representation prediction after listening to the answer at round $t$, i.e., $\hat{y}_t = f(s_{t-1}^Q, q_t, a_t; \theta_f) = f(s_t^Q; \theta_f)$. Thus, the goal of policy learning is to estimate the parameters $\theta_Q, \theta_A, \theta_f$. Environment and Reward. The environment is the image $I$ upon which the dialog is grounded. Since this is a purely cooperative setting, both agents receive the same reward. Let $\ell(\cdot, \cdot)$ be a distance metric on image representations (Euclidean distance in our experiments). At each round $t$, we define the reward for a state-action pair as:

$$r_t\big(\underbrace{s_t^Q}_{\text{state}}, \underbrace{(q_t, a_t, y_t)}_{\text{action}}\big) = \underbrace{\ell(\hat{y}_{t-1}, y^{gt})}_{\text{distance at } t-1} - \underbrace{\ell(\hat{y}_t, y^{gt})}_{\text{distance at } t} \qquad (1)$$

i.e., the change in distance to the true representation before and after a round of dialog. In this way, we consider a question-answer pair to be low quality (i.e., to have a negative reward) if it leads the questioner to make a worse estimate of the target image representation than if the dialog had ended.
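To make Eq. 1, and the telescoping property derived just below in Eq. 2, concrete, here is a minimal numerical sketch; it is not from the paper, and the function names and random stand-in features are our own:

```python
import numpy as np

def l2(a, b):
    """Euclidean distance between two image representations."""
    return float(np.linalg.norm(a - b))

def per_round_rewards(predictions, y_gt):
    """r_t = l(y_hat_{t-1}, y_gt) - l(y_hat_t, y_gt) for t = 1..T.

    predictions[0] is the caption-only guess y_hat_0; predictions[1:]
    are Q-BOT's guesses after each dialog round.
    """
    return [l2(predictions[t - 1], y_gt) - l2(predictions[t], y_gt)
            for t in range(1, len(predictions))]

# Sanity check of the telescoping identity (Eq. 2): the summed reward
# equals the overall improvement l(y_hat_0, y_gt) - l(y_hat_T, y_gt).
rng = np.random.default_rng(0)
y_gt = rng.normal(size=4096)                          # stand-in fc7 vector
preds = [rng.normal(size=4096) for _ in range(11)]    # y_hat_0 .. y_hat_10
rewards = per_round_rewards(preds, y_gt)
assert np.isclose(sum(rewards), l2(preds[0], y_gt) - l2(preds[-1], y_gt))
```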
Note that the total reward summed over all time steps of a dialog is a function of only the initial and final states, due to the cancellation of intermediate terms:

$$\sum_{t=1}^{T} r_t\big(s_t^Q, (q_t, a_t, y_t)\big) = \underbrace{\ell(\hat{y}_0, y^{gt}) - \ell(\hat{y}_T, y^{gt})}_{\text{overall improvement due to dialog}} \qquad (2)$$

This is again intuitive: "How much do the feature predictions of Q-BOT improve due to the dialog?" The details of policy learning are described in Sec. 4.2, but before that, let us describe the inner workings of the two agents.

# 4.1. Policy Networks for Q-BOT and A-BOT

Fig. 2 shows an overview of our policy networks for Q-BOT and A-BOT and their interaction within a single round of dialog. Both agent policies are modeled via Hierarchical Recurrent Encoder-Decoder neural networks, which have recently been proposed for dialog modeling [4, 25, 26]. Q-BOT consists of the following four components:
- Fact Encoder: Q-BOT asks a question $q_t$: "Are there any animals?" and receives an answer $a_t$: "Yes, there are two elephants." Q-BOT treats this concatenated $(q_t, a_t)$-pair as a "fact" it now knows about the unseen image. The fact encoder is an LSTM whose final hidden state $F_t^Q \in \mathbb{R}^{512}$ is used as an embedding of $(q_t, a_t)$.
- State/History Encoder is an LSTM that takes the encoded fact $F_t^Q$ at each time step to produce an encoding of the dialog up to and including time $t$ as $S_t^Q \in \mathbb{R}^{512}$. Notice that this results in a two-level hierarchical encoding of the dialog: $(q_t, a_t) \to F_t^Q$ and $(F_1^Q, \ldots, F_t^Q) \to S_t^Q$.
- Question Decoder is an LSTM that takes the state/history encoding from the previous round, $S_{t-1}^Q$, and generates question $q_t$ by sequentially sampling words.
- Feature Regression Network $f(\cdot)$ is a single fully-connected layer that produces an image representation prediction $\hat{y}_t$ from the current encoded state: $\hat{y}_t = f(S_t^Q)$.

Each of these components and their relation to each other are shown on the left side of Fig. 2.
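The following PyTorch skeleton illustrates how these four components could be wired together. It is a simplified sketch under our own naming, not the authors' released implementation; only the dimensions follow the paper (512-d hidden states, 4096-d fc7 targets).

```python
import torch
import torch.nn as nn

class QBot(nn.Module):
    """Sketch of Q-BOT: fact encoder -> history encoder -> question
    decoder, plus the feature regression head f(.)."""

    def __init__(self, vocab_size, emb_dim=300, hid=512, fc7_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.fact_enc = nn.LSTM(emb_dim, hid, batch_first=True)
        self.hist_enc = nn.LSTMCell(hid, hid)      # one step per dialog round
        self.decoder = nn.LSTM(emb_dim, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)      # next-word logits
        self.regress = nn.Linear(hid, fc7_dim)     # f(.): S_t^Q -> y_hat_t

    def encode_fact(self, qa_tokens):
        """Embed the concatenated (q_t, a_t) pair; return F_t^Q."""
        _, (h, _) = self.fact_enc(self.embed(qa_tokens))
        return h[-1]                               # (batch, 512)

    def update_state(self, fact, state):
        """One step of the history LSTM: (F_t^Q, S_{t-1}^Q) -> S_t^Q."""
        return self.hist_enc(fact, state)          # returns (h, c)

    def question_logits(self, prev_words, state_h):
        """Decode next-word logits, conditioning the decoder's initial
        hidden state on S_{t-1}^Q."""
        h0 = state_h.unsqueeze(0)
        out, _ = self.decoder(self.embed(prev_words),
                              (h0, torch.zeros_like(h0)))
        return self.out(out)

    def predict_feature(self, state_h):
        """y_hat_t = f(S_t^Q)."""
        return self.regress(state_h)
```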
We collectively refer to the parameters of the three LSTM models as $\theta_Q$ and those of the feature regression network as $\theta_f$. A-BOT has a similar structure to Q-BOT, with slight differences since it also models the image $I$ via a CNN:
- Question Encoder: A-BOT receives a question $q_t$ from Q-BOT and encodes it via an LSTM as $Q_t^A \in \mathbb{R}^{512}$.
- Fact Encoder: Similar to Q-BOT, A-BOT also encodes the $(q_t, a_t)$-pairs via an LSTM to get $F_t^A \in \mathbb{R}^{512}$. The purpose of this encoder is for A-BOT to remember what it has already told Q-BOT and be able to understand references to entities already mentioned.
- State/History Encoder is an LSTM that takes as input at each round $t$ the encoded question $Q_t^A$, the image features $y$ from VGG [28], and the previous fact encoding $F_{t-1}^A$, to produce a state encoding $S_t^A$. This allows the model to contextualize the current question w.r.t. the history while looking at the image to seek an answer.
- Answer Decoder is an LSTM that takes the state encoding $S_t^A$ and generates $a_t$ by sequentially sampling words.

Figure 2: Policy networks for Q-BOT and A-BOT. At each round $t$ of dialog, (1) Q-BOT generates a question $q_t$ from its question decoder conditioned on its state encoding $S_{t-1}^Q$, (2) A-BOT encodes $q_t$, updates its state encoding $S_t^A$, and generates an answer $a_t$, (3) both encode the completed exchange as $F_t^Q$ and $F_t^A$, and (4) Q-BOT updates its state to $S_t^Q$, predicts an image representation $\hat{y}_t$, and receives a reward.

Our code will be publicly available. To recap, a dialog round at time $t$ consists of 1) Q-BOT generating a question $q_t$ conditioned on its state encoding $S_{t-1}^Q$, 2) A-BOT encoding $q_t$, updating its state encoding $S_t^A$, and generating an answer $a_t$, 3) Q-BOT and A-BOT both encoding the completed exchange as $F_t^Q$ and $F_t^A$, and 4) Q-BOT updating its state to $S_t^Q$ based on $F_t^Q$ and making an image representation prediction $\hat{y}_t$ for the unseen image.

# 4.2. Joint Training with Policy Gradients

In order to train these agents, we use the REINFORCE [35] algorithm, which updates policy parameters $(\theta_Q, \theta_A, \theta_f)$ in response to experienced rewards. In this section, we derive the expressions for the parameter gradients for our setup. Recall that our agents take actions (communication $(q_t, a_t)$ and feature prediction $\hat{y}_t$), and that our objective is to maximize the expected reward under the agents' policies, summed over the entire dialog:

$$\max_{\theta_A, \theta_Q, \theta_f} J(\theta_A, \theta_Q, \theta_f) \quad \text{where} \quad J(\theta_A, \theta_Q, \theta_f) = \mathbb{E}_{\pi_Q, \pi_A}\Big[\sum_{t=1}^{T} r_t\big(s_t^Q, (q_t, a_t, y_t)\big)\Big] \qquad (3, 4)$$

While the above is a natural objective, we find that considering the entire dialog as a single RL episode does not differentiate between individual good or bad exchanges within it. Thus, we update our model based on per-round rewards:

$$J(\theta_A, \theta_Q, \theta_f) = \mathbb{E}_{\pi_Q, \pi_A}\big[r_t\big(s_t^Q, (q_t, a_t, y_t)\big)\big] \qquad (5)$$

Following the REINFORCE algorithm, we can write the gradient of this expectation as an expectation of a quantity related to the gradient. For $\theta_Q$, we derive this explicitly:

$$\begin{aligned}
\nabla_{\theta_Q} J &= \nabla_{\theta_Q}\, \mathbb{E}_{\pi_Q, \pi_A}\big[r_t(\cdot)\big] \qquad (r_t \text{ inputs hidden to avoid clutter}) \\
&= \nabla_{\theta_Q} \Big[ \sum_{q_t, a_t} \pi_Q\big(q_t \mid s_{t-1}^Q\big)\, \pi_A\big(a_t \mid s_t^A\big)\, r_t(\cdot) \Big] \\
&= \sum_{q_t, a_t} \pi_Q\big(q_t \mid s_{t-1}^Q\big)\, \nabla_{\theta_Q} \log \pi_Q\big(q_t \mid s_{t-1}^Q\big)\, \pi_A\big(a_t \mid s_t^A\big)\, r_t(\cdot) \\
&= \mathbb{E}_{\pi_Q, \pi_A}\big[r_t(\cdot)\, \nabla_{\theta_Q} \log \pi_Q\big(q_t \mid s_{t-1}^Q\big)\big] \qquad (6)
\end{aligned}$$

Similarly, the gradient w.r.t. $\theta_A$, i.e., $\nabla_{\theta_A} J$, can be derived as

$$\nabla_{\theta_A} J = \mathbb{E}_{\pi_Q, \pi_A}\big[r_t(\cdot)\, \nabla_{\theta_A} \log \pi_A\big(a_t \mid s_t^A\big)\big]. \qquad (7)$$

As is standard practice, we estimate these expectations with sample averages. Specifically, we sample a question from Q-BOT (by sequentially sampling words from the question decoder LSTM till a stop token is produced), sample its answer from A-BOT, compute the scalar reward for this round, multiply that scalar reward by the gradient of the log-probability of this exchange, and propagate backward to compute gradients w.r.t. all parameters $\theta_Q, \theta_A$. This update has an intuitive interpretation: if a particular $(q_t, a_t)$ is informative (i.e., leads to positive reward), its probability will be pushed up (positive gradient); conversely, a poor exchange leading to negative reward will be pushed down (negative gradient).
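Below is a minimal sketch of one such sample-based update, combining the policy-gradient terms of Eqs. 6-7 with the supervised gradient for $f(\cdot)$ described next. It assumes `q_logprob` and `a_logprob` are the summed log-probabilities of the sampled question and answer under $\pi_Q$ and $\pi_A$; all names are illustrative, not the authors' code.

```python
import torch

def reinforce_step(q_logprob, a_logprob, y_hat_prev, y_hat_t, y_gt, optimizer):
    """One round's update for both agents plus the regression head.

    q_logprob / a_logprob: scalar tensors, log pi_Q(q_t|s) and
    log pi_A(a_t|s), carrying gradients w.r.t. theta_Q / theta_A.
    """
    # r_t = l(y_hat_{t-1}, y_gt) - l(y_hat_t, y_gt), treated as a constant
    # (detached) scalar inside the policy-gradient term.
    with torch.no_grad():
        r_t = (y_hat_prev - y_gt).norm() - (y_hat_t - y_gt).norm()

    # REINFORCE: minimizing -r_t * (log pi_Q + log pi_A) yields the
    # gradients of Eqs. (6) and (7) in expectation.
    pg_loss = -r_t * (q_logprob + a_logprob)

    # f(.) is a deterministic policy, so theta_f receives direct
    # "supervised" gradients through the differentiable l2 distance.
    feat_loss = (y_hat_t - y_gt).norm()

    optimizer.zero_grad()
    (pg_loss + feat_loss).backward()
    optimizer.step()
```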
Finally, since the feature regression network $f(\cdot)$ forms a deterministic policy, its parameters $\theta_f$ receive "supervised" gradient updates for differentiable $\ell(\cdot, \cdot)$.

Figure 3: Emergence of grounded dialog: (a) Each "image" has three attributes, and there are six tasks for Q-BOT (ordered pairs of attributes). (b) Both agents interact for two rounds, followed by attribute-pair prediction by Q-BOT. (c) Example 2-round dialog where grounding emerges: color, shape, and style have been encoded as X, Y, and Z respectively. (d) Improvement in reward during policy learning.

# 5. Emergence of Grounded Dialog

To succeed at our image guessing game, Q-BOT and A-BOT need to accomplish a number of challenging sub-tasks. They must learn a common language (do you understand what I mean when I say "person"?) and develop mappings between symbols and image representations (what does "person" look like?); i.e., A-BOT must learn to ground language in visual perception to answer questions, and Q-BOT must learn to predict plausible image representations, all in an end-to-end manner from a distant reward function. Before diving into the full task on real images, we conduct a "sanity check" on a synthetic dataset with perfect perception to ask: is this even possible? Setup. As shown in Fig. 3, we consider a synthetic world with "images" represented as a triplet of attributes (4 shapes, 4 colors, 4 styles) for a total of 64 unique images.
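For concreteness, the synthetic world and task set can be enumerated in a few lines. The paper specifies 4 values per attribute; beyond the values it mentions (e.g., purple, square, filled), the names below are illustrative stand-ins.

```python
from itertools import permutations, product

SHAPES = ["square", "triangle", "circle", "star"]
COLORS = ["purple", "green", "blue", "red"]
STYLES = ["filled", "solid", "dashed", "dotted"]

# 4 x 4 x 4 = 64 unique "images", each a triplet of attributes.
IMAGES = list(product(SHAPES, COLORS, STYLES))

# 6 tasks: all ordered pairs of distinct attributes.
TASKS = list(permutations(["shape", "color", "style"], 2))

assert len(IMAGES) == 64 and len(TASKS) == 6
```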
A-BOT has perfect perception and is given direct access to this representation for an image. Q-BOT is tasked with deducing two attributes of the image in a particular order; e.g., if the task is (shape, color), Q-BOT would need to output (square, purple) for a (purple, square, filled) image seen by A-BOT (see Fig. 3b). We form all 6 such tasks per image. Vocabulary. We conducted a series of pilot experiments and found the choice of vocabulary size to be crucial for coaxing non-trivial
"non-cheating" behavior to emerge. For instance, we found that if the A-BOT vocabulary $V_A$ is large enough, say $|V_A| \geq 64$ (the number of images), the optimal policy learnt simply ignores what Q-BOT asks, and A-BOT conveys the entire image in a single token (e.g., token 1 ≡ (red, square, filled)). As with human communication, an impoverished vocabulary that cannot possibly encode the richness of the visual sensor is necessary for non-trivial dialog to emerge. To ensure at least 2 rounds of dialog, we restrict each agent to produce only a single-symbol utterance per round from "minimal" vocabularies $V_A = \{1, 2, 3, 4\}$ for A-BOT and $V_Q = \{X, Y, Z\}$ for Q-BOT. Since $|V_A|^{\#\text{rounds}} = 4^2 = 16 < 64 = \#\text{images}$, a non-trivial dialog is necessary to succeed at the task. Policy Learning. Since the action space is discrete and small, we instantiate Q-BOT and A-BOT as fully specified tables of Q-values (state, action, future reward estimate) and apply tabular Q-learning with Monte Carlo estimation over 10k episodes to learn the policies. Updates are done alternately, where one bot is frozen while the other is updated. During training, we use ε-greedy policies [29], ensuring an action probability of 0.6 for the greedy action and splitting the remaining probability uniformly across the other actions. At test time, we default to the greedy, deterministic policy obtained from these ε-greedy policies.
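Below is a minimal sketch of the ε-greedy rule and a Monte Carlo table update of the kind described above; the state keying, learning rate, and episode loop are simplified stand-ins of our own, not the authors' code.

```python
import random
from collections import defaultdict

GREEDY_P = 0.6  # probability of the greedy action during training

def eps_greedy(q_table, state, actions):
    """Pick the greedy action w.p. 0.6; otherwise uniform over the rest."""
    greedy = max(actions, key=lambda a: q_table[(state, a)])
    others = [a for a in actions if a != greedy]
    if random.random() < GREEDY_P or not others:
        return greedy
    return random.choice(others)

def mc_update(q_table, episode, ret, lr=0.1):
    """Monte Carlo update: nudge every visited (state, action) pair
    toward the episode return (+1 / -1 for a right / wrong prediction)."""
    for state, action in episode:
        q_table[(state, action)] += lr * (ret - q_table[(state, action)])

q_bot = defaultdict(float)  # Q-BOT's Q-values over (state, symbol/prediction)
a_bot = defaultdict(float)  # A-BOT's Q-values

# Training alternates: one bot's table is frozen while the other is updated
# over 10k-episode batches; at test time, both act greedily.
```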
The task requires outputting the correct attribute-value pair based on the task and image. Since there are a total of 4 + 4 + 4 = 12 unique values across the 3 attributes, Q-BOT's final action selects one of 12 × 12 = 144 attribute-pairs. We use +1 and −1 as rewards for right and wrong predictions. Results. Fig. 3d shows the reward achieved by the agents' policies vs. the number of RL iterations (each with 10k episodes/dialogs). We can see that the two quickly learn the optimal policy. Figs. 3b,c show some example exchanges between the trained bots. We find that the two invent their own communication protocol: Q-BOT consistently uses specific symbols to query for specific attributes (X → color, Y → shape, Z → style), and A-BOT consistently responds with specific symbols to indicate the inquired attribute, e.g., if Q-BOT emits X (asks for color), A-BOT responds with 1 → purple, 2 → green, 3 → blue, 4 → red.
Similar mappings exist for responses to other attributes. Essentially, we find the automatic emergence of grounded language and a communication protocol among "visual" dialog agents without any human supervision!

# 6. Experiments

Our synthetic experiments in the previous section establish that, when faced with a cooperative task where information must be exchanged, two agents with perfect perception are capable of developing a complex communication protocol. In general, with imperfect perception on real images, discovering a human-interpretable language and communication strategy from scratch is both tremendously difficult and an unnecessary re-invention of English. We leverage the recently introduced VisDial dataset [4], which contains (as of the publicly released v0.5) human dialogs (10 rounds of question-answer pairs) on 68k images from the COCO dataset, for a total of 680k QA-pairs. Example dialogs from the VisDial dataset are shown in Tab. 1.

Table 1: Selected examples of Q-BOT-A-BOT interactions for SL-pretrained and RL-full-QAf, shown alongside each image's caption and the corresponding human-human dialog from VisDial [4]. RL-full-QAf interactions are diverse, less prone to repetitive and safe exchanges ("can't tell", "don't know", etc.), and more image-discriminative.

Image Feature Regression. We consider a specific instantiation of the visual guessing game described in Sec. 3: at each round $t$, Q-BOT needs to regress to the vector embedding $y^{gt}$ of image $I$ corresponding to the fc7 (penultimate fully-connected layer) output from VGG-16 [28]. The distance metric used in the reward computation is $\ell_2$, i.e., $r_t(\cdot) = \Vert y^{gt} - \hat{y}_{t-1} \Vert_2 - \Vert y^{gt} - \hat{y}_t \Vert_2$.
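As a sketch of how the regression target $y^{gt}$ could be produced with a recent torchvision VGG-16 (no human annotation needed): the exact preprocessing and weights used in the paper may differ, so treat this as an assumption-laden illustration.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
# fc7 is the penultimate fully-connected layer: drop the final classifier
# layer so the forward pass ends at the 4096-d fc7 activation.
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

@torch.no_grad()
def fc7_target(path: str) -> torch.Tensor:
    """Forward-prop an image through VGG-16 and return its fc7 vector."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return vgg(x).squeeze(0)  # shape: (4096,)
```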
Training Strategies. We found two training strategies to be crucial to ensure/improve the convergence of the RL framework described in Sec. 4, to produce meaningful dialog exchanges, and to ground the agents in natural language. 1) Supervised Pretraining. We first train both agents in a supervised manner on the train split of VisDial [4] v0.5 under an MLE objective. Thus, conditioned on human dialog history, Q-BOT is trained to generate the follow-up question asked by human1, A-BOT is trained to generate the response given by human2, and the feature network $f(\cdot)$ is optimized to regress to $y^{gt}$. The CNN in A-BOT is pretrained on ImageNet. This pretraining ensures that the agents can generally recognize some objects/scenes and emit English questions/answers. The space of possible $(q_t, a_t)$ is tremendously large, and without pretraining most exchanges result in no information gain about the image. 2) Curriculum Learning. After supervised pretraining, we "smoothly" transition the agents to RL training according to a curriculum. Specifically, we continue supervised training for the first $K$ (say 9) rounds of dialog and transition to policy-gradient updates for the remaining $10 - K$ rounds. We start at $K = 9$ and gradually anneal to 0. This curriculum ensures that the agent team does not suddenly diverge off policy if one incorrect $q$ or $a$ is generated. Models are pretrained for 15 epochs on VisDial, after which we transition to policy-gradient training by annealing $K$ down by 1 every epoch. All LSTMs are 2-layered with 512-d hidden states. We use Adam [15] with a learning rate of $10^{-3}$, and clamp gradients to $[-5, 5]$ to avoid explosion. All our code will be made publicly available. There is no explicit state-dependent baseline in our training, as we initialize from supervised pretraining and have a zero-centered reward, which ensures that a good proportion of random samples are both positively and negatively reinforced.
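A minimal sketch of this curriculum schedule, assuming one annealing step of $K$ per epoch after 15 supervised epochs; the function and its defaults are our own illustration.

```python
def training_schedule(num_epochs=30, pretrain_epochs=15, total_rounds=10):
    """Yield (epoch, k): rounds 0..k-1 of each dialog use supervised (MLE)
    updates, rounds k..9 use the policy-gradient updates of Sec. 4.2."""
    for epoch in range(num_epochs):
        if epoch < pretrain_epochs:
            k = total_rounds                       # fully supervised
        else:
            k = max(0, 9 - (epoch - pretrain_epochs))  # anneal 9, 8, ..., 0
        yield epoch, k

for epoch, k in training_schedule():
    # for each dialog: supervised loss on the first k rounds,
    # REINFORCE on the remaining 10 - k rounds
    pass
```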
Model Ablations. We compare to a few natural ablations of our full model, denoted RL-full-QAf. First, we evaluate the purely supervised agents (denoted SL-pretrained), i.e., trained only on VisDial data (no RL). Comparison to these agents establishes how much RL helps over supervised learning. Second, we fix one of Q-BOT or A-BOT to the supervised pretrained initialization and train the other agent (and the regression network $f$) with RL; we label these as Frozen-Q or Frozen-A respectively. Comparing to these partially frozen agents tells us the importance of coordinated communication. Finally, we freeze the regression network $f$ to the supervised pretrained initialization while training Q-BOT and A-BOT with RL; this measures improvements from language adaptation alone. We quantify the performance of these agents along two dimensions: how well they perform on the image guessing task (i.e., image retrieval) and how closely they emulate human dialogs (i.e., performance on the VisDial dataset [4]).
8 vised learning. Second, we ï¬ x one of Q-BOT or A-BOT to the supervised pretrained initialization and train the other agent (and the regression network f ) with RL; we label these as Frozen-Q or Frozen-A respectively. Compar- ing to these partially frozen agents tell us the importance of coordinated communication. Finally, we freeze the regres- sion network f to the supervised pretrained initialization while training Q-BOT and A-BOT with RL. This measures improvements from language adaptation alone. We quantify performance of these agents along two dimen- sions â how well they perform on the image guessing task (i.e. image retrieval) and how closely they emulate human dialogs (i.e. performance on VisDial dataset [4]). Evaluation: Guessing Game. To assess how well the agents have learned to cooperate at the image guessing task, we setup an image retrieval experiment based on the test split of VisDial v0.5 (â ¼9.5k images), which were never seen by the agents in RL training. We present each im- age + an automatically generated caption [13] to the agents, and allow them to communicate over 10 rounds of dialog. After each round, Q-BOT predicts a feature representation Ë yt. We sort the entire test set in ascending distance to this prediction and compute the rank of the source image. Fig. 4a shows the mean percentile rank of the source im- age for our method and the baselines across the rounds (shaded region indicates standard error). A percentile rank of 95% means that the source image is closer to the predic- tion than 95% of the images in the set. Tab. 1 shows ex- ample exchanges between two humans (from VisDial), the SL-pretrained and the RL-full-QAf agents. We make a few observations: We see that outperforms SL-pretrained and all other ablations (e.g., at improving percentile rank by over 3%), round 10, indicating that our training framework is indeed effective at training these agents for image guessing.
• All agents "forget"; RL agents forget less. One interesting trend we note in Fig. 4a is that all methods significantly improve from round 0 (caption-based retrieval) to rounds 2 or 3, but beyond that all methods except RL-full-QAf get worse, even though they have strictly more information. As shown in Tab. 1, agents will often get stuck in infinite repeating loops, but this is much rarer for RL agents. Moreover, even when RL agents repeat themselves, it is after longer gaps (2-5 rounds). We conjecture that the goal of helping a partner over multiple rounds encourages longer-term memory retention.
1703.06585#40
1703.06585#42
1703.06585
[ "1605.06069" ]
1703.06585#42
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
• RL leads to more informative dialog. SL A-BOT tends to produce "safe" generic responses ("I don't know", "I can't see"), but RL A-BOT responses are much more detailed ("It is hard to tell but I think it's black"). These observations are consistent with recent literature on text-only dialog [18]. Our hypothesis for this improvement is that human responses are diverse, and SL-trained agents tend to "hedge their bets" and achieve a reasonable log-likelihood by being non-committal. In contrast, such "safe" responses do not help Q-BOT in picking the correct image, thus encouraging an informative RL A-BOT.

Figure 4: (a) Guessing Game Evaluation. The plot shows the rank in percentile (higher is better) of the "ground truth" image (shown to A-BOT) as retrieved using the fc7 predictions of Q-BOT vs. rounds of dialog. Round 0 corresponds to image guessing based on the caption alone. The RL-full-QAf bots significantly outperform the SL-pretrained bots (and the other ablations). Error bars show standard error of the means. (b) Visual Dialog Answerer Evaluation. Performance of A-BOT on VisDial v0.5 test, under mean reciprocal rank (MRR), recall@k for k ∈ {5, 10}, and mean rank. Higher is better for MRR and recall@k; lower is better for mean rank. Frozen-Q-multi outperforms all other models on the VisDial metrics by a 3% relative gain, entirely "for free" since no additional annotations were required for RL. (c) Qualitative Retrieval Results. The left column shows the true image and caption; the right column shows the dialog exchange and a list of images sorted by their distance to the ground-truth image, with the image predicted by Q-BOT highlighted in red. The predicted image is often semantically quite similar.

The results for Fig. 4b are tabulated below:

| Model          | MRR   | R@5   | R@10  | Mean Rank |
|----------------|-------|-------|-------|-----------|
| SL-pretrain    | 0.436 | 53.41 | 60.09 | 21.83     |
| Frozen-Q       | 0.428 | 53.12 | 60.19 | 21.52     |
| Frozen-f       | 0.432 | 53.28 | 60.11 | 21.54     |
| RL-full-QAf    | 0.428 | 53.08 | 60.22 | 21.54     |
| Frozen-Q-multi | 0.437 | 53.67 | 60.48 | 21.13     |

Evaluation: Emulating Human Dialogs. To quantify how well the agents emulate human dialog, we evaluate A-BOT on the retrieval metrics proposed by Das et al. [4]. Specifically, every question in VisDial is accompanied by 100 candidate responses. We use the log-likelihood assigned by the A-BOT answer decoder to sort these candidates and report the results in Tab. 4b. We find that, despite the RL A-BOT's answers being more informative, the improvements on the VisDial metrics are minor. We believe this is because, while the answers are correct, they may not necessarily mimic human responses (which is what the answer-retrieval metrics check for). In order to dig deeper, we train a variant of Frozen-Q with a multi-task objective, simultaneously using (1) ground-truth answer supervision and (2) the image-guessing reward, to keep A-BOT close to human-like responses. We use a weight of 1.0 for the SL loss and 10.0 for RL. This model, denoted Frozen-Q-multi, performs better than all other approaches on the VisDial answering metrics, improving the best reported result on VisDial by 0.7 mean rank (a relative improvement of 3%). Note that this gain is entirely "free" since no additional annotations were required for RL.
Human Study. We conducted a human interpretability study to measure (1) whether humans can easily understand the Q-BOT-A-BOT dialog, and (2) how image-discriminative the interactions are. We show human subjects a pool of 16 images and the agent dialog (10 rounds), and ask them to pick their top-5 guesses for the image the two agents are talking about. We find that the mean rank of the ground-truth image is 3.70 for SL-pretrained agent dialog vs. 2.73 for RL-full-QAf dialog. In terms of MRR, the comparison is 0.518 vs. 0.622 respectively. Thus, under both metrics, humans find it easier to guess the unseen image based on RL-full-QAf dialog exchanges, which shows that agents trained within our framework (1) successfully develop image-discriminative language, and (2) this language is interpretable; they do not deviate from English.

# 7. Conclusions

To summarize, we introduce a novel training framework for visually-grounded dialog agents by posing a cooperative "image guessing" game between two agents. We use deep reinforcement learning to learn the policies of these agents end-to-end, from pixels to multi-agent multi-round dialog to game reward. We demonstrate the power of this framework in a completely ungrounded synthetic world, where the agents communicate via symbols with no pre-specified meanings (X, Y, Z).
We find that the two bots invent their own communication protocol without any human supervision. We go on to instantiate this game on the VisDial [4] dataset, where we pretrain with supervised dialog data. We find that the RL "fine-tuned" agents not only significantly outperform SL agents, but also learn to play to each other's strengths, all the while remaining interpretable to outside human observers. Acknowledgements. We thank Devi Parikh for helpful discussions. This work was funded in part by the following
awards to DB: NSF CAREER award, ONR YIP award, ONR Grant N00014-14-1-0679, ARO YIP award, ICTAS Junior Faculty award, Google Faculty Research Award, Amazon Academic Research Award, AWS Cloud Credits for Research, and NVIDIA GPU donations. SK was supported by ONR Grant N00014-12-1-0903, and SL was partially supported by the Bradley Postdoctoral Fellowship.
Views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government or any sponsor.

# References

[1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual Question Answering. In ICCV, 2015.
[2] J. P. Bigham, C. Jayant, H. Ji, G. Little, A. Miller, R. C. Miller, R. Miller, A. Tatarowicz, B. White, S. White, and T. Yeh. VizWiz: Nearly Real-time Answers to Visual Questions. In UIST, 2010.
[3] X. Chen and C. L. Zitnick. Mind's Eye:
A Recurrent Visual Representation for Image Caption Generation. In CVPR, 2015.
[4] A. Das, S. Kottur, K. Gupta, A. Singh, D. Yadav, J. M. Moura, D. Parikh, and D. Batra. Visual Dialog. In CVPR, 2017.
[5] H. de Vries, F. Strub, S. Chandar, O. Pietquin, H. Larochelle, and A. Courville. GuessWhat?! Visual object discovery through multi-modal dialogue. In CVPR, 2017.
[6] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T.
Darrell. Long-term Recurrent Convolutional Networks for Visual Recognition and Description. In CVPR, 2015.
[7] H. Fang, S. Gupta, F. N. Iandola, R. K. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. C. Platt, C. L. Zitnick, and G. Zweig.
From Captions to Visual Concepts and Back. In CVPR, 2015.
[8] J. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In Advances in Neural Information Processing Systems, 2016.
[9] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu.
1703.06585#52
1703.06585#54
1703.06585
[ "1605.06069" ]
1703.06585#54
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering. In NIPS, 2015. 3 J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Gen- erative Adversarial Nets. In NIPS, 2014. 3 [11] S. Havrylov and I. Titov.
1703.06585#53
1703.06585#55
1703.06585
[ "1605.06069" ]
1703.06585#55
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
Emergence of language with multi- agent games: Learning to communicate with sequences of symbols. In ICLR Workshop, 2017. 3 [12] J. Johnson, A. Karpathy, and L. Fei-Fei. DenseCap: Fully Convolutional Localization Networks for Dense Captioning. In CVPR, 2016. 1 [13] A. Karpathy and L. Fei-Fei. Deep visual-semantic align- In CVPR, 2015. ments for generating image descriptions. 3, 8 [14] S. Kazemzadeh, V. Ordonez, M. Matten, and T. L. Berg. ReferItGame: Referring to Objects in Photographs of Nat- ural Scenes. In EMNLP, 2014. 3 [15] D. Kingma and J. Ba. Adam:
1703.06585#54
1703.06585#56
1703.06585
[ "1605.06069" ]
1703.06585#56
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
A Method for Stochastic Opti- mization. In ICLR, 2015. 8 [16] A. Lazaridou, A. Peysakhovich, and M. Baroni. Multi-agent cooperation and the emergence of (natural) language. In ICLR, 2017. 3 [17] D. Lewis. Convention: A philosophical study. John Wiley & Sons, 2008. 3 [18] J. Li, W. Monroe, A. Ritter, M. Galley, J. Gao, and D. Juraf- sky. Deep Reinforcement Learning for Dialogue Generation. In EMNLP, 2016. 3, 9 [19] J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. Adver- sarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547, 2017. 3 [20] M. Malinowski and M. Fritz.
1703.06585#55
1703.06585#57
1703.06585
[ "1605.06069" ]
1703.06585#57
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
A Multi-World Approach to Question Answering about Real-World Scenes based on Un- certain Input. In NIPS, 2014. 3 [21] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neu- rons: A neural-based approach to answering questions about images. In ICCV, 2015. 1, 3 [22] I. Mordatch and P. Abbeel. Emergence of grounded compo- sitional language in multi-agent populations. arXiv preprint arXiv:1703.04908, 2017. 3 [23] S. Nolï¬ and M. Mirolli.
1703.06585#56
1703.06585#58
1703.06585
[ "1605.06069" ]
1703.06585#58
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
Evolution of Communication and Language in Embodied Agents. Springer Publishing Com- pany, Incorporated, 1st edition, 2009. 3 [24] M. Ren, R. Kiros, and R. Zemel. Exploring Models and Data for Image Question Answering. In NIPS, 2015. 1, 3 [25] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau.
1703.06585#57
1703.06585#59
1703.06585
[ "1605.06069" ]
1703.06585#59
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In AAAI, 2016. 4 [26] I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. arXiv preprint arXiv:1605.06069, 2016. 4 [27] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, I. Antonoglou, 11 V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 2016. 3 [28] K. Simonyan and A. Zisserman.
1703.06585#58
1703.06585#60
1703.06585
[ "1605.06069" ]
1703.06585#60
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 5, 8 [29] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. 6 [30] M. Tapaswi, Y. Zhu, R. Stiefelhagen, A. Torralba, R. Ur- tasun, and S. Fidler. MovieQA: Understanding Stories in Movies through Question-Answering.
1703.06585#59
1703.06585#61
1703.06585
[ "1605.06069" ]
1703.06585#61
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
In CVPR, 2016. 1 [31] K. Tu, M. Meng, M. W. Lee, T. E. Choe, and S. C. Zhu. Joint Video and Text Parsing for Understanding Events and Answering Queries. IEEE MultiMedia, 2014. 1 [32] S. Venugopalan, M. Rohrbach, J. Donahue, R. J. Mooney, T. Darrell, and K. Saenko.
1703.06585#60
1703.06585#62
1703.06585
[ "1605.06069" ]
1703.06585#62
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
Sequence to Sequence - Video to Text. In ICCV, 2015. 1 [33] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. J. Mooney, and K. Saenko. Translating Videos to Natural Lan- guage Using Deep Recurrent Neural Networks. In NAACL HLT, 2015. 1 [34] O. Vinyals, A. Toshev, S. Bengio, and D.
1703.06585#61
1703.06585#63
1703.06585
[ "1605.06069" ]
1703.06585#63
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
Erhan. Show and tell: A neural image caption generator. In CVPR, 2015. 1, 3 [35] R. J. Williams. Simple statistical gradient-following algo- rithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â 256, 1992. 5 [36] S. Wu, H. Pique, and J. Wieland. Using artiï¬ - facebook. intelligence to help blind people â seeâ cial http://newsroom.fb.com/news/2016/04/using-artiï¬ cial- intelligence-to-help-blind-people-see-facebook/, 1 2016.
1703.06585#62
1703.06585#64
1703.06585
[ "1605.06069" ]
1703.06585#64
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
[37] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhut- dinov, R. S. Zemel, and Y. Bengio. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In ICML, 2015. 1
1703.06585#63
1703.06585
[ "1605.06069" ]
# Prototypical Networks for Few-shot Learning

# Jake Snell, University of Toronto*

# Kevin Swersky, Twitter

# Richard S. Zemel, University of Toronto, Vector Institute

# Abstract

We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.

# Introduction

Few-shot classification [20, 16, 13] is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely overfit. While the problem is quite difficult, it has been demonstrated that humans have the ability to perform even one-shot classification, where only a single example of each new class is given, with a high degree of accuracy [16].

Two recent approaches have made significant progress in few-shot learning. Vinyals et al. [29] proposed matching networks, which uses an attention mechanism over a learned embedding of the labeled set of examples (the support set) to predict classes for the unlabeled points (the query set). Matching networks can be interpreted as a weighted nearest-neighbor classifier applied within an embedding space. Notably, this model utilizes sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points. The use of episodes makes the training problem more faithful to the test environment and thereby improves generalization. Ravi and Larochelle [22] take the episodic training idea further and propose a meta-learning approach to few-shot learning. Their approach involves training an LSTM [9] to produce the updates to a classifier, given an episode, such that it will generalize well to a test set. Here, rather than training a single model over multiple episodes, the LSTM meta-learner learns to train a custom model for each episode.

We attack the problem of few-shot learning by addressing the key issue of overfitting. Since data is severely limited, we work under the assumption that a classifier should have a very simple inductive bias. Our approach, prototypical networks, is based on the idea that there exists an embedding in which points cluster around a single prototype representation for each class. In order to do this, we learn a non-linear mapping of the input into an embedding space using a neural network and take a class's prototype to be the mean of its support set in the embedding space. Classification is then performed for an embedded query point by simply finding the nearest class prototype. We follow the same approach to tackle zero-shot learning; here each class comes with meta-data giving a high-level description of the class rather than a small number of labeled examples. We therefore learn an embedding of the meta-data into a shared space to serve as the prototype for each class.
*Initial work by first author done while at Twitter.

Figure 1 (panels (a) few-shot and (b) zero-shot): Prototypical networks in the few-shot and zero-shot scenarios. Left: few-shot prototypes $c_k$ are computed as the mean of embedded support examples for each class. Right: zero-shot prototypes $c_k$ are produced by embedding class meta-data $v_k$. In either case, embedded query points are classified via a softmax over distances to class prototypes: $p_\phi(y = k \mid x) \propto \exp(-d(f_\phi(x), c_k))$.

Classification is performed, as in the few-shot scenario, by finding the nearest class prototype for an embedded query point. In this paper, we formulate prototypical networks for both the few-shot and zero-shot settings. We draw connections to matching networks in the one-shot setting, and analyze the underlying distance function used in the model. In particular, we relate prototypical networks to clustering [4] in order to justify the use of class means as prototypes when distances are computed with a Bregman divergence, such as squared Euclidean distance.
We find empirically that the choice of distance is vital, as Euclidean distance greatly outperforms the more commonly used cosine similarity. On several benchmark tasks, we achieve state-of-the-art performance. Prototypical networks are simpler and more efficient than recent meta-learning algorithms, making them an appealing approach to few-shot and zero-shot learning.

# 2 Prototypical Networks

# 2.1 Notation

In few-shot classification we are given a small support set of $N$ labeled examples $S = \{(x_1, y_1), \ldots, (x_N, y_N)\}$, where each $x_i \in \mathbb{R}^D$ is the $D$-dimensional feature vector of an example and $y_i \in \{1, \ldots, K\}$ is the corresponding label. $S_k$ denotes the set of examples labeled with class $k$.

# 2.2 Model

Prototypical networks compute an $M$-dimensional representation $c_k \in \mathbb{R}^M$, or prototype, of each class through an embedding function $f_\phi : \mathbb{R}^D \to \mathbb{R}^M$ with learnable parameters $\phi$. Each prototype is the mean vector of the embedded support points belonging to its class:

$$c_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i) \quad (1)$$

Given a distance function $d : \mathbb{R}^M \times \mathbb{R}^M \to [0, +\infty)$,
prototypical networks produce a distribution over classes for a query point $x$ based on a softmax over distances to the prototypes in the embedding space:

$$p_\phi(y = k \mid x) = \frac{\exp(-d(f_\phi(x), c_k))}{\sum_{k'} \exp(-d(f_\phi(x), c_{k'}))} \quad (2)$$

Learning proceeds by minimizing the negative log-probability $J(\phi) = -\log p_\phi(y = k \mid x)$ of the true class $k$ via SGD. Training episodes are formed by randomly selecting a subset of classes from the training set, then choosing a subset of examples within each class to act as the support set and a subset of the remainder to serve as query points. Pseudocode to compute the loss $J(\phi)$ for a training episode is provided in Algorithm 1.
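Before turning to Algorithm 1, here is a minimal NumPy sketch of Equations (1) and (2) and the episodic loss. The embedding $f_\phi$ is stubbed out as a fixed random linear map, and the data layout and shapes are illustrative assumptions; the pseudocode below remains the authoritative description.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 64))
def embed(x):                         # stand-in for the learned embedding f_phi
    return x @ W

def prototypes(support):
    # Equation (1): class prototypes are means of embedded support sets.
    k, n_s, d = support.shape
    return embed(support.reshape(k * n_s, d)).reshape(k, n_s, -1).mean(axis=1)

def log_p_y(queries, protos):
    # Equation (2): log-softmax over negative squared Euclidean distances.
    d = ((embed(queries)[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return -d - np.log(np.exp(-d).sum(axis=1, keepdims=True))

# One toy episode: N_C = 5 classes, N_S = 5 support and N_Q = 15 query per class.
n_c, n_s, n_q = 5, 5, 15
support = rng.normal(size=(n_c, n_s, 784))
queries = rng.normal(size=(n_c, n_q, 784))

protos = prototypes(support)
logp = log_p_y(queries.reshape(n_c * n_q, -1), protos)   # (n_c * n_q, n_c)
targets = np.repeat(np.arange(n_c), n_q)
J = -logp[np.arange(n_c * n_q), targets].mean()          # episode loss J(phi)
print("episode loss:", J)
```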
Algorithm 1: Training episode loss computation for prototypical networks. $N$ is the number of examples in the training set, $K$ is the number of classes in the training set, $N_C \le K$ is the number of classes per episode, $N_S$ is the number of support examples per class, and $N_Q$ is the number of query examples per class. RANDOMSAMPLE($S$, $N$) denotes a set of $N$ elements chosen uniformly at random from set $S$, without replacement.

Input: training set $D = \{(x_1, y_1), \ldots, (x_N, y_N)\}$, where each $y_i \in \{1, \ldots, K\}$; $D_k$ denotes the subset of $D$ containing all elements $(x_i, y_i)$ such that $y_i = k$.
Output: the loss $J$ for a randomly generated training episode.

  $V \leftarrow$ RANDOMSAMPLE($\{1, \ldots, K\}$, $N_C$)  // select class indices for the episode
  for $k$ in $\{1, \ldots, N_C\}$ do
    $S_k \leftarrow$ RANDOMSAMPLE($D_{V_k}$, $N_S$)  // select support examples
    $Q_k \leftarrow$ RANDOMSAMPLE($D_{V_k} \setminus S_k$, $N_Q$)  // select query examples
    $c_k \leftarrow \frac{1}{N_S} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i)$  // compute prototype from support examples
  end for
  $J \leftarrow 0$  // initialize loss
  for $k$ in $\{1, \ldots, N_C\}$ do
    for $(x, y)$ in $Q_k$ do
      $J \leftarrow J + \frac{1}{N_C N_Q} \left[ d(f_\phi(x), c_k) + \log \sum_{k'} \exp(-d(f_\phi(x), c_{k'})) \right]$  // update loss
    end for
  end for

# 2.3 Prototypical Networks as Mixture Density Estimation

For a particular class of distance functions, known as regular Bregman divergences [4], the prototypical networks algorithm is equivalent to performing mixture density estimation on the support set with an exponential family density. A regular Bregman divergence $d_\varphi$ is defined as

$$d_\varphi(z, z') = \varphi(z) - \varphi(z') - (z - z')^T \nabla \varphi(z'), \quad (3)$$

where $\varphi$ is a differentiable, strictly convex function of the Legendre type. Examples of Bregman divergences include squared Euclidean distance $\|z - z'\|^2$ and Mahalanobis distance.
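The definition in Equation (3) is easy to sanity-check numerically. A small NumPy sketch (shapes assumed) confirming that $\varphi(z) = \|z\|^2$ generates the squared Euclidean distance:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.normal(size=(100, 8)), rng.normal(size=(8,))

def bregman(a, b, phi, grad_phi):
    # Equation (3): d_phi(a, b) = phi(a) - phi(b) - (a - b) . grad phi(b).
    return phi(a) - phi(b) - ((a - b) * grad_phi(b)).sum(-1)

phi = lambda z: (z * z).sum(-1)       # phi(z) = ||z||^2
grad_phi = lambda z: 2.0 * z

assert np.allclose(bregman(a, b, phi, grad_phi), ((a - b) ** 2).sum(-1))
print("phi(z) = ||z||^2 generates the squared Euclidean distance")
```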
Prototype computation can be viewed in terms of hard clustering on the support set, with one cluster per class and each support point assigned to its corresponding class cluster. It has been shown [4] for Bregman divergences that the cluster representative achieving minimal distance to its assigned points is the cluster mean. Thus the prototype computation in Equation (1) yields optimal cluster representatives given the support set labels when a Bregman divergence is used.

Moreover, any regular exponential family distribution $p_\psi(z \mid \theta)$ with parameters $\theta$ and cumulant function $\psi$ can be written in terms of a uniquely determined regular Bregman divergence [4]:

$$p_\psi(z \mid \theta) = \exp\{z^T \theta - \psi(\theta) - g_\psi(z)\} = \exp\{-d_\varphi(z, \mu(\theta)) - g_\psi(z)\} \quad (4)$$

Consider now a regular exponential family mixture model with parameters $\Gamma = \{\theta_k, \pi_k\}_{k=1}^K$:

$$p(z \mid \Gamma) = \sum_{k=1}^K \pi_k \, p_\psi(z \mid \theta_k) = \sum_{k=1}^K \pi_k \exp(-d_\varphi(z, \mu(\theta_k)) - g_\psi(z)) \quad (5)$$

Given $\Gamma$, inference of the cluster assignment $y$ for an unlabeled point $z$ becomes:

$$p(y = k \mid z) = \frac{\pi_k \exp(-d_\varphi(z, \mu(\theta_k)))}{\sum_{k'} \pi_{k'} \exp(-d_\varphi(z, \mu(\theta_{k'})))} \quad (6)$$

For an equally-weighted mixture model with one cluster per class, cluster assignment inference (6) is equivalent to query class prediction (2) with $f_\phi(x) = z$ and $c_k = \mu(\theta_k)$. In this case, prototypical networks are effectively performing mixture density estimation with an exponential family distribution determined by $d_\varphi$.
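The cluster-mean property cited from [4] can likewise be illustrated with a quick numerical check (assumed shapes): for squared Euclidean distance, the mean of a support set achieves a lower total divergence than any perturbed candidate representative.

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=(100, 8))            # embedded support points of one class

def total_divergence(points, rep):
    return ((points - rep) ** 2).sum()   # total squared Euclidean divergence

mean = z.mean(axis=0)
for _ in range(1000):                    # random perturbations all do worse
    other = mean + 0.1 * rng.normal(size=8)
    assert total_divergence(z, mean) < total_divergence(z, other)
print("the class mean is the optimal cluster representative")
```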
The choice of distance therefore specifies modeling assumptions about the class-conditional data distribution in the embedding space.

# 2.4 Reinterpretation as a Linear Model

A simple analysis is useful in gaining insight into the nature of the learned classifier. When we use Euclidean distance $d(z, z') = \|z - z'\|^2$, the model in Equation (2) is equivalent to a linear model with a particular parameterization [19]. To see this, expand the term in the exponent:
$$-\|f_\phi(x) - c_k\|^2 = -f_\phi(x)^T f_\phi(x) + 2 c_k^T f_\phi(x) - c_k^T c_k \quad (7)$$

The first term in Equation (7) is constant with respect to the class $k$, so it does not affect the softmax probabilities. We can write the remaining terms as a linear model as follows:

$$2 c_k^T f_\phi(x) - c_k^T c_k = w_k^T f_\phi(x) + b_k, \quad \text{where } w_k = 2 c_k \text{ and } b_k = -c_k^T c_k \quad (8)$$
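The algebra in Equations (7) and (8) can be confirmed numerically; a short sketch with assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
f_x = rng.normal(size=(64,))         # embedded query f_phi(x)
c = rng.normal(size=(5, 64))         # class prototypes c_k

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

scores_dist = -((f_x - c) ** 2).sum(axis=1)      # -||f_phi(x) - c_k||^2

# Equation (8): w_k = 2 c_k and b_k = -c_k^T c_k.
w, b = 2.0 * c, -(c * c).sum(axis=1)
scores_lin = w @ f_x + b

# The -f_phi(x)^T f_phi(x) term is constant in k, so the softmaxes agree.
assert np.allclose(softmax(scores_dist), softmax(scores_lin))
print("softmax over distances matches the linear model")
```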
We focus primarily on squared Euclidean distance (corresponding to spherical Gaussian densities) in this work. Our results indicate that Euclidean distance is an effective choice despite the equivalence to a linear model. We hypothesize this is because all of the required non-linearity can be learned within the embedding function. Indeed, this is the approach that modern neural network classification systems currently use, e.g., [14, 28].

# 2.5 Comparison to Matching Networks

Prototypical networks differ from matching networks in the few-shot case, with equivalence in the one-shot scenario. Matching networks [29] produce a weighted nearest-neighbor classifier given the support set, while prototypical networks produce a linear classifier when squared Euclidean distance is used. In the case of one-shot learning, $c_k = x_k$ since there is only one support point per class, and matching networks and prototypical networks become equivalent.

A natural question is whether it makes sense to use multiple prototypes per class instead of just one. If the number of prototypes per class is fixed and greater than 1, then this would require a partitioning scheme to further cluster the support points within a class. This has been proposed in Mensink et al. [19] and Rippel et al. [25]; however, both methods require a separate partitioning phase that is decoupled from the weight updates, while our approach is simple to learn with ordinary gradient descent methods.

Vinyals et al. [29] propose a number of extensions, including decoupling the embedding functions of the support and query points, and using a second-level, fully-conditional embedding (FCE) that takes into account
specific points in each episode. These could likewise be incorporated into prototypical networks; however, they increase the number of learnable parameters, and FCE imposes an arbitrary ordering on the support set using a bi-directional LSTM. Instead, we show that it is possible to achieve the same level of performance using simple design choices, which we outline next.

# 2.6 Design Choices

Distance metric. Vinyals et al. [29] and Ravi and Larochelle [22] apply matching networks using cosine distance. However, for both prototypical and matching networks any distance is permissible, and we found that using squared Euclidean distance can greatly improve results for both. We conjecture this is primarily due to cosine distance not being a Bregman divergence, and thus the equivalence to mixture density estimation discussed in Section 2.3 does not hold.
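For concreteness, here is how the two distances behave on the same prototypes (a NumPy sketch; shapes are assumed). Cosine distance depends only on direction, which is one way to see why it cannot serve as the Bregman divergence required by the mixture-density view.

```python
import numpy as np

rng = np.random.default_rng(4)
z = rng.normal(size=(64,))           # embedded query
c = rng.normal(size=(5, 64))         # class prototypes

d_euclid = ((z - c) ** 2).sum(axis=1)
d_cosine = 1.0 - (c @ z) / (np.linalg.norm(c, axis=1) * np.linalg.norm(z))

# Cosine distance is invariant to scaling the query; squared Euclidean is not.
assert np.allclose(
    d_cosine,
    1.0 - (c @ (3.0 * z)) / (np.linalg.norm(c, axis=1) * np.linalg.norm(3.0 * z)),
)
print("nearest prototype:", d_euclid.argmin(), "(Euclid.) vs",
      d_cosine.argmin(), "(cosine)")
```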
Episode composition. A straightforward way to construct episodes, used in Vinyals et al. [29] and Ravi and Larochelle [22], is to choose $N_C$ classes and $N_S$ support points per class in order to match the expected situation at test-time. That is, if we expect at test-time to perform 5-way classification and 1-shot learning, then training episodes could be comprised of $N_C = 5$, $N_S = 1$. We have found, however, that it can be extremely beneficial to train with a higher $N_C$, or "way", than will be used at test-time. In our experiments, we tune the training $N_C$ on a held-out validation set. Another consideration is whether to match $N_S$, or "shot", at train and test-time. For prototypical networks, we found that it is usually best to train and test with the same "shot" number.
Table 1: Few-shot classification accuracies on Omniglot.

| Model | Dist. | Fine Tune | 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot |
|---|---|---|---|---|---|---|
| Matching Networks [29] | Cosine | N | 98.1% | 98.9% | 93.8% | 98.5% |
| Matching Networks [29] | Cosine | Y | 97.9% | 98.7% | 93.5% | 98.7% |
| Neural Statistician [6] | - | N | 98.1% | 99.5% | 93.2% | 98.1% |
| Prototypical Networks (ours) | Euclid. | N | 98.8% | 99.7% | 96.0% | 98.9% |

# 2.7 Zero-Shot Learning

Zero-shot learning differs from few-shot learning in that instead of being given a support set of training points, we are given a class meta-data vector $v_k$ for each class.
These could be determined in advance, or they could be learned from, e.g., raw text [7]. Modifying prototypical networks to deal with the zero-shot case is straightforward: we simply define $c_k = g_\vartheta(v_k)$ to be a separate embedding of the meta-data vector. An illustration of the zero-shot procedure for prototypical networks as it relates to the few-shot procedure is shown in Figure 1. Since the meta-data vector and query point come from different input domains, we found it was helpful empirically to fix the prototype embedding $g$ to have unit length; however, we do not constrain the query embedding $f$.
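A minimal sketch of the zero-shot variant under the same toy conventions: the prototype comes from a separate embedding of the attribute vector and is renormalized to unit length, as described above. The linear maps stand in for the learned embeddings $f$ and $g$, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
W_img = rng.normal(size=(1024, 256))   # stand-in for the image embedding f
W_att = rng.normal(size=(312, 256))    # stand-in for the meta-data embedding g

v = rng.normal(size=(50, 312))         # class attribute vectors v_k
x = rng.normal(size=(1024,))           # image features of one query

prototypes = v @ W_att                 # c_k = g(v_k)
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)  # unit length
query = x @ W_img                      # f(x), left unconstrained

d = ((query - prototypes) ** 2).sum(axis=1)
print("predicted class:", d.argmin())
```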
# 3 Experiments

For few-shot learning, we performed experiments on Omniglot [16] and the miniImageNet version of ILSVRC-2012 [26] with the splits proposed by Ravi and Larochelle [22]. We perform zero-shot experiments on the 2011 version of the Caltech-UCSD bird dataset (CUB-200 2011) [31].

# 3.1 Omniglot Few-shot Classification

Omniglot [16] is a dataset of 1623 handwritten characters collected from 50 alphabets. There are 20 examples associated with each character, where each example is drawn by a different human subject. We follow the procedure of Vinyals et al. [29] by resizing the grayscale images to 28 × 28 and augmenting the character classes with rotations in multiples of 90 degrees. We use 1200 characters plus rotations for training (4,800 classes in total) and the remaining classes, including rotations, for test. Our embedding architecture mirrors that used by Vinyals et al. [29] and is composed of four convolutional blocks.
Each block comprises a 64-filter 3 × 3 convolution, a batch normalization layer [10], a ReLU nonlinearity, and a 2 × 2 max-pooling layer. When applied to the 28 × 28 Omniglot images, this architecture results in a 64-dimensional output space. We use the same encoder for embedding both support and query points.
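The four-block embedding described here is simple to write down; the following PyTorch sketch is our reading of that description, not the authors' released code:

```python
import torch
import torch.nn as nn

def conv_block(in_channels, out_channels=64):
    # 3x3 conv -> batch norm -> ReLU -> 2x2 max-pool, as described above.
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

embedding = nn.Sequential(
    conv_block(1), conv_block(64), conv_block(64), conv_block(64),
    nn.Flatten(),
)

x = torch.randn(5, 1, 28, 28)          # a batch of 28x28 Omniglot images
print(embedding(x).shape)              # torch.Size([5, 64]): 64-dim embeddings
```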
All of our models were trained via SGD with Adam [11]. We used an initial learning rate of $10^{-3}$ and cut the learning rate in half every 2000 episodes. No regularization was used other than batch normalization. We trained prototypical networks using Euclidean distance in the 1-shot and 5-shot scenarios with training episodes containing 60 classes and 5 query points per class. We found that it is advantageous to match the training-shot with the test-shot, and to use more classes (higher "way") per training episode rather than fewer. We compare against various baselines, including the neural statistician [6] and both the fine-tuned and non-fine-tuned versions of matching networks [29]. We computed classification accuracy for our models averaged over 1000 randomly generated episodes from the test set. The results are shown in Table 1 and, to our knowledge, they represent the state-of-the-art on this dataset.

# 3.2 miniImageNet Few-shot Classification
The miniImageNet dataset, originally proposed by Vinyals et al. [29], is derived from the larger ILSVRC-12 dataset [26]. The splits used by Vinyals et al. [29] consist of 60,000 color images of size 84 × 84 divided into 100 classes with 600 examples each. For our experiments, we use the splits introduced by Ravi and Larochelle [22] in order to directly compare with state-of-the-art algorithms for few-shot learning. Their splits use a different set of 100 classes, divided into 64 training, 16 validation, and 20 test classes. We follow their procedure by training on the 64 training classes and using the 16 validation classes for monitoring generalization performance only. We use the same four-block embedding architecture as in our Omniglot experiments, though here it results in a 1600-dimensional output space due to the increased size of the images.
Table 2: Few-shot classification accuracies on miniImageNet. All accuracy results are averaged over 600 test episodes and are reported with 95% confidence intervals. †Results reported by [22].

| Model | Dist. | Fine Tune | 5-way 1-shot | 5-way 5-shot |
|---|---|---|---|---|
| Baseline Nearest Neighbors† | Cosine | N | 28.86 ± 0.54% | 49.79 ± 0.79% |
| Matching Networks [29]† | Cosine | N | 43.40 ± 0.78% | 51.09 ± 0.71% |
| Matching Networks FCE [29]† | Cosine | N | 43.56 ± 0.84% | 55.31 ± 0.73% |
| Meta-Learner LSTM [22]† | - | N | 43.44 ± 0.77% | 60.60 ± 0.71% |
| Prototypical Networks (ours) | Euclid. | N | 49.42 ± 0.78% | 68.20 ± 0.66% |

Figure 2 (bar charts omitted): Comparison showing the effect of distance metric and number of classes per training episode on 5-way classification accuracy for both matching and prototypical networks on miniImageNet. The x-axis indicates configuration of the training episodes (way, distance, and shot), and the y-axis indicates 5-way test accuracy for the corresponding shot. Error bars indicate 95% confidence intervals as computed over 600 test episodes. Note that matching networks and prototypical networks are identical in the 1-shot case.
We also use the same learning rate schedule as in our Omniglot experiments and train until validation loss stops improving. We train using 30-way episodes for 1-shot classification and 20-way episodes for 5-shot classification. We match train shot to test shot, and each class contains 15 query points per episode. We compare to the baselines as reported by Ravi and Larochelle [22], which include a simple nearest-neighbor approach on top of features learned by a classification network on the 64 training classes. The other baselines are two non-fine-tuned variants of matching networks (both ordinary and FCE) and the Meta-Learner LSTM. As can be seen in Table 2, prototypical networks achieve state-of-the-art here by a wide margin.

We conducted further analysis to determine the effect of distance metric and the number of training classes per episode on the performance of prototypical networks and matching networks. To make the methods comparable, we use our own implementation of matching networks that utilizes the same embedding architecture as our prototypical networks. In Figure 2 we compare cosine vs. Euclidean distance and 5-way vs. 20-way training episodes in the 1-shot and 5-shot scenarios, with 15 query points per class per episode. We note that 20-way achieves higher accuracy than 5-way and conjecture that the increased difficulty of 20-way classification helps the network to generalize better, because it forces the model to make more fine-grained decisions in the embedding space. Also, using Euclidean distance improves performance substantially over cosine distance. This effect is even more pronounced for prototypical networks, in which computing the class prototype as the mean of embedded support points is more naturally suited to Euclidean distances, since cosine distance is not a Bregman divergence.

# 3.3 CUB Zero-shot Classification
In order to assess the suitability of our approach for zero-shot learning, we also run experiments on the Caltech-UCSD Birds (CUB) 200-2011 dataset [31]. The CUB dataset contains 11,788 images of 200 bird species. We closely follow the procedure of Reed et al. [23] in preparing the data.
Table 3: Zero-shot classification accuracies on CUB-200.

| Model | Image Features | 50-way 0-shot Acc. |
|---|---|---|
| ALE [1] | Fisher | 26.9% |
| SJE [2] | AlexNet | 40.3% |
| Sample Clustering [17] | AlexNet | 44.3% |
| SJE [2] | GoogLeNet | 50.1% |
| DS-SJE [23] | GoogLeNet | 50.4% |
| DA-SJE [23] | GoogLeNet | 50.9% |
| Proto. Nets (ours) | GoogLeNet | 54.6% |

We use their splits to divide the classes into 100 training, 50 validation, and 50 test. For images we use 1,024-dimensional features extracted by applying GoogLeNet [28] to middle, upper left, upper right, lower left, and lower right crops of the original and horizontally-flipped image (features downloaded from https://github.com/reedscot/cvpr2016). At test time we use only the middle crop of the original image. For class meta-data we use the 312-dimensional continuous attribute vectors provided with the CUB dataset. These attributes encode various characteristics of the bird species such as their color, shape, and feather patterns.

We learned a simple linear mapping on top of both the 1,024-dimensional image features and the 312-dimensional attribute vectors to produce a 1,024-dimensional output space. For this dataset we found it helpful to normalize the class prototypes (embedded attribute vectors) to be of unit length, since the attribute vectors come from a different domain than the images. Training episodes were constructed with 50 classes and 10 query images per class. The embeddings were optimized via SGD with Adam at a fixed learning rate of $10^{-4}$ and weight decay of $10^{-5}$.
Early stopping on validation loss was used to determine the optimal number of epochs for retraining on the training plus validation set. Table 3 shows that we achieve state-of-the-art results by a large margin when compared to methods utilizing attributes as class meta-data. We compare our method to other embedding approaches, such as ALE [1], SJE [2], and DS-SJE/DA-SJE [23]. We also compare to a recent clustering approach [17] which trains an SVM on a learned feature space obtained by fine-tuning AlexNet [14]. These zero-shot classification results demonstrate that our approach is general enough to be applied even when the data points (images) are from a different domain relative to the classes (attributes).

# 4 Related Work

The literature on metric learning is vast [15, 5]; we summarize here the work most relevant to our proposed method. Neighborhood Components Analysis (NCA) [8] learns a Mahalanobis distance to maximize K-nearest-neighbor's (KNN) leave-one-out accuracy in the transformed space. Salakhutdinov and Hinton [27] extend NCA by using a neural network to perform the transformation. Large margin nearest neighbor (LMNN) classification [30] also attempts to optimize KNN accuracy but does so using a hinge loss that encourages the local neighborhood of a point to contain other points with the same label. The DNet-KNN [21] is another margin-based method that improves upon LMNN by utilizing a neural network to perform the embedding instead of a simple linear transformation. Of these, our method is most similar to the non-linear extension of NCA [27] because we use a neural network to perform the embedding and we optimize a softmax based on Euclidean distances in the transformed space, as opposed to a margin loss. A key distinction between our approach and non-linear NCA is that we form a softmax directly over classes, rather than individual points, computed from distances to each class's prototype representation. This allows each class to have a concise representation independent of the number of data points and obviates the need to store the entire support set to make predictions.

Our approach is also similar to the nearest class mean approach [19], where each class is represented by the mean of its examples. This approach was developed to rapidly incorporate new classes into a classifier without retraining; however, it relies on a linear embedding and was designed to handle
the case where the novel classes come with a large number of examples. In contrast, our approach utilizes neural networks to non-linearly embed points, and we couple this with episodic training in order to handle the few-shot scenario. Mensink et al. attempt to extend their approach to also perform non-linear classification, but they do so by allowing classes to have multiple prototypes. They find these prototypes in a pre-processing step by using k-means on the input space and then perform a multi-modal variant of their linear embedding. Prototypical networks, on the other hand, learn a non-linear embedding in an end-to-end manner with no such pre-processing, producing a non-linear classifier that still only requires one prototype per class. In addition, our approach naturally generalizes to other distance functions, particularly Bregman divergences.

Another relevant few-shot learning method is the meta-learning approach proposed in Ravi and Larochelle [22]. The key insight here is that LSTM dynamics and gradient descent can be written in effectively the same way. An LSTM can then be trained to itself train a model from a given episode, with the performance goal of generalizing well on the query points. Matching networks and prototypical networks can also be seen as forms of meta-learning, in the sense that they produce simple classifiers dynamically from new training episodes; however, the core embeddings they rely on are fixed after training. The FCE extension to matching nets involves a secondary embedding that depends on the support set. However, in the few-shot scenario the amount of data is so small that a simple inductive bias seems to work well, without the need to learn a custom embedding for each episode.

Prototypical networks are also related to the neural statistician [6] from the generative modeling literature, which extends the variational autoencoder [12, 24] to learn generative models of datasets rather than individual points. One component of the neural statistician is the "statistic network", which summarizes a set of data points into a statistic vector. It does this by encoding each point within a dataset, taking a sample mean, and applying a post-processing network to obtain an approximate posterior over the statistic vector.
Edwards and Storkey test their model for one-shot classification on the Omniglot dataset by considering each character to be a separate dataset and making predictions based on the class whose approximate posterior over the statistic vector has minimal KL-divergence from the posterior inferred by the test point. Like the neural statistician, we also produce a summary statistic for each class. However, ours is a discriminative model, as befits our discriminative task of few-shot classification.

With respect to zero-shot learning, the use of embedded meta-data in prototypical networks resembles the method of [3] in that both predict the weights of a linear classifier. The DS-SJE and DA-SJE approach of [23] also learns deep multimodal embedding functions for images and class meta-data. Unlike ours, they learn using an empirical risk loss. Neither [3] nor [23] uses episodic training, which allows us to help speed up training and regularize the model.
# 5 Conclusion

We have proposed a simple method called prototypical networks for few-shot learning based on the idea that we can represent each class by the mean of its examples in a representation space learned by a neural network. We train these networks to specifically perform well in the few-shot setting by using episodic training. The approach is far simpler and more efficient than recent meta-learning approaches, and produces state-of-the-art results even without sophisticated extensions developed for matching networks (although these can be applied to prototypical nets as well). We show how performance can be greatly improved by carefully considering the chosen distance metric, and by modifying the episodic learning procedure. We further demonstrate how to generalize prototypical networks to the zero-shot setting, and achieve state-of-the-art results on the CUB-200 dataset. A natural direction for future work is to utilize Bregman divergences other than squared Euclidean distance, corresponding to class-conditional distributions beyond spherical Gaussians. We conducted preliminary explorations of this, including learning a variance per dimension for each class. This did not lead to any empirical gains, suggesting that the embedding network has enough flexibility on its own without requiring additional fitted parameters per class. Overall, the simplicity and effectiveness of prototypical networks makes it a promising approach for few-shot learning.
# Acknowledgements

We would like to thank Marc Law, Sachin Ravi, Hugo Larochelle, Renjie Liao, and Oriol Vinyals for helpful discussions. This work was supported by the Samsung GRP project and the Canadian Institute for Advanced Research.

# References

[1] Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. Label-embedding for attribute-based classification. In Computer Vision and Pattern Recognition, pages 819-826, 2013.
[2] Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, and Bernt Schiele. Evaluation of output embeddings for fine-grained image classification. In Computer Vision and Pattern Recognition, pages 2927-2936, 2015.
[3] Jimmy Ba, Kevin Swersky, Sanja Fidler, and Ruslan Salakhutdinov. Predicting deep zero-shot convolutional neural networks using textual descriptions. In International Conference on Computer Vision, pages 4247-4255, 2015.
[4] Arindam Banerjee, Srujana Merugu, Inderjit S Dhillon, and Joydeep Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6(Oct):1705-1749, 2005.
[5] Aurélien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013.
[6] Harrison Edwards and Amos Storkey. Towards a neural statistician. International Conference on Learning Representations, 2017.
[7] Mohamed Elhoseiny, Babak Saleh, and Ahmed Elgammal. Write a classifier: Zero-shot learning using purely textual descriptions. In International Conference on Computer Vision, pages 2584-2591, 2013.
[8] Jacob Goldberger, Geoffrey E. Hinton, Sam T. Roweis, and Ruslan Salakhutdinov. Neighbourhood components analysis. In Advances in Neural Information Processing Systems, pages 513-520, 2004.
[9] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[10] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[13] Gregory Koch. Siamese neural networks for one-shot image recognition. Master's thesis, University of Toronto, 2015.
[14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[15] Brian Kulis. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287-364, 2012.
[16] Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One shot learning of simple visual concepts. In CogSci, 2011.
[17] Renjie Liao, Alexander Schwing, Richard Zemel, and Raquel Urtasun. Learning deep parsimonious representations. Advances in Neural Information Processing Systems, 2016.
[18] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.
[19] Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Distance-based image classification: Generalizing to new classes at near-zero cost. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2624-2637, 2013.
[20] Erik G Miller, Nicholas E Matsakis, and Paul A Viola. Learning from one example through shared densities on transforms. In CVPR, volume 1, pages 464-471, 2000.
[21] Renqiang Min, David A Stanley, Zineng Yuan, Anthony Bonner, and Zhaolei Zhang. A deep non-linear feature mapping for large-margin kNN classification. In IEEE International Conference on Data Mining, pages 357-366, 2009.
[22] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. International Conference on Learning Representations, 2017.
[23] Scott Reed, Zeynep Akata, Bernt Schiele, and Honglak Lee. Learning deep representations of fine-grained visual descriptions. arXiv preprint arXiv:1605.05395, 2016.
[24] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[25] Oren Rippel, Manohar Paluri, Piotr Dollar, and Lubomir Bourdev. Metric learning with adaptive density discrimination. International Conference on Learning Representations, 2016.
[26] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
[27] Ruslan Salakhutdinov and Geoffrey E. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In AISTATS, pages 412-419, 2007.
[28] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015.
[29] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630-3638, 2016.
[30] Kilian Q Weinberger, John Blitzer, and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 1473-1480, 2005.
[31] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
# A Additional Omniglot Results

In Table 4 we show test classification accuracy for prototypical networks using Euclidean distance trained with 5, 20, and 60 classes per episode.

Table 4: Additional classification accuracy results for prototypical networks on Omniglot. Configuration of training episodes is indicated by the number of classes per episode ("way"), the number of support points per class ("shot"), and the number of query points per class ("query"). Classification accuracy was averaged over 1,000 randomly generated episodes from the test set.
| Model | Dist. | Train Shot | Train Query | Train Way | 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot |
|---|---|---|---|---|---|---|---|---|
| ProtoNets | Euclid. | 1 | 15 | 5 | 97.4% | 99.3% | 92.0% | 97.8% |
| ProtoNets | Euclid. | 1 | 15 | 20 | 98.7% | 99.6% | 95.4% | 98.8% |
| ProtoNets | Euclid. | 1 | 5 | 60 | 98.8% | 99.7% | 96.0% | 99.0% |
| ProtoNets | Euclid. | 5 | 15 | 5 | 96.9% | 99.3% | 90.7% | 97.8% |
| ProtoNets | Euclid. | 5 | 15 | 20 | 98.1% | 99.6% | 94.1% | 98.7% |
| ProtoNets | Euclid. | 5 | 5 | 60 | 98.5% | 99.7% | 94.7% | 98.9% |
Figure 3 shows a sample t-SNE visualization [18] of the embeddings learned by prototypical networks. We visualize a subset of test characters from the same alphabet in order to gain better insight, despite the fact that classes in actual test episodes are likely to come from different alphabets. Even though the visualized characters are minor variations of each other, the network is able to cluster the hand-drawn characters closely around the class prototypes.

# B Additional miniImageNet Results

In Table 5 we show the full results for the comparison of training episode configuration in Figure 2 of the main paper. We also compared Euclidean-distance prototypical networks trained with a different number of classes per episode. Here we vary the classes per training episode from 5 up to 30 while keeping the number of query points per class fixed at 15. The results are shown in Figure 4.
Our findings indicate that construction of training episodes is an important consideration in order to achieve good results for few-shot classification. Table 6 contains the full results for this set of experiments.

Figure 3 (image omitted): A t-SNE visualization of the embeddings learned by prototypical networks on the Omniglot dataset. A subset of the Tengwar script is shown (an alphabet in the test set). Class prototypes are indicated in black. Several misclassified characters are highlighted in red along with arrows pointing to the correct prototype.

Figure 4 (plots omitted; panels show 1-shot and 5-shot test accuracy against training classes per episode): Comparison of the effect of training "way"
1703.05175#40
Prototypical Networks for Few-shot Learning
xed at 15. The results are shown in Figure 4. Our ï¬ ndings indicate that construction of training episodes is an important consideration in order to achieve good results for few-shot classiï¬ cation. Table 6 contains the full results for this set of experiments. 11 Figure 3: A t-SNE visualization of the embeddings learned by prototypical networks on the Omniglot dataset. A subset of the Tengwar script is shown (an alphabet in the test set). Class prototypes are indicated in black. Several misclassiï¬ ed characters are highlighted in red along with arrows pointing to the correct prototype. 51% 1-shot 69.0% 5-shot S som 08.5% 7 Ea 4 2 68.0% > ann aor 6 â Tt 8 - . © 49% u 2 -- ~ > fo fee _ > 67.5% ra Ae © age, ao © 67.0% â a § 40% â § 67.0% im . g 2 66.5% â 47% â < , a B 66.0% | |. 46% 5 65.5% 45% 65.0% 5 10 15 20 25 30 5 10 15 20 25 30 Training Classes per Episode Training Classes per Episode Figure 4: Comparison of the effect of training â wayâ
1703.05175#39
1703.05175#41
1703.05175
[ "1605.05395" ]