and we probably do not want to represent that graph explicitly, and instead use conscious attention to selectively traverse and explore only relevant parts of it, in the context of given goals.
satisfy both the sparsity requirement (each sentence involves few words) and the "strong dip" requirement (otherwise the statement is not worth communicating). In the quest to discover encoding functions which disentangle [Bengio, 2009, Bengio et al., 2013] high-level concepts from each other, we should see the consciousness prior as one of many tools to constrain the learner towards better high-level representations. Note in passing that by "disentangled" we do not generally mean marginally independent (that would make all the top-level variables independent of each other), as in recent work on variational autoencoders [Higgins et al., 2017]. Indeed, notice how natural language concepts (say, "fork" and "knife") tend not to be independent of each other, but instead may be combined to form probable statements (like "she was eating with her knife and fork").

The Consciousness Prior

Abstract: A new prior is proposed for learning representations of high-level concepts
of the kind we manipulate with language. This prior can be combined with other
priors in order to help disentangling abstract factors from each other. It is
inspired by cognitive neuroscience theories of consciousness, seen as a
bottleneck through which just a few elements, after having been selected by
attention from a broader pool, are then broadcast and condition further
processing, both in perception and decision-making. The set of recently
selected elements one becomes aware of is seen as forming a low-dimensional
conscious state. This conscious state is combining the few concepts
constituting a conscious thought, i.e., what one is immediately conscious of at
a particular moment. We claim that this architectural and
information-processing constraint corresponds to assumptions about the joint
distribution between high-level concepts. To the extent that these assumptions
are generally true (and the form of natural language seems consistent with
them), they can form a useful prior for representation learning. A
low-dimensional thought or conscious state is analogous to a sentence: it
involves only a few variables and yet can make a statement with very high
probability of being true. This is consistent with a joint distribution (over
high-level concepts) which has the form of a sparse factor graph, i.e., where
the dependencies captured by each factor of the factor graph involve only very
few variables while creating a strong dip in the overall energy function. The
consciousness prior also makes it natural to map conscious states to natural
language utterances or to express classical AI knowledge in a form similar to
facts and rules, albeit capturing uncertainty as well as efficient search
mechanisms implemented by attention mechanisms.

Source: http://arxiv.org/pdf/1709.08568. Author: Yoshua Bengio. Categories: cs.LG, cs.AI, stat.ML (primary: cs.LG). Published 2017-09-25, updated 2019-12-02. References: arXiv:1711.00350.
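The sparse-factor-graph assumption from the abstract can be made concrete with a toy numerical sketch. All variable names, factors, and energy values below are invented for illustration: each factor touches only a few of the high-level variables (sparsity) and contributes a large negative energy when its statement holds (the "strong dip").

```python
import math

# Hypothetical high-level binary variables (1 = true, 0 = false).
# Indices: 0 = "raining", 1 = "ground_wet", 2 = "carrying_umbrella", 3 = "sunny"
NUM_VARS = 4

def bits(i):
    # Enumerate a configuration of all variables from an integer index.
    return [(i >> k) & 1 for k in range(NUM_VARS)]

# Each factor involves just 2 of the 4 variables and carves a strong dip
# in the energy when its statement is satisfied.
def factor_rain_wet(x):
    # "if it rains, the ground is wet": not(rain) or wet
    return -3.0 if (x[0] == 0 or x[1] == 1) else 0.0

def factor_rain_umbrella(x):
    # "if it rains, one carries an umbrella"
    return -2.0 if (x[0] == 0 or x[2] == 1) else 0.0

FACTORS = [factor_rain_wet, factor_rain_umbrella]

def energy(x):
    return sum(f(x) for f in FACTORS)

def joint_probability(x):
    # p(x) proportional to exp(-E(x)), normalized over all configurations.
    z = sum(math.exp(-energy(bits(i))) for i in range(2 ** NUM_VARS))
    return math.exp(-energy(x)) / z

# A configuration consistent with both statements is far more probable
# than one violating them:
consistent = [1, 1, 1, 0]   # raining, wet ground, umbrella
violating = [1, 0, 0, 0]    # raining, but dry ground and no umbrella
assert joint_probability(consistent) > joint_probability(violating)
```

Each statement here plays the role of a sentence: it names only a few variables, yet satisfying it changes the energy (and hence the probability) substantially.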
The analogy with natural language and with knowledge graphs, ontologies and formal declarative knowledge also suggests that new potential functions can be created as needed. Instead of having a large but fixed set of potential functions, what we have are mechanisms for creating new ones which "make sense" according to observations, reasoning, or imagination. Instead of enumerating all the possible potential functions, the brain may have the ability to instantiate new ones on the fly. This connects the previous section, which was about the attention mechanisms for selecting a small set of variables forming a conscious thought (c_t), with the topic of this section, which is about the declarative knowledge formed by the set of potential functions each linking a few variables together. Whereas the sparse factor graph constraint is about the underlying beliefs about the world (when expressed with the high-level variables), the attention mechanisms used to build conscious thoughts are part of the inference mechanisms used to compute efficiently according to the consciousness prior.
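A minimal sketch of instantiating potential functions on the fly: a factory creates a new factor over a few chosen variables when a new piece of declarative knowledge arrives, instead of looking it up in a fixed enumeration. The factory, its arguments, and the "agreement" semantics are all hypothetical, not a mechanism specified in the text.

```python
def make_potential(var_indices, strength):
    # Returns a new factor linking just the chosen variables (sparse by design).
    def potential(x):
        # Reward agreement among the selected variables with an energy dip.
        vals = [x[i] for i in var_indices]
        return -strength if len(set(vals)) == 1 else 0.0
    return potential

factors = []
# New declarative knowledge arrives: create a factor rather than look one up
# in a large but fixed set.
factors.append(make_potential([0, 3], 2.0))
factors.append(make_potential([1, 2], 1.5))

state = [1, 0, 0, 1]
total_energy = sum(f(state) for f in factors)
assert total_energy == -3.5  # both instantiated statements hold for this state
```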
# 3.3 Training Objectives
To capture the assumption that a conscious thought can encapsulate a statement about the future, we could introduce a verifier network which can match a current representation state h_t with a past conscious state c_{t-k} stored in memory m_{t-1}:
V(h_t, c_{t-k}) → R    (5)
which should be structured so that V(h_t, c_{t-k}) indicates the consistency of c_{t-k} with h_t, e.g., estimating the probability of the corresponding statement being true, given h_t.
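A minimal sketch of such a verifier network; the bilinear form, the dimensions, and the random weights are illustrative assumptions rather than a specification from the text.

```python
import math
import random

random.seed(0)

DIM_H, DIM_C = 8, 4  # sizes of h_t and c_{t-k}; arbitrary for illustration

# Bilinear verifier: V(h, c) = sigmoid(h^T W c), read as the estimated
# probability that the past conscious state c is consistent with h.
W = [[random.gauss(0.0, 0.1) for _ in range(DIM_C)] for _ in range(DIM_H)]

def verifier(h, c):
    score = sum(h[i] * W[i][j] * c[j]
                for i in range(DIM_H) for j in range(DIM_C))
    return 1.0 / (1.0 + math.exp(-score))  # squash the score into (0, 1)

h_t = [random.gauss(0.0, 1.0) for _ in range(DIM_H)]
c_past = [random.gauss(0.0, 1.0) for _ in range(DIM_C)]
p = verifier(h_t, c_past)
assert 0.0 < p < 1.0
```

In a trained system, W would be learned so that true past statements score near 1 and inconsistent ones near 0.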
More generally, we would like to define an objective (or reward) function which embodies the idea that the attended (conscious) elements are useful, in a way which can be quantified and optimized, i.e., that the representation RNN and the attention mechanism which extracts c_t from h_t are trained to optimize this objective function. This can be in addition to other objectives, such as being able to reconstruct the raw input, or any other supervised, RL, or unsupervised objectives we may want to include.
There are two distinct mechanisms at play which contribute to map the high-level state representation to the objective function: (1) the attention mechanism (e.g., the consciousness RNN) which selects and combines a few elements from the high-level state representation into a low-dimensional "conscious sub-state" object (the current content of our consciousness), and (2) the predictions or actions which are derived from the sequence of these conscious sub-states. The second mechanism is easy to grasp and frame in standard ML practice, whether in deep learning or RL, e.g., for supervised, unsupervised, or RL tasks. For example, the attention mechanism could select elements B from the current representation state and choose to make a prediction about future elements A. Then, to improve the quality of the prediction mechanism, we may simply maximize log P(A|B) or some proxy for it, e.g., using a variational auto-encoder [Kingma and Welling, 2014] objective or a conditional GAN [Mirza and Osindero, 2014] if one wants
to sample accurately an A from B. Note again that such an objective function is not just used to learn the mapping from B to A (or to probabilities over the space of A values), but also drives the learning of the representation function itself, i.e., is back-propagated into the representation RNN. However, this part of the objective function (e.g., predictive value, computed by V above) is not sufficient, and in fact is not appropriate for training the attention mechanism itself (which variables A and B should be selected?). Indeed, if that were the driving objective for attention, the learner would always pick a pair (A, B) such that A is trivially predictable from B (and there are such aspects of reality which are trivially predictable yet do not help us to further understand the world, make sense of it, or achieve our goals). It remains an open question what other objectives would be appropriate for learning how to attend to the most useful elements, but ultimately we should be able to use the actual RL reward of the learning agent using c_t for taking decisions. Some form of mutual information, entropy or diversity may be needed so that the attention mechanism is stochastic and can choose a very
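The failure mode described above, where maximizing predictive likelihood alone favors trivially predictable pairs, can be made concrete with a toy calculation. The two candidate (A, B) pairs below are invented for illustration.

```python
import math
import random

random.seed(1)

# Candidate attention choice (a), trivial:
#   B = "clock reads t", A = "clock reads t+1" -- A is a deterministic
#   function of B, so P(A|B) = 1 and log P(A|B) = 0 for every observation.
def log_likelihood_trivial():
    return 0.0

# Candidate attention choice (b), useful but genuinely uncertain:
#   B = "sky is cloudy", A = "it rains", with P(A|B) = 0.7.
def log_likelihood_useful(p_rain_given_cloudy=0.7, n=10000):
    # Monte Carlo estimate of the expected log-likelihood achieved by the
    # best calibrated predictor of A given B.
    total = 0.0
    for _ in range(n):
        rains = random.random() < p_rain_given_cloudy
        p = p_rain_given_cloudy if rains else 1.0 - p_rain_given_cloudy
        total += math.log(p)
    return total / n

# The trivial pair wins on log-likelihood alone ...
assert log_likelihood_trivial() > log_likelihood_useful()
# ... which is exactly why prediction quality by itself cannot be the
# objective that trains the attention mechanism.
```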
# 3.4 Naming Variables and Indirection
Content-based soft-attention or hard-attention mechanisms [Bahdanau et al., 2015, Xu et al., 2015] extract a value from a set of elements by taking a convex weighted sum over an input set of values. Those weights are the attention weights, and they are computed by an attention mechanism which puts a larger weight on the element with the most appropriate "key", according to some context.
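A minimal pure-Python sketch of such a content-based soft-attention step; the query, keys, and values are made up for illustration.

```python
import math

def softmax(xs):
    # Numerically stable softmax: the weights sum to 1 (convexity).
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def soft_attention(query, keys, values):
    # Attention weights are larger for keys most similar to the query/context.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    # Convex weighted sum over the input set of values.
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

keys = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
query = [1.0, 0.0]  # a context "asking" for the first key

out = soft_attention(query, keys, values)
# The output is pulled toward the value whose key best matches the query.
assert out[0] > out[1]
```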
In standard neural networks without attention, a neuron i is identified by its position in its layer, and the signal it sends to some other neuron j downstream does not need to be identified as coming from i. However, when attention mechanisms such as described above are used to provide an input value to j, the input could come from any of the elements over which attention is making a selection. Depending on the computation performed, it could thus be useful for downstream layers with attention mechanisms selecting their input to receive not just the weighted (selected) value but also information about the source of that information. We can think of this information as a variable name (and possibly other attributes which we can interpret as a variable type), which complements the variable value. The idea of (key, value) pairs was used in memory-augmented neural networks [Graves et al., 2014, Weston et al., 2014], although it is not clear if a distinction between keys and values exists in the brain, or if a general auto-associative mechanism is used instead.
When elements from the unconscious state h_t are selected to enter the conscious state c_t using content-based soft-attention [Bahdanau et al., 2015], it is not just a value which should be copied but also some "key" which identifies the origin of that value. Modern attention-based deep learning architectures such as Transformers [Vaswani et al., 2017] bind (key, value) pairs together precisely for that purpose. For example, the kind of verifier network discussed above needs to associate a (key, prediction) pair made in the past with a (key, realization) pair observed later. The key thus acts like a name and provides a form of indirection or reference. If the key and value were mixed up and the predicted value differed substantially from the observed value, a simple associative process might miss the opportunity to match them and thus lose a strong training signal (to correct the predictor). Another reason to represent keys separately from values is that the keys can be used to represent a form of type information, to help match the expected argument type of a
downstream computation with an appropriate element selected by an attention mechanism. This is important in order to obtain systematic generalization [Lake and Baroni, 2017] and the combinatorial properties omnipresent in natural language, making it easier to combine different pieces of neural hardware dynamically, with keys being used to decide which information should be routed where. We could thus see the conscious state as a bottleneck used to route such information across many different modules.
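A toy sketch of the key-based matching idea from this section: keys act as names that let a past (key, prediction) pair be matched with a later (key, realization) pair even when the values disagree strongly. The keys, values, and dictionary-based memory below are invented for illustration.

```python
# (key, prediction) pairs stored in memory; keys are names, values are
# (hypothetical) predicted feature vectors.
past_predictions = {
    "position_of_cup": [0.9, 0.1],
    "color_of_wall": [0.2, 0.8],
}

def training_signal(observations):
    # Match each later (key, realization) pair to the stored prediction
    # with the same key, and measure the squared error between the values.
    errors = {}
    for key, realized in observations.items():
        if key in past_predictions:
            predicted = past_predictions[key]
            errors[key] = sum((p - r) ** 2 for p, r in zip(predicted, realized))
    return errors

# The cup ended up far from where it was predicted. Because the match is
# made on the key (the name), the large error -- the useful training
# signal -- is recovered; value similarity alone might never pair these up.
errors = training_signal({"position_of_cup": [0.0, 1.0]})
assert "position_of_cup" in errors and errors["position_of_cup"] > 1.0
```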
1709.08568 | 28 | # 3.5 Connection to Language and Symbolic Knowledge Representation
We hypothesize that conscious processing of the kind described above could thus help the brain (and future machine learning systems) achieve better systematic generalization and combine concepts in fluent and combinatorial ways. It is worth noting here that we define consciousness in terms of verbal reporting. All this indeed suggests that there is a fairly simple transformation from conscious states to natural language sentences. Conversely, an externally provided sentence (heard or read) could also elicit an associated conscious state, although we postulate that the conscious state is generally a richer object than the uttered sentence, i.e., mapping from conscious states to sentences loses information (think about visual imagery, or artistic expression, which are difficult to put into words), and the same sentence could thus be interpreted differently depending on context and the particulars of the agent who reads it. Formally, we could use another RNN to map a conscious state to an utterance u_t:
u_t = U(c_t, u_{t-1}).    (6)
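Equation (6) can be sketched as a toy greedy decoder that emits one word per step from the conscious state and the previous word. The vocabulary, dimensions, and random weights are all illustrative assumptions, not a model from the text.

```python
import random

random.seed(2)

VOCAB = ["<eos>", "knife", "fork", "she", "eats"]
DIM_C = 3  # size of the conscious state c_t; arbitrary

# Random (untrained) projections from the conscious state and from the
# previous word to per-word scores.
W_c = [[random.gauss(0.0, 1.0) for _ in range(len(VOCAB))] for _ in range(DIM_C)]
W_u = [[random.gauss(0.0, 1.0) for _ in range(len(VOCAB))] for _ in range(len(VOCAB))]

def U(c_t, prev_word):
    # One step of u_t = U(c_t, u_{t-1}): score every word, emit the argmax.
    scores = [
        sum(c_t[i] * W_c[i][w] for i in range(DIM_C)) + W_u[prev_word][w]
        for w in range(len(VOCAB))
    ]
    return max(range(len(VOCAB)), key=scores.__getitem__)

def utter(c_t, max_len=6):
    # Unroll the decoder until <eos> or a length limit.
    words, prev = [], 0  # start from the <eos> token
    for _ in range(max_len):
        prev = U(c_t, prev)
        if prev == 0:
            break
        words.append(VOCAB[prev])
    return words

sentence = utter([1.0, -0.5, 0.3])
assert all(w in VOCAB for w in sentence)
```

With trained weights, different conscious states would map to different short word sequences, while distinct states could still collapse onto the same sentence, which is the information loss discussed above.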
A learning agent which uses language could thus benefit from an additional regularization effect putting pressure on the encoder: the set of currently consciously attended elements should have a direct two-way mapping with natural language utterances which may be uttered by other agents, such as a human teacher. This would act as a weak form of supervision for the concepts produced by the encoder. A sentence focuses on just a handful of elements and concepts, unlike our full internal state. This imposes soft constraints on the representation function, in that its individual elements or dimensions are more likely to correspond to concepts which can typically be expressed by a single word or phrase. Based on these arguments, it is reasonable to hypothesize that language may actually help humans build sharper internal representations (which are better disentangled) as well as facilitate learning (see the arguments around curriculum learning [Bengio et al., 2009] and cultural learning [Bengio, 2014]) and enable collaborative task-solving.
Along the same line, this research opens the door to the possibility of better connecting deep learning with classical symbolic AI and cognitive science, and of moving deep learning from perception (where
| 1709.08568#29 | The Consciousness Prior |
1709.08568 | 30 | Along the same line, this research opens the door to the possibility of better connecting deep learning with classical symbolic AI and cognitive science, and of moving deep learning from perception (where
it currently shines) to higher-level cognition and knowledge representation (where many questions remain open). For example, declarative knowledge is classically represented by facts and rules: each of them is a very sharp statement (true with high probability) about reality involving just a few concepts. Such a nugget of information or knowledge seems to fit well as a conscious state. Combining such conscious states sequentially in order to make more complex predictions and inferences or actions is basically what reasoning is about. However, pasting symbolic logic computations on top of a deep learning encoder might not succeed for several reasons. This would lose the ability to manipulate uncertainty as well as to represent the context-dependent effect of goals and background knowledge which deep learning with content-based attention can provide, in addition to the ability to improve generalization through distributed representations. Instead, we envision extensions of deep learning based on attention that implement conscious processing functionalities associated with system 2 tasks in humans. Progress in this direction would also address the often expressed concern about obtaining explanations from deep nets, since the approach proposed here would make it easier for a trained agent to communicate verbally its high-level state.
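The conscious bottleneck invoked throughout this discussion — content-based attention selecting a handful of elements from a larger pool to form the low-dimensional conscious state — can be sketched as softmax scoring followed by a hard top-k restriction. The dot-product scoring function, the pool, and k below are illustrative choices, not the paper's specification:

```python
import math

def attention_select(elements, query, k=2):
    """Score each element against a query, keep only the top-k ('conscious')
    elements, and renormalize their attention weights over that subset."""
    def score(e):
        return sum(a * b for a, b in zip(e, query))  # dot-product scoring
    logits = [score(e) for e in elements]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    topk = sorted(range(len(elements)), key=lambda i: weights[i], reverse=True)[:k]
    z = sum(weights[i] for i in topk)
    dim = len(elements[0])
    # Conscious state: weighted sum over only the selected elements.
    state = [sum(weights[i] / z * elements[i][d] for i in topk) for d in range(dim)]
    return topk, state

pool = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]]
chosen, conscious_state = attention_select(pool, query=[1.0, 0.0], k=2)
```

The hard top-k step is what makes the state low-dimensional: however large the pool, only k elements survive into the conscious state.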
# 4 Considerations for Experimenting with the Consciousness Prior | 1709.08568#30 | The Consciousness Prior |
1709.08568 | 31 | # 4 Considerations for Experimenting with the Consciousness Prior
Because this is a novel theory which may be developed in many different ways, it is important to start with simple toy experiments allowing one to test and evaluate qualitatively different approaches, such that the turnaround time for each experiment is very short and the analysis of the learned representations is very easy (because we already have a preconceived idea of what concepts would be the most appropriate to disentangle).
Although working with natural language input would likely help the agent learn better and more abstract representations, it might be better to start with experiments with no linguistic input, to make sure that it is the training objective and the training framework alone which are leading to the discovery of the appropriate high-level concepts. For example, learning some form of intuitive physics is done by babies without the need for linguistic guidance. Similarly, although the consciousness prior could be used in supervised learning or task-oriented RL, testing its ability alone to discover high-level abstractions would be best done in the context of unsupervised RL, e.g., using an intrinsic reward which favours the discovery of how the environment works. | 1709.08568#31 | The Consciousness Prior |
1709.08568 | 32 | It would be more interesting for the learning task to involve meaningful abstractions which have a high predictive power. For example, consider predicting whether a pile of blocks will fall on or off a table. It involves a high-level discrete outcome which can be predicted easily, even if the details of where the blocks will fall are very difficult even for humans to predict. In that case, predicting the future at the pixel level would be extremely difficult because future states have high entropy, with a highly multi-modal distribution. However, some aspects of the future may have low entropy. If, in addition, these aspects have a big impact on predicting what will come next (or on taking the right decisions now), then the consciousness prior should be very useful.
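The point above — that an abstract outcome ("the pile falls") can be nearly deterministic even when the pixel-level future is highly multi-modal — can be illustrated by comparing Shannon entropies. The toy probabilities are invented for illustration and do not come from the paper:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Abstract variable: "does the pile fall?" — almost deterministic.
h_abstract = entropy([0.95, 0.05])

# Pixel-level future: toy stand-in with 1024 equally likely outcomes,
# giving log2(1024) = 10 bits of entropy.
h_pixels = entropy([1 / 1024] * 1024)
```

A predictor trained only on the abstract bit faces ~0.3 bits of irreducible uncertainty, versus 10 bits for the toy pixel-level future — which is why the low-entropy, high-impact aspects are the useful ones to attend to.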
# Acknowledgements
The author wants to thank Philippe Beaudoin, Gerry (Tong) Che, William Fedus, Devon Hjelm and Anirudh Goyal for preliminary discussions about the consciousness prior, as well as funding from NSERC, CIFAR, the Canada Research Chairs, and the Open Philanthropy Project.
# References
Bernard J. Baars. A Cognitive Theory of Consciousness. Cambridge, MA: Cambridge University Press, 1988.
Bernard J. Baars. In the Theater of Consciousness. New York, NY: Oxford University Press, 1997. | 1709.08568#32 | The Consciousness Prior |
1709.08568 | 33 | Bernard J. Baars. In the Theater of Consciousness. New York, NY: Oxford University Press, 1997.
Bernard J. Baars. The conscious access hypothesis: Origins and recent evidence, volume 6. 2002.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR'2015, arXiv:1409.0473, 2015.
Yoshua Bengio. Learning deep architectures for AI. Now Publishers, 2009.
Yoshua Bengio. Deep learning and cultural evolution. In Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation, pages 1–2. ACM, 2014. URL http://dl.acm.org/citation.cfm?id=2598395.
Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In ICML'09, 2009.
Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), 35(8):1798–1828, 2013. | 1709.08568#33 | The Consciousness Prior |
1709.08568 | 34 | S. Dehaene and L. Naccache. Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition, 79(1–2):1–37, 2001.
S. Dehaene, H. Lau, and S. Kouider. What is consciousness, and could machines have it? Science, 358(6362):486–492, 2017.
Lisa Ehrlinger and Wolfram Wöß. Towards a definition of knowledge graphs. SEMANTiCS (Posters, Demos, SuCCESS), 48, 2016.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. ICLR, 2(5):6, 2017.
Daniel Kahneman. Thinking, Fast and Slow. Macmillan, 2011.
Durk P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2014. | 1709.08568#34 | The Consciousness Prior |
1709.08568 | 35 | Durk P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.
Brenden M Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. arXiv preprint arXiv:1711.00350, 2017.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
Robert van Gulick. Consciousness. In Stanford Encyclopedia of Philosophy. 2004.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014. | 1709.08568#35 | The Consciousness Prior |
1709.07871 | 0 | arXiv:1709.07871v2 [cs.CV] 18 Dec 2017
# FiLM: Visual Reasoning with a General Conditioning Layer
Ethan Perez1,2, Florian Strub4, Harm de Vries1, Vincent Dumoulin1, Aaron Courville1,3 1MILA, Université de Montréal, 2Rice University, 3CIFAR Fellow, 4Univ. Lille, CNRS, Centrale Lille, Inria, UMR 9189 CRIStAL France [email protected], [email protected], [email protected], {dumouliv,courvila}@iro.umontreal.ca
# Abstract | 1709.07871#0 | FiLM: Visual Reasoning with a General Conditioning Layer | We introduce a general-purpose conditioning method for neural networks called
FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network
computation via a simple, feature-wise affine transformation based on
conditioning information. We show that FiLM layers are highly effective for
visual reasoning - answering image-related questions which require a
multi-step, high-level process - a task which has proven difficult for standard
deep learning methods that do not explicitly model reasoning. Specifically, we
show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error
for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are
robust to ablations and architectural modifications, and 4) generalize well to
challenging, new data from few examples or even zero-shot. | http://arxiv.org/pdf/1709.07871 | Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | AAAI 2018. Code available at http://github.com/ethanjperez/film .
Extends arXiv:1707.03017 | null | cs.CV | 20170922 | 20171218 | [] |
1709.07871 | 1 | # Abstract
We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning – answering image-related questions which require a multi-step, high-level process – a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot. | 1709.07871#1 | FiLM: Visual Reasoning with a General Conditioning Layer |
1709.07871 | 2 | (a) Q: What number of cylinders are small purple things or yellow rubber things? A: 2 (b) Q: What color is the other object that is the same shape as the large brown matte thing? A: Brown
Figure 1: CLEVR examples and FiLM model answers.
to exploit biases in the data rather than capture complex underlying structure behind reasoning (Goyal et al. 2017).
# 1 Introduction
The ability to reason about everyday visual input is a fundamental building block of human intelligence. Some have argued that for artificial agents to learn this complex, structured process, it is necessary to build in aspects of reasoning, such as compositionality (Hu et al. 2017; Johnson et al. 2017b) or relational computation (Santoro et al. 2017). However, if a model made from general-purpose components could learn to visually reason, such an architecture would likely be more widely applicable across domains. | 1709.07871#2 | FiLM: Visual Reasoning with a General Conditioning Layer |
1709.07871 | 3 | To understand if such a general-purpose architecture exists, we take advantage of the recently proposed CLEVR dataset (Johnson et al. 2017a) that tests visual reasoning via question answering. Examples from CLEVR are shown in Figure 1. Visual question answering, the general task of asking questions about images, has its own line of datasets (Malinowski and Fritz 2014; Geman et al. 2015; Antol et al. 2015) which generally focus on asking a diverse set of simpler questions on images, often answerable in a single glance. From these datasets, a number of effective, general-purpose deep learning models have emerged for visual question answering (Malinowski, Rohrbach, and Fritz 2015; Yang et al. 2016; Lu et al. 2016; Anderson et al. 2017). However, tests on CLEVR show that these general deep learning approaches struggle to learn structured, multi-step reasoning (Johnson et al. 2017a). In particular, these methods tend | 1709.07871#3 | FiLM: Visual Reasoning with a General Conditioning Layer |
1709.07871 | 4 | In this work, we show that a general model architecture can achieve strong visual reasoning with a method we introduce as FiLM: Feature-wise Linear Modulation. A FiLM layer carries out a simple, feature-wise affine transformation on a neural network's intermediate features, conditioned on an arbitrary input. In the case of visual reasoning, FiLM layers enable a Recurrent Neural Network (RNN) over an input question to influence Convolutional Neural Network (CNN) computation over an image. This process adaptively and radically alters the CNN's behavior as a function of the input question, allowing the overall model to carry out a variety of reasoning tasks, ranging from counting to comparing, for example. FiLM can be thought of as a generalization of Conditional Normalization, which has proven highly successful for image stylization (Dumoulin, Shlens, and Kudlur 2017; Ghiasi et al. 2017; Huang and Belongie 2017), speech recognition (Kim, Song, and Bengio 2017), and visual question answering (de Vries et al. 2017), demonstrating FiLM's broad applicability. | 1709.07871#4 | FiLM: Visual Reasoning with a General Conditioning Layer |
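The feature-wise linear modulation just described — a (gamma, beta) pair computed from the conditioning input and applied to each feature map — can be sketched in a few lines. The toy conditioning weights below are illustrative stand-ins for the question encoder's learned outputs, not values from the paper:

```python
def film(feature_maps, gammas, betas):
    """Apply FiLM: scale and shift each feature map by its own (gamma, beta).
    All spatial positions within one map get the same modulation."""
    return [
        [[g * v + b for v in row] for row in fmap]
        for fmap, g, b in zip(feature_maps, gammas, betas)
    ]

# Two 2x2 feature maps (channels) from some CNN layer.
features = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[5.0, 6.0], [7.0, 8.0]],
]

# Toy conditioning: a hand-picked linear function of a 2-d conditioning
# vector yields one (gamma, beta) pair per channel (illustrative weights).
cond = [1.0, -1.0]
gammas = [2.0 * cond[0], 0.0 * cond[0] + 1.0]  # -> [2.0, 1.0]
betas = [0.5, -cond[1]]                        # -> [0.5, 1.0]

modulated = film(features, gammas, betas)
```

Because each channel is modulated independently, a different conditioning vector can amplify, suppress, or shift different feature maps, which is how the question steers the CNN's computation.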
1709.07871 | 5 | In this paper, which expands upon a shorter report (Perez et al. 2017), our key contribution is to show that FiLM is a strong conditioning method, demonstrating the following on visual reasoning tasks:
1. FiLM models achieve state-of-the-art across a variety of visual reasoning tasks, often by significant margins.
2. FiLM operates in a coherent manner. It learns a complex, underlying structure and manipulates the conditioned network's features in a selective manner. It also enables the CNN to properly localize question-referenced objects.
3. FiLM is robust; many FiLM model ablations still outperform prior state-of-the-art. Notably, we find there is no close link between normalization and the success of a conditioned affine transformation, a previously untouched assumption. Thus, we relax the conditions under which this method can be applied.
4. FiLM models learn from little data to generalize to more complex and/or substantially different data than seen during training. We also introduce a novel FiLM-based zero-shot generalization method that further improves and validates FiLM's generalization capabilities.
# 2 Method
Our model processes the question-image input using FiLM, illustrated in Figure 2. We start by explaining FiLM and then describe our particular model for visual reasoning.
# 2.1 Feature-wise Linear Modulation
FiLM learns to adaptively influence the output of a neural network by applying an affine transformation, or FiLM, to the network's intermediate features, based on some input. More formally, FiLM learns functions f and h which output γ_{i,c} and β_{i,c} as a function of input x_i:
γ_{i,c} = f_c(x_i),    β_{i,c} = h_c(x_i),    (1)
where γ_{i,c} and β_{i,c} modulate a neural network's activations F_{i,c}, whose subscripts refer to the i-th input's c-th feature or feature map, via a feature-wise affine transformation:
FiLM(F_{i,c} | γ_{i,c}, β_{i,c}) = γ_{i,c} F_{i,c} + β_{i,c}.    (2)
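The feature-wise transformation above is simple enough to sketch in a few lines of plain Python; the function and variable names below are ours, for illustration only.

```python
def film(feature_maps, gamma, beta):
    """Feature-wise linear modulation of one example's feature maps.

    feature_maps: list of 2D lists (one H x W grid of activations per feature map).
    gamma, beta:  one scalar per feature map, predicted from the
                  conditioning input (e.g. a question embedding).
    """
    return [
        [[g * x + b for x in row] for row in fmap]
        for fmap, g, b in zip(feature_maps, gamma, beta)
    ]

# Two 2x2 feature maps: the first is scaled up, the second is
# shut off (gamma = 0) and shifted to a constant (beta = 5).
F = [[[1.0, 2.0], [3.0, 4.0]],
     [[1.0, 1.0], [1.0, 1.0]]]
out = film(F, gamma=[2.0, 0.0], beta=[0.0, 5.0])
# out == [[[2.0, 4.0], [6.0, 8.0]], [[5.0, 5.0], [5.0, 5.0]]]
```

In a real model, γ and β come from a learned function of the conditioning input rather than being fixed constants.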
f and h can be arbitrary functions such as neural networks. Modulation of a target neural network's processing can be based on the same input to that neural network or some other input, as in the case of multi-modal or conditional tasks. For CNNs, f and h thus modulate the per-feature-map distribution of activations based on x_i, agnostic to spatial location. In practice, it is easier to refer to f and h as a single function that outputs one (γ, β) vector, since, for example, it is often beneficial to share parameters across f and h for more efficient learning. We refer to this single function as the FiLM generator. We also refer to the network to which FiLM layers are applied as the Feature-wise Linearly Modulated network, the FiLM-ed network.
FiLM layers empower the FiLM generator to manipulate feature maps of a target, FiLM-ed network by scaling them up or down, negating them, shutting them off, selectively thresholding them (when followed by a ReLU), and more. Each feature map is conditioned independently, giving the FiLM generator moderately fine-grained control over activations at each FiLM layer.
As FiLM only requires two parameters per modulated feature map, it is a scalable and computationally efficient conditioning method. In particular, FiLM has a computational cost that does not scale with the image resolution.
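As a rough, back-of-the-envelope count for the 4-ResBlock, 128-feature-map model described below (the helper function is our own illustration): the modulation needs only two scalars per feature map per layer, regardless of spatial resolution.

```python
def film_param_count(num_film_layers, feature_maps_per_layer):
    # One gamma and one beta per modulated feature map, independent
    # of whether those maps are 14 x 14 or 224 x 224.
    return 2 * num_film_layers * feature_maps_per_layer

print(film_param_count(num_film_layers=4, feature_maps_per_layer=128))  # 1024
```

So the FiLM generator only has to predict 1024 modulation parameters per question for this model.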
Figure 2: A single FiLM layer for a CNN. The dot signifies a Hadamard product. Various combinations of γ and β can modulate individual feature maps in a variety of ways.
# 2.2 Model

Our FiLM model consists of a FiLM-generating linguistic pipeline and a FiLM-ed visual pipeline, as depicted in Figure 3. The FiLM generator processes a question x_i using a Gated Recurrent Unit (GRU) network (Chung et al. 2014) with 4096 hidden units that takes in learned, 200-dimensional word embeddings. The final GRU hidden state is a question embedding, from which the model predicts (γ^n_{i,·}, β^n_{i,·}) for each n-th residual block via affine projection. The visual pipeline extracts 128 14 × 14 image feature maps from a resized, 224 × 224 image input, using either a CNN trained from scratch or a fixed, pre-trained feature extractor with a learned layer of 3 × 3 convolutions. The CNN trained from scratch consists of 4 layers with 128 4 × 4 kernels each, ReLU activations, and batch normalization, similar to prior work on CLEVR (Santoro et al. 2017). The fixed feature extractor outputs the conv4 layer of a ResNet-101 (He et al. 2016) pre-trained on ImageNet (Russakovsky et al. 2015) to match prior work on CLEVR (Johnson et al. 2017a; 2017b). Image features are processed by several (4 for our model) FiLM-ed residual blocks (ResBlocks) with 128 feature maps and a final classifier. The classifier consists of a 1 × 1 convolution to 512 feature maps, global max-pooling, and a two-layer MLP with 1024 hidden units that outputs a softmax distribution over final answers.
Each FiLM-ed ResBlock starts with a 1 × 1 convolution followed by one 3 × 3 convolution, with an architecture as depicted in Figure 3. We turn the parameters of batch normalization layers that immediately precede FiLM layers off. Drawing from prior work on CLEVR (Hu et al. 2017; Santoro et al. 2017) and visual reasoning (Watters et al. 2017), we concatenate two coordinate feature maps indicating relative x and y spatial position (scaled from −1 to 1) with the image features, each ResBlock's input, and the classifier's input to facilitate spatial reasoning.
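The two coordinate feature maps can be built as follows (our sketch; it assumes height and width of at least 2):

```python
def coordinate_maps(height, width):
    """Return (x_map, y_map), two H x W grids of positions scaled from -1 to 1."""
    xs = [-1 + 2 * j / (width - 1) for j in range(width)]
    ys = [-1 + 2 * i / (height - 1) for i in range(height)]
    x_map = [list(xs) for _ in range(height)]   # varies along width
    y_map = [[y] * width for y in ys]           # varies along height
    return x_map, y_map

x_map, y_map = coordinate_maps(3, 3)
# Each row of x_map is [-1.0, 0.0, 1.0]; y_map rows are constant,
# running from -1.0 (top) to 1.0 (bottom).
```

These two grids are then concatenated, as extra channels, with the image features and with each ResBlock's and the classifier's input.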
We train our model end-to-end from scratch with
Figure 3: The FiLM generator (left), FiLM-ed network (middle), and residual block architecture (right) of our model.
Adam (Kingma and Ba 2015) (learning rate 3e−4), weight decay (1e−5), batch size 64, and batch normalization and ReLU throughout the FiLM-ed network. Our model uses only image-question-answer triplets from the training set, without data augmentation. We employ early stopping based on validation accuracy, training for 80 epochs maximum. Further model details are in the appendix. Empirically, we found FiLM had a large capacity, so many architectural and hyperparameter choices were made for added regularization.
We stress that our model relies solely on feature-wise affine conditioning to let question information influence the visual pipeline's behavior to answer questions. This approach differs from classical visual question answering pipelines, which fuse image and language information into a single embedding via element-wise product, concatenation, attention, and/or more advanced methods (Yang et al. 2016; Lu et al. 2016; Anderson et al. 2017).
# 3 Related Work

FiLM can be viewed as a generalization of Conditional Normalization (CN) methods. CN replaces the parameters of the feature-wise affine transformation typical in normalization layers, as introduced originally (Ioffe and Szegedy 2015), with a learned function of some conditioning information. Various forms of CN have proven highly effective across a number of domains: Conditional Instance Norm (Dumoulin, Shlens, and Kudlur 2017; Ghiasi et al. 2017) and Adaptive Instance Norm (Huang and Belongie 2017) for image stylization, Dynamic Layer Norm for speech recognition (Kim, Song, and Bengio 2017), and Conditional Batch Norm for general visual question answering on complex scenes such as VQA and GuessWhat?! (de Vries et al. 2017). This work complements our own, as we seek to show that feature-wise affine conditioning is effective for multi-step reasoning and to understand the underlying mechanism behind its success.
Notably, prior work in CN has not examined whether the affine transformation must be placed directly after normalization. Rather, prior work includes normalization in the method name for instructive purposes or due to implementation details. We investigate the connection between FiLM and normalization, finding it not strictly necessary for the affine transformation to occur directly after normalization. Thus, we provide a unified framework for all of these methods through FiLM, as well as a normalization-free relaxation of this approach which can be more broadly applied.
Beyond CN, there are many connections between FiLM and other conditioning methods. A common approach, used for example in Conditional DCGANs (Radford, Metz, and Chintala 2016), is to concatenate constant feature maps of conditioning information with convolutional layer input. Though not as parameter efficient, this method simply results in a feature-wise conditional bias. Likewise, concatenating conditioning information with fully-connected layer input amounts to a feature-wise conditional bias. Other approaches such as WaveNet (van den Oord et al. 2016a) and Conditional PixelCNN (van den Oord et al. 2016b) directly add a conditional feature-wise bias. These approaches are equivalent to FiLM with γ = 1, which we compare FiLM to in the Experiments section. In reinforcement learning, an alternate formulation of FiLM has been used to train one game-conditioned deep Q-network to play ten Atari games (Kirkpatrick et al. 2017), though FiLM was neither the focus of that work nor analyzed as a major component.
Other methods gate an input's features as a function of that same input, rather than a separate conditioning input. These methods include LSTMs for sequence modeling (Hochreiter and Schmidhuber 1997), Convolutional Sequence to Sequence for machine translation (Gehring et al. 2017), and even the ImageNet 2017 winning model, Squeeze and Excitation Networks (Hu, Shen, and Sun 2017). This approach amounts to a feature-wise, conditional scaling, restricted to between 0 and 1, while FiLM consists of both scaling and shifting, each unrestricted. In the Experiments section, we show the effect of restricting FiLM's scaling to between 0 and 1 for visual reasoning. We find it noteworthy that this general approach of feature modulation is effective across a variety of settings and architectures.
There are even broader links between FiLM and other methods. For example, FiLM can be viewed as using one network to generate parameters of another network, making it a form of hypernetwork (Ha, Dai, and Le 2016). Also, FiLM has potential ties with conditional computation and mixture of experts methods, where specialized network subparts are active on a per-example basis (Jordan and Jacobs 1994; Eigen, Ranzato, and Sutskever 2014; Shazeer et al. 2017); we later provide evidence that FiLM learns to selectively highlight or suppress feature maps based on conditioning information. Those methods select at a sub-network level, while FiLM selects at a feature map level.
In the domain of visual reasoning, one leading method is the Program Generator + Execution Engine model (Johnson et al. 2017b). This approach consists of a sequence-to-sequence Program Generator, which takes in a question and outputs a sequence corresponding to a tree of composable neural modules, each of which is a two or three layer residual block. This tree of neural modules is assembled to form the Execution Engine that then predicts an answer from the image. This modular approach is part of a line of neural module network methods (Andreas et al. 2016a; 2016b; Hu et al. 2017), of which End-to-End Module Networks (Hu et al. 2017) have also been tested on visual reasoning. These models use strong priors by explicitly modeling the compositional nature of reasoning and by training with additional program labels, i.e. ground-truth step-by-step instructions on how to correctly answer a question. End-to-End Module Networks further build in model biases via per-module, hand-crafted neural architectures for specific functions. Our approach learns directly from visual and textual input without additional cues or a specialized architecture.

| Model | Overall | Count | Exist | Compare Numbers | Query Attribute | Compare Attribute |
|---|---|---|---|---|---|---|
| Human (Johnson et al. 2017b) | 92.6 | 86.7 | 96.6 | 86.5 | 95.0 | 96.0 |
| Q-type baseline (Johnson et al. 2017b) | 41.8 | 34.6 | 50.2 | | | |
| LSTM (Johnson et al. 2017b) | 46.8 | 41.7 | 61.1 | | | |
| CNN+LSTM (Johnson et al. 2017b) | 52.3 | 43.7 | 65.2 | | | |
| CNN+LSTM+SA (Santoro et al. 2017) | 76.6 | 64.4 | 82.7 | | | |
| N2NMN* (Hu et al. 2017) | 83.7 | 68.5 | 85.7 | | | |
| PG+EE (9K prog.)* (Johnson et al. 2017b) | 88.6 | 79.7 | 89.7 | | | |
| PG+EE (700K prog.)* (Johnson et al. 2017b) | 96.9 | 92.7 | 97.1 | | | |
| CNN+LSTM+RN†‡ (Santoro et al. 2017) | 95.5 | 90.1 | 97.8 | | | |

Table 1: CLEVR accuracy (overall and per-question-type) by baselines, competing methods, and FiLM. (*) denotes use of extra supervision via program labels. (†) denotes use of data augmentation. (‡) denotes training from raw pixels.
# 4.1 CLEVR Task

CLEVR is a synthetic dataset of 700K (image, question, answer, program) tuples (Johnson et al. 2017a). Images contain 3D-rendered objects of various shapes, materials, colors, and sizes. Questions are multi-step and compositional in nature, as shown in Figure 1. They range from counting questions ("How many green objects have the same size as the green metallic block?") to comparison questions ("Are there fewer tiny yellow cylinders than yellow metal cubes?") and can be 40+ words long. Answers are each one word from a set of 28 possible answers. Programs are an additional supervisory signal consisting of step-by-step instructions, such as filter_shape[cube], relate[right], and count, on how to answer the question.
Relation Networks (RNs) are another leading approach for visual reasoning (Santoro et al. 2017). RNs succeed by explicitly building in a comparison-based prior. RNs use an MLP to carry out pairwise comparisons over each location of extracted convolutional features over an image, including LSTM-extracted question features as input to this MLP. RNs then element-wise sum over the resulting comparison vectors to form another vector from which a final classifier predicts the answer. We note that RNs have a computational cost that scales quadratically in spatial resolution, while FiLM's cost is independent of spatial resolution. Notably, since RNs concatenate question features with MLP input, a form of feature-wise conditional biasing as explained earlier, their conditioning approach is related to FiLM.
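The scaling claim can be made concrete with a toy count of operations (scalar "features" and a stand-in for the pairwise MLP; neither published architecture is reproduced here):

```python
def rn_aggregate(features, question):
    # Relation Network pattern: compare every pair of locations, then sum.
    # The number of comparisons grows as O(N^2) in the locations N.
    total, comparisons = 0.0, 0
    for a in features:
        for b in features:
            total += (a - b) * question   # stand-in for the pairwise MLP
            comparisons += 1
    return total, comparisons

def film_modulate(features, gamma, beta):
    # FiLM pattern: one affine transform per location; cost is O(N).
    return [gamma * f + beta for f in features]

features = [0.5, -1.0, 2.0, 0.25]   # four spatial "locations"
_, n_pairs = rn_aggregate(features, question=1.0)
print(n_pairs)                       # -> 16 comparisons for 4 locations
print(film_modulate(features, gamma=2.0, beta=1.0))  # -> [2.0, -1.0, 5.0, 1.5]
```

Doubling the spatial resolution quadruples N and hence multiplies the RN pair count by 16, while the FiLM pass only grows linearly with N.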
4 Experiments
First, we test our model on visual reasoning with the CLEVR task and use trained FiLM models to analyze what FiLM learns. Second, we explore how well our model generalizes to more challenging questions with the CLEVR-Humans task. Finally, we examine how FiLM performs in few-shot and zero-shot generalization settings using the CLEVR Compositional Generalization Test. In the appendix, we provide an error analysis of our model. Our code is available at https://github.com/ethanjperez/film.
Baselines
We compare against the following methods, discussed in detail in the Related Work section:
• Q-type baseline: Predicts based on a question's category.
• LSTM: Predicts using only the question.
• CNN+LSTM: MLP prediction over CNN-extracted image features and LSTM-extracted question features.
• Stacked Attention Networks (CNN+LSTM+SA): Linear prediction over CNN-extracted image features and LSTM-extracted question features combined via two rounds of soft spatial attention (Yang et al. 2016).
• End-to-End Module Networks (N2NMN) and Program Generator + Execution Engine (PG+EE): Methods in which separate neural networks learn separate sub-functions and are assembled into a question-dependent structure (Hu et al. 2017; Johnson et al. 2017b).
• Relation Networks (CNN+LSTM+RN): An approach which builds in pairwise comparisons over spatial locations to explicitly model reasoning's relational nature (Santoro et al. 2017).
Results
FiLM achieves a new overall state-of-the-art on CLEVR, as shown in Table 1, outperforming humans and previous methods, including those using explicit models of reasoning, program supervision, and/or data augmentation.
[Figure 4 examples. Top image: "What shape is the red thing right of the blue thing?" A: sphere; "What shape is the red thing left of the blue thing?" A: cube; "What shape is the purple thing?" A: cube; "What shape is the blue thing?" A: sphere. Bottom image: "How many cyan things are right of the gray cube?" A: 3; "...left of the small cube?" A: 2; "...right of the gray cube and left of the small cube?" A: 1; "...right of the gray cube or left of the small cube?" A: 4 (model predicted 3).]
Figure 4: Visualizations of the distribution of locations which the model uses for its globally max-pooled features which its final MLP predicts from. FiLM correctly localizes the answer-referenced object (top) or all question-referenced objects (bottom), but not as accurately when it answers incorrectly (rightmost bottom). Questions and images used match (Johnson et al. 2017b).
For methods not using extra supervision, FiLM roughly halves state-of-the-art error (from 4.5% to 2.3%). Note that using pre-trained image features as input can be viewed as a form of data augmentation in itself but that FiLM performs equally well using raw pixel inputs. Interestingly, the raw pixel model seems to perform better on lower-level questions (i.e. querying and comparing attributes) while the image features model seems to perform better on higher-level questions (i.e. comparing numbers of objects).
4.2 What Do FiLM Layers Learn?
To understand how FiLM visually reasons, we visualize activations to observe the net result of FiLM layers. We also use histograms and t-SNE (van der Maaten and Hinton 2008) to find patterns in the learned FiLM γ and β parameters themselves. In Figures 14 and 15 in the appendix, we visualize the effect of FiLM at the single feature map level.
Figure 5: Histograms of γ_{i,c} (left) and β_{i,c} (right) values over all FiLM layers, calculated over the validation set.
In the bottom example, the FiLM-ed network retains, for the MLP classifier, features on objects that are not referred to by the answer but are referred to by the question. The latter example provides evidence that the final MLP itself carries out some reasoning, using FiLM to extract relevant features for its reasoning.
Activation Visualizations
Figure 4 visualizes the distribution of locations responsible for the globally-pooled features which the MLP in the model's final classifier uses to predict answers. These images reveal that the FiLM model predicts using features of areas near answer-related or question-related objects, as the high CLEVR accuracy also suggests. This finding highlights that appropriate feature modulation indirectly results in spatial modulation, as regions with question-relevant features will have large activations while other regions will not. This observation might explain why FiLM outperforms Stacked Attention, the next best method not explicitly built for reasoning, so significantly (21%); FiLM appears to carry many of spatial attention's benefits, while also influencing feature representation. Figure 4 also suggests that the FiLM-ed network carries out reasoning throughout its pipeline. In the top example, the FiLM-ed network has localized the answer-referenced object alone before the MLP classifier.
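The pooled-feature visualization amounts to recording, per feature map, which spatial location wins the global max-pool. A toy sketch (map contents and sizes are invented; this is not the paper's pipeline):

```python
def maxpool_locations(feature_maps):
    # For each 2D feature map, find the (row, col) whose value survives
    # global max-pooling; only these winning locations feed the classifier.
    locations = []
    for fmap in feature_maps:
        best = max(
            ((r, c) for r in range(len(fmap)) for c in range(len(fmap[0]))),
            key=lambda rc: fmap[rc[0]][rc[1]],
        )
        locations.append(best)
    return locations

# Two toy 2x3 feature maps; aggregating such winners over all maps
# gives the location distributions rendered in Figure 4.
maps = [
    [[0.1, 0.9, 0.0],
     [0.2, 0.3, 0.4]],
    [[1.5, 0.0, 0.0],
     [0.0, 0.0, 2.0]],
]
print(maxpool_locations(maps))  # -> [(0, 1), (1, 2)]
```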
FiLM Parameter Histograms
To analyze at a lower level how FiLM uses the question to condition the visual pipeline, we plot γ and β values predicted over the validation set, as shown in Figure 5 and in more detail in the appendix (Figures 16 to 18). γ and β values take advantage of a sizable range, varying from -15 to 19 and from -9 to 16, respectively. γ values show a sharp peak at 0, showing that FiLM learns to use the question to shut off or significantly suppress whole feature maps. Simultaneously, FiLM learns to upregulate a much more selective set of other feature maps with high magnitude γ values. Furthermore, a large fraction (36%) of γ values are negative; since our model uses a ReLU after FiLM, γ < 0 can cause a significantly different set of activations to pass the ReLU to downstream layers than γ > 0. Also, 76% of β values are negative, suggesting that FiLM also uses β to
[Figure 6 scatter plots: "First FiLM Parameters" (A) and "Last FiLM Parameters" (B), points colored by question type: 0 exist, 1 less_than, 2 greater_than, 3 count, 4 query_material, 5 query_size, 6 query_color, 7 query_shape, 8 equal_color, 9 equal_integer, 10 equal_shape, 11 equal_size, 12 equal_material.]
Figure 6: t-SNE plots of (γ, β) of the first (left) and last (right) FiLM layers of a 6-FiLM-layer network. FiLM parameters cluster by low-level reasoning functions in the first layer and by high-level reasoning functions in the last layer.
in a specific case. Together, these findings suggest that FiLM learns to selectively upregulate, downregulate, and shut off feature maps based on conditioning information.
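These regimes follow directly from the form of the transform itself, ReLU(γ · F + β), applied per feature map. A scalar sketch with illustrative values:

```python
def film_then_relu(feature, gamma, beta):
    # Per feature map: ReLU(gamma * F + beta), as in the FiLM-ed network.
    return max(0.0, gamma * feature + beta)

f = 1.5
print(film_then_relu(f, gamma=2.0, beta=0.0))    # upregulated -> 3.0
print(film_then_relu(f, gamma=0.0, beta=0.0))    # shut off -> 0.0
# Negative gamma flips which activations survive the downstream ReLU:
print(film_then_relu(f, gamma=-1.0, beta=0.0))   # -> 0.0
print(film_then_relu(-f, gamma=-1.0, beta=0.0))  # -> 1.5
```

A γ near the histogram's peak at 0 suppresses a map regardless of the input, while a large-magnitude γ, positive or negative, selects and scales it.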
FiLM Parameters t-SNE Plot
In Figure 6, we visualize FiLM parameter vectors (γ, β) for 3,000 random validation points with t-SNE. We analyze the deeper, 6-ResBlock version of our model, which has a similar validation accuracy as our 4-ResBlock model, to better examine how FiLM layers in different layers of a hierarchy behave. First and last layer FiLM (γ, β) are grouped by the low-level and high-level reasoning functions necessary to answer CLEVR questions, respectively. For example, FiLM parameters for equal_color and query_color are close for the first layer but apart for the last layer. The same is true for shape, size, and material questions. Conversely, equal_shape, equal_size, and equal_material FiLM parameters are grouped in the last layer but split in the first layer; likewise for other high-level groupings such as integer comparison and querying. These findings suggest that FiLM layers learn a sort of function-based modularity without an archi-
tectural prior. Simply with end-to-end training, FiLM learns to handle not only different types of questions differently, but also different types of question sub-parts differently; the FiLM model works from low-level to high-level processes, as is the proper approach. For models with fewer FiLM layers, such patterns also appear, but less clearly; these models must begin higher-level reasoning sooner.
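Each point in Figure 6 corresponds to one question's FiLM parameters for a given layer. A toy sketch of how such a point could be formed for clustering (feature-map count and values are invented):

```python
def film_point(gammas, betas):
    # One plotted point: a layer's (gamma, beta) for a single question,
    # concatenated into one vector before dimensionality reduction.
    return list(gammas) + list(betas)

# Two hypothetical questions for a FiLM layer with 3 feature maps:
p1 = film_point([0.0, 2.5, -1.0], [0.3, -0.2, 0.0])
p2 = film_point([0.1, 2.4, -0.9], [0.2, -0.1, 0.1])
print(len(p1))  # -> 6, i.e. 2 x number of feature maps

# Similar questions should yield nearby points in this space:
dist = sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
print(dist)
```

t-SNE then maps these vectors to 2D while preserving such neighborhood structure, which is what makes the per-question-type clusters visible.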
4.3 Ablations
Using the validation set, we conduct an ablation study on our best model to understand how FiLM learns visual reasoning. We show results for test-time ablations in Figure 7, for architectural ablations in Table 2, and for varied model depths in Table 3. Without hyperparameter tuning, most architectural ablations and model depths outperform prior state-of-the-art on training from only image-question-answer triplets, supporting FiLM's overall robustness. Table 3 also shows, using the validation set, that our results are statistically significant.
[Figure 7 plot: "Impact of Gaussian Noise on FiLM parameters"; series: Beta + Gaussian Noise, Gamma + Gaussian Noise, Gamma/Beta + Gaussian Noise; x-axis: Gaussian Noise Std.]
Figure 7: An analysis of how robust FiLM parameters are to noise at test time. The horizontal lines correspond to setting γ or β to their respective training set mean values.
Effect of γ and β
To test the effect of γ and β separately, we trained one model with a constant γ = 1 and another with β = 0. With these models, we find a 1.5% and .5% accuracy drop, respectively; FiLM can learn to condition the CNN for visual reasoning through either biasing or scaling alone, albeit not as well as conditioning both together. This result also suggests that γ is more important than β.
To further compare the importance of γ and β, we run a series of test-time ablations (Figure 7) on our best, fully-trained model. First, we replace β with the mean β across the training set. This ablation in effect removes all conditioning information from β parameters during test time, from a model trained to use both γ and β. Here, we find that accuracy only drops by 1.0%, while the same procedure on γ results in a 65.4% drop. This large difference suggests that, in practice, FiLM largely conditions through γ rather than β. Next, we analyze performance as we add increasingly more Gaussian noise to the best model's FiLM parameters at test time. Noise in γ hurts performance significantly more, showing FiLM's higher sensitivity to changes in γ than in β and corroborating the relatively greater importance of γ.
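Both test-time probes are simple parameter surgeries. A sketch of the two operations (helper names are invented; the real ablation operates on a trained network's question-predicted parameters):

```python
import random

def ablate_with_mean(predicted_betas, train_mean_beta):
    # Test-time ablation: discard beta's question-dependence by
    # substituting the training-set mean for every predicted value.
    return [train_mean_beta for _ in predicted_betas]

def perturb(params, std, rng):
    # Robustness probe: add Gaussian noise to FiLM parameters at test time.
    return [p + rng.gauss(0.0, std) for p in params]

print(ablate_with_mean([0.4, -1.2, 0.0], train_mean_beta=-0.3))
# -> [-0.3, -0.3, -0.3]
noisy = perturb([1.0, -2.0], std=0.5, rng=random.Random(0))
print(noisy)  # two perturbed parameters
```

Applying `ablate_with_mean` to γ instead of β is the same surgery that produces the 65.4% drop, which is what singles out γ as the dominant conditioning pathway.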
Restricting γ
To understand what aspect of γ is most effective, we train a model that limits γ to (0, 1) using sig-

Model                                      Overall
Restricted γ or β
  FiLM with β := 0                         96.9
  FiLM with γ := 1                         95.9
  FiLM with γ := σ(γ)                      95.9
  FiLM with γ := tanh(γ)                   96.3
  FiLM with γ := exp(γ)                    96.3
Moving FiLM within ResBlock
  FiLM after residual connection           96.6
  FiLM after ResBlock ReLU-2               97.7
  FiLM after ResBlock Conv-2               97.1
  FiLM before ResBlock Conv-1              95.0
Removing FiLM from ResBlocks
  No FiLM in ResBlock 4                    96.8
  No FiLM in ResBlock 3-4                  96.5
  No FiLM in ResBlock 2-4                  97.3
  No FiLM in ResBlock 1-4                  21.4
Miscellaneous
  1 × 1 conv only, with no coord. maps     95.3
  No residual connection                   94.0
  No batch normalization                   93.7
  Replace image features with raw pixels   97.6
Best Architecture                          97.4±.4
Table 2: CLEVR val accuracy for ablations, trained with the best architecture with only specified changes. We report the standard deviation of the best model accuracy over 5 runs.
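The range restrictions in the first block of Table 2 replace the raw predicted γ with a squashed version. A sketch of the three mappings (the helper name is invented):

```python
import math

def restrict_gamma(gamma_raw, mode):
    # Range restrictions on gamma tried in the ablation study.
    if mode == "sigmoid":    # (0, 1): gating only, no amplification
        return 1.0 / (1.0 + math.exp(-gamma_raw))
    if mode == "tanh":       # (-1, 1): can negate, cannot amplify
        return math.tanh(gamma_raw)
    if mode == "exp":        # (0, inf): can amplify, cannot negate or zero
        return math.exp(gamma_raw)
    return gamma_raw         # unrestricted, as in the full model

for mode in ("sigmoid", "tanh", "exp", "none"):
    print(mode, restrict_gamma(-2.0, mode))
```

Each restriction removes one capability the unrestricted model enjoys (large-magnitude scaling, negation, or exact zeroing), which is what the accuracy drops in Table 2 isolate.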
moid, as many models which use feature-wise, multiplicative gating do. Likewise, we also limit γ to (−1, 1) using tanh. Both restrictions hurt performance, roughly as much as removing conditioning from γ entirely by training with γ = 1. Thus, FiLM's ability to scale features by large magnitudes appears to contribute to its success. Limiting γ to (0, ∞) with exp also hurts performance, validating the value of FiLM's capacity to negate and zero out feature maps.
Conditional Normalization
We perform an ablation study on the placement of FiLM to evaluate the relationship between normalization and FiLM that Conditional Normalization approaches assume. Unfortunately, it is difficult to accurately decouple the effect of FiLM from normalization by simply training our corresponding model without normalization, as normalization significantly accelerates, regularizes, and improves neural network learning (Ioffe and Szegedy 2015), but we include these results for completeness. However, we find no substantial performance drop when moving FiLM layers to different parts of our model's ResBlocks; we even reach the upper end of the best model's performance range when placing FiLM after the post-normalization ReLU in the ResBlocks. Thus, we decouple the name from normalization for clarity regarding where the fundamental effectiveness of the method comes from. By demonstrating this conditioning mechanism is not closely connected to normalization, we open the doors to applications in other settings in which normalization is less common, such as RNNs and reinforcement learning, which are
FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network
computation via a simple, feature-wise affine transformation based on
conditioning information. We show that FiLM layers are highly effective for
visual reasoning - answering image-related questions which require a
multi-step, high-level process - a task which has proven difficult for standard
deep learning methods that do not explicitly model reasoning. Specifically, we
show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error
for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are
robust to ablations and architectural modifications, and 4) generalize well to
challenging, new data from few examples or even zero-shot. | http://arxiv.org/pdf/1709.07871 | Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | AAAI 2018. Code available at http://github.com/ethanjperez/film .
Extends arXiv:1707.03017 | null | cs.CV | 20170922 | 20171218 | [] |
1709.07871 | 39 | Model Overall Model Overall 1 ResBlock 2 ResBlocks 3 ResBlocks 4 ResBlocks 5 ResBlocks 93.5 97.1 96.7 97.4±.4 97.4 6 ResBlocks 7 ResBlocks 8 ResBlocks 12 ResBlocks 97.7 97.4 97.6 96.9
Table 3: CLEVR val accuracy by FiLM model depth.
promising directions for future work with FiLM.
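The placement being ablated above can be sketched structurally. This is a deliberately reduced, single-channel rendering of a FiLM-ed residual block (convolutions stubbed to identity to isolate the FiLM arithmetic); the function names and the simplified ordering are our own, not the paper's exact architecture:

```python
def relu(x):
    return x if x > 0.0 else 0.0

def film_resblock(x, gamma, beta):
    """One channel value `x` through a simplified FiLM-ed ResBlock:
    conv (stubbed to identity) -> FiLM -> ReLU, wrapped in a residual
    connection. The ablation moves the FiLM step, e.g. after the
    post-normalization ReLU, with little change in accuracy.
    """
    h = x                 # 3x3 conv, stubbed to identity here
    h = gamma * h + beta  # FiLM modulation (where conditional norm puts its affine params)
    h = relu(h)
    return x + h          # residual connection around the block

out = film_resblock(1.0, gamma=2.0, beta=-1.0)
# h = relu(2*1 - 1) = 1.0, so out == 1.0 + 1.0 == 2.0
```

Note that a negative `gamma` can zero a feature entirely (ReLU clips the modulated value), which is one way FiLM can selectively suppress feature maps per question.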
Repetitive Conditioning To understand the contribution of repetitive conditioning towards FiLM model success, we train FiLM models with successively fewer FiLM layers. Models with fewer FiLM layers, even a single FiLM layer, do not deviate far from the best model's performance, revealing that the model can reason and answer diverse questions successfully by modulating features even just once. This observation highlights the capacity of even one FiLM layer. Perhaps one FiLM layer can pass enough question information to the CNN to enable it to carry out reasoning later in the network, in place of the more hierarchical conditioning deeper FiLM models appear to use. We leave more in-depth investigation of this matter for future work.
Spatial Reasoning To examine how FiLM models approach spatial reasoning, we train a version of our best model architecture, from image features, with only 1×1 convolutions and without feeding coordinate feature maps indicating relative spatial position to the model. Due to the global max-pooling near the end of the model, this model cannot transfer information across spatial positions. Notably, this model still achieves a high 95.3% accuracy, indicating that FiLM models are able to reason about space simply from the spatial information contained in a single location of fixed image features.
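The coordinate feature maps removed in this ablation can be sketched as follows; the [-1, 1] scaling and the function itself are our own illustrative assumptions, not necessarily the paper's exact recipe:

```python
def coordinate_maps(height, width):
    """Two extra feature maps holding each position's x and y location,
    scaled to [-1, 1]. Concatenated to the image features, they let even
    a network restricted to 1x1 convolutions see relative spatial position.
    """
    xs = [[-1.0 + 2.0 * c / (width - 1) for c in range(width)]
          for _ in range(height)]
    ys = [[-1.0 + 2.0 * r / (height - 1) for _ in range(width)]
          for r in range(height)]
    return xs, ys

xs, ys = coordinate_maps(3, 3)
# xs[0] == [-1.0, 0.0, 1.0] (left to right); ys[0] == [-1.0, -1.0, -1.0] (top row)
```

With such maps concatenated, a FiLM layer can, for instance, scale the x-coordinate channel to bias the network toward "right of" relations without any spatial convolution.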
Residual Connection Removing the residual connection causes one of the larger accuracy drops. Since there is a global max-pooling operation near the end of the network, this finding suggests that the best model learns to primarily use features of locations that are repeatedly important throughout lower and higher levels of reasoning to make its final decision. The higher accuracies for models with FiLM modulating features inside residual connections rather than outside residual connections support this hypothesis.
Model Depth Table 3 shows model performance by the number of ResBlocks. FiLM is robust to varying depth but less so with only 1 ResBlock, backing the earlier theory that the FiLM-ed network reasons throughout its pipeline.
4.4 CLEVR-Humans: Human-Posed Questions

To assess how well visual reasoning models generalize to more realistic, complex, and free-form questions, the CLEVR-Humans dataset was introduced (Johnson et al. 2017b). This dataset contains human-posed questions on CLEVR images along with their corresponding answers. The number of samples is limited: 18K for training, 7K for validation, and 7K for testing. The questions were collected from Amazon Mechanical Turk workers prompted to ask questions that were likely hard for a smart robot to answer. As a result, CLEVR-Humans questions use more diverse vocabulary and complex concepts.

Q: What object color of Cylinder is the grass? A:
Q: Which shape objects are partially obscured from view? A: Sphere
Q: What color is the matte object farthest to the right? A: Brown
Q: What is reflecting in the large cube? A: Cylinder shape
Q: If all cubical objects were removed what shaped objects would there be the most of? A: Sphere (P: Rubber)

Figure 8: Examples from CLEVR-Humans, which introduces new words (underlined) and concepts. After fine-tuning on CLEVR-Humans, a CLEVR-trained model can now reason about obstruction, superlatives, and reflections but still struggles with hypothetical scenarios (rightmost). It also has learned human preference to primarily identify objects by shape (leftmost).
Method To test FiLM on CLEVR-Humans, we take our best CLEVR-trained FiLM model and fine-tune its FiLM-generating linguistic pipeline alone on CLEVR-Humans. Similar to prior work (Johnson et al. 2017b), we do not update the visual pipeline on CLEVR-Humans to mitigate overfitting to the small training set.
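That fine-tuning regime (update only the FiLM-generating linguistic parameters, freeze the visual pipeline) can be sketched generically. The name-based selection, the "film_generator" prefix, and the dict-with-`requires_grad`-flag parameter model (a simplification of PyTorch's interface) are our own illustrative conventions:

```python
def select_finetune_params(named_params, trainable_prefix="film_generator"):
    """Mark only FiLM-generator (linguistic) parameters as trainable and
    freeze everything else, e.g. the CNN visual pipeline, to avoid
    overfitting a small fine-tuning set. Each parameter is modeled as a
    dict carrying a 'requires_grad' flag.
    """
    for name, param in named_params.items():
        param["requires_grad"] = name.startswith(trainable_prefix)
    return [n for n, p in named_params.items() if p["requires_grad"]]

params = {"film_generator.gru.w": {}, "visual.conv1.w": {}}
trainable = select_finetune_params(params)
# trainable == ["film_generator.gru.w"]; "visual.conv1.w" stays frozen
```

Only the returned parameters would then be handed to the optimizer, so gradients never alter the visual features learned on the larger CLEVR training set.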
Model               Train CLEVR   Train CLEVR, fine-tune human
LSTM                27.5          36.5
CNN+LSTM            37.7          43.2
CNN+LSTM+SA+MLP     50.4          57.6
PG+EE (18K prog.)   54.0          66.6
CNN+GRU+FiLM        56.6          75.9

Table 4: CLEVR-Humans test accuracy, before (left) and after (right) fine-tuning on CLEVR-Humans data
Results Our model achieves state-of-the-art generalization to CLEVR-Humans, both before and after fine-tuning, as shown in Table 4, indicating that FiLM is well-suited to handle more complex and diverse questions. Figure 8 shows examples from CLEVR-Humans with FiLM model answers. Before fine-tuning, FiLM outperforms prior methods by a smaller margin. After fine-tuning, FiLM reaches a considerably improved final accuracy. In particular, the gain in accuracy made by FiLM upon fine-tuning is more than 50% greater than those made by other models; FiLM adapts data-efficiently using the small CLEVR-Humans dataset.
Notably, FiLM outperforms the prior state-of-the-art method, Program Generator + Execution Engine (PG+EE), after fine-tuning by 9.3%. Prior work on PG+EEs explains that this neural module network method struggles on questions which cannot be well approximated with the model's module inventory (Johnson et al. 2017b). In contrast, FiLM has the freedom to modulate existing feature maps, a fairly flexible and fine-grained operation, in novel ways to reason about new concepts. These results thus provide some evidence for the benefits of FiLM's general nature.
4.5 CLEVR Compositional Generalization Test
To test how well models learn compositional concepts that generalize, CLEVR-CoGenT was introduced (Johnson et al. 2017a). This dataset is synthesized in the same way as CLEVR but contains two conditions: in Condition A, all cubes are gray, blue, brown, or yellow and all cylinders are red, green, purple, or cyan; in Condition B, cubes and cylinders swap color palettes. Both conditions contain spheres of all colors. CLEVR-CoGenT thus indicates how a model answers CLEVR questions: by memorizing combinations of traits or by learning disentangled or general representations.
Results We train our best model architecture on Condition A and report accuracies on Conditions A and B, before and after fine-tuning on B, in Figure 9. Our results indicate FiLM surpasses other visual reasoning models at learning general concepts. FiLM learns better compositional generalization even than PG+EE, which explicitly models compositionality and is trained with program-level supervision that specifically includes filtering colors and filtering shapes.
Sample Efficiency and Catastrophic Forgetting We show sample efficiency and forgetting curves in Figure 9. FiLM achieves prior state-of-the-art accuracy with 1/3 as much fine-tuning data. However, our FiLM model still suffers from catastrophic forgetting after fine-tuning.
Zero-Shot Generalization FiLM's accuracy on Condition A is much higher than on B, suggesting FiLM has memorized attribute combinations to an extent. For example, the model learns a bias that cubes are not cyan, as learning this training set bias helps minimize training loss.
To overcome this bias, we develop a novel FiLM-based zero-shot generalization method. Inspired by word embedding manipulations, e.g. "King" - "Man" + "Woman" = "Queen" (Mikolov et al. 2013), we test if linear manipulation extends to reasoning with FiLM. We compute (γ, β) for "How many cyan cubes are there?" via the linear combination of questions in the FiLM parameter space: "How many cyan spheres are there?" + "How many brown cubes are there?" - "How many brown spheres are there?". With this (γ, β), our model can correctly count cyan cubes. We show another example of this method in Figure 10.

[Figure 9 plot: ValB accuracy vs. number of unique samples (in thousands) used for training]

Method                Train A: A   Train A: B   Fine-tune B: A   Fine-tune B: B
CNN+LSTM+SA           80.3         68.7         75.7             75.8
PG+EE (18K prog.)     96.6         73.7         76.1             92.7
CNN+GRU+FiLM          98.3         75.6         80.8             96.9
CNN+GRU+FiLM 0-Shot   98.3         78.8         81.1             96.9

Figure 9: CoGenT results. FiLM ValB accuracy reported on ValB without the 30K fine-tuning samples (Figure). Accuracy before and after fine-tuning on 30K of ValB (Table).
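The question-space arithmetic can be sketched over predicted FiLM parameters. Representing each question's (γ, β) as a single flat vector, and the dictionary-based interface, are our own simplifications for illustration:

```python
def zero_shot_film_params(film_params, base, add, subtract):
    """Compose FiLM parameters for an unseen question by linear analogy:
    target = base + add - subtract, applied elementwise over the
    concatenated (gamma, beta) vector each question maps to. E.g.
    "cyan cubes" = "cyan spheres" + "brown cubes" - "brown spheres".
    """
    return [b + a - s for b, a, s in
            zip(film_params[base], film_params[add], film_params[subtract])]

params = {  # toy (gamma, beta) vectors standing in for a FiLM generator's outputs
    "How many cyan spheres are there?": [2.0, 0.5],
    "How many brown cubes are there?": [1.0, 1.0],
    "How many brown spheres are there?": [0.5, 0.25],
}
cyan_cubes = zero_shot_film_params(
    params,
    base="How many cyan spheres are there?",
    add="How many brown cubes are there?",
    subtract="How many brown spheres are there?",
)
# cyan_cubes == [2.5, 1.25]
```

The composed (γ, β) are then fed to the FiLM-ed CNN exactly as if the generator had produced them, so the method requires no retraining.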
We evaluate this method on validation B, using a parser to automatically generate the right combination of questions. We test previously reported CLEVR-CoGenT FiLM models with this method and show results in Figure 9. With this method, there is a 3.2% overall accuracy gain when training on A and testing for zero-shot generalization on B. Yet this method could only be applied to 1/3 of questions in B. For these questions, model accuracy starts at 71.5% and jumps to 80.7%. Before fine-tuning on B, the accuracy between zero-shot and original approaches on A is identical, likewise for B after fine-tuning. We note that the difference in the predicted FiLM parameters between these two methods is negligible, likely causing the similar performance.
We achieve these improvements without specifically training our model for zero-shot generalization. Our method simply allows FiLM to take advantage of any concept disentanglement in the CNN after training. We also observe that convex combinations of the FiLM parameters, i.e. between "How many cyan things are there?" and "How many brown things are there?", often monotonically interpolate the predicted answer between the answers to endpoint questions. These results highlight, to a limited extent, the flexibility of FiLM parameters for meaningful manipulations.

As implemented, this method has many limitations. However, approaches from word embeddings, representation learning, and zero-shot learning can be applied to directly optimize (γ, β) for analogy-making (Bordes et al. 2013; Guu, Miller, and Liang 2015; Oh et al. 2017). The FiLM-ed network could directly train with this procedure via backpropagation. A learned model could also replace the parser. We find such avenues promising for future work.
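The convex-combination probe mentioned above can be sketched the same way; the interpolation function is our own illustration of the idea:

```python
def interpolate_film_params(params_a, params_b, t):
    """Convex combination (1 - t) * a + t * b of two questions' FiLM
    parameter vectors. Sweeping t from 0 to 1 lets one probe whether the
    predicted answer moves monotonically between the endpoint answers.
    """
    assert 0.0 <= t <= 1.0
    return [(1.0 - t) * a + t * b for a, b in zip(params_a, params_b)]

mid = interpolate_film_params([2.0, 0.0], [4.0, 1.0], t=0.5)
# mid == [3.0, 0.5]
```

Running the FiLM-ed CNN at several values of t gives the interpolation curves described in the text, with no change to the trained network itself.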
[Figure 10: composing questions for FiLM parameter arithmetic]
Question: What is the blue big cylinder made of?
(1) Swap shape: What is the blue big sphere made of?
(2) Swap color: What is the green big cylinder made of?
(3) Swap shape/color: What is the green big sphere made of?
5 Conclusion

We show that a model can achieve strong visual reasoning using general-purpose Feature-wise Linear Modulation layers. By efficiently manipulating a neural network's intermediate features in a selective and meaningful manner using FiLM layers, a RNN can effectively use language to modulate a CNN to carry out diverse and multi-step reasoning tasks over an image. Our ablation study suggests that FiLM is resilient to architectural modifications, test time ablations, and even restrictions on FiLM layers themselves. Notably, we provide evidence that FiLM's success is not closely connected with normalization as previously assumed. Thus, we open the door for applications of this approach to settings where normalization is less common, such as RNNs and reinforcement learning. Our findings also suggest that FiLM models can generalize better, more sample efficiently, and even zero-shot to foreign or more challenging data. Overall, the results of our investigation of FiLM in the case of visual reasoning complement broader literature that demonstrates the success of FiLM-like techniques across many domains, supporting the case for FiLM's strength not simply within a single domain but as a general, versatile approach.
6 Acknowledgements

We thank the developers of PyTorch (pytorch.org) and (Johnson et al. 2017b) for open-source code which our implementation was based off. We thank Mohammad Pezeshki, Dzmitry Bahdanau, Yoshua Bengio, Nando de Freitas, Hugo Larochelle, Laurens van der Maaten, Joseph Cohen, Joelle Pineau, Olivier Pietquin, Jérémie Mary, César Laurent, Chin-Wei Huang, Layla Asri, Max Smith, and James Ough for helpful discussions and Justin Johnson for CLEVR test evaluations. We thank NVIDIA for donating a DGX-1 computer used in this work. We also acknowledge FRQNT through the CHIST-ERA IGLU project, Collège Doctoral Lille Nord de France, and CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020 for funding our work. Lastly, we thank acronymcreator.net for the acronym FiLM.
1709.07871 | 52 | References Anderson, P.; He, X.; Buehler, C.; Teney, D.; Johnson, M.; Gould, S.; and Zhang, L. 2017. Bottom-up and top-down attention for image captioning and vqa. In VQA Workshop at CVPR. Andreas, J.; Marcus, R.; Darrell, T.; and Klein, D. 2016a. Learning to compose neural networks for question answering. In NAACL. Andreas, J.; Rohrbach, M.; Darrell, T.; and Klein, D. 2016b. Neural module networks. In CVPR. Antol, S.; Agrawal, A.; Lu, J.; Mitchell, M.; Batra, D.; Zitnick, C. L.; and Parikh, D. 2015. VQA: Visual Question Answering. In ICCV. Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi- relational data. In Burges, C. J. C.; Bottou, L.; Welling, M.; Ghahramani, Z.; and Weinberger, K. Q., eds., NIPS. Curran As- sociates, | 1709.07871#52 | FiLM: Visual Reasoning with a General Conditioning Layer | We introduce a general-purpose conditioning method for neural networks called
FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network
computation via a simple, feature-wise affine transformation based on
conditioning information. We show that FiLM layers are highly effective for
visual reasoning - answering image-related questions which require a
multi-step, high-level process - a task which has proven difficult for standard
deep learning methods that do not explicitly model reasoning. Specifically, we
show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error
for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are
robust to ablations and architectural modifications, and 4) generalize well to
challenging, new data from few examples or even zero-shot. | http://arxiv.org/pdf/1709.07871 | Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | AAAI 2018. Code available at http://github.com/ethanjperez/film .
Extends arXiv:1707.03017 | null | cs.CV | 20170922 | 20171218 | [] |
1709.07871 | 53 | Bottou, L.; Welling, M.; Ghahramani, Z.; and Weinberger, K. Q., eds., NIPS. Curran As- sociates, Inc. 2787â2795. Chung, J.; G¨ulc¸ehre, C¸ .; Cho, K.; and Bengio, Y. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. In Deep Learning Workshop at NIPS. de Vries, H.; Strub, F.; Mary, J.; Larochelle, H.; Pietquin, O.; and Courville, A. C. 2017. Modulating early visual processing by lan- guage. In NIPS. Dumoulin, V.; Shlens, J.; and Kudlur, M. 2017. A learned repre- sentation for artistic style. In ICLR. Eigen, D.; Ranzato, M.; and Sutskever, I. 2014. Learning factored representations in a deep mixture of experts. In ICLR Workshops. Gehring, J.; Auli, M.; Grangier, D.; Yarats, D.; and Dauphin, Y. N. 2017. Convolutional sequence to sequence learning. In ICML. Geman, D.; Geman, S.; | 1709.07871#53 | FiLM: Visual Reasoning with a General Conditioning Layer | We introduce a general-purpose conditioning method for neural networks called
FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network
computation via a simple, feature-wise affine transformation based on
conditioning information. We show that FiLM layers are highly effective for
visual reasoning - answering image-related questions which require a
multi-step, high-level process - a task which has proven difficult for standard
deep learning methods that do not explicitly model reasoning. Specifically, we
show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error
for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are
robust to ablations and architectural modifications, and 4) generalize well to
challenging, new data from few examples or even zero-shot. | http://arxiv.org/pdf/1709.07871 | Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | AAAI 2018. Code available at http://github.com/ethanjperez/film .
Extends arXiv:1707.03017 | null | cs.CV | 20170922 | 20171218 | [] |
1709.07871 | 62 | # 7 Appendix
# 7.1 Error Analysis
We examine the errors our model makes to understand where it fails and how it behaves when it does. Examples of these errors are shown in Figures 12 and 13.
Occlusion Many model errors are due to partial occlusion. These errors could likely be fixed using a CNN that operates at a higher resolution, which is feasible since FiLM has a computational cost that is independent of resolution.
Counting 96.1% of counting mistakes are off-by-one errors, showing FiLM has learned underlying concepts behind counting, such as the close relationship between neighboring numbers.
Logical Consistency The model sometimes makes curious reasoning mistakes a human would not. For example, we find a case where our model correctly counts one gray object and two cyan objects but simultaneously answers that there are the same number of gray and cyan objects. In fact, it answers that the number of gray objects is both less than and equal to the number of cyan objects. These errors could be prevented by directly minimizing logical inconsistency, an interesting avenue for future work orthogonal to FiLM.
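The inconsistency described above (and tabulated in Figure 13) can be detected mechanically. The checker below is our own illustrative sketch, not part of FiLM: it derives the comparison answers entailed by a model's count answers and flags contradictions:

```python
def comparison_answers(count_a, count_b):
    """Yes/no answers entailed by two object counts."""
    return {
        "as many": "Yes" if count_a == count_b else "No",
        "more": "Yes" if count_a > count_b else "No",
        "fewer": "Yes" if count_a < count_b else "No",
    }

def find_contradictions(count_a, count_b, model_answers):
    """Return the comparison questions the model answered inconsistently
    with its own counts."""
    entailed = comparison_answers(count_a, count_b)
    return [q for q, a in model_answers.items() if entailed[q] != a]

# The failure case from Figure 13: the model counts 1 gray vs 2 cyan objects,
# yet claims the counts are equal while also claiming gray is fewer.
bad = find_contradictions(1, 2, {"as many": "Yes", "more": "No", "fewer": "Yes"})
# Only "as many" contradicts the counts; "more" and "fewer" are consistent.
```

Directly penalizing such contradictions during training would be one way to "minimize logical inconsistency" as suggested above.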
# 7.2 Model Details
Rather than output γi,c directly, we output Δγi,c, where:
γi,c = 1 + Δγi,c, (3)
1709.07871 | 63 | Rather than output γi,c directly, we output Δγi,c, where:
γi,c = 1 + Δγi,c, (3)
since initially zero-centered γi,c can zero out CNN feature map activations and thus gradients. In our implementation, we opt to output Δγi,c rather than γi,c, but for simplicity, throughout our paper, we explain FiLM using γi,c. However, this modification does not seem to affect our model's performance on CLEVR statistically significantly.
We present training and validation curves for the best model trained from image features in Figure 11. We observe fast accuracy gains initially, followed by slow, steady increases to a best validation accuracy of 97.84%, at which point training accuracy is 99.53%. We train on CLEVR for 80 epochs, which takes 4 days using 1 NVIDIA TITAN Xp GPU when learning from image features. For practical reasons, we stop training on CLEVR after 80 epochs, but we observe that accuracy continues to increase slowly even afterwards.
Figure 11: Best model training and validation curves.
1709.07871 | 65 | Q: What shape is the big metal thing that is the same color as the small cylinder? A: Cylinder (P: Sphere)
Q: How many other things are the same material as the tiny sphere? A: 3 (P: 2)
Figure 12: Some image-question pairs where our model predicts incorrectly. Most errors we observe are due to partially occluded objects, as highlighted in the first three examples.
Question                                        Answer
How many gray things are there?                 1
How many cyan things are there?                 2
Are there as many gray things as cyan things?   Yes
Are there more gray things than cyan things?    No
Are there fewer gray things than cyan things?   Yes
Figure 13: An interesting failure example where our model counts correctly but compares counts erroneously. Its third answer is incorrect and inconsistent with its other answers.
# 7.3 What Do FiLM Layers Learn?
We visualize FiLM's effect on a single arbitrary feature map in Figures 14 and 15. We also show histograms of per-layer γi,c values, per-layer βi,c values, and per-channel FiLM parameter statistics in Figures 16, 17, and 18, respectively.
1709.07871 | 66 | Feature 14 - Block 1
1709.07871 | 67 | Feature 14 Block 1
[Figure 14 panels: Before/After FiLM activations for Feature 14, Block 1, over four color and counting questions]
Figure 14: Visualizations of feature map activations (scaled from 0 to 1) before and after FiLM for a single arbitrary feature map from the first ResBlock. This particular feature map seems to detect gray and brown colors. Interestingly, FiLM modifies activations for specifically colored objects for color-specific questions but leaves activations alone for color-agnostic questions. Note that since this is the first FiLM layer, pre-FiLM activations (Rows 1 and 3) for all questions are identical, and differences in post-FiLM activations (Rows 2 and 4) are solely due to FiLM's use of question information.
[Figure 15 panels: Before/After FiLM activations for Feature 79, Block 4, over questions counting cyan objects behind / in front of / left of / right of the gray sphere (A: 2, 1, 2, 1)]
1709.07871 | 68 | Figure 15: Visualization of the impact of FiLM for a single arbitrary feature map from the last ResBlock. This particular feature map seems to focus on spatial features (i.e. front/back or left/right). Note that since this is the last FiLM layer, the top row activations have already been influenced by question information via several FiLM layers.
Figure 16: Histograms of γi,c values for each FiLM layer (layers 1-4 from left to right), computed on CLEVR's validation set. Plots are scaled identically. FiLM layers appear gradually more selective and higher variance.
Figure 17: Histograms of βi,c values for each FiLM layer (layers 1-4 from left to right) computed on CLEVR's validation set. Plots are scaled identically. βi,c values take a different, higher variance distribution in the first layer than in later layers.
1709.07871 | 69 | Gamma Means
Figure 18: Histograms of per-channel γc and βc statistics (mean and standard deviation) computed on CLEVR's validation set. From left to right: γc means, γc standard deviations, βc means, βc standard deviations. Different feature maps are modulated by FiLM in different patterns; some are often zeroed out while others rarely are, some are consistently scaled or shifted by similar values while others by high-variance values, etc.
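Per-channel statistics like those in Figure 18 reduce the γ (or β) values observed across validation examples to one mean and one standard deviation per channel. A small sketch (pure Python; the list-of-vectors data layout is our assumption):

```python
from math import sqrt

def channel_stats(params):
    """Per-channel mean and (population) standard deviation of FiLM parameters.

    params: list of per-example parameter vectors, one value per channel.
    Returns (means, sds), each with one entry per channel.
    """
    n = len(params)
    channels = list(zip(*params))  # regroup values by channel
    means = [sum(c) / n for c in channels]
    sds = [sqrt(sum((v - m) ** 2 for v in c) / n)
           for c, m in zip(channels, means)]
    return means, sds

# Channel 0 is always zeroed out; channel 1 varies with the question.
means, sds = channel_stats([[0.0, 1.0], [0.0, 3.0]])
# means == [0.0, 2.0]; sds == [0.0, 1.0]
```

A channel whose γ mean is near zero with near-zero spread is one the network consistently suppresses, matching the "often zeroed out" pattern the caption describes.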
1709.06560 | 0 | arXiv:1709.06560v3 [cs.LG] 30 Jan 2019
# Deep Reinforcement Learning that Matters
Peter Henderson1*, Riashat Islam1,2*, Philip Bachman2 Joelle Pineau1, Doina Precup1, David Meger1 1 McGill University, Montreal, Canada 2 Microsoft Maluuba, Montreal, Canada {peter.henderson,riashat.islam}@mail.mcgill.ca, [email protected] {jpineau,dprecup}@cs.mcgill.ca, [email protected]
# Abstract | 1709.06560#0 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
problems across various domains using deep reinforcement learning (RL).
Reproducing existing work and accurately judging the improvements offered by
novel methods is vital to sustaining this progress. Unfortunately, reproducing
results for state-of-the-art deep RL methods is seldom straightforward. In
particular, non-determinism in standard benchmark environments, combined with
variance intrinsic to the methods, can make reported results tough to
interpret. Without significance metrics and tighter standardization of
experimental reporting, it is difficult to determine whether improvements over
the prior state-of-the-art are meaningful. In this paper, we investigate
challenges posed by reproducibility, proper experimental techniques, and
reporting procedures. We illustrate the variability in reported metrics and
results when comparing against common baselines and suggest guidelines to make
future results in deep RL more reproducible. We aim to spur discussion about
how to ensure continued progress in the field by minimizing wasted effort
stemming from results that are non-reproducible and easily misinterpreted. | http://arxiv.org/pdf/1709.06560 | Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger | cs.LG, stat.ML | Accepted to the Thirty-Second AAAI Conference On Artificial
Intelligence (AAAI), 2018 | null | cs.LG | 20170919 | 20190130 | [
{
"id": "1611.02247"
},
{
"id": "1506.02438"
},
{
"id": "1707.06347"
},
{
"id": "1703.02660"
},
{
"id": "1705.10443"
},
{
"id": "1703.01703"
},
{
"id": "1509.02971"
},
{
"id": "1612.03780"
},
{
"id": "1606.01540"
},
{
"id": "1706.01905"
},
{
"id": "1706.00387"
},
{
"id": "1709.06009"
},
{
"id": "1505.00853"
},
{
"id": "1708.04782"
}
] |
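One concrete form of the "significance metrics" the abstract calls for is a bootstrap confidence interval on the difference in mean return across random seeds: if the interval excludes zero, the gap between two algorithms is unlikely to be seed noise alone. This sketch is illustrative only (seed counts, return values, and the 95% level are our choices, not the paper's):

```python
import random

def bootstrap_diff_ci(returns_a, returns_b, n_boot=10000, alpha=0.05, seed=0):
    """Bootstrap CI for mean(returns_a) - mean(returns_b), one entry per seed."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        # Resample each algorithm's per-seed returns with replacement.
        a = [rng.choice(returns_a) for _ in returns_a]
        b = [rng.choice(returns_b) for _ in returns_b]
        diffs.append(sum(a) / len(a) - sum(b) / len(b))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Five seeds per algorithm; overlapping, high-variance results.
algo_a = [3200.0, 1500.0, 2900.0, 3100.0, 1600.0]
algo_b = [2800.0, 1400.0, 2600.0, 3000.0, 1700.0]
lo, hi = bootstrap_diff_ci(algo_a, algo_b)
significant = not (lo <= 0.0 <= hi)
```

Here the interval comfortably straddles zero despite a 160-point gap in means, so the comparison would be reported as inconclusive rather than as an improvement.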
1709.06560 | 2 | Introduction Reinforcement learning (RL) is the study of how an agent can interact with its environment to learn a policy which maximizes expected cumulative rewards for a task. Recently, RL has experienced dramatic growth in attention and interest due to promising results in areas like: controlling continuous systems in robotics (Lillicrap et al. 2015a), playing Go (Silver et al. 2016), Atari (Mnih et al. 2013), and competitive video games (Vinyals et al. 2017; Silva and Chaimowicz 2017). Figure 1 illustrates growth of the field through the number of publications per year. To maintain rapid progress in RL research, it is important that existing works can be easily reproduced and compared to accurately judge improvements offered by novel methods.
However, reproducing deep RL results is seldom straightforward, and the literature reports a wide range of results for the same baseline algorithms (Islam et al. 2017). Reproducibility can be affected by extrinsic factors (e.g. hyperparameters or codebases) and intrinsic factors (e.g. effects of random seeds or environment properties). We investigate these sources of variance in reported results through a representative set of experiments. For clarity, we focus our investigation on policy gradient (PG) methods in continuous control. Policy gradient methods with neural network function approximators have been particularly successful in continuous control (Schulman et al. 2015a; 2017; Lillicrap et al. 2015b) and are competitive with value-based methods in discrete settings. We note that the diversity of metrics and lack of significance testing in the RL literature creates the potential for misleading reporting of results. We demonstrate possible benefits of significance testing using techniques common in machine learning and statistics.
*These two authors contributed equally.
Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: Growth of published reinforcement learning papers. Shown are the number of RL-related publications (y-axis) per year (x-axis) scraped from Google Scholar searches.
Several works touch upon evaluating RL algorithms. Duan et al. (2016) benchmark several RL algorithms and provide the community with baseline implementations. Generalizable RL evaluation metrics are proposed in (Whiteson et al. 2011). Machado et al. (2017) revisit the Arcade Learning Environment to propose better evaluation methods in these benchmarks. However, while the question of reproducibility and good experimental practice has been examined in related fields (Wagstaff 2012; Boulesteix, Lauer, and Eugster 2013; Stodden, Leisch, and Peng 2014; Bouckaert and Frank 2004; Bouckaert 2004; Vaughan and Wawerla 2012), to the best of our knowledge this is the first work to address this important question in the context of deep RL.
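As a rough illustration of the kind of significance testing discussed in this line of work (a sketch, not the exact procedure used in this paper), Welch's t-test can compare final returns from two algorithms across independent random seeds; the returns below are hypothetical:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent
    samples (e.g. final average returns across random seeds)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical final returns over five seeds per algorithm
algo_a = [3200.0, 2900.0, 3500.0, 1800.0, 3100.0]
algo_b = [2700.0, 2500.0, 2950.0, 2600.0, 2400.0]
t, df = welch_t(algo_a, algo_b)
```

The t statistic and degrees of freedom can then be turned into a p-value with any statistics package; the point is that a handful of seeds often yields wide uncertainty.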
In each section of our experimental analysis, we pose questions regarding key factors affecting reproducibility. We find that there are numerous sources of non-determinism when reproducing and comparing RL algorithms. To this end, we show that fine details of experimental procedure can be critical. Based on our experiments, we conclude with possible recommendations, lines of investigation, and points of discussion for future works to ensure that deep reinforcement learning is reproducible and continues to matter.
Technical Background
This work focuses on several model-free policy gradient algorithms with publicly available implementations which appear frequently in the literature as baselines for comparison against novel methods. We experiment with Trust Region Policy Optimization (TRPO) (Schulman et al. 2015a), Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al. 2015b), Proximal Policy Optimization (PPO) (Schulman et al. 2017), and Actor Critic using Kronecker-Factored Trust Region (ACKTR) (Wu et al. 2017). These methods have shown promising results in continuous control MuJoCo domain tasks (Todorov, Erez, and Tassa 2012) from OpenAI Gym (Brockman et al. 2016). Generally, they optimize $\rho(\theta, s_0) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t) \mid s_0\right]$, using the policy gradient theorem: $\frac{\partial \rho(\theta, s_0)}{\partial \theta} = \sum_s \mu_{\pi_\theta}(s \mid s_0) \sum_a \frac{\partial \pi_\theta(a \mid s)}{\partial \theta} Q^{\pi_\theta}(s, a)$. Here, $\mu_{\pi_\theta}(s \mid s_0) = \sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid s_0)$ (Sutton et al. 2000).
TRPO (Schulman et al. 2015a) and PPO (Schulman et al. 2017) use constraints and advantage estimation to perform this update, reformulating the optimization problem as $\max_\theta \, \mathbb{E}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)} \hat{A}_t(s_t, a_t)\right]$. Here, $\hat{A}_t$ is the generalized advantage function (Schulman et al. 2015b). TRPO uses conjugate gradient descent as the optimization method with a KL constraint: $\mathbb{E}_t\left[\mathrm{KL}\left[\pi_{\theta_{\mathrm{old}}}(\cdot \mid s_t), \pi_\theta(\cdot \mid s_t)\right]\right] \le \delta$. PPO reformulates the constraint as a penalty (or clipping objective). DDPG and ACKTR use actor-critic methods which estimate $Q(s, a)$ and optimize a policy that maximizes the $Q$-function based on Monte-Carlo rollouts. DDPG does this using deterministic policies, while ACKTR uses Kronecker-factored trust regions to ensure stability with stochastic policies.
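The clipping objective mentioned above can be sketched per sample; this is an illustrative implementation of the clipped surrogate (parameter names and the $\epsilon = 0.2$ default are assumptions, not taken from any specific codebase):

```python
import math

def ppo_clip_surrogate(logp_new, logp_old, advantage, eps=0.2):
    """Clipped PPO surrogate for one (state, action) sample.
    The probability ratio pi_theta(a|s) / pi_theta_old(a|s) is
    recovered from log-probabilities for numerical stability."""
    ratio = math.exp(logp_new - logp_old)
    clipped_ratio = min(max(ratio, 1.0 - eps), 1.0 + eps)
    # PPO maximizes the pessimistic minimum of the two surrogates,
    # removing the incentive to push the ratio outside [1-eps, 1+eps].
    return min(ratio * advantage, clipped_ratio * advantage)
```

For a positive advantage the objective is capped once the ratio exceeds $1 + \epsilon$, so a single large policy update cannot be rewarded.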
Experimental Analysis
We pose several questions about the factors affecting reproducibility of state-of-the-art RL methods. We perform a set of experiments designed to provide insight into the questions posed. In particular, we investigate the effects of: specific hyperparameters on algorithm performance if not properly tuned; random seeds and the number of averaged experiment trials; specific environment characteristics; differences in algorithm performance due to stochastic environments; differences due to codebases with most other factors held constant. For most of our experiments1, except for those comparing codebases, we generally use the OpenAI Baselines2 implementations of the following algorithms: ACKTR (Wu et al. 2017), PPO (Schulman et al. 2017), DDPG (Plappert et al. 2017), TRPO (Schulman et al. 2017). We use the Hopper-v1 and HalfCheetah-v1 MuJoCo (Todorov, Erez, and Tassa 2012) environments from OpenAI Gym (Brockman et al. 2016). These two environments provide contrasting dynamics (the former being more unstable).
1 Specific details can be found in the supplemental and code can be found at: https://git.io/vFHnf
2 https://www.github.com/openai/baselines
To ensure fairness we run five experiment trials for each evaluation, each with a different preset random seed (all experiments use the same set of random seeds). In all cases, we highlight important results here, with full descriptions of experimental setups and additional learning curves included in the supplemental material. Unless otherwise mentioned, we use default settings whenever possible, while modifying only the hyperparameters of interest. All results (including graphs) show mean and standard error across random seeds. We use multilayer perceptron function approximators in all cases. We denote the hidden layer sizes and activations as (N, M, activation). For default settings, we vary the hyperparameters under investigation one at a time. For DDPG we use a network structure of (64, 64, ReLU) for both actor and critic. For TRPO and PPO, we use (64, 64, tanh) for the policy. For ACKTR, we use (64, 64, tanh) for the actor and (64, 64, ELU) for the critic.
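The "mean and standard error across random seeds" convention described above amounts to a few lines of code; the five returns below are hypothetical, not measured values:

```python
import math
from statistics import mean, stdev

def mean_and_stderr(values):
    """Mean and standard error of the mean across experiment trials."""
    return mean(values), stdev(values) / math.sqrt(len(values))

# Hypothetical final evaluation returns from five preset random seeds
returns = [1500.0, 1700.0, 1100.0, 1650.0, 1550.0]
m, se = mean_and_stderr(returns)  # report or plot as m +/- se
```

With only five trials the standard error is itself a noisy estimate, which is part of why averaging over few seeds can mislead.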
Hyperparameters
What is the magnitude of the effect hyperparameter settings can have on baseline performance?
Tuned hyperparameters play a large role in eliciting the best results from many algorithms. However, the choice of optimal hyperparameter configuration is often not consistent in related literature, and the range of values considered is often not reported3. Furthermore, poor hyperparameter selection can be detrimental to a fair comparison against baseline algorithms. Here, we investigate several aspects of hyperparameter selection on performance.
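The protocol of varying one hyperparameter at a time while holding the rest at defaults can be sketched as a small generator; the hyperparameter names and values here are hypothetical, not the settings used in this paper:

```python
DEFAULTS = {"learning_rate": 3e-4, "batch_size": 64, "clip": 0.2}
SWEEP = {"learning_rate": [1e-4, 3e-4, 1e-3], "clip": [0.1, 0.2, 0.3]}

def one_at_a_time(defaults, sweep):
    """Yield (varied_name, config) pairs, changing a single
    hyperparameter per config and keeping all others at defaults."""
    for name, values in sweep.items():
        for value in values:
            config = dict(defaults)
            config[name] = value
            yield name, config

configs = list(one_at_a_time(DEFAULTS, SWEEP))
```

This isolates the effect of each setting, at the cost of missing interactions between hyperparameters (which the architecture results below suggest can matter).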
Network Architecture
How does the choice of network architecture for the policy and value function approximation affect performance?
In (Islam et al. 2017), it is shown that policy network architecture can significantly impact results in both TRPO and DDPG. Furthermore, certain activation functions such as Rectified Linear Unit (ReLU) have been shown to cause worsened learning performance due to the "dying ReLU" problem (Xu et al. 2015). As such, we examine network architecture and activation functions for both policy and value function approximators. In the literature, similar lines of investigation have shown the differences in performance when comparing linear approximators, RBFs, and neural networks (Rajeswaran et al. 2017). Tables 1 and 2 summarize the final evaluation performance of all architectural variations after training on 2M samples (i.e. 2M timesteps in the environment). All learning curves and details on setup can be found in the supplemental material. We vary hyperparameters one at a time, while using a default setting for all others. We investigate three multilayer perceptron (MLP) architectures commonly seen in the literature: (64, 64), (100, 50, 25), and (400, 300). Furthermore, we
3 A sampled literature review can be found in the supplemental.
[Figure 2 plots: HalfCheetah-v1 (PPO, Policy Network Structure) with legend (64,64), (100,50,25), (400,300); HalfCheetah-v1 (TRPO, Policy Network Activation); DDPG with HalfCheetah Environment, Critic Network Activations (ReLU, TanH, ELU). Axes: Average Returns (y) vs. Timesteps (x).]
Figure 2: Significance of Policy Network Structure and Activation Functions. PPO (left), TRPO (middle) and DDPG (right).
Figure 3: DDPG reward rescaling on HalfCheetah-v1, with and without layer norm.
activations. We find that usually ReLU or Leaky ReLU activations perform the best across environments and algorithms. The effects are not consistent across algorithms or environments. This inconsistency demonstrates how interconnected network architecture is to algorithm methodology. For example, using a large network with PPO may require tweaking other hyperparameters such as the trust region clipping or learning rate to compensate for the architectural change4. This intricate interplay of hyperparameters is one of the reasons reproducing current policy gradient methods is so difficult. It is exceedingly important to choose an appropriate architecture for proper baseline results. This also suggests a possible need for hyperparameter agnostic algorithms, that is, algorithms that incorporate hyperparameter adaptation as part of the design, such that fair comparisons can be made without concern about improper settings for the task at hand.
Reward Scale
How can the reward scale affect results? Why is reward rescaling used?
Reward rescaling has been used in several recent works (Duan et al. 2016; Gu et al. 2016) to improve results for DDPG. This involves simply multiplying the rewards generated from an environment by some scalar ($\hat{r} = r\hat{\sigma}$) for training. Often, these works report using a reward scale of $\hat{\sigma} = 0.1$. In Atari domains, this is akin to clipping the rewards to [0, 1]. By intuition, in gradient based methods (as used in most deep RL) a large and sparse output scale can result in problems regarding saturation and inefficiency in learning (LeCun et al. 2012; Glorot and Bengio 2010; Vincent, de Brébisson, and Bouthillier 2015). Therefore clipping or rescaling rewards compresses the space of estimated
4We find that the KL divergence of updates with the large network (400, 300) seen in Figure 2 is on average 33.52 times higher than the KL divergence of updates with the (64, 64) network.
expected returns in action value function based methods such as DDPG. We run a set of experiments using reward rescaling in DDPG (with and without layer normalization) for insights into how this aspect affects performance. | 1709.06560#14 | Deep Reinforcement Learning that Matters |
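The rescaling described in chunk 14 above (r̂ = rσ̂) is a one-line transformation of the environment's reward. A minimal sketch in plain Python, assuming a Gym-style step() interface; the wrapper and toy-environment names are illustrative, not from the paper:

```python
class RewardScaleWrapper:
    """Multiply every reward by a fixed scalar: r_hat = r * scale."""

    def __init__(self, env, scale=0.1):  # scale = 0.1 as reported above
        self.env = env
        self.scale = scale

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, reward * self.scale, done, info


class ToyEnv:
    """Stand-in environment emitting large raw rewards."""

    def reset(self):
        return 0.0

    def step(self, action):
        return 0.0, 1000.0, False, {}


env = RewardScaleWrapper(ToyEnv(), scale=0.1)
env.reset()
_, r, _, _ = env.step(None)
print(r)  # 100.0
```

In Atari, clipping to [0, 1] plays the same role; here the wrapper only changes the scale the value-function approximator must fit.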
1709.06560 | 15 | Results Our analysis shows that reward rescaling can have a large effect (full experiment results can be found in the supplemental material), but results were inconsistent across environments and scaling values. Figure 3 shows one such example where reward rescaling affects results, causing a failure to learn in small settings below σ̂ = 0.01. In particular, layer normalization changes how the rescaling factor affects results, suggesting that these impacts are due to the use of deep networks and gradient-based methods. With the value function approximator tracking a moving target distribution, this can potentially affect learning in unstable environments where a deep Q-value function approximator is used. Furthermore, some environments may have untuned reward scales (e.g. the HumanoidStandup-v1 of OpenAI gym which can reach rewards in the scale of millions). Therefore, we suggest that this hyperparameter has the potential to have a large impact if considered properly. Rather than rescaling rewards in some environments, a more principled approach should be taken to address this. An initial foray into this problem is made in (van Hasselt et al. 2016), where the authors adaptively rescale reward targets with normalized stochastic gradient, but further research is needed.
Random Seeds and Trials Can random seeds drastically alter performance? Can one distort results by averaging an improper number of trials? | 1709.06560#15 | Deep Reinforcement Learning that Matters |
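The "more principled approach" pointed to above (van Hasselt et al. 2016 adaptively rescale targets) can be illustrated, in much simplified form, by normalizing rewards with running statistics instead of a hand-tuned constant. This is not the Pop-Art algorithm itself, only a sketch of adapting the scale online; all names are illustrative:

```python
import math


class RunningRewardNormalizer:
    """Divide rewards by a running standard deviation (Welford's algorithm),
    so the scale is estimated from data rather than hand-tuned."""

    def __init__(self, eps=1e-8):
        self.count, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps

    def update(self, reward):
        self.count += 1
        delta = reward - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (reward - self.mean)

    def normalize(self, reward):
        self.update(reward)
        if self.count < 2:
            return reward  # not enough data to estimate a scale yet
        std = math.sqrt(self.m2 / (self.count - 1))
        return reward / (std + self.eps)


norm = RunningRewardNormalizer()
raw = [1e6, 2e6, 5e5, 3e6]  # HumanoidStandup-style magnitudes
scaled = [norm.normalize(r) for r in raw]
print(scaled[1:])  # later rewards come out on an O(1) scale
```

With millions-scale raw rewards, as in HumanoidStandup-v1, the normalized values quickly settle near unit scale without choosing σ̂ by hand.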
1709.06560 | 16 | Random Seeds and Trials Can random seeds drastically alter performance? Can one distort results by averaging an improper number of trials?
A major concern with deep RL is the variance in results due to environment stochasticity or stochasticity in the learning process (e.g. random weight initialization). As such, even averaging several learning results together across totally different random seeds can lead to the reporting of misleading results. We highlight this in the form of an experiment. | 1709.06560#16 | Deep Reinforcement Learning that Matters |
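The seed-sensitivity experiment described in chunk 16 can be mimicked with a toy stand-in: treat each "training run" as a draw whose outcome depends only on the seed, then compare averages over two disjoint sets of seeds. Everything here is illustrative; the variance figure is invented:

```python
import random
import statistics


def train(seed):
    """Stand-in for a full RL training run: deterministic given the seed,
    but with large run-to-run variance, as in deep RL."""
    rng = random.Random(seed)
    return rng.gauss(1000, 400)


scores = [train(seed) for seed in range(10)]

# Averaging two disjoint halves of the seeds can suggest very different
# "average performance" for the same algorithm.
half_a = statistics.mean(scores[:5])
half_b = statistics.mean(scores[5:])
spread = statistics.stdev(scores)

print(f"seeds 0-4: {half_a:.0f} | seeds 5-9: {half_b:.0f} | "
      f"all: {statistics.mean(scores):.0f} +/- {spread:.0f}")
```

Reporting the across-seed standard deviation alongside the mean, rather than a hand-picked subset of seeds, is exactly the guideline the chunk motivates.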
1709.06560 | 17 | Algorithm TRPO (Schulman et al. 2015a) TRPO (Duan et al. 2016) TRPO (Schulman et al. 2017) PPO (Schulman et al. 2017) DDPG (Plappert et al. 2017) DDPG (Gu et al. 2016) DDPG (Duan et al. 2016) ACKTR (Wu et al. 2017) Environment Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 400,300 2980 ± 35 1791 ± 224 1243 ± 55 738 ± 240 2909 ± 87 -155 ± 188 61 ± 33 -1180 ± 444 1419 ± 313 5579 ± 354 600 ± 126 2845 ± 589 506 ± 208 850 ± 41 2577 ± 529 2653 ± 408 64,64 2674 ± 227 1939 ± 140 1303 ± 89 834 ± 317 2828 ± 70 205 ± 256 2790 ± 62 2201 ± 323 1632 ± 459 4198 ± 606 593 ± 155 2771 ± 535 | 1709.06560#17 | Deep Reinforcement Learning that Matters |
1709.06560 | 18 | ± 140 1303 ± 89 834 ± 317 2828 ± 70 205 ± 256 2790 ± 62 2201 ± 323 1632 ± 459 4198 ± 606 593 ± 155 2771 ± 535 749 ± 271 1573 ± 385 1608 ± 66 2691 ± 231 100,50,25 3110 ± 78 2151 ± 27 1243 ± 55 850 ± 378 2812 ± 88 306 ± 261 2592 ± 196 1314 ± 340 2142 ± 436 5600 ± 601 501 ± 129 1638 ± 624 629 ± 138 1224 ± 553 2287 ± 946 2498 ± 112 tanh 2674 ± 227 1939 ± 140 1303 ± 89 834 ± 317 2828 ± 70 205 ± 256 2790 ± 62 2201 ± 323 1491 ± 205 5325 ± 281 436 ± 48 1638 ± 624 354 ± 91 1311 ± 271 1608 ± 66 2621 ± 381 ReLU 2772 ± 211 3041 ± 161 1131 ± 65 784 ± 352 2941 ± 91 1045 ± 114 2695 ± 86 2971 ± 364 1632 ± 459 4198 ± 606 593 ± 155 2771 ± 535 749 ± 271 1573 ± 385 2835 ± 503 2160 ± 151 LeakyReLU - - 1341 ± 127 1139 ± 364 2865 ± 189 778 ± 177 2587 ± 53 2895 ± 365 1384 ± 285 4094 ± 233 319 ± 127 1405 ± 511 - - 2718 | 1709.06560#18 | Deep Reinforcement Learning that Matters |
1709.06560 | 21 | Algorithm TRPO (Schulman et al. 2015a) TRPO (Schulman et al. 2017) PPO (Schulman et al. 2017) DDPG (Plappert et al. 2017) DDPG (Gu et al. 2016) DDPG (Duan et al. 2016) ACKTR (Wu et al. 2017) Environment Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 400,300 3011 ± 171 2355 ± 48 2909 ± 87 178 ± 242 2704 ± 37 1523 ± 297 1419 ± 312 5600 ± 601 523 ± 248 1373 ± 678 1208 ± 423 789 ± 91 152 ± 47 518 ± 632 64,64 2674 ± 227 1939 ± 140 2828 ± 70 205 ± 256 2790 ± 62 2201 ± 323 1632 ± 458 4197 ± 606 343 ± 34 1717 ± 508 394 ± 144 1095 ± 139 1930 ± 185 3018 ± 386 100,50,25 2782 ± 120 1673 ± 148 2812 ± 88 172 ± 257 2969 ± 111 | 1709.06560#21 | Deep Reinforcement Learning that Matters |
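The mean ± standard deviation entries in the table chunks above can be given the significance check the abstract calls for. A sketch of Welch's t-test computed from summary statistics alone; the trial count n = 5 is an assumption for illustration, not taken from the table:

```python
import math


def welch_t(mean1, std1, n1, mean2, std2, n2):
    """Welch's t statistic and approximate degrees of freedom
    (Welch-Satterthwaite) from summary statistics."""
    v1, v2 = std1 ** 2 / n1, std2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    dof = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, dof


# Two entries from the table: TRPO (Schulman et al. 2015a) on Hopper-v1
# with (400,300) layers, 3011 +/- 171, vs (64,64) layers, 2674 +/- 227.
t, dof = welch_t(3011, 171, 5, 2674, 227, 5)
print(f"t = {t:.2f}, dof = {dof:.1f}")  # t = 2.65, dof = 7.4
```

Comparing t against the critical value for the computed degrees of freedom turns a table of overlapping error bars into an explicit accept/reject decision.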