Consider what happens if we apply the move sequence m1, m2, . . . , mk to Cb until right after both mα and mβ have occurred. Call this configuration Cmid. For every j ∈ {1, . . . , m}, the stickers that are in (x, z) coordinates (j, m + i1) and (j, m + i2) of face +y in Cmid are (m + i1, m + i2, j)-paired. When transitioning from Cmid to C′, no index-(m + i1) or index-(m + i2) moves occur, and so these stickers are also (m + i1, m + i2, j)-paired in C′. Thus we conclude that the stickers in each pair are the same color.
Therefore we have that in Cmid, the stickers on face +y with z = i2 and 1 ≤ x ≤ n have the same color scheme, call it S, as the stickers on face +y with z = i1 and 1 ≤ x ≤ n. Before we reach the configuration Cmid, the final few moves are a sequence of O-moves and T-moves including mα
and mβ. Furthermore, among these O-moves and T-moves, none that occur after mα affect the stickers with z = i1 and none that occur after mβ affect the stickers with z = i2. Therefore the color scheme of the stickers in positions z = i2 and 1 ≤ x ≤ n of face +y immediately after mβ is the same as S: the color scheme of those stickers in Cmid. Similarly, the color scheme of the stickers in positions z = i1 and 1 ≤ x ≤ n of face +y immediately after mα is also S. Using Lemmas 5.29 and 5.30, we conclude that the color scheme of the stickers in positions z = i2 and 1 ≤ −y ≤ n of face +x in configuration Cb is S and that the color scheme of the stickers in positions z = i1 and 1 ≤ −y ≤ n of face +x in configuration Cb is also S. This, however, is a contradiction, since those two color schemes in Cb are different for any two different i1 and i2 (see Theorem 5.4).
We conclude that i1 ∈ T cannot exist, and therefore that T is empty.
This completes the proof of Theorem 5.8 outlined in Section 5.4.
# 5.10 Conclusion
Theorems 5.2 and 5.8 and Corollaries 5.3 and 5.9 show that the polynomial-time reductions given are answer preserving. As a result, we conclude that
Theorem 5.32. The STM/SQTM Rubik's Cube and Group STM/SQTM Rubik's Cube problems are NP-complete.
# 6 Future work
In this paper, we resolve the complexity of optimally solving Rubik's Cubes under move count metrics for which a single move rotates a single slice. It could be interesting to consider the complexity of this problem under other move count metrics.
Of particular interest are the Wide Turn Metric (WTM) and Wide Quarter Turn Metric (WQTM), in which the puzzle solver can rotate any number of contiguous layers together provided they include one of the faces. These move count metrics are the closest to how one would physically solve a real-world n × n × n Rubik's Cube: by grabbing some of the slices in the cube (including a face) from the side and rotating those slices together. We can also consider the 1 × n × n analogue of the Rubik's Cube with the WTM move count metric: this would be a Rubik's Square in which a single move flips a contiguous sequence of rows or columns including a row or column at the edge of the Square. Solving this toy model could help point us in the right direction for the WTM and WQTM Rubik's Cube problems. If even the toy model resists analysis, it could be interesting to consider this toy model with missing stickers.
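To make the toy model concrete, the following sketch (an illustration for this discussion, not code from the paper) represents the 1 × n × n Rubik's Square as an n × n grid of two-sided stickers and applies one wide move: turning over, as a rigid block, a contiguous sequence of rows that includes the top or bottom edge. The exact physical semantics of a flip here (row order within the block reverses and each flipped cell shows its back sticker) is our assumption for illustration.

```python
# Illustrative toy model (a sketch, not the paper's formalism): an n x n Rubik's Square
# face with front/back stickers per cell. A WTM-style move flips rows i..j as a block,
# where the block must include an edge row (i == 0 or j == n - 1).
def wide_row_flip(front, back, i, j):
    n = len(front)
    assert i == 0 or j == n - 1, "a wide move must include a row at the edge"
    new_front = [row[:] for row in front]
    new_back = [row[:] for row in back]
    for k in range(i, j + 1):
        mirror = i + j - k  # row k lands where row (i + j - k) was
        new_front[mirror] = back[k][:]   # flipped cells show their back stickers
        new_back[mirror] = front[k][:]
    return new_front, new_back

# Example: flip the top two rows of a 3 x 3 square.
front = [["F%d%d" % (r, c) for c in range(3)] for r in range(3)]
back = [["B%d%d" % (r, c) for c in range(3)] for r in range(3)]
front2, back2 = wide_row_flip(front, back, 0, 1)
```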
arXiv:1706.06551v2 [cs.CL] 26 Jun 2017
# Grounded Language Learning in a Simulated 3D World
Karl Moritz Hermann∗†, Felix Hill∗, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, Marcus Wainwright, Chris Apps, Demis Hassabis and Phil Blunsom†
# DeepMind, London, UK
# Abstract
We are increasingly surrounded by artificially intelligent technology that takes decisions and executes actions on our behalf. This creates a pressing need for general means to communicate with, instruct and guide artificial agents, with human language the most compelling means for such communication. To achieve this in a scalable fashion, agents must be able to relate language to the world and to actions; that is, their understanding of language must be grounded and embodied. However, learning grounded language is a notoriously challenging problem in artificial intelligence research. Here we present an agent that learns to interpret language in a simulated 3D environment where it is rewarded for the successful execution of written instructions. Trained via a combination of reinforcement and unsupervised learning, and beginning with minimal prior knowledge, the agent learns to relate linguistic symbols to emergent perceptual representations of its physical surroundings and to pertinent sequences of actions. The agent's comprehension of language extends beyond its prior experience, enabling it to apply familiar language to unfamiliar situations and to interpret entirely novel instructions. Moreover, the speed with which this agent learns new words increases as its semantic knowledge grows. This facility for generalising and bootstrapping semantic knowledge indicates the potential of the present approach for reconciling ambiguous natural language with the complexity of the physical world.
# 1. Introduction
Endowing machines with the ability to relate language to the physical world is a long-standing challenge for the development of Artificial Intelligence. As situated intelligent technology becomes ubiquitous, the development of computational approaches to understanding grounded language has become critical to human-AI interaction. Beginning with Winograd (1972), early attempts to ground language understanding in a physical world were constrained by their reliance on the laborious hard-coding of linguistic and physical rules. Modern devices with voice control may appear more competent but suffer from the same limitation in that their language understanding components are mostly rule-based and do not generalise or scale beyond their programmed domains.
∗. These authors contributed equally to this work. †. Corresponding authors: [email protected] and [email protected].
This work presents a novel paradigm for simulating language learning and understanding. The approach differs from conventional computational language learning in that the learning and understanding take place with respect to a continuous, situated environment. Simultaneously, we go beyond rule-based approaches to situated language understanding as our paradigm requires agents to learn end-to-end the grounding for linguistic expressions in the context of using language to complete tasks given only pixel-level visual input.
The initial experiments presented in this paper take place in an extended version of the DeepMind Lab (Beattie et al., 2016) environment, where agents are tasked with finding and picking up objects based on a textual description of each task. While the paradigm outlined gives rise to a large number of possible learning tasks, even the simple setup of object retrieval presents challenges for conventional machine learning approaches. Critically, we find that language learning is contingent on a combination of reinforcement (reward-based) and unsupervised learning. By combining these techniques, our agents learn to connect words and phrases with emergent representations of the visible surroundings and embodied experience. We show that the semantic knowledge acquired during this process generalises both with respect to new situations and new language. Our agents exhibit zero-shot comprehension of novel instructions, and the speed at which they acquire new words accelerates as their semantic knowledge grows. Further, by employing a curriculum training regime, we train a single agent to execute phrasal instructions pertaining to multiple tasks requiring distinct action policies as well as lexical semantic and object knowledge.1

1. See https://youtu.be/wJjdu1bPJ04 for a video of the trained agents.
# 2. Related work
Learning semantic grounding without prior knowledge is notoriously difficult, given the limitless possible referents for each linguistic expression (Quine, 1960). A learner must discover correlations in a stream of low level inputs, relate these correlations to both its own actions and to linguistic expressions and retain these relationships in memory. Perhaps unsurprisingly, the few systems that attempt to learn language grounding in artificial agents do so with respect to environments that are far simpler than the continuous, noisy sensory experience encountered by humans (Steels, 2008; Roy and Pentland, 2002; Krening et al., 2016; Yu et al., 2017).
The idea of programming computers to understand how to relate language to a simulated physical environment was pioneered in the seminal work of Winograd (1972). His SHRDLU system was programmed to understand user generated language containing a small number of words and predicates, to execute corresponding actions or to ask questions requesting more information. While initially impressive, this system required that all of the syntax and semantics (in terms of the physical world) of each word be hard coded a priori, and thus it was unable to learn new concepts or actions. Such rule-based approaches to language understanding have come to be considered too brittle to scale to the full complexities of natural language. Since this early work, research on language grounding has taken place across a number of disciplines, primarily in robotics, computer vision and computational linguistics. Research in both natural language processing and computer vision has pointed to the importance of cross modal approaches to grounded concept learning. For instance, it was shown that learnt concept representation spaces more faithfully reflect human semantic
intuitions if induced from information about the perceptible properties of objects as well as from raw text (Silberer and Lapata, 2012).
Semantic parsing, as pursued in the field of natural language processing, has predominantly focussed on building a compositional mapping from natural language to formal semantic representations that are then grounded in a database or knowledge graph (Zettlemoyer and Collins, 2005; Berant et al., 2013). The focus of this direction of work is on the compositional mapping between the two abstract modalities, natural language and logical form, where the grounding is usually discrete and high level. This is in contrast to the work presented in this paper, where we focus on learning to ground language in low level perception and actions. Siskind (1995) represents an early attempt to ground language in perception by seeking to link objects and events in stick-figure animations to language. Broadly this can be seen as a precursor to more recent work on mapping language to actions in video and similar modalities (Siskind, 2001; Chen and Mooney, 2008; Yu and Siskind, 2013). In a similar vein, the work of Roy and Pentland (2002) applies machine learning to aspects of grounded language learning, connecting speech or text input with images, videos or even robotic controllers. These systems consisted of modular pipelines in which machine learning was used to
optimise individual components while complementing hard-coded representations of the input data. Within robotics, there has been interest in using language to facilitate human-robot communication, as part of which it is necessary to devise mechanisms for grounding a perceptible environment with language (Hemachandra et al., 2014; Walter et al., 2014). In general, the amount of actual learning in these prior works is heavily constrained, either through the extensive use of hand-written grammars and mechanisms to support the grounding, or through simplification in terms of the setup and environment. Other related work focuses on language grounding from the perspective of human-machine communication (Thomason et al., 2015; Wang et al., 2016; Arumugam et al., 2017). The key difference between these approaches and our work is that here again language is grounded to highly structured environments, as opposed to the continuous perceptible input our learning environment provides.
In the field of computer vision, image classification (Krizhevsky et al., 2012) can be interpreted as aligning visual data and semantic or lexical concepts. Moreover, neural networks can effectively map image or video representations from these classification networks to human-written image captions. These mappings can also yield plausible descriptions of visual scenes that were not observed during training (Xu et al., 2015; Vendrov et al., 2015). However, unlike our approach, these captioning models typically learn visual and linguistic processing and representation from fixed datasets as part of two separate, independent optimisations. Moreover, they do not model the grounding of linguistic symbols in actions or visual stimuli that constantly change based on the exploration policy of the agent.
The idea that reinforcement-style learning could play a role in language learning has been considered for decades (Chomsky, 1959). Recently, however, RL agents controlled by deep neural nets have been trained to solve tasks in both 2D (Mnih et al., 2015) and 3D (Mnih et al., 2016) environments. Our language learning agents build on these approaches and algorithms, but with an agent architecture and auxiliary unsupervised objectives that are specific to our multi-modal learning task. Other recently-proposed frameworks for interactive language learning involve unimodal (text-only) settings (Narasimhan et al., 2015; Mikolov et al., 2015).
# 3. The 3D language learning environment
To conduct our language learning experiments we integrated a language channel into a 3D simulated world (DeepMind Lab, Beattie et al. (2016)). In this environment, an agent perceives its surroundings via a constant stream of continuous visual input and a textual instruction. It perceives the world actively, controlling what it sees via movement of its visual field and exploration of its surroundings. One can specify the general configuration of layouts and possible objects in this environment together with the form of language instructions that describe how the agent can obtain rewards. While the high-level configuration of these simulations is customisable, the precise world experienced by the agent is chosen at random from billions of possibilities, corresponding to different instantiations of objects, their colours, surface patterns, relative positions and the overall layout of the 3D world.
To illustrate this setup, consider a very simple environment comprising two connected rooms, each containing two objects. To train the agent to understand simple referring expressions, the environment could be configured to issue an instruction of the form pick the X in each episode. During training, the agent experiences multiple episodes with the shape, colour and pattern of the objects themselves differing in accordance with the instruction. Thus, when the instruction is pick the pink striped ladder, the environment might contain, in random positions, a pink striped ladder (with positive reward), an entirely pink ladder, a pink striped chair and a blue striped hairbrush (all with negative reward).
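As an illustration of how such training episodes can be instantiated (a sketch under assumed attribute sets; the actual level generator is part of the environment and is not specified here), one can sample a target from the instruction template and distractors that each differ from it in at least one attribute:

```python
import random

COLOURS = ["pink", "blue", "green"]   # illustrative attribute values, not the full set
PATTERNS = ["striped", "plain"]
SHAPES = ["ladder", "chair", "hairbrush"]

def sample_episode(rng=random):
    """Sample a 'pick the <colour> <pattern> <shape>' episode with distractors."""
    target = (rng.choice(COLOURS), rng.choice(PATTERNS), rng.choice(SHAPES))
    instruction = "pick the %s %s %s" % target
    objects = [(target, +1.0)]  # selecting the target yields positive reward
    while len(objects) < 4:
        candidate = (rng.choice(COLOURS), rng.choice(PATTERNS), rng.choice(SHAPES))
        if candidate != target:  # every distractor differs in at least one attribute
            objects.append((candidate, -1.0))
    rng.shuffle(objects)
    return instruction, objects
```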
It is important to emphasise the complexity of the learning challenge faced by the agent, even for a simple reference task such as this. To obtain positive rewards across multiple training episodes, the agent must learn to efficiently explore the environment and inspect candidate objects (requiring the execution of hundreds of inter-dependent actions) while simultaneously learning the (compositional) meanings of multi-word expressions and how they pertain to visual features of different objects (Figure 1).
We also construct more complex tasks pertaining to other characteristics of human language understanding, such as the generalisation of linguistic predicates to novel objects, the productive composition of words and short phrases to interpret unfamiliar instructions and the grounding of language in relations and actions as well as concrete objects.
# 4. Agent design
Our agent consists of four inter-connected modules optimised as a single neural network. At each time step t, the visual input vt is encoded by the convolutional vision module V and a recurrent (LSTM, Hochreiter and Schmidhuber (1997)) language module L encodes the instruction string lt. A mixing module M determines how these signals are combined before they are passed to a two-layer LSTM action module A. The hidden state st of the upper LSTM in A is fed to a policy function, which computes a probability distribution over possible motor actions π(at|st), and a state-value function approximator Val(st), which computes a scalar estimate of the agent value function for optimisation. To learn from the scalar rewards that can be issued by the environment, the agent employs an actor-critic algorithm (Mnih et al., 2016).
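To make the module wiring concrete, the following is a minimal sketch of the four modules in PyTorch; all layer sizes, the vocabulary size and the number of actions are illustrative assumptions rather than the paper's hyperparameters.

```python
# Minimal sketch of the four-module agent (illustrative sizes, not the paper's exact design).
import torch
import torch.nn as nn

class Agent(nn.Module):
    def __init__(self, vocab_size=100, n_actions=8):
        super().__init__()
        # V: convolutional vision module encoding the raw pixel observation v_t.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # L: recurrent language module encoding the instruction string l_t.
        self.embed = nn.Embedding(vocab_size, 32)
        self.lang = nn.LSTM(32, 64, batch_first=True)
        # M: mixing module combining the two codes (here: a linear map on their concatenation).
        self.mix = nn.Linear(32 * 9 * 9 + 64, 256)  # 32*9*9 visual features for 84x84 RGB input
        # A: two-layer LSTM action module.
        self.action_lstm = nn.LSTM(256, 256, num_layers=2, batch_first=True)
        # Policy head pi(a_t | s_t) and value head Val(s_t) read the upper LSTM state s_t.
        self.policy = nn.Linear(256, n_actions)
        self.value = nn.Linear(256, 1)

    def forward(self, frame, instruction, hidden=None):
        v = self.vision(frame)                          # (B, 32*9*9)
        _, (h, _) = self.lang(self.embed(instruction))  # final LSTM state encodes l_t
        m = torch.relu(self.mix(torch.cat([v, h[-1]], dim=1)))
        out, hidden = self.action_lstm(m.unsqueeze(1), hidden)
        s = out[:, -1]
        return torch.log_softmax(self.policy(s), dim=-1), self.value(s).squeeze(-1), hidden
```

A forward pass per environment step yields the action distribution, the value estimate and the recurrent state carried over to the next step.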
The policy π is a distribution over a discrete set of actions. The baseline function Val estimates the expected discounted future return following the state the agent is currently in.
[Figure 1 image: two panels, "Top down view" and "Agent view".]
Figure 1: In this example, the agent begins in position 1 and immediately receives the instruction pick the red object next to the green object. It explores the two-room layout, viewing objects and their relative positions before retrieving the object that best conforms to the instruction. This exploration and selection behaviour emerges entirely from the reward-driven learning and is not preprogrammed. When training on a task such as this, there are billions of possible episodes that the agent can experience, containing different objects in different positions across different room layouts.
Figure 2: Schematic organisation of the network modules (grey) supplemented with auxiliary learning objectives (coloured components).
In other words, it approximates the state-value function $\mathrm{Val}_\pi(s) = \mathbb{E}_\pi\big[\sum_{t'=0}^{\infty} \lambda^{t'} r_{t+t'+1} \mid s_t = s\big]$, where $s_t$ is the state of the environment at time $t$ when following policy $\pi$, $r_t$ is the reward received following the action performed at time $t$, and $\lambda \in [0, 1]$ is a discount parameter. The agent's primary objective is to find a policy which maximizes the expected discounted return $\mathbb{E}_\pi\big[\sum_{t=0}^{\infty} \lambda^{t} r_t\big]$. We apply the Advantage Actor Critic algorithm (Mnih et al., 2016) to optimize the policy, a Softmax multinomial distribution parametrized by the agent's network, towards higher discounted returns.
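For concreteness, the discounted return that Val approximates can be computed backwards over a finite rollout; the bootstrap argument below stands in for Val at the final state when a rollout is truncated, which is the standard actor-critic treatment (a sketch, not the authors' implementation):

```python
def discounted_returns(rewards, discount, bootstrap_value=0.0):
    """Compute R_t = r_t + discount * R_{t+1} backwards over a rollout.

    `bootstrap_value` stands in for the value of the final state when the
    rollout is truncated rather than terminated.
    """
    returns = [0.0] * len(rewards)
    running = bootstrap_value
    for t in reversed(range(len(rewards))):
        running = rewards[t] + discount * running
        returns[t] = running
    return returns

# Example: three steps with a single terminal reward of 10 and discount 0.99.
print(discounted_returns([0.0, 0.0, 10.0], 0.99))  # [9.801, 9.9, 10.0]
```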
Parameters are updated according to the RMSProp update rule (Tieleman and Hinton, 2012). We share a single parameter vector across 32 asynchronous threads. This configuration offers a suitable trade-off between increased speed and loss of accuracy due to the asynchronous updates (Mnih et al., 2016).
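The RMSProp rule keeps a running average of squared gradients and scales each update by its square root; a minimal sketch with illustrative hyperparameters (the lock-free sharing of the parameter vector across threads is omitted):

```python
import numpy as np

def rmsprop_update(theta, grad, mean_sq, lr=7e-4, decay=0.99, eps=1e-5):
    """One RMSProp step: g <- decay*g + (1-decay)*grad^2; theta <- theta - lr*grad/sqrt(g+eps)."""
    mean_sq = decay * mean_sq + (1.0 - decay) * grad ** 2
    theta = theta - lr * grad / np.sqrt(mean_sq + eps)
    return theta, mean_sq
```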
Importantly, early simulation results revealed that this initial design does not learn to solve even comparably simple tasks in our setup. As described thus far, the agent can learn only from comparatively infrequent object selection rewards, without exploiting the stream of potentially useful perceptual feedback available at each time step when exploring the environment. We address this by endowing the agent with ways to learn in an unsupervised manner from its immediate surroundings, by means of auto-regressive objectives that are applied concurrently with the reward-based learning and involve predicting or modelling aspects of the agent's surroundings (Jaderberg et al., 2016).
Temporal autoencoding The temporal autoencoder auxiliary task tAE is designed to elicit intuitions in our agent about how the perceptible world will change as a consequence of its actions. The objective is to predict the visual environment vt+1 conditioned on the prior
visual input v_t and the action a_t (Oh et al., 2015). Our implementation reuses the standard visual module V and combines the representation of v_t with an embedded representation of a_t. The combined representation is passed to a deconvolutional network to predict v_{t+1}. As well as providing a means to fine-tune the visual system V, the tAE auxiliary task results in additional training of the action-policy network, since the action representations can be shared between tAE and the policy network π.
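A minimal sketch of such a tAE head is given below; it is our own illustration, with invented layer sizes and a multiplicative feature-action interaction in the spirit of Oh et al. (2015), rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TemporalAutoencoder(nn.Module):
    """Predict the next frame v_{t+1} from features of v_t and the action a_t.

    All layer sizes here are illustrative placeholders.
    """
    def __init__(self, feat_dim=256, num_actions=8):
        super().__init__()
        self.action_embed = nn.Embedding(num_actions, feat_dim)
        self.fc = nn.Linear(feat_dim, 32 * 7 * 7)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, visual_feat, action):
        # Combine the visual representation of v_t with the embedded action a_t.
        h = visual_feat * self.action_embed(action)   # multiplicative interaction
        h = self.fc(h).view(-1, 32, 7, 7)
        return self.deconv(h)                         # predicted v_{t+1}

# Training signal: mean-squared error against the actually observed next frame,
# e.g. loss = ((model(feat_t, a_t) - v_next) ** 2).mean()
```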
Language prediction To strengthen the ability of the agent to reconcile visual and linguistic modalities we design a word prediction objective LP that estimates instruction words l_t given the visual observation v_t, using model parameters shared with both V and L. The LP network can also serve to make the behaviour of trained agents more interpretable, as the agent emits words that it considers to best describe what it is currently observing.
The tAE and LP auxiliary networks were optimised with mini-batch gradient descent based on the mean-squared error and negative-log-likelihood respectively. We also experimented with reward prediction (RP) and value replay (VR) as additional auxiliary tasks to stabilise reinforcement based training (Jaderberg et al., 2016).
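Read this way, the overall objective plausibly combines the main and auxiliary terms as in the following sketch; the weighting coefficients are hypothetical knobs, not values reported in the paper.

```python
def combined_loss(policy_loss, value_loss, tae_mse, lp_nll, rp_loss, vr_loss,
                  beta_tae=1.0, beta_lp=1.0, beta_rp=1.0, beta_vr=1.0):
    """Combine the actor-critic losses with the auxiliary terms.

    The 0.5 value weight and all beta_* coefficients are illustrative only.
    """
    return (policy_loss + 0.5 * value_loss
            + beta_tae * tae_mse     # temporal autoencoder (tAE)
            + beta_lp * lp_nll       # language prediction (LP)
            + beta_rp * rp_loss      # reward prediction (RP)
            + beta_vr * vr_loss)     # value replay (VR)
```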
Figure 2 gives a schematic organisation of the agent with all the above auxiliary learning objectives. Precise implementation details of the agent are given in Appendix A.
# 5. Experiments
In evaluating the agent, we constructed tasks designed to test its capacity to cope with various challenges inherent in language learning and understanding. We first test its ability to efficiently acquire a varied vocabulary of words pertaining to physically observable aspects of the environment. We then examine whether the agent can combine this lexical knowledge to interpret both familiar and unfamiliar word combinations (phrases). This analysis includes phrases whose meaning is dependent on word order, and cases in which the agent must induce and re-use lexical knowledge directly from (potentially ambiguous) phrases. Finally, we test the agent's ability to learn less concrete aspects of language, including instructions referring to relational concepts (Doumas et al., 2008) and phrases referring to actions and behaviours.
# 5.1 Role of unsupervised learning
Our first experiment explored the effect of the auxiliary objectives on the ability of the agent to acquire a vocabulary of different concrete words (and associated lexical concepts). Training consisted of multiple episodes in a single room containing two objects. For each episode, at time t = 0, the agent was spawned in a position equidistant from the two objects, and received a single-word instruction that unambiguously referred to one of the two objects. It received a reward of 1 if it walked over to and selected the correct referent object and -1 if it picked the incorrect object. A new episode began immediately after an object was selected, or if the agent had not selected either object after 300 steps. Objects and instructions were sampled at random from the full set of factors available in the simulation environment.2 We trained 16 replicas for each agent configuration (Figure 3) with fixed hyperparameters from the standard settings and random hyperparameters sampled uniformly from the standard ranges.3
2. See Appendix B for a complete list.
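The episode protocol described above can be paraphrased in a few lines of Python; `env` and `agent` are hypothetical interfaces, not the authors' API.

```python
def run_episode(env, agent, max_steps=300):
    """One vocabulary-acquisition episode, following the reward scheme above."""
    observation, instruction = env.reset()   # two objects, one-word instruction
    for _ in range(max_steps):
        action = agent.act(observation, instruction)
        observation, selected_object = env.step(action)
        if selected_object is not None:      # the agent walked over an object
            return 1 if selected_object == env.target_object else -1
    return 0                                 # timeout: neither object selected
```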
[Figure 3 plot omitted: learning curves for the agent variants A3C, A3C +RP +VR, A3C +RP +VR +LP, A3C +RP +VR +tAE and A3C +RP +VR +tAE +LP; y-axis: average reward per episode, x-axis: training episodes.]
Figure 3: Unsupervised learning via auxiliary prediction objectives facilitates word learning. Learning curves for a vocabulary acquisition task. The agent is situated in a single room faced with two objects and must select the object that correctly matches the textual instruction. A total of 59 different words were used as instructions during training, referring to either the shape, colours, relative size (larger, smaller), relative shade (lighter, darker) or surface pattern (striped, spotted, etc.) of the target object. RP: reward prediction, VR: value replay, LP: language prediction, tAE: temporal autoencoder. Data show mean and confidence bands (CB) across best five of 16 hyperparameter settings sampled at random from ranges specified in the appendix. Training episodes counts individual levels seen during training.
As shown in Figure 3, when relying on reinforcement learning alone, the agent exhibited no learning even after millions of training episodes. The fastest learning was exhibited by an agent applying both temporal auto-encoding and language prediction in conjunction with value replay and reward prediction. These results demonstrate that auto-regressive objectives can extract information that is critical for language learning from the perceptible environment, even when explicit reinforcement is not available.
[Figure 4 plot omitted: word-learning curves for an agent that already knows 20 words outside the training set, an agent that already knows 2 words, and an agent trained from scratch; x-axis: training episodes.]
Figure 4: Word learning is much faster once some words are already known. The rate at which agents learned a vocabulary of 20 shape words was measured in agents in three conditions. In one condition, the agent had prior knowledge of 20 shapes and their names outside of the training data used here. In the second condition, the agent had prior knowledge of two shape words outside of the target vocabulary (same number of pre-training steps). In the third condition, the agent was trained from scratch. All agents used RP, VR, LP and tAE auxiliary objectives. Data show mean and confidence bands across best five of 16 hyperparameter settings in each condition, sampled at random from ranges specified in Appendix C.
# 5.2 Word learning speed experiment
Before it can exhibit any lexical knowledge, the agent must learn various skills and capacities that are independent of the specifics of any particular language instruction. These include an awareness of objects as distinct from floors or walls; some capacity to sense ways in which those objects differ; and the ability to both look and move in the same direction. In addition, the agent must infer that the solution to tasks is always contingent on both visual and linguistic input, without any prior programming or explicit teaching of the importance of inter-modal interaction. Given the complexity of this learning challenge, it is perhaps unsurprising that the agent requires thousands of training episodes before evidence of word learning emerges.
To establish the importance of this "pre-linguistic" learning, we compared the speed of vocabulary acquisition in agents with different degrees of prior knowledge. The training set consisted of instructions (and corresponding environments) from the twenty shape terms banana, cherries, cow, flower, fork, fridge, hammer, jug, knife, pig, pincer, plant, saxophone, shoe, spoon, tennis-racket, tomato, tree, wine-glass and zebra. The agent with most prior knowledge was trained in advance (in a single room setting with two objects) on the remaining twenty shapes from the full environment. The agent with minimal prior knowledge was trained only on the two terms ball and tv. Both regimes of advanced training were stopped once the agent reached an average reward of 9.5/10 across 1,000 episodes. The agent with no prior knowledge began learning directly on the training set.
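The stopping rule for the advanced-training regimes amounts to a running-average criterion, sketched below under assumed interfaces.

```python
from collections import deque

def pretrain_until_criterion(episode_fn, threshold=9.5, window=1000):
    """Run training episodes until the mean reward over the last `window`
    episodes reaches `threshold` (9.5/10 over 1,000 episodes in the text).

    `episode_fn` is a hypothetical callable running one episode and
    returning its reward.
    """
    recent = deque(maxlen=window)
    while True:
        recent.append(episode_fn())
        if len(recent) == window and sum(recent) / window >= threshold:
            return
```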
The comparison presented in Figure 4 demonstrates that much of the initial learning in an agent trained from scratch involves acquiring visual and motor, rather than expressly linguistic, capabilities. An agent already knowing two words (and therefore exhibiting rudimentary motor and visual skills) learned new words at a notably faster rate than an agent trained from scratch. Moreover, the speed of word learning appeared to accelerate as more words were learned. This shows that the acquisition of new words is supported not only by general-purpose motor-skills and perception, but also existing lexical or semantic knowledge. In other words, the agent is able to bootstrap its existing semantic knowledge to enable the acquisition of new semantic knowledge.
# 5.3 One-shot learning experiments
Two important facets of natural language understanding are the ability to compose the meanings of known words to interpret otherwise unfamiliar phrases, and the ability to generalise linguistic knowledge learned in one setting to make sense of new situations. To examine these capacities in our agent, we trained it in settings where its (linguistic or visual) experience was constrained to a training set, and simultaneously as it learned from the training set, tested the performance of the agent on situations outside of this set (Figure 5). In the colour-shape composition experiment, the training instructions were either unigrams or bigrams. Possible unigrams were the 40 shape and the 13 colour terms listed in Appendix B. The possible bigrams were any colour-shape combination except those containing the shapes ice lolly, ladder, mug, pencil, suitcase or the colours red, magenta, grey, purple (subsets selected randomly). The test instructions consisted of all possible bigrams excluded from the training set. In each training episode, the target object was rendered to match the instruction (in colour, shape or both) and the confounding object did not correspond to any of the bigrams in the test set. Similarly, in each test episode, both the target object and the confounding object corresponded to bigrams in the test instructions. These constraints ensured that the agent could not interpret test instructions by excluding other objects or terms that it had seen in the training set.
3. See Appendix C for details.
The colour-shape decomposition / composition experiment is similar in design to the colour-shape composition experiment. The test tasks were identical, but the possible training instructions consisted only of the bigram instructions from the colour-shape composition training set. To achieve above chance performance on the test set, the agent must therefore isolate aspects of the world that correspond to each of the constituent words in the bigram instructions (decomposition), and then build an interpretation of novel bigrams using these constituent concepts.
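The held-out splits used in both experiments can be constructed mechanically, as in this sketch (our own, with truncated vocabularies; the full lists are in Appendix B).

```python
import itertools

# Illustrative (truncated) vocabularies, not the full sets from Appendix B.
shapes = ["ladder", "mug", "pencil", "chair", "hat", "tv"]
colours = ["red", "magenta", "grey", "blue", "green"]
held_out_shapes = {"ladder", "mug", "pencil"}
held_out_colours = {"red", "magenta"}

all_bigrams = list(itertools.product(colours, shapes))
# Test bigrams contain a held-out shape or colour; training covers the rest.
test_bigrams = [(c, s) for c, s in all_bigrams
                if c in held_out_colours or s in held_out_shapes]
train_bigrams = [b for b in all_bigrams if b not in test_bigrams]
```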
The relative size and relative shade experiments were designed to test the generality of agents' representation of relational concepts (in this case larger, smaller, lighter and darker). Training and testing episodes again took place in a single room with two objects. The relative size experiment involved the 16 shapes in our environment whose size could be varied while preserving their shape. The possible instructions in both training and test episodes were simply the unigrams larger and smaller. The agent was required to choose between two objects of the same shape but different size (and possibly different colour) according to the instruction. All training episodes involved target and confounding objects whose shape was either a tv, ball, balloon, cake, can, cassette, chair, guitar, hairbrush or hat. All test episodes involved objects whose shape was either an ice lolly, ladder, mug, pencil or toothbrush.
The relative shade experiment followed the same design, but the agent was presented with two objects of possibly differing shape that differed only in the shade of their colouring (e.g. one light blue and one dark blue). The training colours were green, blue, cyan, yellow, pink, brown and orange. The test colours were red, magenta, grey and purple.
When trained on colour and shape unigrams together with a limited number of colour-shape bigrams, the agent naturally understood additional colour-shape bigrams if it was familiar with both constituent words. Moreover, this ability to productively compose known words to interpret novel phrases was not contingent on explicit training of those words in isolation. When exposed only to bigram phrases during training, the agent inferred the constituent lexical concepts and reapplied these concepts to novel combinations at test time. Indeed, in this condition (the decomposition/composition case), the agent learned to generalise after fewer training instances than in the apparently simpler composition case. This can be explained by the fact that episodes involving bigram instructions convey greater information content, such that the latter condition avails the agent of more information per training episode. Critically, the agent's ability to decompose phrases into constituent (emergent) lexical concepts reflects an ability that may be essential for human-like language learning in naturalistic environments, since linguistic stimuli rarely contain words in isolation.
Another key requirement for linguistic generalisation is the ability to extend category terms beyond the specific exemplars from which those concepts were learned (Quinn et al., 1993; Rogers and McClelland, 2004). This capacity was also observed in our agent; when trained on the relational concepts larger and smaller in the context of particular shapes, it naturally applied them to novel shapes with almost perfect accuracy. In contrast, the ability to generalise lighter and darker to unfamiliar colours was significantly above chance but less than perfect. This may be because it is particularly difficult to infer the mapping corresponding to lighter and darker (as understood by humans) in an RGB colour space from the small number of examples observed during training.
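One plausible reading of the semantics the agent must induce (our assumption, not the environment's definition) is that lightening a colour interpolates it toward white in RGB while darkening scales it toward black:

```python
def lighter(rgb, amount=0.3):
    """Interpolate toward white: one plausible RGB reading of 'lighter'."""
    return tuple(channel + amount * (1.0 - channel) for channel in rgb)

def darker(rgb, amount=0.3):
    """Scale toward black: one plausible RGB reading of 'darker'."""
    return tuple(channel * (1.0 - amount) for channel in rgb)

# e.g. lighter((0.0, 0.0, 1.0)) gives a paler blue; darker((0.0, 0.0, 1.0)) a navy.
```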
[Figure 5 plots omitted: four panels (Color-Shape Composition; Color-Shape Decomposition / Recomposition; Lighter / Darker; Larger / Smaller), each showing performance on the training set versus the test set; y-axis: average reward per episode (/10), x-axis: training episodes.]
Figure 5: Semantic knowledge generalises to unfamiliar language and objects. Composition (A): training covered all shape and colour unigrams and ~90% of possible colour-shape bigrams, such as blue ladder. Agents were periodically tested on the remaining 10% of bigrams without updating parameters. Decomposition-composition (B): the same regime as in A, but without any training on unigram descriptors. Lighter / darker (C): agents were trained to interpret the terms lighter and darker applied to a set of colours, and tested on the terms in the context of a set of different colours. Relative size (D): agents were trained to interpret the terms larger and smaller applied to a set of shapes, and tested on the terms in the context of a set of different shapes. Data show mean and CB across best five of 16 randomly sampled hyperparameter settings in each condition. See Appendix B for hyperparameter ranges and exact train/test stimuli.
Taken together, these instances of generalisation demonstrate that our agent does not simply ground language in hard coded features of the environment such as pixel activations or specific action sequences, but rather learns to ground meaning in more abstract semantic representations. More practically, these results also suggest how artificial agents that are necessarily exposed to finite training regimes may ultimately come to exhibit the productivity characteristic of human language understanding.
# 5.4 Extending learning via a curriculum
A consequence of the agent's facility for re-using its acquired knowledge for further learning is the potential to train the agent on more complex language and tasks via exposure to a curriculum of levels. Figure 6 shows an example for the successful application of such a curriculum, here applied to the task of selecting an object based on the floor colour of the room it is located in.
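Such a curriculum reduces to a simple controller that promotes the agent to the next lesson once it masters the current one; the sketch below uses assumed interfaces and an assumed mastery threshold.

```python
from collections import deque

def train_with_curriculum(agent, lessons, episode_fn, threshold=9.5, window=1000):
    """Train on each lesson in turn, advancing once the running-average
    reward clears the mastery threshold (values here are assumptions).

    `episode_fn(agent, lesson)` is a hypothetical callable running one
    training episode and returning its reward.
    """
    for lesson in lessons:                    # e.g. the four lessons of Figure 6
        recent = deque(maxlen=window)
        while True:
            recent.append(episode_fn(agent, lesson))
            if len(recent) == window and sum(recent) / window >= threshold:
                break                         # lesson mastered; weights carry over
    return agent
```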
We also applied a curriculum to train an agent on a range of multi-word referring instructions of the form pick the X, where X represents a string consisting of either a single noun (a shape term, such as chair), an adjective and a noun (a colour term, pattern term or shade term, followed by a shape term, such as striped ladder) or two adjectives and a noun (a shade term or a pattern term, followed by a colour term, followed by a shape term, such as dark purple toothbrush). The latter two cases were also possible with the generic term 'object' in place of a shape term. In each case, the training episode involved one object that coincided with the instruction and some number of distractors that did not. Learning curves for this 'referring expression agent' are illustrated in Figure 7.
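The space of referring expressions follows a small template grammar; the generator below is our own sketch of that grammar, with truncated vocabularies.

```python
import random

# Truncated illustrative vocabularies; the full lists are given in Appendix B.
shades_or_patterns = ["dark", "light", "striped", "spotted"]
colour_terms = ["purple", "red", "blue"]
shape_terms = ["toothbrush", "ladder", "chair"]

def sample_referring_instruction():
    """Sample 'pick the X': noun, adjective + noun, or adjective + adjective + noun."""
    form = random.choice(["noun", "adj_noun", "adj_adj_noun"])
    # The generic term 'object' may replace the shape term only when at
    # least one adjective is present.
    if form == "noun":
        x = [random.choice(shape_terms)]
    elif form == "adj_noun":
        x = [random.choice(shades_or_patterns + colour_terms),
             random.choice(shape_terms + ["object"])]
    else:  # shade/pattern term, then colour term, then shape term
        x = [random.choice(shades_or_patterns), random.choice(colour_terms),
             random.choice(shape_terms + ["object"])]
    return "pick the " + " ".join(x)
```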
# 5.5 Multi-task learning
Language is typically used to refer to actions and behaviours as much as to objects and entities. To test the ability of our agents to ground such words in corresponding procedures, we trained a single agent to follow instructions pertaining to three dissociable tasks. We constructed these tasks using a two-room world with both floor colourings and object properties sampled at random.
In this environment, the Selection task involved instructions of the form pick the X object or pick all X, where X denotes a colour term. The Next to task involved instructions of the form pick the X object next to the Y object, where X and Y refer to objects. Finally, the In room task involved instructions of the form pick the X in the Y room, where Y referred to the colour of the floor in the target room. Both the Next to and the In room task employed large degrees of ambiguity: a given Next to level may contain several objects X and Y, but in a constellation such that only one X would be located next to a Y.
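These three task types reduce to simple string templates, as in the following sketch; the sampling logic and colour list are illustrative placeholders.

```python
import random

colour_terms = ["red", "blue", "green", "yellow"]   # placeholder values

def sample_multitask_instruction():
    """Sample one instruction from the Selection, Next to, or In room task."""
    x, y = random.sample(colour_terms, 2)
    task = random.choice(["selection", "next_to", "in_room"])
    if task == "selection":
        return random.choice([f"pick the {x} object", f"pick all {x}"])
    if task == "next_to":
        return f"pick the {x} object next to the {y} object"
    return f"pick the {x} in the {y} room"
```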
The agent was exposed to instances of each task with equal probability during training. The possible values for variables X and Y in these instructions were red, blue, green, yellow,
[Figure 6 plots omitted: four curriculum lessons of increasing complexity (single-room to two-room layouts, 1 to 4 rooms, growing object-word and descriptor vocabularies), comparing agents trained from scratch with agents previously trained on the preceding lesson; y-axis: average reward per episode (/10), x-axis: training episodes.]
Figure 6: Curriculum learning is necessary for solving more complex tasks. For the agent to learn to retrieve an object in a particular room as instructed, a four-lesson training curriculum was required. Each lesson involved a more complex layout or a wider selection of objects and words, and was only solved by an agent that had successfully solved the previous lesson. The schematic layout and vocabulary scope for each lesson are shown above the training curves for that lesson. The initial (spawn) position of this agent varies randomly during training among the locations marked x, as do the positions of the four possible objects among the positions marked with a white diamond. Data show the mean and confidence bounds (CB) across the best five of 16 randomly sampled hyperparameter settings in each condition.
[Figure 7: two training-curve panels plotting average reward per episode (/10) against training episodes, comparing an agent trained from scratch with an agent previously trained on level 1.]
Figure 7: Learning curve for the referring expression agent. The trained agent is able to select the correct object in a two-object setup when it is described using a compositional expression. This ability transfers to more complex environments with a larger number of confounding objects.
[Figure 8: two training-curve panels plotting average reward per episode (/10) against training episodes for the two lessons of the multi-task curriculum, comparing an agent trained from scratch with an agent previously trained on level 1.]
Figure 8: Multi-task learning via an efficient curriculum of two steps. A single agent can learn to solve a number of different tasks following a two-lesson training curriculum. The different tasks cannot be distinguished based on visual information alone, but require the agent to use the language input to identify the task in question.
As previously, a curriculum was required to achieve the best possible agent performance on these tasks (see Figure 8). When trained from scratch, the agent learned to solve all three types of task in a single room where the colour of the floor was used as a proxy for a different room. However, it was unable to achieve the same learning in a larger layout with two distinct rooms separated by a corridor. When the agent trained in a single room was transferred to the larger environment, it continued learning and eventually was able to solve the more difficult task.4
By learning these tasks, this agent demonstrates an ability to ground language referring not only to single (concrete) objects, but also to (more abstract) sequences of actions, plans and inter-entity relationships. Moreover, in mastering the Next to and In room tasks, the agent exhibits sensitivity to a critical facet of many natural languages, namely the dependence of utterance meaning on word order. The ability to solve more complex tasks by curriculum training emphasises the generality of the emergent semantic representations acquired by the agent, allowing it to transfer learning from one scenario to a related but more complex environment.
# 6. Conclusion
An artificial agent capable of relating natural languages to the physical world would transform everyday interactions between humans and technology. We have taken an important step towards this goal by describing an agent that learns to execute a large number of multi-word instructions in a simulated three-dimensional world, with no pre-programming or hard-coded knowledge. The agent learns simple language by making predictions about the world in which that language occurs, and by discovering which combinations of words, perceptual cues and action decisions result in positive outcomes. Its knowledge is distributed across language, vision and policy networks, and pertains to modifiers, relational concepts and actions, as well as concrete objects. Its semantic representations enable the agent to productively interpret novel word combinations, to apply known relations and modifiers to unfamiliar objects and to re-use knowledge pertinent to the concepts it already has in the process of acquiring new concepts.
While our simulations focus on language, the outcomes are relevant to machine learning in a more general sense. In particular, the agent exhibits active, multi-modal concept induction, the ability to transfer its learning and apply its knowledge representations in unfamiliar settings, a facility for learning multiple, distinct tasks, and the effective synthesis of unsupervised and reinforcement learning. At the same time, learning in the agent reflects various effects that are characteristic of human development, such as rapidly accelerating rates of vocabulary growth, the ability to learn from both rewarded interactions and predictions about the world, a natural tendency to generalise and re-use semantic knowledge, and improved outcomes when learning is moderated by curricula (Vosniadou and Brewer, 1992; Smith et al., 1996; Pinker, 1987, 2009). Taken together, these contributions open many avenues for future investigations of language learning, and learning more generally, in both humans and artificial agents.
4. See https://youtu.be/wJjdu1bPJ04 for a video of the final trained agent.
# References
Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L. S. Wong, and Stefanie Tellex. Accurately and efficiently interpreting human-robot instructions of varying granularities. CoRR, abs/1704.06616, 2017. URL http://arxiv.org/abs/1704.06616.
Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. Deepmind lab. CoRR, abs/1612.03801, 2016. URL http://arxiv.org/abs/1612.03801.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on Freebase from question-answer pairs. In EMNLP, pages 1533–1544. ACL, 2013. ISBN 978-1-937284-97-8. URL http://dblp.uni-trier.de/db/conf/emnlp/emnlp2013.html#BerantCFL13.
David L. Chen and Raymond J. Mooney. Learning to sportscast: A test of grounded language acquisition. In Proceedings of the 25th International Conference on Machine Learning (ICML), Helsinki, Finland, July 2008. URL http://www.cs.utexas.edu/users/ai-lab/?chen:icml08.
Noam Chomsky. A review of BF Skinnerâs Verbal Behavior. Language, 35(1):26â58, 1959.
Leonidas AA Doumas, John E Hummel, and Catherine M Sandhofer. A theory of the discovery and predication of relational concepts. Psychological review, 115(1):1, 2008.
Sachithra Hemachandra, Matthew R. Walter, Stefanie Tellex, and Seth Teller. Learning spatially-semantic representations from natural language descriptions and scene classifications. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, May 2014.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In International Conference on Learning Representations, 2016.
Samantha Krening, Brent Harrison, Karen M Feigh, Charles Isbell, Mark Riedl, and Andrea Thomaz. Learning from explanations using sentiment and advice in RL. IEEE Transactions on Cognitive and Developmental Systems, 2016.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989.
Tomas Mikolov, Armand Joulin, and Marco Baroni. A roadmap towards machine intelligence. arXiv preprint arXiv:1511.08130, 2015.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016.
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. Language understanding for text-based games using deep reinforcement learning. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2015.
Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems 28, 2015.
Steven Pinker. The bootstrapping problem in language acquisition. Mechanisms of language acquisition, pages 399â441, 1987.
Steven Pinker. Language learnability and language development, volume 7. Harvard University Press, 2009.
W. V. O. Quine. Word & Object. MIT Press, 1960.
Paul C Quinn, Peter D Eimas, and Stacey L Rosenkrantz. Evidence for representations of perceptually similar natural categories by 3-month-old and 4-month-old infants. Perception, 22(4):463–475, 1993.
Timothy T Rogers and James L McClelland. Semantic cognition: A parallel distributed processing approach. MIT press, 2004.
Deb K Roy and Alex P Pentland. Learning words from sights and sounds: A computational model. Cognitive science, 26(1):113–146, 2002.
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2012.
Jeffrey Mark Siskind. Grounding Language in Perception, pages 207–227. Springer Netherlands, Dordrecht, 1995. ISBN 978-94-011-0273-5. doi: 10.1007/978-94-011-0273-5_12. URL http://dx.doi.org/10.1007/978-94-011-0273-5_12.
Jeffrey Mark Siskind. Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic. J. Artif. Intell. Res. (JAIR), 15:31–90, 2001. doi: 10.1613/jair.790. URL https://doi.org/10.1613/jair.790.
Linda B Smith, Susan S Jones, and Barbara Landau. Naming in young children: A dumb attentional mechanism? Cognition, 60(2):143–171, 1996.
Luc Steels. The symbol grounding problem has been solved, so what's next? Symbols and embodiment: Debates on meaning and cognition, pages 223–244, 2008.
Jesse Thomason, Shiqi Zhang, Raymond Mooney, and Peter Stone. Learning to interpret natural language commands through human-robot dialog. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pages 1923–1929. AAAI Press, 2015. ISBN 978-1-57735-738-4. URL http://dl.acm.org/citation.cfm?id=2832415.2832516.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2), 2012.
Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. CoRR, abs/1511.06361, 2015. URL http://arxiv.org/abs/1511.06361.
Stella Vosniadou and William F Brewer. Mental models of the earth: A study of conceptual change in childhood. Cognitive psychology, 24(4):535â585, 1992.
Matthew R. Walter, Sachithra Hemachandra, Bianca Homberg, Stefanie Tellex, and Seth Teller. A framework for learning semantic maps from grounded natural language descriptions. The International Journal of Robotics Research, 33(9):1167–1190, 2014. doi: 10.1177/0278364914537359. URL http://dx.doi.org/10.1177/0278364914537359.
S. I. Wang, P. Liang, and C. Manning. Learning language games through interaction. In Association for Computational Linguistics (ACL), 2016.
Terry Winograd. Understanding natural language. Cognitive psychology, 3(1):1–191, 1972.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. International Conference on Machine Learning, 2(3):5, 2015.
Haonan Yu and Jeffrey Mark Siskind. Grounded language learning from video described with sentences. In ACL, pages 53–63. The Association for Computer Linguistics, 2013. ISBN 978-1-937284-50-3.
Haonan Yu, Haichao Zhang, and Wei Xu. A deep compositional framework for human-like language acquisition in virtual environment. CoRR, abs/1703.09831, 2017. URL https://arxiv.org/abs/1703.09831.
Luke S. Zettlemoyer and Michael Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, UAI'05, pages 658–666, Arlington, Virginia, United States, 2005. AUAI Press. ISBN 0-9749039-1-4. URL http://dl.acm.org/citation.cfm?id=3020336.3020416.
# Appendix A. Agent details
# A.1 Agent core
At every time-step t the vision module V receives an 84 × 84 pixel RGB representation of the agent's (first-person) view of the environment, $x^v_t \in \mathbb{R}^{3 \times 84 \times 84}$, which is then processed with a three-layer convolutional neural network (LeCun et al., 1989) to emit an output representation $v_t \in \mathbb{R}^{64 \times 7 \times 7}$. The first layer of the convolutional network applies 8 × 8 kernels at stride width 4, resulting in 32 (20 × 20) output channels. The second layer applies 4 × 4 kernels at stride width 2, yielding 64 (9 × 9) output channels. The third layer applies 3 × 3 kernels at stride width 1, resulting again in 64 (7 × 7) output channels.
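For concreteness, this stack can be sketched in PyTorch; the kernel sizes, strides and channel counts follow the description above, while the ReLU nonlinearities and the framework itself are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Sketch of the vision module V; comments give the resulting feature shapes.
vision = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),   # 3x84x84 -> 32x20x20
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),  # 32x20x20 -> 64x9x9
    nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),  # 64x9x9 -> 64x7x7
)

x_v = torch.zeros(1, 3, 84, 84)    # a single RGB observation x^v_t
v_t = vision(x_v)
assert v_t.shape == (1, 64, 7, 7)  # the output representation v_t
```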
The language module L processes the instruction $x^l_t \in \mathbb{N}^s$, where $s$ is the maximum instruction length and words are represented as indices in a dictionary. For tasks that require sensitivity to the order of words in the language instruction, the language module L encodes $x^l_t$ with a recurrent (LSTM) architecture (Hochreiter and Schmidhuber, 1997). For other tasks, we applied a simpler bag-of-words (BOW) encoder, in which an instruction is represented as the sum of the embeddings of its constituent words, as this resulted in faster training. Both the LSTM and BOW encoders use word embeddings of dimension 128, and the hidden layer of the LSTM is also of dimension 128, resulting in both cases in an output representation $l_t \in \mathbb{R}^{128}$.
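A non-authoritative PyTorch rendering of the two encoders follows; taking the LSTM's final hidden state as $l_t$ and ignoring padding are our assumptions, as the paper does not spell out these details:

```python
import torch
import torch.nn as nn

class BowEncoder(nn.Module):
    """Bag-of-words encoder: l_t is the sum of the instruction's word embeddings."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, x_l):                 # x_l: (batch, s) word indices
        return self.embed(x_l).sum(dim=1)   # l_t: (batch, 128)

class LstmEncoder(nn.Module):
    """Order-sensitive encoder: l_t is the LSTM's final hidden state."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, x_l):
        _, (h_n, _) = self.lstm(self.embed(x_l))
        return h_n[-1]                      # l_t: (batch, 128)
```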
In the mixing module M, outputs $v_t$ and $l_t$ are combined by flattening $v_t$ into a single vector and concatenating the two resultant vectors into a shared representation $m_t$. The output from M at each time-step is fed to the action module A, which maintains the agent state $h_t \in \mathbb{R}^d$. $h_t$ is updated using an LSTM network combining the output $m_t$ from M and $h_{t-1}$ from the previous time-step. By default we set d = 256 in all our experiments.
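Continuing the sketch, M and A can be combined as below; the policy and value read-outs from $h_t$ are the standard A3C heads, whose exact parameterisation here is our assumption:

```python
import torch
import torch.nn as nn

class AgentCore(nn.Module):
    """Mixing module M (flatten + concatenate) feeding the action LSTM A."""
    def __init__(self, n_actions, d=256):
        super().__init__()
        self.core = nn.LSTMCell(64 * 7 * 7 + 128, d)  # consumes m_t, maintains h_t
        self.policy = nn.Linear(d, n_actions)          # logits for pi(a | h_t)
        self.value = nn.Linear(d, 1)                   # baseline V(h_t)

    def forward(self, v_t, l_t, state):
        m_t = torch.cat([v_t.flatten(start_dim=1), l_t], dim=1)
        h_t, c_t = self.core(m_t, state)
        return self.policy(h_t), self.value(h_t), (h_t, c_t)
```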
# A.2 Auxiliary networks
Temporal Autoencoder The temporal autoencoder auxiliary network tAE samples sequences containing two data points $x_i$, $x_{i+1}$ as well as a one-hot action representation $a_i \in \mathbb{N}^a$. It encodes $x^v_i$ using the convolutional network defined by V into $y \in \mathbb{R}^{64 \times 7 \times 7}$. The feature representation is then transformed using the action $a_i$,
$$\hat{y} = W_b \left( W_a\, a_i \odot W_y\, y \right),$$
with $\hat{y} \in \mathbb{R}^{64 \times 7 \times 7}$. The weight matrix $W_b$ shares its weights with the final layer of the perceptron computing $\pi$ in the core policy head. The transformed visual encoding $\hat{y}$ is passed into a deconvolutional network (mirroring the configuration of the convolutional encoder) to emit a predicted input $w \in \mathbb{R}^{3 \times 84 \times 84}$. The tAE module is optimised on the mean-squared loss between $w$ and $x^v_{i+1}$.
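A sketch of tAE under the reconstructed equation above, treating the feature maps as flattened vectors; the deconvolutional decoder is passed in, and the weight sharing between $W_b$ and the final policy layer is noted but not implemented here:

```python
import torch.nn as nn
import torch.nn.functional as F

class TemporalAutoencoder(nn.Module):
    """Predicts x^v_{i+1} from x^v_i and action a_i via a multiplicative interaction."""
    def __init__(self, vision, deconv, n_actions, k=64 * 7 * 7):
        super().__init__()
        self.vision, self.deconv = vision, deconv  # deconv mirrors the conv encoder
        self.W_a = nn.Linear(n_actions, k, bias=False)
        self.W_y = nn.Linear(k, k, bias=False)
        self.W_b = nn.Linear(k, k, bias=False)  # in the paper, shared with the final policy layer

    def forward(self, x_v, a_onehot):
        y = self.vision(x_v).flatten(start_dim=1)
        y_hat = self.W_b(self.W_a(a_onehot) * self.W_y(y))   # elementwise product
        return self.deconv(y_hat.view(-1, 64, 7, 7))         # w: predicted next frame

# Training objective: loss = F.mse_loss(tae(x_v_i, a_i), x_v_next)
```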
Language Prediction At each time-step t, the language prediction auxiliary network LP applies a replica of V (with shared weights) to encode the visual input into $v_t$. A linear layer followed by a rectified linear activation function is applied to transform this representation from size 64 × 7 × 7 to a flat vector of dimension 128 (the same size as the word embedding dimension in L). This representation is then transformed to an output layer with the same number of units as the agent's vocabulary. The weights in this final layer are shared with the initial layer (word embedding) weights from L. The output activations are fed through a Softmax activation function to yield a probability distribution over words in the vocabulary, and the negative log likelihood of the instruction word $l_t$ is computed as the loss. Note that this objective requires a single meaningful word to be extracted from the instruction as the target.
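A sketch of LP with the tied output layer; the cross-entropy call below subsumes the Softmax and negative log likelihood described above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguagePredictor(nn.Module):
    """Predicts the salient instruction word from the current visual input."""
    def __init__(self, vision, word_embedding, dim=128):
        super().__init__()
        self.vision = vision          # replica of V with shared weights
        self.proj = nn.Linear(64 * 7 * 7, dim)
        self.embed = word_embedding   # the nn.Embedding shared with L

    def forward(self, x_v):
        h = torch.relu(self.proj(self.vision(x_v).flatten(start_dim=1)))
        return h @ self.embed.weight.t()  # tied output layer: one logit per word

# Training objective: loss = F.cross_entropy(lp(x_v), target_word_index)
```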
# Appendix B. Environment details
The environment can contain any number of rooms connected through corridors. A level in the simulated 3D world is described by a map (a combination of rooms), object specifiers, language and a reward function. Objects in the world are drawn from a fixed inventory and can be described using a combination of five factors.
1706.06551 | 54 | Shapes (40) tv, ball, balloon, cake, can, cassette, chair, guitar, hairbrush, hat, ice lolly, ladder, mug, pencil, suitcase, toothbrush, key, bottle, car, cherries, fork, fridge, ham- mer, knife, spoon, apple, banana, cow, ï¬ower, jug, pig, pincer, plant, saxophone, shoe, tennis racket, tomato, tree, wine glass, zebra.
Colours (13) red, blue, white, grey, cyan, pink, orange, black, green, magenta, brown, purple, yellow.
Patterns (9) plain, chequered, crosses, stripes, discs, hex, pinstripe, spots, swirls.
Shades (3) light, dark, neutral.
Sizes (3) small, large, medium.
Within an environment, agent spawn points and object locations can be specified or randomly sampled. The environment itself is subdivided into multiple rooms which can be distinguished through randomly sampled (unique) floor colours. We use up to seven factors to describe a particular object: the five object-internal factors, the room it is placed in and its proximity to another object, which can itself be described by its five internal factors.
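As a concrete illustration of how an object specification can be assembled from these factors, the following sketch samples one value per factor. The factor values are taken from the inventories above (abbreviated); the sampler itself and its names are hypothetical.

```python
import random

FACTORS = {
    "shape": ["tv", "ball", "balloon", "cake", "hat", "zebra"],  # 40 in full
    "colour": ["red", "blue", "white", "green", "yellow"],       # 13 in full
    "pattern": ["plain", "chequered", "stripes", "spots"],       # 9 in full
    "shade": ["light", "dark", "neutral"],
    "size": ["small", "large", "medium"],
}

def sample_object(seed=0):
    rng = random.Random(seed)
    # one value per factor gives a full object-internal specification
    return {name: rng.choice(values) for name, values in FACTORS.items()}

print(sample_object())  # e.g. {'shape': 'cake', 'colour': 'blue', ...}
```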
In all simulations presented here, reward is attached to picking up a particular object. Reward is scaled to be in [−10, 10] and, where possible, balanced so that a random agent would have an expected reward of 0. This prevents agents from learning degenerate strategies that could otherwise allow them to perform well in a given task without needing to learn to ground the textual instructions.
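The balancing constraint can be made concrete with a small sketch. The exact scheme is not specified in the text, so the following assumes the simplest case: one correct object among n, with the wrong-object penalty chosen so that a uniformly random agent has expected reward 0.

```python
def balanced_rewards(n_objects, max_reward=10.0):
    correct = max_reward
    # choose the penalty so that E[r] = (correct + (n-1) * wrong) / n = 0
    wrong = -max_reward / (n_objects - 1)
    return correct, wrong

c, w = balanced_rewards(4)           # (10.0, -3.333...)
assert abs((c + 3 * w) / 4) < 1e-9   # random choice has expected reward 0
```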
# Appendix C. Hyperparameters
Tables 1 and 2 show the parameter settings used throughout the experiments presented in this paper. We report results with confidence bands (CB) equivalent to ± one standard deviation on the mean, assuming normal distribution.
| Hyperparameter | Value | Description |
| --- | --- | --- |
| train steps | 640m | Theoretical maximum number of time steps (across all episodes) for which the agent will be trained. |
| env steps per core step | 4 | Number of time steps between each action decision (action smoothing). |
| num workers | 32 | Number of independent workers running replicas of the environment with asynchronous updating. |
| unroll length | 50 | Number of time steps through which error is backpropagated in the core LSTM action module. |
| **auxiliary networks** | | |
| vr batch size | 1 | Aggregated time steps processed by value replay auxiliary for each weight update. |
| rp batch size | 10 | Aggregated time steps processed by reward prediction auxiliary for each weight update. |
| lp batch size | 10 | Aggregated time steps processed by language prediction auxiliary for each weight update. |
| tae batch size | 10 | Aggregated time steps processed by temporal AE auxiliary for each weight update. |
| **language encoder** | | |
| encoder type | BOW | Whether the language encoder uses an additive bag-of-words (BOW) or an LSTM architecture. |
| **cost calculation** | | |
| additional discounting | 0.99 | Discount used to compute the long-term return R_t in the A3C objective. |
| cost base | 0.5 | Multiplicative scaling of all computed gradients on the backward pass in the network. |
| **optimisation** | | |
| clip grad norm | 100 | Limit on the norm of the gradient across all agent network parameters (if above, scale down). |
| decay | 0.99 | Decay term in RMSProp gradient averaging function. |
| epsilon | 0.1 | Epsilon term in RMSProp gradient averaging function. |
| learning rate finish | 0 | Learning rate at the end of training, based on which linear annealing is applied. |
| momentum | 0 | Momentum parameter in RMSProp gradient averaging function. |
Table 1: Agent hyperparameters that are fixed throughout our experimentation but otherwise not specified in the text.
| Hyperparameter | Value | Description |
| --- | --- | --- |
| **auxiliary networks** | | |
| vr weight | uniform(0.1, 1) | Scalar weighting of value replay auxiliary loss relative to the core (A3C) objective. |
| rp weight | uniform(0.1, 1) | Scalar weighting of reward prediction auxiliary loss. |
| lp weight | uniform(0.1, 1) | Scalar weighting of language prediction auxiliary loss. |
| tae weight | uniform(0.1, 1) | Scalar weighting of temporal autoencoder prediction auxiliary. |
| **language encoder** | | |
| embed init | uniform(0.5, 1) | Standard deviation of normal distribution (mean = 0) for sampling initial values of word-embedding weights in L. |
| **optimisation** | | |
| entropy cost | uniform(0.0005, 0.005) | Strength of the (additive) entropy regularisation term in the A3C cost function. |
| learning rate start | loguniform(0.0001, 0.002) | Learning rate at the beginning of training, annealed linearly to reach learning rate finish at the end of train steps. |

Table 2: Agent hyperparameters that are randomly sampled for each training run.
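For concreteness, the sketch below shows one way the sampled hyperparameters of Table 2 and the linear learning-rate annealing implied by Table 1 could be realised. The function names and the reading of loguniform as uniform in log-space are our assumptions, not the authors' code.

```python
import math
import random

def sample_hyperparameters(seed=0):
    rng = random.Random(seed)
    return {
        "vr_weight": rng.uniform(0.1, 1.0),
        "rp_weight": rng.uniform(0.1, 1.0),
        "lp_weight": rng.uniform(0.1, 1.0),
        "tae_weight": rng.uniform(0.1, 1.0),
        "entropy_cost": rng.uniform(0.0005, 0.005),
        # loguniform(a, b): uniform in log-space between log(a) and log(b)
        "learning_rate_start": math.exp(rng.uniform(math.log(0.0001), math.log(0.002))),
    }

def annealed_learning_rate(step, lr_start, lr_finish=0.0, train_steps=640_000_000):
    # linear interpolation from lr_start down to lr_finish over train steps
    frac = min(step / train_steps, 1.0)
    return lr_start + frac * (lr_finish - lr_start)

hp = sample_hyperparameters()
lr = annealed_learning_rate(320_000_000, hp["learning_rate_start"])
```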
# Recent Advance in Content-based Image Retrieval: A Literature Survey
Wengang Zhou, Houqiang Li, and Qi Tian, Fellow, IEEE
Abstract: The explosive increase and ubiquitous accessibility of visual data on the Web have led to the prosperity of research activity in image search or retrieval. Since methods based on text search techniques ignore visual content as a ranking clue, they may suffer from inconsistency between the text words and the visual content. Content-based image retrieval (CBIR), which makes use of the representation of visual content to identify relevant images, has attracted sustained attention in the recent two decades. Such a problem is challenging due to the intention gap and the semantic gap problems. Numerous techniques have been developed for content-based image retrieval in the last decade. The purpose of this paper is to categorize and evaluate those algorithms proposed during the period of 2003 to 2016. We conclude with several promising directions for future research.
Index Terms: content-based image retrieval, visual representation, indexing, similarity measurement, spatial context, search re-ranking.
# 1 INTRODUCTION
With the universal popularity of digital devices embedded with cameras and the fast development of Internet technology, billions of people share and browse photos on the Web. The ubiquitous access to both digital photos and the Internet sheds bright light on many emerging applications based on image search. Image search aims to retrieve relevant visual documents for a textual or visual query efficiently from a large-scale visual corpus. Although image search has been extensively explored since the early 1990s [1], it still attracts lots of attention from the multimedia and computer vision communities in the past decade, thanks to the attention on the scalability challenge and the emergence of new techniques. Traditional image search engines usually index multimedia visual data based on the surrounding metadata around images on the Web, such as titles and tags. Since textual information may be inconsistent with the visual content, content-based image retrieval (CBIR) is preferred and has been witnessed to make great advances in recent years.
In content-based visual retrieval, there are two fundamental challenges, i.e., the intention gap and the semantic gap. The intention gap refers to the difficulty that a user suffers to precisely express the expected visual content with a query at hand, such as an example image or a sketch map. The semantic gap originates from the difficulty of describing high-level semantic concepts with low-level visual features [2] [3] [4]. To narrow those gaps, extensive efforts have been made by both academia and industry.
From the early 1990s to the early 2000s, there were extensive studies on content-based image search. The progress in those years has been comprehensively discussed in existing survey papers [5] [6] [7]. Around the early 2000s, the introduction of some new insights and methods triggered another research trend in CBIR. Specially, two pioneering works paved the way to the significant advance in content-based visual retrieval on large-scale multimedia databases. The first one is the introduction of the invariant local visual feature SIFT [8]. SIFT is demonstrated with excellent descriptive and discriminative power to capture visual content in a variety of literature. It can well capture the invariance to rotation and scaling transformations and is robust to illumination changes. The second work is the introduction of the Bag-of-Visual-Words (BoW) model [9]. Leveraged from information retrieval, the BoW model makes a compact representation of images based on the quantization of the contained local features and is readily adapted to the classic inverted file indexing structure for scalable image retrieval.
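To make the BoW encoding concrete, the following minimal sketch quantizes each local descriptor to its nearest codeword and aggregates the assignments into a fixed-length histogram. The random codebook and all names are illustrative; in practice the codebook is learned (e.g., with k-means) from a large descriptor corpus.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(1024, 128))  # 1024 visual words, 128-D (SIFT-like)

def bow_histogram(descriptors, codebook):
    # squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (descriptors**2).sum(1)[:, None] + (codebook**2).sum(1)[None, :] \
         - 2.0 * descriptors @ codebook.T
    words = d2.argmin(axis=1)                       # hard quantization
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)              # L1-normalized BoW vector

image_descriptors = rng.normal(size=(300, 128))     # e.g., 300 SIFT descriptors
v = bow_histogram(image_descriptors, codebook)      # fixed-length image vector
```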
The last decade has witnessed the emergence of numerous works on multimedia retrieval [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21] [22] [23] [24] [25] [26] [27] [28] [29]. Meanwhile, in industry, some commercial engines for content-based image search have been launched with different focuses, such as Tineye1, Ditto2, Snap Fashion3, ViSenze4, Cortica5, etc. Tineye was launched as a billion-scale reverse image search engine in May 2008. Until January of 2017, the indexed image database size of Tineye had reached up to 17 billion. Different from Tineye, Ditto is specially focused on brand images in the wild. It provides an access to uncover the brands inside the shared photos on the public social media web sites.
• Wengang Zhou and Houqiang Li are with the CAS Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230027, China. E-mail: {zhwg, lihq}@ustc.edu.cn.
• Qi Tian is with the Department of Computer Science, University of Texas at San Antonio, San Antonio, TX, 78249, USA. E-mail: [email protected].
1. http://tineye.com/ 2. http://ditto.us.com/ 3. https://www.snapfashion.co.uk/ 4. https://www.visenze.com 5. http://www.cortica.com/
There are three key issues in content-based image retrieval: image representation, image organization, and image similarity measurement. Existing algorithms can also be categorized based on their contributions to those three key items.
Image representation originates from the fact that the intrinsic problem in content-based visual retrieval is image comparison. For convenience of comparison, an image is transformed into some kind of feature space. The motivation is to achieve an implicit alignment so as to eliminate the impact of background and potential transformations or changes while keeping the intrinsic visual content distinguishable. In fact, how to represent an image is a fundamental problem in computer vision for image understanding. There is a saying that "An image is worth a thousand words". However, it is nontrivial to identify those "words". Usually, images are represented as one or multiple visual features. The representation is expected to be descriptive and discriminative so as to distinguish similar and dissimilar images. More importantly, it is also expected to be invariant to various transformations, such as translation, rotation, resizing, illumination change, etc.
In multimedia retrieval, the visual database is usually very large. It is a nontrivial issue to organize the large-scale database to efficiently identify the relevant results for a given query. Inspired by the success of information retrieval, many existing content-based visual retrieval algorithms and systems leverage the classic inverted file structure to index a large-scale visual database for scalable retrieval. Meanwhile, some hashing based techniques are also proposed for indexing in a similar perspective. To achieve this goal, visual codebook learning and feature quantization on high-dimensional visual features are involved, with spatial context embedded to further enrich the discriminative capability of the visual representation.
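A toy version of the inverted file idea is sketched below: each visual word maps to a posting list of the images containing it, so a query only touches the posting lists of its own words rather than scanning the whole database. The scoring rule and all identifiers are illustrative assumptions.

```python
from collections import Counter, defaultdict

index = defaultdict(list)  # visual word id -> postings [(image_id, term_frequency)]

def add_image(image_id, word_ids):
    for word, tf in Counter(word_ids).items():
        index[word].append((image_id, tf))

def query(word_ids, top_k=5):
    scores = Counter()
    for word, q_tf in Counter(word_ids).items():
        for image_id, tf in index.get(word, []):
            scores[image_id] += q_tf * tf  # simple term-frequency match kernel
    return scores.most_common(top_k)

add_image("db_img_1", [3, 3, 17, 42])
add_image("db_img_2", [17, 99])
print(query([17, 3]))  # db_img_1 scores higher: it shares two query words
```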
Ideally, the similarity between images should reflect their relevance in semantics, which, however, is difficult due to the intrinsic "semantic gap" problem. Conventionally, the image similarity in content-based retrieval is formulated based on visual feature matching results with some weighting schemes. Alternatively, the image similarity formulations in existing algorithms can also be viewed as different match kernels [30].
In this paper, we focus on the overview of research works in the past decade after 2003. For discussion before and around 2003, we refer readers to previous surveys [5] [6] [7]. Recently, there have been some surveys related to CBIR [31] [2] [3]. In [31], Zhang et al. surveyed image search in the past 20 years from the perspective of database scaling from thousands to billions. In [3], Li et al. made a review of the state-of-the-art CBIR techniques in the context of social image tagging, with focus on three closely linked problems, including image tag assignment, refinement, and tag-based image retrieval. Another recent related survey is referred to in [2]. In this work, we approach the recent advance in CBIR with different insights, emphasizing the progress in the methodology of a generic framework.
In the following sections, we first briefly review the generic pipeline of content-based image search. Then, we discuss five key modules of the pipeline, respectively. After that, we introduce the ground-truth datasets popularly exploited and the evaluation metrics. Finally, we discuss future potential directions and conclude this survey.
# 2 GENERAL FLOWCHART OVERVIEW
Content-based image search or retrieval has been a core problem in the multimedia field for over two decades. The general flowchart is illustrated in Fig. 1. Such a visual search framework consists of an off-line stage and an on-line stage. In the off-line stage, the database is built by image crawling, and each database image is represented as some vectors and then indexed. In the on-line stage, several modules are involved, including user intention analysis, query formation, image representation, image scoring, search reranking, and retrieval browsing. The image representation module is shared between the off-line and on-line stages. This paper will not cover image crawling, user intention analysis [32], and retrieval browsing [33], for which the survey can be referred to in previous work [6] [34]. In the following, we will focus on the other five modules, i.e., query formation, image representation, database indexing, image scoring, and search reranking.
In the following sections, we make a review of related work in each module, discuss and evaluate a variety of strategies to address the key issues in the corresponding modules.
# 3 QUERY FORMATION
At the beginning of image retrieval, a user expresses his or her imaginary intention as some concrete visual query. The quality of the query has a significant impact on the retrieval results. A good and specific query may sufficiently reduce the retrieval difficulty and lead to satisfactory retrieval results. Generally, there are several kinds of query formation, such as query by example image, query by sketch map, query by color map, query by context map, etc. As illustrated in Fig. 2, different query schemes lead to significantly different results. In the following, we will discuss each of those representative query formations.
The most intuitive query formation is query by example image. That is, a user has an example image at hand and would like to retrieve more or better images about the same or similar semantics. For instance, a picture holder may want to check whether his picture is used in some web pages without his permission; a cybercop may want to check a terrorist logo appearing in Web images or videos for anti-terrorism. To eliminate the effect of the background, a bounding box may be specified in the example image to constrain the region of interest for the query. Since example images are objective, with little human involvement, it is convenient to make quantitative analysis based on them so as to guide the design of the corresponding algorithms. Therefore, query by example is the most widely explored query formation style in the research on content-based image retrieval [9] [10] [35] [36].
Besides query by example, a user may also express his intention with a sketch map [37] [38]. In this way, the query is a contour image.
Fig. 1. The general framework of content-based image retrieval. The modules above and below the green dashed line are in the off-line stage and on-line stage, respectively. In this paper, we focus the discussion on five components, i.e., query formation, image representation, database indexing, image scoring, and search reranking.
[Fig. 2 panels: query by keyword, query by example, query by sketch, query by color layout, query by concept layout.]
Fig. 2. Illustration of different query schemes with the corresponding retrieval results.
Since sketch is closer to the semantic representation, it tends to help retrieve target results in users' minds from the semantic perspective [37]. Initial works on sketch-based retrieval are limited to searching for special artworks, such as clip arts [39] [40] and simple patterns [41]. As a milestone, the representative work on sketch-based retrieval for natural images is the edgel [42]. Sketch has also been employed in some image search engines, such as Gazopa6 and Retrievr7. However, there are two non-trivial issues with sketch-based query. Firstly, although some simple concepts, such as sun, fish, and flower, can be easily interpreted as simple shapes, most of the time it is difficult for a user to quickly sketch out what he wants to search. Secondly, since the images in the database are usually natural images, special algorithms need to be designed to convert them to sketch maps consistent with user intention.
Another query formation is the color map. A user is allowed to specify the spatial distribution of colors in a given grid-like palette to generate a color map, which is used as a query to retrieve images with similar colors in the relative regions of the image plane [43]. With coarse shape embedded, the color map based query can easily involve user interaction to improve the retrieval results but is limited by the potential concepts to be represented. Besides, color or illumination change is prevalent in image capturing, which casts a severe challenge on the reliance on color-based features.
The above query formations are convenient for users to input but may still be difficult to express the user's semantic intention. To alleviate this problem, Xu et al. proposed to form the query with concepts by text words in some specific layout in the image plane [44] [45]. Such a structured object query is also explored in [46] with a latent ranking SVM model. This kind of query is specially suitable for searching generalized objects or scenes with context when the object recognition results are ready for the database images and the queries.
6. http://www.gazopa.com/ 7. http://labs.systemone.at/retrievr
It is notable that, in the above query schemes taken by most existing work, the query takes the form of a single image, which may be insufficient to reflect the user's intention in some situations. If provided with multiple probe images as a query, some new strategies are expected to collaboratively represent the query or fuse the retrieval results of each single probe [47]. That may be an interesting research topic, especially in the case of video retrieval, where the query is a video shot of temporal sequence.
3 | 1706.06064#15 | Recent Advance in Content-based Image Retrieval: A Literature Survey | The explosive increase and ubiquitous accessibility of visual data on the Web
have led to the prosperity of research activity in image search or retrieval.
With the ignorance of visual content as a ranking clue, methods with text
search techniques for visual retrieval may suffer inconsistency between the
text words and visual content. Content-based image retrieval (CBIR), which
makes use of the representation of visual content to identify relevant images,
has attracted sustained attention in recent two decades. Such a problem is
challenging due to the intention gap and the semantic gap problems. Numerous
techniques have been developed for content-based image retrieval in the last
decade. The purpose of this paper is to categorize and evaluate those
algorithms proposed during the period of 2003 to 2016. We conclude with several
promising directions for future research. | http://arxiv.org/pdf/1706.06064 | Wengang Zhou, Houqiang Li, Qi Tian | cs.MM, cs.IR | 22 pages | null | cs.MM | 20170619 | 20170902 | [
{
"id": "1504.03410"
}
] |
1706.06064 | 16 | 3
# 4 IMAGE REPRESENTATION
In content-based image retrieval, the key problem is how to efficiently measure the similarity between images. Since the visual objects or scenes may undergo various changes or transformations, it is infeasible to directly compare images at the pixel level. Usually, visual features are extracted from images and subsequently transformed into a fixed-size vector for image representation. Considering the contradiction between the large-scale image database and the requirement for efficient query response, it is necessary to "pack" the visual features to facilitate the following indexing and image comparison. To achieve this goal, quantization with visual codebook training is used as a routine encoding process for feature aggregation/pooling. Besides, as an important characteristic of visual data, spatial context is demonstrated to be vital to improve the distinctiveness of the visual representation. Based on the above discussion, we can mathematically formulate the content similarity between two images X and Y as in Eq. 1.
$$S(\mathcal{X}, \mathcal{Y}) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} k(x, y) \tag{1}$$
$$= \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} \phi(x)^T \phi(y) \tag{2}$$
$$= \Psi(\mathcal{X})^T \Psi(\mathcal{Y}) \tag{3}$$
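The equivalence between the pairwise match-kernel form (Eqs. 1-2) and the aggregated form (Eq. 3) can be checked numerically for the simplest case of a linear kernel with φ the identity, as in the sketch below; the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))   # image X: 5 local features, 8-D
Y = rng.normal(size=(7, 8))   # image Y: 7 local features, 8-D

pairwise = sum(x @ y for x in X for y in Y)  # Eqs. 1-2: sum of kernel values
aggregated = X.sum(axis=0) @ Y.sum(axis=0)   # Eq. 3: Psi(X)^T Psi(Y)
assert np.isclose(pairwise, aggregated)
```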
Based on Eq. 1, there emerge three questions.
1) Firstly, how to describe the content of image X by a set of visual features {x1, x2, · · · }?
2) Secondly, how to transform feature sets X = {x1, x2, · · · } with various sizes to a fixed-length vector Ψ(X)?
3) Thirdly, how to efficiently compute the similarity between the fixed-length vectors, Ψ(X)^T Ψ(Y)?
The above three questions essentially correspond to feature extraction, feature encoding & aggregation, and database indexing, respectively. As for feature encoding and aggregation, it involves visual codebook learning, spatial context embedding, and quantization. In this section, we discuss the related works on those key issues in image representation, including feature extraction, visual codebook learning, spatial context embedding, quantization, and feature aggregation. The database indexing is left to the next section for discussion.
# 4.1 Feature Extraction
Traditionally, visual features are heuristically designed and can be categorized into local features and global features. Besides those hand-crafted features, recent years have witnessed the development of learning-based features. In the following, we will discuss those two kinds of features, respectively.
4.1.1 Hand Crafted Feature
In early CBIR algorithms and systems, global features are commonly used to describe image content by color [48] [43], shape [42] [49] [50] [51], texture [52] [53], and structure [54] in a single holistic representation. As one of the representative global features, the GIST feature [55] is biologically plausible with low computational complexity and has been widely applied to evaluate approximate nearest neighbor search algorithms [56] [57] [58] [59]. With compact representation and efficient implementation, global visual features are very suitable for duplicate detection in a large-scale image database [54], but may not work well when the target images involve background clutter. Typically, global features can be used as a complementary part to improve the accuracy of near-duplicate image search based on local features [24].
In early CBIR algorithms and systems, global features are commonly used to describe image content by color [48] [43], shape [42] [49] [50] [51], texture [52][53], and structure [54] into a single holistic representation. As one of the repre- sentative global feature, GIST feature [55] is biologically plausible with low computational complexity and has been widely applied to evaluate approximate nearest neighbor search algorithms [56] [57] [58] [59]. With compact repre- sentation and efï¬cient implementation, global visual fea- ture are very suitable for duplicate detection in large-scale image database [54], but may not work well when the target images involve background clutter. Typically, global features can be used as a complementary part to improve the accuracy on near-duplicate image search based on local features [24]. | 1706.06064#18 | Recent Advance in Content-based Image Retrieval: A Literature Survey | The explosive increase and ubiquitous accessibility of visual data on the Web
have led to the prosperity of research activity in image search or retrieval.
With the ignorance of visual content as a ranking clue, methods with text
search techniques for visual retrieval may suffer inconsistency between the
text words and visual content. Content-based image retrieval (CBIR), which
makes use of the representation of visual content to identify relevant images,
has attracted sustained attention in recent two decades. Such a problem is
challenging due to the intention gap and the semantic gap problems. Numerous
techniques have been developed for content-based image retrieval in the last
decade. The purpose of this paper is to categorize and evaluate those
algorithms proposed during the period of 2003 to 2016. We conclude with several
promising directions for future research. | http://arxiv.org/pdf/1706.06064 | Wengang Zhou, Houqiang Li, Qi Tian | cs.MM, cs.IR | 22 pages | null | cs.MM | 20170619 | 20170902 | [
{
"id": "1504.03410"
}
] |
1706.06064 | 19 | Since the introduction of SIFT feature by Lowe [60] [8], local feature has been extensively explored as a routine image representation in many works on content-based im- age retrieval. Generally, local feature extraction involves two key steps, i.e. interest point detection and local region description. In interest point detection, some key points or regions with characteristic scale are detected with high repeatability. The repeatability here means that the interest points can be identiï¬ed under various transformations or changes. Popular detectors include Difference of Gaussian (DoG) [8], MSER [61], Hessian afï¬ne detector [62], Harris- Hessian detector [63], and FAST [64]. In interest point detec- tion, the invariance to translation and resizing is achieved. Distinguished from the above methods, it is also possible to obtain the interest points by uniformly and densely sample the image plane without any explicit detector [65]. | 1706.06064#19 | Recent Advance in Content-based Image Retrieval: A Literature Survey | The explosive increase and ubiquitous accessibility of visual data on the Web
have led to the prosperity of research activity in image search or retrieval.
With the ignorance of visual content as a ranking clue, methods with text
search techniques for visual retrieval may suffer inconsistency between the
text words and visual content. Content-based image retrieval (CBIR), which
makes use of the representation of visual content to identify relevant images,
has attracted sustained attention in recent two decades. Such a problem is
challenging due to the intention gap and the semantic gap problems. Numerous
techniques have been developed for content-based image retrieval in the last
decade. The purpose of this paper is to categorize and evaluate those
algorithms proposed during the period of 2003 to 2016. We conclude with several
promising directions for future research. | http://arxiv.org/pdf/1706.06064 | Wengang Zhou, Houqiang Li, Qi Tian | cs.MM, cs.IR | 22 pages | null | cs.MM | 20170619 | 20170902 | [
{
"id": "1504.03410"
}
] |
1706.06064 | 20 | After the detection of interest points, a descriptor or multiple descriptors [66] are extracted to describe the visual appearance of the local region centered at the interest point. Usually, the descriptor is designed to be invariant to rotation change and robust to afï¬ne distortion, addition of noise, and illumination changes, etc. Besides, it should also be distinctive so as to correctly match a single feature with high probability against a large corpus of features from many images. Such property is especially emphasized in the scenario of large-scale visual applications. The most popular choice with the above merits is SIFT feature [8]. As a variant, SURF [67] is demonstrated with comparable performance but better efï¬ciency. | 1706.06064#20 | Recent Advance in Content-based Image Retrieval: A Literature Survey | The explosive increase and ubiquitous accessibility of visual data on the Web
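For concreteness, the detect-then-describe pipeline above can be sketched with OpenCV's SIFT implementation (a minimal sketch; the image path is a placeholder, and OpenCV >= 4.4 is assumed for `cv2.SIFT_create`):

```python
# Minimal detect-then-describe sketch with OpenCV's SIFT.
# "query.jpg" is a placeholder path.
import cv2

img = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()

# Step 1: interest point detection (DoG keypoints with location, scale, orientation).
keypoints = sift.detect(img, None)

# Step 2: local region description (one 128-D descriptor per keypoint).
keypoints, descriptors = sift.compute(img, keypoints)
print(len(keypoints), descriptors.shape)  # N keypoints, (N, 128) float32
```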
Some improvements or extensions have been made on the basis of SIFT. In [23], Arandjelovic et al. proposed root-SIFT by applying root-normalization to the original SIFT descriptor. Although this operation is simple, it is demonstrated to significantly improve image retrieval accuracy and can be readily plugged into many SIFT-based image retrieval algorithms [68]. Zhou et al. proposed to generate a binary signature of the SIFT descriptor with two median thresholds determined by the original descriptor itself [36]. The obtained binary SIFT leads to a new indexing scheme for image retrieval [69]. Liu et al. extend the binary SIFT by first generating a binary comparison matrix via dimension-pair comparison and then flexibly dividing the matrix entries into segments, each of which is hashed to a bit [70]. In [21], the SIFT descriptor is transformed to binary code with principal component analysis (PCA) and simple thresholding based on the coefficients' sign. In [71], Affine-SIFT (ASIFT) simulates a set of sample views of the initial image by varying the two camera-axis orientation parameters, i.e., the latitude and longitude angles, and effectively covers all six parameters of the affine transformation, consequently achieving full affine invariance.
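The root-normalization of [23] is simple enough to state exactly: L1-normalize each SIFT descriptor and take the element-wise square root, so that Euclidean distance on the result corresponds to the Hellinger kernel on the original descriptors. A minimal NumPy sketch:

```python
import numpy as np

def root_sift(descriptors, eps=1e-7):
    """Convert SIFT descriptors (N, 128) to root-SIFT [23]:
    L1-normalize each row, then take the element-wise square root."""
    descriptors = descriptors / (np.abs(descriptors).sum(axis=1, keepdims=True) + eps)
    return np.sqrt(descriptors)
```

Because it only post-processes descriptors, this step drops into existing SIFT-based pipelines without other changes, as noted above.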
Apart from floating-point features like SIFT, binary features have been popularly explored and are directly extracted from the local region of interest. Recently, the binary feature BRIEF [73] and its variants, such as ORB [74], FREAK [75], and BRISK [76], have been proposed and have attracted a great deal of attention in visual matching applications. Those binary features are computed by simple intensity-difference tests, which are extremely computationally efficient. With the efficiency advantage of Hamming distance computation, binary features built on the FAST detector [64] have potential in large-scale image search. In [77], Zhang et al. proposed a novel ultra-short binary descriptor (USB) computed from local regions detected by the DoG detector. USB achieves fast image matching and indexing, and, following the binary SIFT scheme [36], it avoids the expensive codebook training and feature quantization of the BoW model for image retrieval. A comprehensive evaluation of binary descriptors can be found in [78].
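As an illustration of why binary features are attractive for matching, the following sketch extracts ORB descriptors and matches them with Hamming distance using OpenCV (image paths are placeholders):

```python
# Sketch: binary ORB descriptors matched by Hamming distance with OpenCV.
import cv2

img1 = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)   # FAST keypoints + BRIEF-style binary tests
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance (XOR + popcount) makes matching binary codes very cheap.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "cross-checked matches")
```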
Besides the gradient information in the local regions as in the SIFT feature, edge and color can also be expressed as compact descriptors, yielding Edge-SIFT [79] and color-SIFT [80]. As a binary local feature, Edge-SIFT [79] describes a local region with the results of Canny edge detection. Zheng et al. extracted a color-name feature from the local regions, which is further transformed into a binary signature to enhance the discrimination of the local SIFT feature [68].

# 4.1.2 Learning-based Feature
Apart from the above handcrafted visual features, it is also possible to learn features in a data-driven manner for image retrieval. Attribute features, originally used for object categorization, can be used to represent the semantic characteristics of images for retrieval [81] [82] [83]. Generally, the attribute vocabulary can be defined manually by humans [84] [85] or by some ontology [86]. For each attribute, a classifier can be trained with kernels over multiple low-level visual features on a labeled training image set and used to predict the attribute score for unseen images [86] [85] [87] [88]. In [89], the attribute feature is adopted as a semantic-aware representation to complement the local SIFT feature for image search. Karayev et al. learned classifiers to predict image styles and applied them to search and rank image collections by style [90]. The advantage of attribute features is that they provide an elegant way to approximate visual semantics so as to reduce the semantic gap. However, there are two issues with attribute features.
Firstly, it is difficult to define a complete attribute vocabulary, either manually or automatically, so a representation over a limited attribute vocabulary may be biased for a large and semantically diverse image database. Secondly, it is usually computationally expensive to extract attribute features due to the need to run classification over thousands of attribute categories [81] [86].

Topic models, such as the probabilistic Latent Semantic Analysis (pLSA) model [91] and the Latent Dirichlet Allocation (LDA) model [92], are also popularly adopted to learn feature representations with embedded semantics for image retrieval [93] [94].
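The per-attribute classifier recipe described above can be sketched as follows; the attribute vocabulary, features, and labels here are all hypothetical stand-ins:

```python
# Hedged sketch of the per-attribute classifier recipe: one classifier per
# attribute over low-level features; scores stacked into a semantic descriptor.
import numpy as np
from sklearn.svm import LinearSVC

attributes = ["furry", "metallic", "outdoor"]  # hypothetical vocabulary
X = np.random.rand(200, 64)                    # stand-in low-level features
Y = np.random.randint(0, 2, (200, 3))          # stand-in attribute labels

classifiers = [LinearSVC().fit(X, Y[:, j]) for j in range(len(attributes))]

def attribute_feature(x):
    # Stack per-attribute decision scores into a semantic-aware descriptor.
    return np.array([clf.decision_function(x[None])[0] for clf in classifiers])

print(attribute_feature(X[0]))  # one score per attribute
```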
With the explosive research on deep neural networks (DNN) [65] [95] [96], recent years have witnessed the success of learning-based features in multiple areas. With deep architectures, high-level abstractions close to human cognition can be learned [97]. As a result, it is feasible to adopt a DNN to extract semantic-aware features from the activations of different layers in the network. In [98], features are extracted from local patches with a deep restricted Boltzmann machine (DBN), which is refined by back-propagation. As a typical member of the DNN family, the deep convolutional neural network (CNN) [99] has demonstrated state-of-the-art performance in various tasks of image recognition and retrieval [100]. In [101], comprehensive studies are conducted on the potential of learned visual features with deep CNNs for various applications, including content-based image retrieval. Razavian et al. study AlexNet [99] and VGG-Net [95], and exploit the last convolutional layer's responses with max pooling as the image representation for image retrieval [102].
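A minimal sketch of the max-pooled convolutional representation in the spirit of [102], assuming a recent torchvision with pre-trained VGG16 weights available:

```python
# Sketch: take the last convolutional feature map of a pre-trained CNN and
# max-pool it spatially into a fixed-length, L2-normalized descriptor.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def mac_descriptor(image_batch):
    """image_batch: (B, 3, H, W) normalized tensor -> (B, 512) descriptor."""
    with torch.no_grad():
        fmap = vgg(image_batch)             # (B, 512, h, w) last conv responses
        vec = torch.amax(fmap, dim=(2, 3))  # global max pooling per channel
        return torch.nn.functional.normalize(vec, dim=1)

print(mac_descriptor(torch.rand(1, 3, 224, 224)).shape)  # torch.Size([1, 512])
```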
Besides working as a global description of images, learning-based features can also be obtained in a manner similar to local features [104]. Local regions of interest are generated by unsupervised object detection algorithms, such as selective search [105], objectness [106], and binarized normed gradients (BING) [107]. Those algorithms generate a number of object proposals in the form of bounding boxes, and the learning-based feature can then be extracted in each object proposal region. In [108], Sun et al. adopted a CNN model to extract features from local image regions detected by a general object detector [107], applied it to image retrieval, and demonstrated impressive performance. Considering that object detection is sensitive to rotation, Xie et al. proposed to rotate the test image by four different angles before conducting object detection; object proposals with top detection scores are then selected to extract the deep CNN feature [99]. Tolias et al. generate a feature vector of regional maximum activation of convolutions (R-MAC) towards geometry-aware re-ranking [109]. To speed up the max-pooling operation, a novel approximation is proposed by extending the idea of integral images. In [110], the R-MAC descriptor is extended by selecting regions with a region-of-interest (ROI) selector based on a region proposal network [111].
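A much-simplified sketch of R-MAC-style regional pooling follows; the actual method [109] samples overlapping square regions at several scales and adds PCA-whitening, both omitted here:

```python
# Simplified R-MAC sketch: max-pool the conv feature map over a small grid of
# regions, L2-normalize each regional vector, then sum and re-normalize.
import torch
import torch.nn.functional as F

def rmac(fmap, grid=2):
    """fmap: (C, H, W) conv activations -> (C,) R-MAC-style descriptor."""
    C, H, W = fmap.shape
    hs, ws = H // grid, W // grid
    regions = []
    for i in range(grid):
        for j in range(grid):
            r = fmap[:, i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            v = r.reshape(C, -1).amax(dim=1)   # regional max pooling
            regions.append(F.normalize(v, dim=0))
    return F.normalize(torch.stack(regions).sum(dim=0), dim=0)

print(rmac(torch.rand(512, 14, 14)).shape)  # torch.Size([512])
```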
In the above approaches, the learning-based feature is extracted with a deep model trained for a classification task. As a result, the learned feature may not well reflect the visual content characteristics of the retrieval images, which may limit retrieval performance. It is therefore preferable to train the deep model directly for the retrieval task, which, however, is difficult since the potential image categories in retrieval are hard to define or enumerate. To partially address this difficulty, Babenko et al. focus on landmark retrieval and fine-tune a CNN model pre-trained on ImageNet with classes corresponding to landmarks [112]. After the fine-tuning, promising performance improvement is witnessed on retrieval datasets with similar visual statistics, such as the Oxford Buildings dataset [11]. To get rid of the dependence on examples or class labels, Paulin et al. proposed to generate patch-level feature representations based on convolutional kernel networks in an unsupervised way [113].
In [114], the supervision takes the form of binary codes, which are obtained by decomposing the similarity matrix of the training images; the resulting deep CNN model is therefore ready to generate binary codes for images in an end-to-end way. Further, Lai et al. propose deep neural networks that hash images into short binary codes, optimized with a triplet ranking loss [115]. The resulting short binary codes enable efficient retrieval by Hamming distance and considerable savings in storage.
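A hedged sketch of deep hashing with a triplet ranking loss in the spirit of [115]; the tanh relaxation, network shape, and margin are illustrative choices, not the exact architecture of that work:

```python
# Sketch: tanh relaxes binary codes during training; sign() binarizes at test time.
import torch
import torch.nn as nn

class HashNet(nn.Module):
    def __init__(self, in_dim=512, bits=48):
        super().__init__()
        self.fc = nn.Linear(in_dim, bits)

    def forward(self, x):
        return torch.tanh(self.fc(x))           # relaxed codes in (-1, 1)

net = HashNet()
loss_fn = nn.TripletMarginLoss(margin=2.0)       # rank positives above negatives

a, p, n = (torch.rand(8, 512) for _ in range(3)) # stand-in feature triplets
loss = loss_fn(net(a), net(p), net(n))
loss.backward()

binary_codes = torch.sign(net(a)).detach()       # {-1, +1} codes for indexing
```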
# 4.2 Visual Codebook Learning

Usually, hundreds or thousands of local features can be extracted from a single image. To achieve a compact representation, high-dimensional local features are quantized to the visual words of a pre-trained visual codebook; based on the quantization results, an image with a set of local features can be transformed to a fixed-length vector by the Bag-of-Visual-Words model [9], VLAD [116], or the Fisher Vector [117]. To generate a visual codebook beforehand, the most intuitive way is to cluster training feature samples with brute-force k-means [9] [12] and regard the cluster centers as visual words. Since the local feature dimension is high and the training corpus is large, training a large (say, million-scale or larger) visual codebook this way suffers from extremely high computational complexity. To address this problem, an alternative is to adopt hierarchical k-means [10], which reduces the complexity of generating a large visual codebook from linear to logarithmic in the codebook size.
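A minimal sketch of codebook training and Bag-of-Visual-Words encoding with mini-batch k-means (the descriptor data is a random stand-in for a pooled SIFT corpus):

```python
# Sketch: train a small visual codebook with mini-batch k-means and encode an
# image's local descriptors as an L1-normalized bag-of-visual-words histogram.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

train_desc = np.random.rand(10000, 128).astype(np.float32)  # training corpus
codebook = MiniBatchKMeans(n_clusters=1024, batch_size=1024, n_init=3)
codebook.fit(train_desc)

def bow_vector(image_desc):
    """Quantize (N, 128) descriptors to visual words; return a 1024-D histogram."""
    words = codebook.predict(image_desc)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(np.float32)
    return hist / max(hist.sum(), 1.0)

print(bow_vector(np.random.rand(500, 128).astype(np.float32)).shape)  # (1024,)
```

The same trained codebook also serves VLAD or Fisher Vector encodings, which aggregate residuals rather than counts.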
In standard k-means, most of the computational overhead is consumed by assigning feature samples to the closest cluster center, which is implemented by linearly comparing all cluster centers. That process can be sped up by replacing the linear scan with approximate nearest-neighbor search. With this observation, Philbin et al. proposed an approximate k-means algorithm that exploits randomized k-d trees for fast assignment [11]. Instead of using k-means to generate visual words, Li et al. generated hyper-spheres by randomly sampling seed feature points with a predefined radius [118]; those hyper-spheres with their seed features then constitute the visual codebook. In [119], Chu et al. proposed to build the visual vocabulary based on graph density: it measures the intra-word similarity by the feature-graph density and derives visual words from dense feature graphs with a Scalable Maximization Estimation (SME) scheme.
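The approximate-assignment idea behind the method of [11] can be sketched by swapping the linear scan inside each k-means iteration for a tree-based nearest-neighbor query. That work uses a forest of randomized k-d trees; a single exact SciPy k-d tree stands in below to keep the sketch short:

```python
# Sketch of approximate k-means: the assignment step goes through a k-d tree
# instead of a linear scan over all cluster centers.
import numpy as np
from scipy.spatial import cKDTree

def approx_kmeans(X, k=256, iters=10, rng=np.random.default_rng(0)):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        tree = cKDTree(centers)        # rebuilt once per iteration
        _, assign = tree.query(X)      # fast assignment step
        for c in range(k):             # standard center update
            pts = X[assign == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    return centers

centers = approx_kmeans(np.random.rand(5000, 32))
```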
In the Bag-of-Visual-Words model, the visual codebook works as a medium for identifying the visual word ID, which can be regarded as the quantization or hashing result. In other words, it is feasible to directly transform a local feature to a visual word ID without explicitly defining the visual words. Following this idea, and unlike the above codebook generation methods, some other image retrieval approaches generate a virtual visual codebook without explicit training. Those methods transform a local feature into a binary signature, on which the visual word ID is heuristically defined. In [21], Zhang et al. proposed a new query-sensitive ranking algorithm that ranks PCA-based binary hash codes to search for ε-neighbors in image retrieval; the binary signature is generated with an LSH (locality-sensitive hashing) strategy, and the top bits are used as the visual word ID to group feature points with the same ID. Zhou et al. [36] proposed to binarize a SIFT descriptor into a 256-bit binary signature; without training a codebook, this method selects 32 bits from the 256-bit vector as a codeword for indexing and search.
The drawback of this approach is that the remaining 224 bits per feature have to be stored in the inverted index lists, which casts a heavy overhead on memory. Similarly, Dong et al. proposed to transform a SIFT descriptor to a 128-bit vector [72] with a sketch embedding technique [120]; the 128-bit vector is then divided into 4 non-overlapping blocks, each of which is considered a key, or visual word, for later indexing. In [121], Zhou et al. proposed a codebook-training-free framework based on scalable cascaded hashing (SCH), which conducts scalar quantization on the principal components of local descriptors in a cascaded manner to ensure the recall rate of feature matching.
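A compact sketch of PCA-based sign binarization as described for [21]: project descriptors onto leading principal components and keep only the signs of the coefficients as bits:

```python
# Sketch: PCA projection followed by sign thresholding yields one bit per
# retained principal component.
import numpy as np

def pca_sign_hash(train_desc, bits=32):
    mean = train_desc.mean(axis=0)
    cov = np.cov(train_desc - mean, rowvar=False)
    _, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    proj = vecs[:, -bits:]                 # leading eigenvectors
    def encode(desc):
        return ((desc - mean) @ proj > 0).astype(np.uint8)
    return encode

encode = pca_sign_hash(np.random.rand(2000, 128))
print(encode(np.random.rand(5, 128)).shape)  # (5, 32) binary codes
```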
# 4.3 Spatial Context Embedding

As the representation of structured visual content, visual features are correlated by spatial context in terms of orientation, scale, and key-point distance in the image plane. By including this contextual information, the discriminative capability of the visual codebook can be greatly enhanced [26]. By analogy to text phrases in information retrieval, it is feasible to generate visual phrases over visual words. In [27] [122], neighboring local features are correlated to generate high-order visual phrases, which are further refined to be more descriptive for content representation.

Many algorithms target modeling the local spatial context among local visual features. Loose spatial consistency from some spatially nearest neighbors can be imposed to filter false visual-word matches: supports are collected by checking the matched features within a search area defined by the 15 nearest neighbors [9]. Such a loose scheme, although efficient, is sensitive to image noise incurred by editing. Zhang et al. generated a contextual visual codebook by modeling the spatial context of local feature groups with a discriminant group distance metric [28]. Wang et al. proposed descriptor contextual weighting (DCW) and spatial contextual weighting (SCW) of local features in the descriptor domain and spatial domain, respectively, to upgrade the vocabulary-tree-based approach [123].
DCW down-weights less informative features based on the frequencies of descriptor quantization paths on a vocabulary tree, while SCW exploits efficient spatial contextual statistics to preserve the rich descriptive information of local features. In [124], Liu et al. built a spatial-relationship dictionary by embedding the spatial context among local features for image retrieval.

Further, the multi-modal property that multiple different features are extracted at identical key points is discussed and explored for contextual hashing [125]. In [126], geometric min-hashing constructs repeatable hash keys with loose local geometric information for a more discriminative description. In [17], Wu et al. proposed to bundle local features within MSER regions [61]. MSER regions are defined by an extremal property of the intensity function in the region and on its outer boundary, and are detected as stable regions across a threshold range in watershed-based segmentation [61]. Bundled features are compared by the number of shared visual words and the relative ordering of matched visual words. In [63], an ordinal measure (OM) feature [127] is extracted from the spatial neighborhood around local interest points; local spatial consistency verification is then conducted by checking whether the OMs of corresponding features are below a predefined threshold.
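Finally, a hedged sketch of loose spatial consistency verification in the spirit of the 15-nearest-neighbor scheme of [9]; the neighborhood size and support threshold below are illustrative:

```python
# Sketch: a tentative match survives if enough of its k spatial nearest
# neighbors in one image are themselves matched to neighbors of its counterpart.
import numpy as np
from scipy.spatial import cKDTree

def verify_matches(pts1, pts2, matches, k=15, min_support=3):
    """pts1/pts2: (N1, 2)/(N2, 2) keypoint positions; matches: list of (i, j)."""
    nn1 = cKDTree(pts1).query(pts1[[i for i, _ in matches]], k=k + 1)[1][:, 1:]
    nn2 = cKDTree(pts2).query(pts2[[j for _, j in matches]], k=k + 1)[1][:, 1:]
    match_map = dict(matches)
    kept = []
    for m, (i, j) in enumerate(matches):
        # Count neighbors of i whose matched point falls among j's neighbors.
        support = sum(1 for a in nn1[m] if match_map.get(a) in set(nn2[m]))
        if support >= min_support:
            kept.append((i, j))
    return kept

pts1, pts2 = np.random.rand(100, 2), np.random.rand(100, 2)
matches = [(i, i) for i in range(40)]            # stand-in tentative matches
print(len(verify_matches(pts1, pts2, matches)))
```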