# Modave Lectures on Applied AdS/CFT with Numerics

Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang

arXiv:1601.00257 (http://arxiv.org/pdf/1601.00257) | Categories: gr-qc (primary), hep-th | Journal ref: PoS Modave2015 (2016) 003 | Published 20160103, updated 20160106
Comment: typos corrected, clarifications made, JHEP style, 1+23 pages, 12 figures, Mathematica code available upon request

Abstract: These lecture notes are intended to serve as an introduction to applied AdS/CFT with numerics for an audience of graduate students and others with little background in the subject. The presentation begins with a poor man's review of the current status of quantum gravity, where the AdS/CFT correspondence is believed to be the well formulated quantum gravity in the anti-de Sitter space. Then we present the basic ingredients in applied AdS/CFT and introduce the relevant numerics for solving differential equations into which the bulk dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take the zero temperature holographic superfluid as a concrete example for case study. In passing, we also present some new results, which include the numerical evidence as well as an elegant analytic proof for the equality between the superfluid density and particle density, namely $\rho_s=\rho$, and the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field theory for the sound speed in the large chemical potential limit.
As we know, quantum field theory is a powerful framework for us to understand a huge range of phenomena in Nature, such as high energy physics and condensed matter physics. Although the underlying philosophies are different, they share quantum field theory as their common language. In high energy physics, the philosophy is reductionism, where the goal is to figure out the UV physics behind our effective low energy IR physics. The standard model of particle physics is believed to be an effective low energy theory. To see what really happens in the UV, we are required to go beyond the standard model by reaching a higher energy scale. This is the reason why we built the LHC in Geneva, and also the reason why the Great Collider is planned near the Great Wall in China. In condensed matter physics, on the other hand, the philosophy is emergence. Actually we have a theory of everything for condensed matter physics, namely QED, or the Schrödinger equation for electrons with Coulomb interaction among them.
Figure 1: The Penrose diagram for the global de Sitter space, where the planar de Sitter space associated with the observer located at the south pole is given by the shaded portion.
What condensed matter physicists are concerned with is how to engineer various low temperature IR fixed points, namely various phases, from such a known UV theory. Such a variety of phases gives rise to a man-made multiverse, which actually resonates with the landscape suggested by string theory.
On the other hand, general relativity tells us that gravity is geometry. Gravity is different; so subtle is gravity. The very longstanding issue in fundamental physics is trying to reconcile general relativity with quantum field theory. People like to give a name to it, called Quantum Gravity, although we have not fully succeeded along this lane. Here is a poor man's perspective into the current status of quantum gravity, depending on the asymptotic geometry of spacetime¹. The reason is twofold. First, due to the existence of the Planck scale $l_p = (\hbar G)^{\frac{1}{d-1}}$, spacetime is doomed such that one cannot define local field operators in a $d+1$ dimensional gravitational theory. Instead, the observables can live only on the boundary of spacetime. Second, it is the dependence on the asymptopia that embodies the background independence of quantum gravity.
# 2.1 De Sitter space: Meta-observables
If the spacetime is asymptotically de Sitter as
$$ds^2 = -dt^2 + l^2\cosh^2\frac{t}{l}\,d\Omega_d^2, \qquad (2.1)$$
when $t \to \pm\infty$, then by the coordinate transformation $u = 2\tan^{-1}e^{\frac{t}{l}}$, the metric becomes
$$ds^2 = \frac{l^2}{\sin^2 u}(-du^2 + d\chi^2 + \sin^2\chi\,d\Omega_{d-1}^2) \qquad (2.2)$$
¹This is a poor man's perspective because we shall try our best not to touch upon string theory, although it is evident that this perspective is well shaped by string theory in a direct or indirect way throughout these lecture notes.
with $\chi$ the polar angle for the $d$-sphere. We plot the Penrose diagram for de Sitter space in Figure 1, whence both the past and future conformal infinities $\mathscr{I}^\mp$ are spacelike. As a result, any observer can only detect and influence a portion of the whole spacetime. Moreover, any point in $\mathscr{I}^+$ is causally connected by a null geodesic to its antipodal point in $\mathscr{I}^-$ for de Sitter. In view of this, Witten has proposed the meta-observables for quantum gravity in de Sitter space, namely
$$\langle g_f|g_i\rangle = \int_{g_i}^{g_f}\mathcal{D}g\,e^{iS[g]}, \qquad (2.3)$$
with $g_f$ and $g_i$ a set of data specified on $\mathscr{I}^+$ and $\mathscr{I}^-$ respectively. Then one can construct the Hilbert space $\mathcal{H}_i$ at $\mathscr{I}^-$ for quantum gravity in de Sitter space, with the inner product $(j,i) = \langle\Theta j|i\rangle$ given by the CPT transformation $\Theta$. The Hilbert space $\mathcal{H}_f$ at $\mathscr{I}^+$ can be constructed in a similar fashion. At the perturbative level, the dimension of the Hilbert space for quantum gravity in de Sitter is infinite, which is evident from the past-future singularity of the meta-correlation functions at those points connected by the aforementioned geodesics. But it is suspected that the non-perturbative dimension of the Hilbert space is finite. This is all one can say with such meta-observables[1].
However, there are also different but more optimistic perspectives. Among others, inspired by AdS/CFT, Strominger has proposed the dS/CFT correspondence. First, with $\mathscr{I}^+$ identified with $\mathscr{I}^-$ by the above null geodesics, the dual CFT lives only on one sphere rather than two spheres. Second, instead of working with the global de Sitter space, the dS/CFT correspondence can be naturally formulated in the causal past of any given observer, where the bulk spacetime is the planar de Sitter space and the dual CFT lives on $\mathscr{I}^-$. For details, the readers are referred to Strominger's original paper as well as his Les Houches lectures[2, 3].
# 2.2 Minkowski space: S-Matrix program
The situation is much better if the spacetime is asymptotically flat. As the Penrose diagram for Minkowski space shows in Figure 2, the conformal infinity is lightlike. In this case, the only observable is the scattering amplitude, abbreviated as S-Matrix, which connects the out states at $\mathscr{I}^+$ to the in states at $\mathscr{I}^-$². One can claim to have a well defined quantum gravity in asymptotically flat space once a sensible recipe is made for the computation of the S-Matrix with gravitons. Actually, inspired by the BCFW recursion relation[4], there has been much progress achieved over the last few years along this direction by the so called S-Matrix program, in which the scattering amplitude is constructed without the local Lagrangian, resonating with the non-locality of quantum gravity[5]. Traditionally, the S-Matrix is computed by the Feynman diagram techniques, where the Feynman rules come from the local Lagrangian. But the computation becomes more and more complicated when the scattering process involves either more external legs or higher loops. While in the S-Matrix program the recipe for the
Figure 3: The Penrose diagram for the global anti-de Sitter space, where the conformal infinity $\mathscr{I}$ itself can be a spacetime on which the dynamics can live.
computation of the scattering amplitude, made out of the universal properties of the S-Matrix, such as Poincaré or BMS symmetry, unitarity, and analyticity of the S-Matrix, turns out to be far more efficient. It is expected that such an ongoing S-Matrix program will eventually lead us towards a well formulated quantum gravity in asymptotically flat space.
# 2.3 Anti-de Sitter space: AdS/CFT correspondence
The best situation is for the spacetime which is asymptotically anti-de Sitter as
$$ds^2 = \frac{l^2}{\cos^2\rho}(-dt^2 + d\rho^2 + \sin^2\rho\,d\Omega_{d-1}^2) \qquad (2.4)$$
with $\rho \in [0, \frac{\pi}{2})$. As seen from the Penrose diagram for anti-de Sitter space in Figure 3, the conformal infinity $\mathscr{I}$ is timelike in this case, where we can have a well formulated quantum theory for gravity by the AdS/CFT correspondence[6, 7, 8]. Namely, the quantum gravity in the bulk AdS$_{d+1}$ can be holographically formulated in terms of a CFT$_d$ on the boundary without gravity, and vice versa. We shall elaborate on AdS/CFT in the subsequent section. Here we would like to mention one very interesting feature of AdS/CFT, that is to say, generically we have no local Lagrangian for the dual CFT, which somehow echoes the aforementioned S-Matrix program.
# 3. Applied AdS/CFT
# 3.1 What AdS/CFT is
To be a little bit more precise about what AdS/CFT is, let us first recall the very basic object in quantum field theory, namely the generating functional, which is defined as
$$Z_d[J] = \int\mathcal{D}\psi\,e^{iS_d[\psi] + i\int d^dx\,J\mathcal{O}}. \qquad (3.1)$$
Whence one can obtain the $n$-point correlation function for the operator $\mathcal{O}$ by taking the $n$-th functional derivative of the generating functional with respect to the source $J$. For example,
$$\langle\mathcal{O}(x)\rangle = \frac{\delta Z_d}{\delta J(x)}, \qquad (3.2)$$
$$\langle\mathcal{O}(x_1)\mathcal{O}(x_2)\rangle = \frac{\delta^2 Z_d}{\delta J(x_1)\delta J(x_2)} = \frac{\delta\langle\mathcal{O}(x_1)\rangle}{\delta J(x_2)}. \qquad (3.3)$$
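As a quick sanity check of (3.2) and (3.3), consider the standard free-field example (not part of the original notes): for a Gaussian theory the generating functional can be computed in closed form, schematically
$$Z_d[J] = \exp\left(\frac{i}{2}\int d^dx\,d^dy\,J(x)G(x-y)J(y)\right),$$
so that differentiating twice with respect to $J$ and setting $J = 0$ returns the propagator, $\langle\mathcal{O}(x_1)\mathcal{O}(x_2)\rangle \propto G(x_1-x_2)$, up to normalization conventions.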
As we know, we can obtain such a generating functional by perturbative expansion using the Feynman diagram techniques for weakly coupled quantum field theory, but obviously such a perturbation method breaks down when the involved quantum field theory is strongly coupled, unless one can find its weakly coupled dual. AdS/CFT provides us with such a dual for strongly coupled quantum field theory by a classical gravitational theory with one extra dimension. So now let us turn to general relativity, where the basic object is the action, given by
$$S_{d+1} = \frac{1}{16\pi G}\int d^{d+1}x\sqrt{-g}\left(R + \frac{d(d-1)}{l^2} + \mathcal{L}_{\text{matter}}\right) \qquad (3.4)$$
for AdS gravity. Here, for the present illustration and later usage, we would like to choose the Lagrangian for the matter fields as
$$\mathcal{L}_{\text{matter}} = \frac{l^2}{Q^2}\left(-\frac{1}{4}F^{ab}F_{ab} - |D\Phi|^2 - m^2|\Phi|^2\right) \qquad (3.5)$$
with $F = dA$, $D = \nabla - iA$, and $Q$ the charge of the complex scalar field. The variation of the action gives rise to the equations of motion as follows:
$$G_{ab} - \frac{d(d-1)}{2l^2}g_{ab} = \frac{l^2}{Q^2}\left[F_{ac}F_b{}^c + 2\overline{D_a\Phi}D_b\Phi - \left(\frac{1}{4}F_{cd}F^{cd} + |D\Phi|^2 + m^2|\Phi|^2\right)g_{ab}\right], \qquad (3.6)$$
$$\nabla_aF^{ab} = i(\bar{\Phi}D^b\Phi - \Phi\overline{D^b\Phi}), \qquad (3.7)$$
$$D_aD^a\Phi - m^2\Phi = 0. \qquad (3.8)$$
Note that the equations of motion are generically second order PDEs. So to extrapolate the bulk solution from the AdS boundary, one is required to specify a pair of boundary conditions for each bulk field at the conformal boundary of AdS, which can be read off from the asymptotic behavior of the bulk fields near the AdS boundary:
$$ds^2 \to \frac{l^2}{z^2}[dz^2 + (\gamma_{\mu\nu} + t_{\mu\nu}z^d)dx^\mu dx^\nu], \qquad (3.9)$$
$$A_\mu \to a_\mu + b_\mu z^{d-2}, \qquad (3.10)$$
$$\Phi \to \phi_- z^{\Delta_-} + \phi_+ z^{\Delta_+} \qquad (3.11)$$
with $\Delta_\pm = \frac{d}{2} \pm \sqrt{\frac{d^2}{4} + m^2l^2}$³. Namely, $(\gamma_{\mu\nu}, t_{\mu\nu})$ are the boundary data for the bulk metric field, $(a_\mu, b_\mu)$ for the bulk gauge field, and $(\phi_-, \phi_+)$ for the bulk scalar field. But such pairs usually lead to singular solutions deep in the bulk. To avoid these singular solutions, one can instead specify only one boundary condition from each pair, such as $(\gamma_{\mu\nu}, a_\mu, \phi_-)$. We denote these boundary data by $J$, whose justification will be obvious later on. At the same time we also require the regularity of the desired solution in the bulk. In this sense, the regular solution is uniquely determined by the boundary data $J$. Thus the on-shell action from the regular solution will be a functional of $J$.
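The exponents $\Delta_\pm$ follow from a quick standard check (spelled out here for completeness, not part of the original text): inserting the ansatz $\Phi \sim z^\Delta$ into the scalar equation (3.8) on the pure AdS background and keeping only the leading terms near $z = 0$ yields the indicial equation
$$\Delta(\Delta - d) = m^2l^2,$$
whose two roots are precisely $\Delta_\pm = \frac{d}{2} \pm \sqrt{\frac{d^2}{4} + m^2l^2}$.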
What AdS/CFT tells us is that this on-shell action in the bulk can be identified as the generating functional for the strongly coupled quantum field theory living on the boundary, i.e.,
$$Z_d[J] = S_{d+1}[J], \qquad (3.12)$$
where apparently $J$ has a dual meaning, not only serving as the source for the boundary quantum field theory but also being the boundary data for the bulk fields. In particular, $\gamma_{\mu\nu}$ sources the operator for the boundary energy momentum tensor whose expectation value is given by (3.3) as $t_{\mu\nu}$, $a_\mu$ sources a global $U(1)$ conserved current operator whose expectation value is given as $b_\mu$, and the expectation value for the operator dual to the source $\phi_-$ is given as $\phi_+$ up to a possible proportionality coefficient. The conformal dimensions of these dual operators can be read off from (3.9)-(3.11) by making the scaling transformation $(z, x^\mu) \to (\alpha z, \alpha x^\mu)$ as $d$, $d-1$, and $\Delta_+$ individually.
³Here we are working with the axial gauge for the bulk metric and gauge fields, which can always be achieved. In addition, although the mass squared is allowed to be negative in AdS, it cannot be below the BF bound $-\frac{d^2}{4l^2}$.
Here is a caveat on the validity of (3.12). Although such a boundary/bulk duality is believed to hold in more general circumstances, (3.12) works for the large $N$ strongly coupled quantum field theory on the boundary, where $N$ and the coupling parameter of the dual quantum field theory are generically proportional to some powers of the ratios of the AdS radius to the Planck length and to the string length, respectively. In order to capture the $\frac{1}{N}$ correction to the dual quantum field theory by holography, one is required to calculate the one-loop partition function on top of the classical background solution in the bulk. On the other hand, to see the finite coupling effect in the dual quantum field theory by holography, one is required to work with a higher derivative gravity theory in the bulk. But in what follows, for simplicity we shall work exclusively with (3.12) in its applicability regime.
Among others, we would like to conclude this subsection with three important implications of AdS/CFT. First, a finite temperature quantum field theory at finite chemical potential is dual to a charged black hole in the bulk. Second, the entanglement entropy of the dual quantum field theory can be calculated holographically as the area of the bulk minimal surface anchored onto the entangling surface[11, 12, 13]. Third, the extra bulk dimension represents the renormalization group flow direction for the boundary quantum field theory, with the AdS boundary as the UV, although the renormalization scheme is supposed to be different from the conventional one implemented in quantum field theory⁴.
# 3.2 Why AdS/CFT is reliable
But why is AdS/CFT reliable? In fact, besides its explicit implementations in string theory, such as the duality between Type IIB string theory in AdS$_5 \times S^5$ and $\mathcal{N} = 4$ SYM theory on the four dimensional boundary, where some results can be computed on both sides and turn out to match each other, there exist many hints from the inside of general relativity indicating that gravity is holographic. Here we simply list some of them as follows.
• Bekenstein-Hawking's black hole entropy formula $S_{BH} = \frac{A}{4l_p^{d-1}}$[14].
• Brown-Henneaux's asymptotic symmetry analysis for three dimensional gravity[15], where the derived central charge $c = \frac{3l}{2G}$ successfully reproduces the black hole entropy by the Cardy formula for conformal field theory[16].
• Brown-York's surface tensor formulation of quasi-local energy and conserved charges[17]. Once we are brave enough to declare that this surface tensor is not only for the purpose of the bulk gravity but also for a certain system living on the boundary, we shall end up with the long wavelength limit of AdS/CFT, namely the gravity/fluid correspondence, which has been well tested[18].
On the other hand, we can also see how such an extra bulk dimension emerges from the quantum field theory perspective. In particular, inspired by Swingle's seminal work on the connection between the MERA tensor network state for quantum critical systems and AdS
⁴This implication is sometimes dubbed as RG = GR.
space[19], Qi has recently proposed an exact holographic mapping to generate the bulk Hilbert space of the same dimension from the boundary Hilbert space[20], which echoes the aforementioned renormalization group flow implication of AdS/CFT.
Keeping all of these in mind, we shall take AdS/CFT as a first principle and explore its various applications in what follows.
# 3.3 How useful AdS/CFT is
As alluded to above, AdS/CFT is naturally suited for us to address strongly coupled dynamics and non-equilibrium processes by mapping the involved hard quantum many body problems to classical few body problems. There are two approaches towards the construction of holographic models. One is called the top-down approach, where the microscopic content of the dual boundary theory is generically known because the construction originates in string theory. The other is called the bottom-up approach, which can be regarded as a kind of effective field theory with one extra dimension for the dual boundary theory.
By either approach, we can apply AdS/CFT to QCD as well as the QCD underlying the quark-gluon plasma, ending up with AdS/QCD[21, 22]. On the other hand, taking into account that there are a bunch of strongly coupled systems in condensed matter physics, such as high $T_c$ superconductors, liquid helium, and non-Fermi liquids, we can also apply AdS/CFT to condensed matter physics, ending up with AdS/CMT[23, 24, 25, 26, 27]. Note that the bulk dynamics eventually boils down to a set of differential equations, whose solutions are generically not amenable to an analytic treatment. So one of the central tasks in applied AdS/CFT is to find the numerical solutions to differential equations. In the next section, we shall provide a basic introduction to the main numerical methods for solving differential equations in applied AdS/CFT.
# 4. Numerics for Solving Differential Equations
Roughly speaking, there are three numerical schemes to solve differential equations by transforming them into algebraic equations, namely the finite difference method, the finite element method, and the spectral method. According to our experience with the numerics in applied AdS/CFT, it is favorable to make a code from scratch for each problem you are faced with. In particular, the variant of the spectral method known as the pseudo-spectral method turns out to be most efficient in solving differential equations along the space direction, where the Newton-Raphson iteration method is extensively employed if the resultant algebraic equations are non-linear. On the other hand, a finite difference method such as the Runge-Kutta method is usually used to deal with the dynamical evolution along the time direction. So now we would like to elaborate a little bit on the Newton-Raphson method, the pseudo-spectral method, as well as the Runge-Kutta method, one by one.
Figure 4: Newton-Raphson iteration map is used to find the rightmost root for a non-linear algebraic equation.
# 4.1 Newton-Raphson method
To find the desired root of a given non-linear function $f(x)$, we can start with a wisely guessed initial point $x_k$. Then, as shown in Figure 4, the Newton-Raphson iteration map takes us to the next point $x_{k+1}$ as
$$x_{k+1} = x_k - f'(x_k)^{-1}f(x_k), \qquad (4.1)$$
which is supposed to be closer to the desired root. After a finite number of iterations, we eventually end up with a good approximation to the desired root. If we are required to find the root for a group of non-linear functions $F(X)$, then the iteration map is given by
$$X_{k+1} = X_k - \left[\left(\frac{\partial F}{\partial X}\right)^{-1}F\right]\Big|_{X_k}, \qquad (4.2)$$
where the formidable Jacobian can be tamed by the Taylor expansion trick, since the expansion coefficient of the linear term is simply the Jacobian in the Taylor expansion $F(X) = F(X_0) + \frac{\partial F}{\partial X}\big|_{X_0}(X - X_0) + \cdots$.
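As a concrete illustration, here is a minimal Python sketch of the iteration (4.2). The notes themselves come with Mathematica code, so everything below (the function names, the finite-difference Jacobian, the tolerance choices) is our own illustrative choice rather than the authors' implementation.

```python
import numpy as np

def newton_raphson(F, X0, tol=1e-10, max_iter=50, eps=1e-7):
    """Solve F(X) = 0 by the iteration (4.2), with the Jacobian
    approximated column by column via finite differences."""
    X = np.asarray(X0, dtype=float)
    for _ in range(max_iter):
        FX = F(X)
        if np.max(np.abs(FX)) < tol:
            return X
        n = X.size
        J = np.empty((n, n))
        for j in range(n):          # build the Jacobian dF/dX column by column
            dX = np.zeros(n)
            dX[j] = eps
            J[:, j] = (F(X + dX) - FX) / eps
        X = X - np.linalg.solve(J, FX)   # the update (4.2)
    raise RuntimeError("Newton-Raphson failed to converge")

# example: intersect the circle x^2 + y^2 = 4 with the parabola y = x^2
root = newton_raphson(lambda X: np.array([X[0]**2 + X[1]**2 - 4.0,
                                          X[1] - X[0]**2]),
                      X0=[1.5, 1.5])
```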
# 4.2 Pseudo-spectral method
As we know, we can expand an analytic function in terms of a set of appropriate spectral functions as
$$f(x) = \sum_{n=1}^{N} C_nT_n(x) \qquad (4.3)$$
with $N$ some truncation number, depending on the numerical accuracy you want to achieve. Then the derivative of this function is given by
$$f'(x) = \sum_{n=1}^{N} C_nT'_n(x). \qquad (4.4)$$
Whence the derivatives at the collocation points can be obtained from the values of this function at these points by the following differential matrix:
$$f'(x_i) = \sum_j D_{ij}f(x_j), \qquad (4.5)$$
where the matrix $D = T'T^{-1}$ with $T_{in} = T_n(x_i)$ and $T'_{in} = T'_n(x_i)$. With this differential matrix, the differential equation in consideration can be massaged into a group of algebraic equations, from which we solve for the unknowns $f(x_i)$ by requiring that both the equation hold at the collocation points and the prescribed boundary conditions be satisfied.
This is the underlying idea of the pseudo-spectral method. Among others, we would like to point out two key advantages of the pseudo-spectral method compared to the finite difference method and the finite element method. First, one can find the interpolating function for $f(x)$ by the built-in procedure as follows:
$$f(x) = \sum_{n,i} T_n(x)T^{-1}_{ni}f(x_i). \qquad (4.6)$$
Second, the numerical error decays exponentially with the truncation number $N$, rather than with the power law decay followed by the other two methods.
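To make the construction concrete, here is a minimal Python sketch that builds the differentiation matrix $D = T'T^{-1}$ of (4.5) with a Chebyshev basis on Gauss-Lobatto points; the basis and grid are our own standard choices, not prescriptions from the notes.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_diff_matrix(N):
    """Differentiation matrix D = T' T^{-1} of (4.5) on the
    Chebyshev-Gauss-Lobatto collocation points x_i in [-1, 1]."""
    x = np.cos(np.pi * np.arange(N) / (N - 1))   # collocation points
    # T_{in} = T_n(x_i) and T'_{in} = T'_n(x_i), columns indexed by n
    T = np.stack([C.Chebyshev.basis(n)(x) for n in range(N)], axis=1)
    Tp = np.stack([C.Chebyshev.basis(n).deriv()(x) for n in range(N)], axis=1)
    return Tp @ np.linalg.inv(T), x

# sanity check: differentiate f(x) = sin(x)
D, x = cheb_diff_matrix(20)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))   # already ~1e-14 at N = 20
```

The last line illustrates the second advantage: the error of the differentiated interpolant saturates near machine precision at modest $N$.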
# 4.3 Runge-Kutta method
As mentioned before, we should employ a finite difference method to march along the time direction. But before that, we are required to massage the involved differential equation into the following ordinary differential equation
$$\dot{y} = f(y, t), \qquad (4.7)$$
which is actually the key step for one to investigate the temporal evolution in applied AdS/CFT. Once this non-trivial step is achieved, there are a bunch of finite difference schemes available for one to move forward. Among others, here we simply present the classical fourth order Runge-Kutta method as follows:
$$k_1 = f(y_i, t_i), \quad k_2 = f\left(y_i + \frac{\Delta t}{2}k_1, t_i + \frac{\Delta t}{2}\right), \quad k_3 = f\left(y_i + \frac{\Delta t}{2}k_2, t_i + \frac{\Delta t}{2}\right), \quad k_4 = f(y_i + \Delta t\,k_3, t_i + \Delta t),$$
$$t_{i+1} = t_i + \Delta t, \qquad y_{i+1} = y_i + \frac{\Delta t}{6}(k_1 + 2k_2 + 2k_3 + k_4), \qquad (4.8)$$
because it is user friendly and applicable to all the temporal evolution problems we have considered so far[28, 29, 30, 31, 32, 33]⁵.

⁵It is worthwhile to keep in mind that the accumulated numerical error is of order $O(\Delta t^4)$ for this classical Runge-Kutta method.
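For concreteness, here is a minimal Python sketch of one step of (4.8); in an actual holographic evolution $f$ would be the pseudo-spectrally discretized bulk dynamics, while the toy right-hand side below (a harmonic oscillator) is purely our own example.

```python
import numpy as np

def rk4_step(f, y, t, dt):
    """One step of the classical fourth order Runge-Kutta scheme (4.8)."""
    k1 = f(y, t)
    k2 = f(y + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(y + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(y + dt * k3, t + dt)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4), t + dt

# example: evolve the harmonic oscillator with y = (position, velocity)
y, t, dt = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(1000):
    y, t = rk4_step(lambda y, t: np.array([y[1], -y[0]]), y, t, dt)
```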
# 5. Holographic Superfluid at Zero Temperature
In this section, we would like to take the zero temperature holographic superfluid as a concrete example to demonstrate how to apply AdS/CFT with numerics. In due course, not only shall we introduce some relevant concepts, but we shall also present some new results[34].
The action for the simplest model of a holographic superfluid is just given by (3.4). To make our life easier, we shall work in the probe limit, namely the back reaction of the matter fields onto the metric is neglected, which can be achieved by taking the large $Q$ limit. Thus we can put the matter fields on top of the background which is the solution to the vacuum Einstein equation with a negative cosmological constant $\Lambda = -\frac{d(d-1)}{2l^2}$. For simplicity, we shall focus only on the zero temperature holographic superfluid, which can be implemented by choosing the AdS soliton as the bulk geometry[35], i.e.,
$$ds^2 = \frac{l^2}{z^2}\left[-dt^2 + dx^2 + \frac{dz^2}{f(z)} + f(z)d\theta^2\right]. \qquad (5.1)$$
Here $f(z) = 1 - (\frac{z}{z_0})^d$, with $z = z_0$ the tip where our geometry caps off and $z = 0$ the AdS boundary. To guarantee a smooth geometry at the tip, we are required to impose the periodicity $\frac{4\pi z_0}{d}$ onto the $\theta$ coordinate. The inverse of this periodicity, set by $z_0$, is usually interpreted as the confining scale for the dual boundary theory.
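The periodicity can be checked by the standard cone-avoidance argument (spelled out here for convenience, assuming the stated $f(z)$): near the tip, $f(z) \approx f'(z_0)(z - z_0) = \frac{d}{z_0}(z_0 - z)$, so in terms of the proper radial distance $\rho \propto \sqrt{z_0 - z}$ the $(z, \theta)$ part of the metric takes the form
$$d\rho^2 + \rho^2\left(\frac{d}{2z_0}\right)^2 d\theta^2,$$
which is free of a conical singularity precisely when $\frac{d}{2z_0}\theta$ has period $2\pi$, i.e. when $\theta \sim \theta + \frac{4\pi z_0}{d}$.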
1601.00257 | 33 | In what follows, we will take the units in which l = 1, 16ÏGQ2 = 1, and z0 = 1. In addition, we shall focus exclusively on the action of matter ï¬elds because the leading Q0 contribution has been frozen by the above ï¬xed background geometry.
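Returning to the smoothness condition at the tip quoted above, the periodicity follows from a standard expansion of $f$ near $z_0$:

$$ f(z) \simeq \frac{d}{z_0}(z_0 - z), \qquad r \equiv 2\sqrt{\frac{z_0(z_0 - z)}{d}} \;\Longrightarrow\; \frac{dz^2}{f(z)} + f(z)\,d\theta^2 = dr^2 + r^2\Big(\frac{d}{2z_0}\Big)^2 d\theta^2, $$

so the absence of a conical singularity at $r = 0$ requires $\theta \sim \theta + \frac{4\pi z_0}{d}$, which reduces to $\frac{4\pi}{3}$ in our units with $d = 3$ and $z_0 = 1$.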
# 5.1 Variation of action, Boundary terms, and Choice of ensemble
The variational principle gives rise to the equations of motion if and only if the boundary terms vanish in the variation of action. For our model, the variation of action is given by
$$ \delta S = \int d^4x\,\sqrt{-g}\,\big[\nabla_a F^{ab} + i(\bar{\Phi}D^b\Phi - \Phi\overline{D^b\Phi})\big]\delta A_b - \int d^3x\,\sqrt{-h}\,n_a F^{ab}\,\delta A_b + \Big[\Big(\int d^4x\,\sqrt{-g}\,(D_aD^a - m^2)\bar{\Phi}\,\delta\Phi - \int d^3x\,\sqrt{-h}\,n_a\overline{D^a\Phi}\,\delta\Phi\Big) + C.C.\Big]. \quad (5.2) $$
To make the boundary terms vanish, we can fix $A_b$ and $\Phi$ on the boundary. Fixing $A_b$ amounts to saying that we are working with the grand canonical ensemble. In order to work with the canonical ensemble, where $\sqrt{-h}\,n_a F^{ab}$ is fixed instead, we are required to add the additional boundary term $\int d^3x\,\sqrt{-h}\,n_a F^{ab}A_b$ to the action, which is essentially a Legendre transformation. On the other hand, fixing $\phi_-$ gives rise to the standard quantization.

5It is worthwhile to keep in mind that the accumulated numerical error is of order O(Δt⁴) for this classical Runge-Kutta method.
We can also have an alternative quantization by fixing $\phi_+$ when $-\frac{d^2}{4} \le m^2 \le -\frac{d^2}{4} + 1$[37]. In what follows, we shall restrict our attention to the grand canonical ensemble and the standard quantization for the case of $d = 3$ and $m^2 = -2$, whereby $\Delta_- = 1$ and $\Delta_+ = 2$.
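Recall that the two falloffs follow from the standard AdS/CFT mass-dimension relation:

$$ \Delta_\pm = \frac{d}{2} \pm \sqrt{\frac{d^2}{4} + m^2} \;\xrightarrow{\;d=3,\ m^2=-2\;}\; \Delta_- = 1, \quad \Delta_+ = 2, $$

with the alternative quantization available precisely in the window $-\frac{d^2}{4} \le m^2 \le -\frac{d^2}{4} + 1$, where both modes are normalizable.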
# 5.2 Asymptotic expansion, Counter terms, and Holographic renormalization
What we care about is the on-shell action, which can be shown to have IR divergence generically in the bulk by the asymptotic expansion near the AdS boundary, corresponding to the UV divergence for the dual boundary theory. The procedure to make the on-shell action finite by adding some appropriate counter terms is called holographic renormalization[38]. For our case, the on-shell action is given by

$$ S_{on\text{-}shell} = \frac{1}{2}\Big[\int d^4x\,\sqrt{-g}\,(\nabla_a F^{ab})A_b - \int d^3x\,\sqrt{-h}\,n_a F^{ab}A_b\Big] + \frac{1}{2}\Big[\Big(\int d^4x\,\sqrt{-g}\,\bar{\Phi}(D_aD^a - m^2)\Phi - \int d^3x\,\sqrt{-h}\,n_a\overline{D^a\Phi}\,\Phi\Big) + C.C.\Big] = -\frac{1}{2}\Big[\int d^4x\,\sqrt{-g}\,i(\bar{\Phi}D^b\Phi - \Phi\overline{D^b\Phi})A_b - \int d^3x\,\sqrt{-h}\,n_a F^{ab}A_b\Big] - \frac{1}{2}\Big(\int d^3x\,\sqrt{-h}\,n_a\overline{D^a\Phi}\,\Phi + C.C.\Big). \quad (5.3) $$
By the asymptotic expansion in (3.10) and (3.11), the divergence comes only from the last two boundary terms and can be read off as $\frac{|\phi_-|^2}{z}$. So the holographic renormalization can be readily achieved by adding the boundary term $-\int d^3x\,\sqrt{-h}\,|\Phi|^2$ to the original action. Whence we have
$$ j^\mu = \frac{\delta S_{ren}}{\delta a_\mu}, \qquad \langle O\rangle = \frac{\delta S_{ren}}{\delta\bar{\phi}_-}, \quad (5.4) $$
where $j^\mu$ corresponds to the conserved particle current and the expectation value $\langle O\rangle$ for the scalar operator is interpreted as the condensate order parameter of superfluid. If this scalar operator acquires a nonzero expectation value spontaneously in the situation where the source is turned off, the boundary system is driven into a superfluid phase.

# 5.3 Background solution, Free energy, and Phase transition
With the assumption that the non-vanishing bulk matter fields ($\Phi = z\psi$, $A_t$, $A_x$) do not depend on the coordinate $\theta$, the equations of motion can be explicitly written as
6Note that the outward normal vector is given by $n^a = -z(\frac{\partial}{\partial z})^a$.
$$
\begin{aligned}
0 &= \partial_t^2\psi + \big(z + A_x^2 - A_t^2 + i\,\partial_xA_x - i\,\partial_tA_t\big)\psi - 2iA_t\partial_t\psi + 2iA_x\partial_x\psi - \partial_x^2\psi + 3z^2\partial_z\psi + (z^3-1)\partial_z^2\psi, \quad (5.5)\\
0 &= \partial_t^2A_x - \partial_t\partial_xA_t - i(\psi\partial_x\bar{\psi} - \bar{\psi}\partial_x\psi) + 2A_x\psi\bar{\psi} + 3z^2\partial_zA_x + (z^3-1)\partial_z^2A_x, \quad (5.6)\\
0 &= (z^3-1)\partial_z^2A_t + 3z^2\partial_zA_t - \partial_x^2A_t + \partial_t\partial_xA_x + 2\bar{\psi}\psi A_t + i(\bar{\psi}\partial_t\psi - \psi\partial_t\bar{\psi}), \quad (5.7)\\
0 &= \partial_t\partial_zA_t + i(\psi\partial_z\bar{\psi} - \bar{\psi}\partial_z\psi) - \partial_z\partial_xA_x, \quad (5.8)
\end{aligned}
$$
where the third one is the constraint equation and the last one reduces to the conserved equation for the boundary current when evaluated at the AdS boundary, i.e.,
$$ \partial_t\rho = -\partial_x j^x. \quad (5.9) $$
To specialize into the homogeneous phase diagram for our holographic model, we further make the following ansatz for our non-vanishing bulk matter ï¬elds
$$ \psi = \psi(z), \qquad A_t = A_t(z). \quad (5.10) $$
Then the equations of motion for the static solution reduce to
$$
\begin{aligned}
0 &= 3z^2\partial_z\psi + (z^3-1)\partial_z^2\psi + (z - A_t^2)\psi, \quad (5.11)\\
0 &= 2A_t\psi\bar{\psi} + 3z^2\partial_zA_t + (z^3-1)\partial_z^2A_t, \quad (5.12)\\
0 &= \psi\partial_z\bar{\psi} - \bar{\psi}\partial_z\psi, \quad (5.13)
\end{aligned}
$$
where the last equation implies that we can always choose a gauge to make $\psi$ real. It is not hard to see the above equations of motion have a trivial solution
$$ \psi = 0, \qquad A_t = \mu, \quad (5.14) $$

which corresponds to the vacuum phase with zero particle density. On the other hand, to obtain the non-trivial solution dual to the superfluid phase, we are required to resort to the pseudo-spectral method. As a demonstration, we here plot the nontrivial profile for $\psi$ and $A_t$ at $\mu = 2$ in Figure 5. The variation of particle density and condensate with respect to the chemical potential is plotted in Figure 6, which indicates that the phase transition from the vacuum to a superfluid occurs at $\mu_c = 1.715$. It is noteworthy that such a phenomenon is reminiscent of the recently observed quantum critical behavior of ultra-cold cesium atoms in an optical lattice across the vacuum to superfluid transition by tuning the chemical potential[36]. Moreover, the compactified dimension in the AdS soliton background can be naturally identified as the reduced dimension in optical lattices by the very steep harmonic potential, as both mechanisms make the effective dimension of the system in consideration reduced in the low energy regime.
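To make the pseudo-spectral recipe concrete, here is a minimal Python sketch for solving (5.11)-(5.12). The notes' own code is in Mathematica (available from the authors upon request), so this is only an illustrative translation; the grid size, the initial guess, and the use of scipy's fsolve in place of a hand-rolled Newton iteration are our own arbitrary choices.

```python
# Pseudo-spectral background solver for the zero temperature holographic
# superfluid, assuming d = 3, z0 = 1 and the source-free standard quantization.
import numpy as np
from scipy.optimize import fsolve

def cheb(N):
    # Chebyshev differentiation matrix on [-1, 1] (Trefethen's recipe)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

N = 24
D, x = cheb(N)
z = (1.0 - x) / 2.0      # map [-1, 1] -> [0, 1]; z[0] = 0 is the AdS boundary
Dz = -2.0 * D            # chain rule for the affine map
Dz2 = Dz @ Dz

def residual(u, mu):
    psi, At = u[:N + 1], u[N + 1:]
    # interior rows are just (5.11) and (5.12); at the tip z = 1 these rows
    # degenerate automatically to the regularity conditions since z^3 - 1 = 0
    r1 = (z**3 - 1) * (Dz2 @ psi) + 3 * z**2 * (Dz @ psi) + (z - At**2) * psi
    r2 = (z**3 - 1) * (Dz2 @ At) + 3 * z**2 * (Dz @ At) + 2 * psi**2 * At
    r1[0] = psi[0]       # source-free condition psi(0) = 0
    r2[0] = At[0] - mu   # chemical potential At(0) = mu
    return np.concatenate([r1, r2])

mu = 2.0
# a guess biased toward the broken phase; below mu_c Newton falls back to psi = 0
u0 = np.concatenate([2.0 * z * (1 - z), mu * (1 - z / 2)])
u = fsolve(residual, u0, args=(mu,))
psi, At = u[:N + 1], u[N + 1:]
rho = -(Dz @ At)[0]          # particle density from At = mu - rho z + ...
condensate = (Dz @ psi)[0]   # <O> from psi = phi_- + phi_+ z with phi_- = 0
print(f"mu = {mu}: rho = {rho:.4f}, <O> = {condensate:.4f}")
```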
Figure 5: The bulk profile for the scalar field and time component of gauge field at the chemical potential µ = 2.
Figure 6: The variation of particle density and condensate with respect to the chemical potential, where we see the second order quantum phase transition take place at µc = 1.715.
On the other hand, note that the particle density should coincide with the superfluid density, since what we have here is a zero temperature superfluid where the normal fluid component should disappear. As we will show later on by the linear response theory, this is actually the case.
But to make sure that Figure 6 represents the genuine phase diagram for our holographic model, we are required to check whether the corresponding free energy density is the lowest in the grand canonical ensemble. By holography, the free energy density can be obtained from the renormalized on-shell Lagrangian of matter fields as follows7
$$ F = -\frac{1}{2}\Big[\int_0^1 dz\,\sqrt{-g}\,i(\bar{\Phi}D^t\Phi - \Phi\overline{D^t\Phi})A_t - \sqrt{-h}\,n_a F^{ab}A_b\big|_{z=0}\Big]. \quad (5.15) $$

Figure 7: The difference of free energy density for the superfluid phase from that for the vacuum phase.
As shown in Figure 7, the free energy density for the superfluid phase is indeed lower compared to the vacuum phase when the chemical potential is greater than the critical value. So we are done.
# 5.4 Linear response theory, Optical conductivity, and Superfluid density
Now let us set up the linear response theory for the later calculation of the optical conductivity of our holographic model. To achieve this, we first decompose the field $\psi$ into its real and imaginary parts as
$$ \psi = \psi_r + i\psi_i, \quad (5.16) $$
and assume that the perturbation bulk fields take the following form
$$ \delta\psi_r = \delta\psi_r(z)e^{-i\omega t + iqx}, \quad \delta\psi_i = \delta\psi_i(z)e^{-i\omega t + iqx}, \quad \delta A_t = \delta A_t(z)e^{-i\omega t + iqx}, \quad \delta A_x = \delta A_x(z)e^{-i\omega t + iqx}, \quad (5.17) $$

since the background solution is static and homogeneous. With this, the perturbation equations can be simplified as

$$
\begin{aligned}
0 &= -\omega^2\delta\psi_r - 2i\omega A_t\,\delta\psi_i - 2A_t\psi_r\,\delta A_t + (z + q^2 - A_t^2)\delta\psi_r + 3z^2\partial_z\delta\psi_r + (z^3-1)\partial_z^2\delta\psi_r, \quad (5.18)\\
0 &= -\omega^2\delta\psi_i + 2i\omega A_t\,\delta\psi_r + i\omega\psi_r\,\delta A_t + iq\psi_r\,\delta A_x + (z + q^2 - A_t^2)\delta\psi_i + 3z^2\partial_z\delta\psi_i + (z^3-1)\partial_z^2\delta\psi_i, \quad (5.19)
\end{aligned}
$$
$$
\begin{aligned}
0 &= -\omega^2\delta A_x - \omega q\,\delta A_t + 3z^2\partial_z\delta A_x + (z^3-1)\partial_z^2\delta A_x + 2\psi_r^2\,\delta A_x - 2iq\psi_r\,\delta\psi_i, \quad (5.20)\\
0 &= (z^3-1)\partial_z^2\delta A_t + 3z^2\partial_z\delta A_t + q^2\delta A_t + \omega q\,\delta A_x + 2\psi_r^2\,\delta A_t + 4A_t\psi_r\,\delta\psi_r + 2i\omega\psi_r\,\delta\psi_i, \quad (5.21)\\
0 &= -i\omega\partial_z\delta A_t - iq\partial_z\delta A_x - 2(\partial_z\psi_r\,\delta\psi_i - \psi_r\partial_z\delta\psi_i), \quad (5.22)
\end{aligned}
$$
where we have used $\psi_i = 0$ for the background solution.
Note that the gauge transformation
$$ A \to A + \nabla\theta, \qquad \psi \to \psi e^{i\theta} \quad (5.23) $$
with
$$ \theta = \frac{1}{i}\lambda e^{-i\omega t + iqx} \quad (5.24) $$
induces a spurious solution to the above perturbation equations as
$$ \delta A_t = -\lambda\omega, \qquad \delta A_x = \lambda q, \qquad \delta\psi = \lambda\psi. \quad (5.25) $$
We can remove such a redundancy by requiring $\delta A_t = 0$ at the AdS boundary8. In addition, $\delta\psi$ will also be set to zero at the AdS boundary later on. On the other hand, taking into account the fact that the perturbation equation (5.22) will be automatically satisfied in the whole bulk once the other perturbations are satisfied9, we can forget about (5.22) from now on. That is to say, we can employ the pseudo-spectral method to obtain the desired numerical solution by combining the rest of the perturbation equations with the aforementioned boundary conditions, as well as the other boundary conditions at the AdS boundary, depending on the specific problem we want to solve.
In particular, to calculate the optical conductivity for our holographic model, we can simply focus on the q = 0 mode and further impose $\delta A_x = 1$ at the AdS boundary. Then the optical conductivity can be extracted by holography as

$$ \sigma(\omega) = \frac{\partial_z\delta A_x|_{z=0}}{i\omega} \quad (5.26) $$
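A minimal continuation of the earlier background sketch, assuming its grid (z, Dz, Dz2, N) and background psi are in scope: at q = 0 the δA_x equation decouples, so σ(ω) follows from a single linear solve per frequency.

```python
# Sketch: optical conductivity at q = 0 from the decoupled delta A_x equation.
# (5.20) at q = 0 reads: 0 = (z^3-1) dAx'' + 3 z^2 dAx' + (2 psi^2 - w^2) dAx.
def sigma(w, psi):
    M = (np.diag(z**3 - 1) @ Dz2 + np.diag(3 * z**2) @ Dz
         + np.diag(2 * psi**2 - w**2)).astype(complex)
    b = np.zeros(N + 1, dtype=complex)
    M[0, :] = 0.0; M[0, 0] = 1.0; b[0] = 1.0   # unit source delta A_x(0) = 1
    a = np.linalg.solve(M, b)                   # tip row degenerates to regularity
    return (Dz @ a)[0] / (1j * w)               # eq. (5.26)

for w in (0.2, 0.5, 1.0):
    print(w, sigma(w, psi))
```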
for any positive frequency ω10. According to the perturbation equations, the whole calculation is much simplified because $\delta A_x$ decouples from the other perturbation bulk fields. We simply plot the imaginary part of the optical conductivity in Figure 8 for both the vacuum and superfluid phases, because the real part vanishes due to the reality of the perturbation equation and boundary condition for $\delta A_x$. As it should be the case, the DC conductivity vanishes for the vacuum phase, but diverges for the superfluid phase due to the $\frac{1}{\omega}$ behavior of the imaginary part of the optical conductivity, by the Kramers-Kronig relation
$$ \mathrm{Im}[\sigma(\omega)] = \frac{1}{\pi}\,\mathcal{P}\int_{-\infty}^{\infty} d\omega'\,\frac{\mathrm{Re}[\sigma(\omega')]}{\omega' - \omega}. \quad (5.27) $$
Furthermore, according to the hydrodynamical description of superfluid, the superfluid density $\rho_s$ can be obtained by fitting this zero pole as $\frac{\rho_s}{\mu\omega}$ [39, 40, 41]. As expected, our numerics shows that the resultant superfluid density is exactly the same as the particle density within our numerical accuracy.
Figure 8: The left panel is the imaginary part of optical conductivity for the vacuum phase, and the right panel is for the superfluid phase at µ = 6.5.
The other poles correspond to the gapped normal modes for $\delta A_x$, which we are not interested in since we are focusing on the low energy physics.
Let us come back to the equality between the particle density and superfluid density. Although this numerical result is 100 percent reasonable from the physical perspective, it is highly non-trivial in the sense that the superfluid density comes from the linear response theory while the particle density is a quantity associated with the equilibrium state. So it is better to have an analytic understanding for this remarkable equality. Here we would like to develop an elegant proof for this equality by a boost trick. To this end, we are first required to realize $\rho_s = -\mu\,\partial_z\delta A_x|_{z=0}$ with $\omega = 0$. Such an $\omega = 0$ perturbation can actually be implemented by a boost
$$ t = \frac{1}{\sqrt{1-v^2}}(t' - vx'), \qquad x = \frac{1}{\sqrt{1-v^2}}(x' - vt') \quad (5.28) $$
acting on the superfluid phase. Note that the background metric is invariant under such a boost. As a result, we end up with a new non-trivial solution as follows

$$ \psi' = \psi, \qquad A'_t = \frac{1}{\sqrt{1-v^2}}A_t, \qquad A'_x = \frac{-v}{\sqrt{1-v^2}}A_t. \quad (5.29) $$
We expand this solution up to the linear order in v as
$$ \psi' = \psi, \qquad A'_t = A_t, \qquad A'_x = -vA_t, \quad (5.30) $$
which means that the linear perturbation $\delta A_x$ is actually proportional to the background solution $A_t$. So we have $\rho_s = \rho$ immediately.
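To make the last step explicit, one may normalize the induced perturbation to unit source at the AdS boundary:

$$ \delta\hat{A}_x \equiv \frac{\delta A_x}{\delta A_x|_{z=0}} = \frac{A_t}{\mu} \;\Longrightarrow\; \rho_s = -\mu\,\partial_z\delta\hat{A}_x\big|_{z=0} = -\partial_zA_t\big|_{z=0} = \rho, $$

where the boundary expansion $A_t = \mu - \rho z + \cdots$ has been used in the conventions adopted here.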
8The only exception is the ω = 0 case, which can always be separately managed if necessary. 9This result comes from the following two facts. One is related to the Bianchi identity, $0 = \nabla_a v^a = \frac{z^4}{\sqrt{-g}}\,\partial_\mu\big(\frac{\sqrt{-g}}{z^4}\,v^\mu\big)$, which holds if the rest of the equations of motion do. The other is special to our holographic model, in which the readers are encouraged to show that the z component of the Maxwell equation turns out to be satisfied automatically at z = 1 if the rest of the equations hold there.
10Note that $\sigma(-\bar{\omega}) = \overline{\sigma(\omega)}$, so we focus only on the positive frequency here.
Figure 9: The density plot of $|\det{}'[\mathcal{L}(\omega)]/\det[\mathcal{L}(\omega)]|$ with q = 0.3 for the superfluid phase at µ = 6.5. The normal modes can be identified by the peaks, where the red one denotes the hydrodynamic normal mode ω0 = 0.209.
Figure 10: The spectral plot of $\ln|\delta\hat{\psi}_i(\omega, 1)|$ with q = 0.3 for the superfluid phase at µ = 6.5, where the initial data are chosen as $\delta\psi_i = z$ with all the other perturbations turned off. The normal modes can be identified by the peaks, whose locations are the same as those by the frequency domain analysis within our numerical accuracy.
# 5.5 Time domain analysis, Normal modes, and Sound speed
In what follows we shall use linear response theory to calculate the speed of sound by focusing solely on the hydrodynamic spectrum of normal modes of the gapless Goldstone from the spontaneous symmetry breaking, which is obviously absent from the vacuum phase. As such, the perturbation fields are required to have Dirichlet boundary conditions at the AdS boundary. Then we cast the linear perturbation equations and boundary conditions into the form $\mathcal{L}(\omega)u = 0$, with $u$ the perturbation fields evaluated at the grid points by the pseudo-spectral method. The normal modes are obtained by the condition $\det[\mathcal{L}(\omega)] = 0$, and can be further identified by the density plot of $|\det{}'[\mathcal{L}(\omega)]/\det[\mathcal{L}(\omega)]|$, with the prime the derivative with respect to $\omega$. We demonstrate such a density plot in Figure 9, where the hydrodynamic mode is simply the closest mode to the origin, marked in red.
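A sketch of how one may assemble $\mathcal{L}(\omega)$ by the pseudo-spectral method, assuming the grid and the background (psi, At) of the earlier snippets, recomputed at the desired chemical potential, and the perturbation equations (5.18)-(5.21) as reconstructed above. Here the normal modes show up as minima of $|\det\mathcal{L}(\omega)|$ along the real $\omega$ axis, to be refined afterwards with a root finder.

```python
# Frequency domain analysis: build L(w) for (d psi_r, d psi_i, d A_t, d A_x)
# with Dirichlet (source-free) rows at z = 0; tip rows degenerate automatically.
def Lmat(w, q, psi, At):
    I = np.eye(N + 1)
    K = np.diag(z**3 - 1) @ Dz2 + np.diag(3 * z**2) @ Dz   # common radial part
    S = K + np.diag(z + q**2 - At**2)                       # scalar sector
    L = np.block([
        [S - w**2 * I, -2j * w * np.diag(At), -2 * np.diag(At * psi), 0 * I],
        [2j * w * np.diag(At), S - w**2 * I, 1j * w * np.diag(psi), 1j * q * np.diag(psi)],
        [4 * np.diag(At * psi), 2j * w * np.diag(psi), K + q**2 * I + 2 * np.diag(psi**2), w * q * I],
        [0 * I, -2j * q * np.diag(psi), -w * q * I, K + 2 * np.diag(psi**2) - w**2 * I],
    ])
    for k in range(4):
        i = k * (N + 1)
        L[i, :] = 0.0; L[i, i] = 1.0
    return L

q = 0.3
ws = np.linspace(0.05, 1.5, 300)
logdets = [np.linalg.slogdet(Lmat(w, q, psi, At))[1] for w in ws]
print("candidate normal modes near:", ws[np.argsort(logdets)[:5]])
```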
Figure 11: The dispersion relation for the gapless Goldstone mode in the superfluid phase at µ = 6.5, where the sound speed $v_s = 0.697$ is obtained by fitting the long wave modes with $\omega_0 = v_s q$.
Figure 12: The variation of sound speed with respect to the chemical potential. When the chemical potential is much larger than the confining scale, the conformality is restored and the sound speed approaches the predicted value $\frac{1}{\sqrt{2}}$.
Besides such a frequency domain analysis of the spectrum of normal modes, there is an alternative called the time domain analysis, which we would like to elaborate on below. We first cast the equations of motion into the following Hamiltonian formalism
$$
\begin{aligned}
\partial_t\psi &= iA_t\psi + P, \quad (5.31)\\
\partial_tP &= iA_tP - (z + A_x^2 + i\,\partial_xA_x)\psi - 2iA_x\partial_x\psi + \partial_x^2\psi - 3z^2\partial_z\psi + (1-z^3)\partial_z^2\psi, \quad (5.32)\\
\partial_tA_x &= \Pi_x + \partial_xA_t, \quad (5.33)\\
\partial_t\Pi_x &= i(\psi\partial_x\bar{\psi} - \bar{\psi}\partial_x\psi) - 2A_x\psi\bar{\psi} - 3z^2\partial_zA_x + (1-z^3)\partial_z^2A_x, \quad (5.34)
\end{aligned}
$$

where $A_t$ is determined by the constraint equation descending from the t component of the Maxwell equation,

$$ 0 = (z^3-1)\partial_z^2A_t + 3z^2\partial_zA_t + \partial_x\Pi_x + i(\bar{\psi}P - \psi\bar{P}). \quad (5.35) $$

Then the linear perturbation equations on top of the superfluid phase are given by
$$
\begin{aligned}
\partial_t\delta\psi_r &= -A_t\,\delta\psi_i + \delta P_r, \quad (5.36)\\
\partial_t\delta\psi_i &= \psi_r\,\delta A_t + A_t\,\delta\psi_r + \delta P_i, \quad (5.37)\\
\partial_t\delta P_r &= A_t\psi_r\,\delta A_t - A_t\,\delta P_i - (z+q^2)\delta\psi_r - 3z^2\partial_z\delta\psi_r + (1-z^3)\partial_z^2\delta\psi_r, \quad (5.38)\\
\partial_t\delta P_i &= -iq\psi_r\,\delta A_x + A_t\,\delta P_r - (z+q^2)\delta\psi_i - 3z^2\partial_z\delta\psi_i + (1-z^3)\partial_z^2\delta\psi_i, \quad (5.39)\\
\partial_t\delta A_x &= \delta\Pi_x + iq\,\delta A_t, \quad (5.40)\\
\partial_t\delta\Pi_x &= 2iq\psi_r\,\delta\psi_i - 2\psi_r^2\,\delta A_x - 3z^2\partial_z\delta A_x + (1-z^3)\partial_z^2\delta A_x, \quad (5.41)\\
0 &= (z^3-1)\partial_z^2\delta A_t + 3z^2\partial_z\delta A_t + iq\,\delta\Pi_x - 2\psi_r\,\delta P_i + 2A_t\psi_r\,\delta\psi_r, \quad (5.43)\\
\partial_t\partial_z\delta A_t &= 2\partial_z\psi_r\,\delta\psi_i - 2\psi_r\partial_z\delta\psi_i + iq\,\partial_z\delta A_x. \quad (5.44)
\end{aligned}
$$

As before, using the source free boundary conditions for all the perturbation fields, we can obtain the temporal evolution of the perturbation fields for any given initial data by the Runge-Kutta method, where $\delta A_t$ is solved by the constraint equation (5.43). The normal modes can then be identified by the peaks in the Fourier transformation of the evolving data. We demonstrate such a spectral plot in Figure 10. As expected, such a time domain analysis gives rise to the same result for the locations of normal modes as that by the frequency domain analysis.
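The time domain workflow can be sketched as follows, assuming a right-hand side rhs that implements (5.36)-(5.41) with $\delta A_t$ eliminated through the constraint (5.43) at every step. Since building that operator repeats the previous snippets, a toy two-mode stand-in is used here so the snippet runs by itself; the stand-in frequencies are chosen to mimic the modes quoted in Figure 9.

```python
# Classical RK4 stepper plus FFT peak finding for the time domain analysis.
import numpy as np

def rk4_step(f, u, dt):
    k1 = f(u); k2 = f(u + dt * k1 / 2); k3 = f(u + dt * k2 / 2); k4 = f(u + dt * k3)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def rhs(u):  # stand-in: two normal modes, the hydrodynamic one at w = 0.209
    A = np.array([[1j * 0.209, 0], [0, 1j * 0.83]])
    return A @ u

dt, steps = 0.1, 4096
u = np.array([1.0 + 0j, 0.5 + 0j]); signal = []
for _ in range(steps):
    u = rk4_step(rhs, u, dt)
    signal.append(u[0] + u[1])   # monitor, e.g., delta psi_i at the tip z = 1
freqs = np.fft.fftfreq(steps, d=dt) * 2 * np.pi
spectrum = np.abs(np.fft.fft(signal))
peak = freqs[np.argmax(spectrum[: steps // 2])]
print(f"dominant normal mode near w = {peak:.3f}")
```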
Then the dispersion relation for the gapless Goldstone can be obtained and plotted in Figure 11, whereby the sound speed $v_s$ can be obtained by the fitting formula $\omega_0 = v_s q$. As shown in Figure 12, the sound speed increases with the chemical potential and saturates to the value $\frac{1}{\sqrt{2}}$ predicted by conformal field theory when the chemical potential is much larger than the confining scale [39, 40, 41], which is reasonable since it is believed that the conformality is restored in this limit.
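Extracting $v_s$ is then an elementary least squares fit through the origin; the sample points below are synthetic placeholders mimicking Figure 11 ($v_s \approx 0.697$ at µ = 6.5) rather than actual data.

```python
# Fit w0 = v_s q through the origin to the measured hydrodynamic mode locations.
import numpy as np

q = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
w0 = 0.697 * q                       # in practice: the measured normal modes
v_s = np.dot(q, w0) / np.dot(q, q)   # slope of the best fit line through origin
print(f"v_s = {v_s:.3f}")
```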
# 6. Concluding Remarks
Like any other unification in physics, AdS/CFT correspondence has proven to be a unique tool for one to address various universal behaviors of near-equilibrium as well as far-from-equilibrium dynamics for a variety of strongly coupled systems, which otherwise would be hard to attack. In the course of such applications, numerical computation has been playing an increasingly important role: not only can numerics leave us with conjectures to prove analytically and patterns to understand analytically, but it also brings us to regimes where no analytic treatment is available at all.
In these lecture notes, we have touched only upon the very basics for the numerics in applied AdS/CFT. In addition, we work only with the probe limit in the concrete example we make use of to demonstrate how to apply AdS/CFT with numerics. The situation will become a little bit involved when the back reaction is taken into account. Regarding this, the readers are suggested to refer to [42] to see how to obtain the stationary inhomogeneous solutions to the fully back reacted Einstein equation by the Einstein-DeTurck method. On the other hand, the readers are recommended to refer to [43] to see how to evolve the fully back reacted dynamics, where with a black hole as the initial data it turns out that the Eddington like coordinates are preferred to the Schwarzschild like coordinates.

# Acknowledgments

H.Z. would like to thank the organizers of the Eleventh International Modave Summer School on Mathematical Physics held in Modave, Belgium, September 2015, where the lectures on which these notes are based were given. He is indebted to Nabil Iqbal for his valuable discussions at the summer school. H.Z. would also like to thank the organizers of the 2015 International School on Numerical Relativity and Gravitational Waves held in Daejeon, Korea, July 2015, where these lectures were geared to an audience mainly from the general relativity and gravity community. He is grateful to Keun-Young Kim, Kyung Kiu Kim, Miok Park, and Sang-Jin Sin for the enjoyable conversations during the school. H.Z. is also grateful to Ben Craps and Alex Sevrin for the fantastic infrastructure they provide at the HEP group of VUB and the very freedom as well as various opportunities they offer to him. M.G. is partially supported by NSFC with Grant Nos. 11235003, 11375026 and NCET-12-0054. C.N. is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2014R1A1A1003220) and the 2015 GIST Grant for the FARE Project (Further Advancement of Research and Education at GIST College). Y.T. is partially supported by NSFC with Grant No. 11475179. H.Z. is supported in part by the Belgian Federal Science Policy Office through the Interuniversity Attraction Pole P7/37, by FWO-Vlaanderen through the project G020714N, and by the Vrije Universiteit Brussel through the Strategic Research Program "High-Energy Physics". He is also an individual FWO Fellow supported by 12G3515N.
# References
[1] E. Witten, arXiv:hep-th/0106109.
[2] A. Strominger, arXiv:hep-th/0106113.
[3] M. Spradlin, A. Strominger, and A. Volovich, arXiv:hep-th/0110007.
[4] R. Britto, F. Cachazo, B. Feng, and E. Witten, Phys. Rev. Lett. 94, 181602(2005).
[5] N. Arkani-Hamed, F. Cachazo, and J. Kaplan, JHEP 1009, 016(2010).
[6] J. Maldacena, Adv. Theor. Math. Phys. 2, 231(1998).
[7] E. Witten, Adv. Theor. Math. Phys. 2, 253(1998).
[8] S. Gubser, I. R. Klebanov, and A. M. Polyakov, Phys. Lett. B 428, 105(1998).
[9] P. Breitenlohner and D. Z. Freedman, Annals Phys. 144, 249(1982).
[10] P. Breitenlohner and D. Z. Freedman, Phys. Lett. B 115, 197(1982).
[11] S. Ryu and T. Takayanagi, Phys. Rev. Lett. 96, 181602(2006).
[12] V. E. Hubeny, M. Rangamani, and T. Takayanagi, JHEP 0707, 062(2007).
[13] A. Lewkowycz and J. Maldacena, JHEP 08, 090(2013).
[14] R. M. Wald, Living Rev. Rel. 4, 6(2001).
[15] J. D. Brown and M. Henneaux, Commun. Math. Phys. 104, 207(1986).
[16] A. Strominger, JHEP 02, 009(1998).
[17] J. D. Brown and J. W. York, Phys. Rev. D 47, 1407(1993).
[18] V. E. Hubeny, S. Minwalla, and M. Rangamani, arXiv:1107.5780.
[19] B. Swingle, Phys. Rev. D 86, 065007(2012).
[20] X. L. Qi, arXiv:1309.6282.
[21] J. Casalderrey-Solana, H. Liu, D. Mateos, K. Rajagopal, and U. A. Wiedemann, arXiv:1101.0618.
[22] U. Gursoy, E. Kiritsis, L. Mazzanti, G. Michalogiorgakis, and F. Nitti, Lect. Notes Phys. 828, 79(2011).
[23] S. A. Hartnoll, Class. Quant. Grav. 26, 224002(2009).
[24] J. McGreevy, Adv. High Energy Phys. 2010, 723105(2010).
[25] C. P. Herzog, J. Phys. A 42, 343001(2009).
[26] G. T. Horowitz, arXiv:1002.1722.
[27] N. Iqbal, H. Liu, and M. Mezei, arXiv:1110.3814.
[28] W. J. Li, Y. Tian, and H. Zhang, JHEP 07, 030(2013).
[29] N. Callebaut, B. Craps, F. Galli, D. C. Thompson, J. Vanhoof, J. Zaanen, and H. Zhang, JHEP 10, 172(2014).
[30] B. Craps, E. J. Lindgren, A. Taliotis, J. Vanhoof, and H. Zhang, Phys. Rev. D 90, 086004(2014).
[31] R. Li, Y. Tian, H. Zhang, and J. Zhao, Phys. Lett. B 750, 520(2015).
[32] Y. Du, C. Niu, Y. Tian, and H. Zhang, JHEP 12, 018(2015).
[33] Y. Du, S. Q. Lan, Y. Tian, and H. Zhang, JHEP 01, 016(2016).
[34] M. Guo, S. Q. Lan, C. Niu, Y. Tian, and H. Zhang, to appear.
[35] T. Nishioka, S. Ryu, and T. Takayanagi, JHEP 1003, 131(2010).
[36] X. Zhang, C. L. Hung, S. K. Tung, and C. Chin, Science 335, 1070(2012).
[37] I. R. Klebanov and E. Witten, Nucl. Phys. B 556, 89(1999).
[38] K. Skenderis, Class. Quant. Grav.19, 5849(2002).
[39] C. P. Herzog, P. K. Kovtun, and D. T. Son, Phys. Rev. D 79, 066002(2009).
[40] A. Yarom, JHEP 0907, 070(2009).
[41] C. P. Herzog and A. Yarom, Phys. Rev. D 80, 106002(2009).
[42] O. J. C. Dias, J. E. Santos, and B. Way, arXiv:1510.02804.
[43] P. Chesler and L. G. Yaï¬e, JHEP 07, 086(2014).
â 23 â | 1601.00257#63 | Modave Lectures on Applied AdS/CFT with Numerics | These lecture notes are intended to serve as an introduction to applied
AdS/CFT with numerics for an audience of graduate students and others with
little background in the subject. The presentation begins with a poor man's
review of current status of quantum gravity, where AdS/CFT correspondence is
believed to be the well formulated quantum gravity in the anti-de Sitter space.
Then we present the basic ingredients in applied AdS/CFT and introduce the
relevant numerics for solving differential equations into which the bulk
dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take
the zero temperature holographic superfluid as a concrete example for case
study. In passing, we also present some new results, which include the
numerical evidence as well as an elegant analytic proof for the equality
between the superfluid density and particle density, namely $\rho_s=\rho$, and
the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field
theory for the sound speed in the large chemical potential limit. | http://arxiv.org/pdf/1601.00257 | Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang | gr-qc, hep-th | typos corrected, clarifications made, JHEP style, 1+23 pages, 12
figures, Mathematica code available upon request | PoS Modave2015 (2016) 003 | gr-qc | 20160103 | 20160106 | [
{
"id": "1510.02804"
}
] |
# Abstract
Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high-performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4~6× speed-up and 15~20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.
[Figure 1: bar charts of time consumption (s), storage consumption (MB), memory consumption (MB), and top-5 error rate (%), comparing the original and quantized (Q-CNN) AlexNet and CNN-S.]
Figure 1. Comparison on the efficiency and classification accuracy between the original and quantized AlexNet [16] and CNN-S [1] on a Huawei® Mate 7 smartphone.
# 1. Introduction
In recent years, we have witnessed the great success of convolutional neural networks (CNN) [19] in a wide range of visual applications, including image classification [16, 27], object detection [10, 9], age estimation [24, 23], etc. This success mainly comes from deeper network architectures as well as the tremendous training data. However, as the network grows deeper, the model complexity also increases exponentially in both the training and testing stages, which leads to a very high demand on computation ability. For instance, the 8-layer AlexNet [16] involves 60M parameters and requires over 729M FLOPs¹ to classify a single image. Although the training stage can be carried out offline on high-performance clusters with GPU acceleration, the testing computation cost may be unaffordable for common personal computers and mobile devices. Due to their limited computation ability and memory space, mobile devices can hardly run deep convolutional networks at all. Therefore, it is crucial to accelerate the computation and compress the memory consumption for CNN models.
For most CNNs, convolutional layers are the most time-consuming part, while fully-connected layers involve massive network parameters. Due to the intrinsic difference between them, existing works usually focus on improving the efficiency of either convolutional layers or fully-connected layers. In [7, 13, 32, 31, 18, 17], low-rank approximation or tensor decomposition is adopted to speed up convolutional layers. On the other hand, parameter compression in fully-connected layers is explored in [3, 7, 11, 30, 2, 12, 28]. Overall, the above-mentioned algorithms are able to achieve faster speed or less storage. However, few of them can achieve significant acceleration and compression simultaneously for the whole network.

In this paper, we propose a unified framework for convolutional networks, namely Quantized CNN (Q-CNN), to simultaneously accelerate and compress CNN models with
only minor performance degradation. With network parameters quantized, the response of both convolutional and fully-connected layers can be efficiently estimated via the approximate inner product computation. We minimize the estimation error of each layer's response during parameter quantization, which can better preserve the model performance. In order to suppress the accumulative error while quantizing multiple layers, an effective training scheme is introduced to take the previous estimation error into consideration. Our Q-CNN model enables fast test-phase computation, and the storage and memory consumption are also significantly reduced.

¹FLOPs: the number of floating-point operations required to classify one image with the convolutional network.
We evaluate our Q-CNN framework for image classification on two benchmarks, MNIST [20] and ILSVRC-12 [26]. For MNIST, our Q-CNN approach achieves over 12× compression for two neural networks (no convolution), with lower accuracy loss than several baseline methods. For ILSVRC-12, we attempt to improve the test-phase efficiency of four convolutional networks: AlexNet [16], CaffeNet [15], CNN-S [1], and VGG-16 [27]. Generally, Q-CNN achieves 4× acceleration and 15× compression (sometimes higher) for each network, with less than 1% drop in the top-5 classification accuracy. Moreover, we implement the quantized CNN model on mobile devices, and dramatically improve the test-phase efficiency, as depicted in Figure 1. The main contributions of this paper can be summarized as follows:

• We propose a unified Q-CNN framework to accelerate and compress convolutional networks. We demonstrate that better quantization can be learned by minimizing the estimation error of each layer's response.
• We propose an effective training scheme to suppress the accumulative error while quantizing the whole convolutional network.
• Our Q-CNN framework achieves 4~6× speed-up and 15~20× compression, while the classification accuracy loss is within one percentage. Moreover, the quantized CNN model can be implemented on mobile devices and classify an image within one second.
# 2. Preliminary
During the test phase of convolutional networks, the computation overhead is dominated by convolutional layers; meanwhile, the majority of network parameters are stored in fully-connected layers. Therefore, for better test-phase efficiency, it is critical to speed up the convolution computation and compress the parameters in fully-connected layers.
Our observation is that the forward-passing process of both convolutional and fully-connected layers is dominated by the computation of inner products. More formally, we consider a convolutional layer with input feature maps $S \in \mathbb{R}^{d_s \times d_s \times C_s}$ and response feature maps $T \in \mathbb{R}^{d_t \times d_t \times C_t}$, where $d_s, d_t$ are the spatial sizes and $C_s, C_t$ are the numbers of feature map channels. The response at the 2-D spatial position $p_t$ in the $c_t$-th response feature map is computed as:

$$T_{p_t}(c_t) = \sum_{(p_k, p_s)} \langle W_{c_t, p_k}, S_{p_s} \rangle \quad (1)$$
where $W_{c_t} \in \mathbb{R}^{d_k \times d_k \times C_s}$ is the $c_t$-th convolutional kernel and $d_k$ is the kernel size. We use $p_s$ and $p_k$ to denote the 2-D spatial positions in the input feature maps and convolutional kernels, and both $W_{c_t, p_k}$ and $S_{p_s}$ are $C_s$-dimensional vectors. The layer response is the sum of inner products at all positions within the $d_k \times d_k$ receptive field in the input feature maps.
Similarly, for a fully-connected layer, we have:
$$T(c_t) = \langle W_{c_t}, S \rangle \quad (2)$$

where $S \in \mathbb{R}^{C_s}$ and $T \in \mathbb{R}^{C_t}$ are the layer input and layer response, respectively, and $W_{c_t} \in \mathbb{R}^{C_s}$ is the weighting vector for the $c_t$-th neuron of this layer.
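To make the inner-product view concrete, the following NumPy sketch (illustrative code, not from the original paper) computes Eqs. (1) and (2) exactly; it assumes stride 1 and no padding for the convolution:

```python
import numpy as np

def conv_response(S, kernels):
    """Exact convolutional response, Eq. (1): a sum of inner products
    between kernel sub-vectors and aligned input sub-vectors."""
    ds, _, Cs = S.shape
    dk = kernels[0].shape[0]
    dt = ds - dk + 1                        # stride 1, no padding
    T = np.zeros((dt, dt, len(kernels)))
    for ct, W in enumerate(kernels):        # W has shape (dk, dk, Cs)
        for i in range(dt):
            for j in range(dt):
                T[i, j, ct] = np.sum(W * S[i:i + dk, j:j + dk, :])
    return T

def fc_response(S, W):
    """Exact fully-connected response, Eq. (2): T(ct) = <W_ct, S>."""
    return W.T @ S                          # W: (Cs, Ct), S: (Cs,)
```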
Product quantization [14] is widely used in approximate nearest neighbor search, demonstrating better performance than hashing-based methods [21, 22]. The idea is to decompose the feature space as the Cartesian product of multiple subspaces, and then learn sub-codebooks for each subspace. A vector is represented by the concatenation of sub-codewords for efficient distance computation and storage.
In this paper, we leverage product quantization to implement the efficient inner product computation. Let us consider the inner product computation between $x, y \in \mathbb{R}^D$. At first, both $x$ and $y$ are split into $M$ sub-vectors, denoted as $x^{(m)}$ and $y^{(m)}$. Afterwards, each $x^{(m)}$ is quantized with a sub-codeword from the $m$-th sub-codebook, so that we have

$$\langle y, x \rangle = \sum_m \langle y^{(m)}, x^{(m)} \rangle \approx \sum_m \langle y^{(m)}, c^{(m)}_{k_m} \rangle \quad (3)$$

which transforms the $O(D)$ inner product computation into $M$ addition operations ($M \le D$), provided the inner products between each sub-vector $y^{(m)}$ and all the sub-codewords in the $m$-th sub-codebook have been computed in advance.
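As a minimal sketch of this mechanism (illustrative NumPy code with hypothetical function names; it assumes the sub-codebooks have already been learned), quantization and table-based approximation look as follows:

```python
import numpy as np

def quantize(x, codebooks):
    """Assign each sub-vector of x to its nearest sub-codeword.

    codebooks: list of M arrays, each of shape (K, D // M).
    Returns a length-M array of sub-codeword indices.
    """
    M = len(codebooks)
    subs = np.split(x, M)
    return np.array([np.argmin(((cb - s) ** 2).sum(axis=1))
                     for cb, s in zip(codebooks, subs)])

def approx_inner_product(y, codes, codebooks):
    """Approximate <y, x> given x's quantization codes, Eq. (3)."""
    M = len(codebooks)
    subs = np.split(y, M)
    # Pre-compute the look-up table: inner products between each
    # sub-vector of y and every sub-codeword (O(DK), done once per y).
    lut = [cb @ s for cb, s in zip(codebooks, subs)]  # M arrays of length K
    # Approximate computation: M table look-ups and additions.
    return sum(lut[m][codes[m]] for m in range(M))

# Toy usage: D = 8 dimensions, M = 4 subspaces, K = 16 sub-codewords.
rng = np.random.default_rng(0)
D, M, K = 8, 4, 16
codebooks = [rng.normal(size=(K, D // M)) for _ in range(M)]
x, y = rng.normal(size=D), rng.normal(size=D)
codes = quantize(x, codebooks)
print(approx_inner_product(y, codes, codebooks), y @ x)
```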
Quantization-based approaches have been explored in several works [11, 2, 12]. These approaches mostly focus on compressing parameters in fully-connected layers [11, 2], and none of them can provide acceleration for the test-phase computation. Furthermore, [11, 12] require the network parameters to be re-constructed during the test phase, which limits the compression to disk storage instead of memory consumption. On the contrary, our approach offers simultaneous acceleration and compression for both convolutional and fully-connected layers, and can reduce the run-time memory consumption dramatically.
# 3. Quantized CNN
In this section, we present our approach for accelerating and compressing convolutional networks. Firstly, we introduce an efficient test-phase computation process with the network parameters quantized. Secondly, we demonstrate that better quantization can be learned by directly minimizing the estimation error of each layer's response. Finally, we analyze the computation complexity of our quantized CNN model.
# 3.1. Quantizing the Fully-connected Layer
For a fully-connected layer, we denote its weighting matrix as $W \in \mathbb{R}^{C_s \times C_t}$, where $C_s$ and $C_t$ are the dimensions of the layer input and response, respectively. The weighting vector $W_{c_t}$ is the $c_t$-th column vector in $W$.
We evenly split the $C_s$-dimensional space (in which $W_{c_t}$ lies) into $M$ subspaces, each of $C_s' = C_s / M$ dimensions. Each $W_{c_t}$ is then decomposed into $M$ sub-vectors, denoted as $W^{(m)}_{c_t}$. A sub-codebook can be learned for each subspace after gathering all the sub-vectors within this subspace. Formally, for the $m$-th subspace, we optimize:

$$\min_{D^{(m)}, B^{(m)}} \Big\| D^{(m)} B^{(m)} - W^{(m)} \Big\|_F^2 \quad \text{s.t.} \quad D^{(m)} \in \mathbb{R}^{C_s' \times K},\ B^{(m)} \in \{0, 1\}^{K \times C_t} \quad (4)$$

where $W^{(m)} \in \mathbb{R}^{C_s' \times C_t}$ consists of the $m$-th sub-vectors of all weighting vectors. The sub-codebook $D^{(m)}$ contains $K$ sub-codewords, and each column in $B^{(m)}$ is an indicator vector (with only one non-zero entry), specifying which sub-codeword is used to quantize the corresponding sub-vector. The optimization can be solved via k-means clustering. The layer response is then approximately computed as:

$$T(c_t) = \sum_m \langle W^{(m)}_{c_t}, S^{(m)} \rangle \approx \sum_m \langle D^{(m)} B^{(m)}_{c_t}, S^{(m)} \rangle = \sum_m \langle D^{(m)}_{k_m(c_t)}, S^{(m)} \rangle \quad (5)$$
where $B^{(m)}_{c_t}$ is the $c_t$-th column vector in $B^{(m)}$, and $S^{(m)}$ is the $m$-th sub-vector of the layer input; $k_m(c_t)$ is the index of the sub-codeword used to quantize the sub-vector $W^{(m)}_{c_t}$. In Figure 2, we depict the parameter quantization and test-phase computation process of the fully-connected layer. By decomposing the weighting matrix into $M$ sub-matrices, $M$ sub-codebooks can be learned, one per subspace. During the test phase, the layer input is split into $M$ sub-vectors, denoted as $S^{(m)}$. For each subspace, we compute the inner products between $S^{(m)}$ and every sub-codeword in $D^{(m)}$, and store the results in a look-up table. Afterwards, only $M$ addition operations are required to compute each response. As a result, the overall time complexity can be reduced from $O(C_s C_t)$ to $O(C_s K + C_t M)$. On the other hand, only the sub-codebooks and quantization indices need to be stored, which dramatically reduces the storage consumption.
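Putting Eqs. (4) and (5) together, here is a schematic NumPy implementation (an illustration, not the authors' released code; `learn_subcodebooks` and `quantized_fc_forward` are hypothetical names, and the initialization assumes $C_t \ge K$):

```python
import numpy as np

def learn_subcodebooks(W, M, K, n_iter=20, seed=0):
    """Learn K sub-codewords per subspace by k-means, Eq. (4).

    W: weighting matrix of shape (Cs, Ct). Returns M codebooks of
    shape (K, Cs // M) and an (M, Ct) array of sub-codeword indices.
    """
    rng = np.random.default_rng(seed)
    Cs, Ct = W.shape
    d = Cs // M
    codebooks, assignments = [], []
    for m in range(M):
        X = W[m * d:(m + 1) * d, :].T                   # (Ct, d) sub-vectors
        D = X[rng.choice(Ct, size=K, replace=False)].copy()
        for _ in range(n_iter):
            dist = ((X[:, None, :] - D[None, :, :]) ** 2).sum(-1)
            code = dist.argmin(axis=1)                  # assignment step
            for k in range(K):                          # update step
                if np.any(code == k):
                    D[k] = X[code == k].mean(axis=0)
        codebooks.append(D)
        assignments.append(code)
    return codebooks, np.stack(assignments)

def quantized_fc_forward(S, codebooks, assignments):
    """Approximate layer response via look-up tables, Eq. (5).

    Costs O(Cs * K) for the tables plus O(Ct * M) additions, versus
    O(Cs * Ct) for the exact computation.
    """
    subs = np.split(S, len(codebooks))
    luts = [cb @ s for cb, s in zip(codebooks, subs)]   # M tables of size K
    return sum(lut[idx] for lut, idx in zip(luts, assignments))
```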
[Figure 2: schematic of the fully-connected layer pipeline: sub-vector splitting of the layer input and weighting matrix, codebook learning, inner product pre-computation, and approximate response computation.]
Figure 2. The parameter quantization and test-phase computation process of the fully-connected layer.
# 3.2. Quantizing the Convolutional Layer
Unlike the 1-D weighting vector in the fully-connected layer, each convolutional kernel is a 3-dimensional tensor: $W_{c_t} \in \mathbb{R}^{d_k \times d_k \times C_s}$. Before quantization, we need to determine to which dimension the subspace splitting is applied. During the test phase, the input feature maps are traversed by each convolutional kernel with a sliding window in the spatial domain. Since these sliding windows partially overlap, we split each convolutional kernel along the dimension of feature map channels, so that the pre-computed inner products can be re-used at multiple spatial locations. Specifically, we learn the quantization in each subspace by:

$$\min_{D^{(m)}, \{B^{(m)}_{p_k}\}} \sum_{p_k} \Big\| D^{(m)} B^{(m)}_{p_k} - W^{(m)}_{p_k} \Big\|_F^2 \quad \text{s.t.} \quad D^{(m)} \in \mathbb{R}^{C_s' \times K},\ B^{(m)}_{p_k} \in \{0, 1\}^{K \times C_t} \quad (6)$$
where $W^{(m)}_{p_k} \in \mathbb{R}^{C_s' \times C_t}$ contains the $m$-th sub-vectors of all convolutional kernels at position $p_k$. The optimization can also be solved by k-means clustering in each subspace. With the convolutional kernels quantized, we approximately compute the response feature maps by:

$$T_{p_t}(c_t) = \sum_{(p_k, p_s)} \sum_m \langle W^{(m)}_{c_t, p_k}, S^{(m)}_{p_s} \rangle \approx \sum_{(p_k, p_s)} \sum_m \langle D^{(m)} B^{(m)}_{c_t, p_k}, S^{(m)}_{p_s} \rangle = \sum_{(p_k, p_s)} \sum_m \langle D^{(m)}_{k_m(c_t, p_k)}, S^{(m)}_{p_s} \rangle \quad (7)$$

where $S^{(m)}_{p_s}$ is the $m$-th sub-vector at position $p_s$ in the input feature maps, and $k_m(c_t, p_k)$ is the index of the sub-codeword used to quantize the $m$-th sub-vector at position $p_k$ in the $c_t$-th convolutional kernel.
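The sketch below (a schematic of Eq. (7), with explicit loops kept for clarity rather than speed) shows how one look-up table per subspace, computed once over the whole input, is shared by all overlapping sliding windows:

```python
import numpy as np

def quantized_conv_forward(S, codebooks, assignments):
    """Approximate convolution response, Eq. (7); stride 1, no padding.

    S: input feature maps of shape (ds, ds, Cs).
    codebooks: M arrays of shape (K, Cs // M).
    assignments: integer array of shape (M, dk, dk, Ct).
    """
    ds = S.shape[0]
    M, dk, _, Ct = assignments.shape
    dt = ds - dk + 1
    # Pre-compute inner products between every input sub-vector and
    # every sub-codeword; re-used by all overlapping sliding windows.
    subs = np.split(S, M, axis=2)                 # M maps of (ds, ds, Cs/M)
    luts = [s @ cb.T for s, cb in zip(subs, codebooks)]  # M x (ds, ds, K)
    T = np.zeros((dt, dt, Ct))
    for i in range(dt):
        for j in range(dt):
            for u in range(dk):
                for v in range(dk):
                    for m in range(M):
                        # One table look-up per (subspace, kernel position).
                        T[i, j] += luts[m][i + u, j + v, assignments[m, u, v]]
    return T
```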
# 3.3. Quantization with Error Correction

So far, we have presented an intuitive approach to quantize parameters and improve the test-phase efficiency of convolutional networks. However, there are still two critical drawbacks. First, minimizing the quantization error of model parameters does not necessarily give the optimal quantized network for the classification accuracy. In contrast, minimizing the estimation error of each layer's response is more closely related to the network's classification performance. Second, the quantization of one layer is independent of the others, which may lead to the accumulation of error when quantizing multiple layers. The estimation error of the network's final response is very likely to be quickly accumulated, since the error introduced by the previous quantized layers will also affect the following layers. To overcome these two limitations, we introduce the idea of error correction into the quantization of network parameters. This improved quantization approach directly minimizes the estimation error of the response at each layer, and can compensate for the error introduced by previous layers. With the error correction scheme, we can quantize the network with much less performance degradation than with the original quantization method.
# 3.3.1 Error Correction for the Fully-connected Layer

Suppose we have $N$ images with which to learn the quantization of a fully-connected layer, and the layer input and response of image $I_n$ are denoted as $S_n$ and $T_n$. In order to minimize the estimation error of the layer response, we optimize:

$$\min_{\{D^{(m)}\}, \{B^{(m)}\}} \sum_n \Big\| T_n - \sum_m (D^{(m)} B^{(m)})^T S^{(m)}_n \Big\|_F^2 \quad (8)$$

where the first term in the Frobenius norm is the desired layer response, and the second term is the approximated layer response computed via the quantized parameters.

A block coordinate descent approach can be applied to minimize this objective function. For the $m$-th subspace, its residual error is defined as:

$$R^{(m)}_n = T_n - \sum_{m' \neq m} (D^{(m')} B^{(m')})^T S^{(m')}_n \quad (9)$$
and then we attempt to minimize the residual error of this subspace, which is:
$$\min_{D^{(m)}, B^{(m)}} \sum_n \Big\| R^{(m)}_n - (D^{(m)} B^{(m)})^T S^{(m)}_n \Big\|_F^2 \quad (10)$$
and the above optimization can be solved by alternately updating the sub-codebook and the sub-codeword assignment.

**Update $D^{(m)}$.** We fix the sub-codeword assignment $B^{(m)}$, and define $L_k = \{c_t \mid B^{(m)}(k, c_t) = 1\}$. The optimization in (10) can then be re-formulated as:

$$\min_{\{D^{(m)}_k\}} \sum_{n,k} \sum_{c_t \in L_k} \Big[ R^{(m)}_n(c_t) - D^{(m)T}_k S^{(m)}_n \Big]^2 \quad (11)$$

which implies that the optimization over one sub-codeword does not affect the others. Hence, for each sub-codeword, we construct a least squares problem from (11) to update it.

**Update $B^{(m)}$.** With the sub-codebook $D^{(m)}$ fixed, the optimization of each column in $B^{(m)}$ is mutually independent. For the $c_t$-th column, the optimal sub-codeword assignment is given by:
$$k^*_m(c_t) = \arg\min_k \sum_n \Big[ R^{(m)}_n(c_t) - D^{(m)T}_k S^{(m)}_n \Big]^2 \quad (12)$$
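To show how Eqs. (9)-(12) interlock, the following schematic NumPy pass (an illustration, not the paper's code; `S_subs` holds the per-subspace input matrices) performs one round of block coordinate descent over all subspaces:

```python
import numpy as np

def error_correction_pass(S_subs, T, codebooks, assignments):
    """One block-coordinate-descent pass over all subspaces.

    S_subs: list of M input matrices, each of shape (N, sub_dim).
    T: desired responses, shape (N, Ct).
    codebooks: M arrays (K, sub_dim); assignments: (M, Ct) indices.
    """
    M, (N, Ct) = len(codebooks), T.shape
    K = codebooks[0].shape[0]
    # Current per-subspace contributions to the estimated response.
    contrib = [S_subs[m] @ codebooks[m][assignments[m]].T for m in range(M)]
    for m in range(M):
        R = T - (sum(contrib) - contrib[m])            # residual, Eq. (9)
        # Update each sub-codeword by least squares, Eq. (11).
        for k in range(K):
            cols = np.where(assignments[m] == k)[0]
            if cols.size:
                codebooks[m][k] = np.linalg.lstsq(
                    np.tile(S_subs[m], (cols.size, 1)),
                    R[:, cols].T.reshape(-1), rcond=None)[0]
        # Update assignments, Eq. (12): best sub-codeword per column.
        pred = S_subs[m] @ codebooks[m].T              # (N, K)
        err = ((R[:, :, None] - pred[:, None, :]) ** 2).sum(0)  # (Ct, K)
        assignments[m] = err.argmin(axis=1)
        contrib[m] = S_subs[m] @ codebooks[m][assignments[m]].T
    return codebooks, assignments
```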
# 3.3.2 Error Correction for the Convolutional Layer
We adopt a similar idea to minimize the estimation error of the convolutional layer's response feature maps, that is:

$$\min_{\{D^{(m)}\}, \{B^{(m)}_{p_k}\}} \sum_n \sum_{p_t} \Big\| T_{n, p_t} - \sum_{(p_k, p_s)} \sum_m (D^{(m)} B^{(m)}_{p_k})^T S^{(m)}_{n, p_s} \Big\|_F^2 \quad (13)$$

The optimization can also be solved by block coordinate descent. More details on solving this optimization can be found in the supplementary material.
# 3.3.3 Error Correction for Multiple Layers
The above quantization method can be sequentially applied to each layer in the CNN model. One concern is that the estimation error of the layer response caused by the previous layers will be accumulated and affect the quantization of the following layers. Here, we propose an effective training scheme to address this issue.
We consider the quantization of a specific layer, assuming its previous layers have already been quantized. The optimization of parameter quantization is based on the layer input and response of a group of training images. To quantize this layer, we take the layer input in the quantized network as $\{S_n\}$, and the layer response in the original network (not quantized) as $\{T_n\}$ in Eqs. (8) and (13). In this way, the optimization is guided by the actual input in the quantized network and the desired response in the original network. The accumulative error introduced by the previous layers is explicitly taken into consideration during optimization. In consequence, this training scheme can effectively suppress the accumulative error for the quantization of multiple layers.
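A minimal sketch of this training scheme (a pseudocode-style illustration; the `(forward_fn, quantize_fn)` layer interface is hypothetical) makes the input/target pairing explicit:

```python
def quantize_network(layers, images):
    """Quantize layers sequentially with error correction (Sec. 3.3.3).

    layers: list of (forward_fn, quantize_fn) pairs; hypothetical API.
    The input of each layer is taken from the partially quantized
    network, while the target response comes from the original one.
    """
    acts_orig = [images]
    for fwd, _ in layers:                  # responses of the original network
        acts_orig.append(fwd(acts_orig[-1]))
    x_quant = images
    quantized = []
    for i, (_, quantize_fn) in enumerate(layers):
        target = acts_orig[i + 1]          # desired response, original net
        q_layer = quantize_fn(x_quant, target)   # minimizes Eq. (8) / (13)
        quantized.append(q_layer)
        x_quant = q_layer(x_quant)         # actual input for the next layer
    return quantized
```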
Another possible solution is to adopt back-propagation to jointly update the sub-codebooks and sub-codeword assignments in all quantized layers. However, since the sub-codeword assignments are discrete, the gradient-based optimization can be quite difficult, if not entirely impossible. Therefore, back-propagation is not adopted here, but could be a promising extension for future work.
# 3.4. Computation Complexity

Now we analyze the test-phase computation complexity of convolutional and fully-connected layers, with or without parameter quantization. For our proposed Q-CNN model, the forward-passing through each layer mainly consists of two procedures: pre-computation of inner products, and approximate computation of the layer response. Both sub-codebooks and sub-codeword assignments are stored for the test-phase computation. We report the detailed comparison on the computation and storage overhead in Table 1.

Table 1. Comparison on the computation and storage overhead of convolutional and fully-connected layers.

| | Layer | CNN | Q-CNN |
|---|---|---|---|
| FLOPs | Conv. | $d_k^2 C_s C_t d_t^2$ | $d_s^2 C_s K + d_t^2 d_k^2 M C_t$ |
| | FCnt. | $C_s C_t$ | $C_s K + C_t M$ |
| Bytes | Conv. | $4 d_k^2 C_s C_t$ | $4 C_s K + \frac{1}{8} d_k^2 M C_t \log_2 K$ |
| | FCnt. | $4 C_s C_t$ | $4 C_s K + \frac{1}{8} M C_t \log_2 K$ |
As we can see from Table 1, the reduction in the computation and storage overhead largely depends on two hyper-parameters, $M$ (the number of subspaces) and $K$ (the number of sub-codewords in each subspace). Large values of $M$ and $K$ lead to more fine-grained quantization, but are less efficient in the computation and storage consumption. In practice, we can vary these two parameters to balance the trade-off between the test-phase efficiency and the accuracy loss of the quantized CNN model.
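As a quick sanity check on Table 1 (illustrative arithmetic with assumed layer sizes, not numbers from the paper), consider a hypothetical fully-connected layer:

```python
import math

# Hypothetical fully-connected layer: Cs = 4096 inputs, Ct = 4096 outputs,
# quantized with M = 1024 subspaces and K = 256 sub-codewords per subspace.
Cs, Ct, M, K = 4096, 4096, 1024, 256

flops_cnn  = Cs * Ct                                   # exact inner products
flops_qcnn = Cs * K + Ct * M                           # tables + additions
bytes_cnn  = 4 * Cs * Ct                               # 32-bit weights
bytes_qcnn = 4 * Cs * K + M * Ct * int(math.log2(K)) // 8  # codebooks + indices

print(f"speed-up:    {flops_cnn / flops_qcnn:.1f}x")   # 3.2x
print(f"compression: {bytes_cnn / bytes_qcnn:.1f}x")   # 8.0x
```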
# 4. Related Work

There have been a few attempts at accelerating the test-phase computation of convolutional networks, and many are inspired by low-rank decomposition. Denton et al. [7] presented a series of low-rank decomposition designs for convolutional kernels. Similarly, CP-decomposition was adopted in [17] to transform a convolutional layer into multiple layers with lower complexity. Zhang et al. [32, 31] considered the subsequent nonlinear units while learning the low-rank decomposition. [18] applied group-wise pruning to the convolutional tensor to decompose it into multiplications of thinned dense matrices. Recently, fixed-point based approaches have been explored in [5, 25]. By representing the connection weights (or even network activations) with fixed-point numbers, the computation can greatly benefit from hardware acceleration.
Another parallel research trend is to compress parameters in fully-connected layers. Ciresan et al. [3] randomly remove connections to reduce network parameters. Matrix factorization was adopted in [6, 7] to decompose the weighting matrix into two low-rank matrices, which demonstrated that significant redundancy did exist in network parameters. Hinton et al. [8] proposed to use dark knowledge (the response of a well-trained network) to guide the training of a much smaller network, which was superior to direct training. By exploring the similarity among neurons, Srinivas et al. [28] proposed a systematic way to remove redundant neurons instead of network connections. In [30], multiple fully-connected layers were replaced by a single "Fastfood" layer, which can be trained in an end-to-end style with convolutional layers. Chen et al. [2] randomly grouped connection weights into hash buckets, and then fine-tuned the network with back-propagation. [12] combined pruning, quantization, and Huffman coding to achieve a higher compression rate. Gong et al. [11] adopted vector quantization to compress the weighting matrix, which is actually a special case of our approach (applying Q-CNN without error correction to fully-connected layers only).
# 5. Experiments

In this section, we evaluate our quantized CNN framework on two image classification benchmarks, MNIST [20] and ILSVRC-12 [26]. For the acceleration of convolutional layers, we compare with:

• CPD [17]: CP-Decomposition;
• GBD [18]: Group-wise Brain Damage;
• LANR [31]: Low-rank Approximation of Non-linear Responses.

and for the compression of fully-connected layers, we compare with the following approaches:

• RER [3]: Random Edge Removal;
• LRD [6]: Low-Rank Decomposition;
• DK [8]: Dark Knowledge;
• HashNet [2]: Hashed Neural Nets;
• DPP [28]: Data-free Parameter Pruning;
• SVD [7]: Singular Value Decomposition;
• DFC [30]: Deep Fried Convnets.

For all the above baselines, we use their reported results under the same settings for fair comparison. We report the theoretical speed-up for more consistent results, since the realistic speed-up may be affected by various factors, e.g. CPU, cache, and RAM. We compare the theoretical and realistic speed-up in Section 5.4, and discuss the effect of adopting the BLAS library for acceleration.
Our approaches are denoted as "Q-CNN" and "Q-CNN (EC)", where the latter adopts error correction while the former does not. We implement the optimization process of parameter quantization in MATLAB, and fine-tune the resulting network with Caffe [15]. Additional results of our approach can be found in the supplementary material.

# 5.1. Results on MNIST

The MNIST dataset contains 70k images of hand-written digits, 60k used for training and 10k for testing. To evaluate the compression performance, we pre-train two neural networks, one 3-layer and one 5-layer, where each hidden layer contains 1000 units. Different compression techniques are then adopted to compress these two networks, and the results are depicted in Table 2.

Table 2. Comparison on the compression rates and classification error on MNIST, based on a 3-layer network (784-1000-10) and a 5-layer network (784-1000-1000-1000-10).
In our Q-CNN framework, the trade-off between accuracy and efficiency is controlled by $M$ (the number of subspaces) and $K$ (the number of sub-codewords in each subspace). Since $M = C_s / C_s'$ is determined once $C_s'$ is given, we tune $(C_s', K)$ to adjust the quantization precision. In Table 2, we set the hyper-parameters as $C_s' = 4$ and $K = 32$. From Table 2, we observe that our Q-CNN (EC) approach offers higher compression rates with less performance degradation than all baselines for both networks. The error correction scheme is effective in reducing the accuracy loss, especially for deeper networks (5-layer). Also, we find the performance of both Q-CNN and Q-CNN (EC) quite stable, as the standard deviation over five random runs is merely 0.05%. Therefore, we report the single-run performance in the remaining experiments.
1512.06473 | 28 | # 5.2. Results on ILSVRC-12
The ILSVRC-12 benchmark consists of over one million training images drawn from 1000 categories, and a disjoint validation set of 50k images. We report both the top-1 and top-5 classification error rates on the validation set, using single-view testing (central patch only).
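Single-view testing of this kind amounts to evaluating one central crop per image; a minimal sketch (our own helper, not the paper's evaluation code, and the 256/227 sizes are the usual AlexNet-style values):

```python
import numpy as np

def center_crop(image, crop=227):
    """Single-view testing: take only the central patch of the
    already-resized input image, e.g. 256 x 256 -> 227 x 227."""
    h, w = image.shape[:2]
    top, left = (h - crop) // 2, (w - crop) // 2
    return image[top:top + crop, left:left + crop]

patch = center_crop(np.zeros((256, 256, 3), dtype=np.float32))
print(patch.shape)  # (227, 227, 3)
```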
We demonstrate our approach on four convolutional networks: AlexNet [16], CaffeNet [15], CNN-S [1], and VGG-16 [27]. The first two models have been adopted in several related works, and are therefore included for comparison. CNN-S and VGG-16 use a wider or deeper structure for better classification accuracy, and are included here to prove the scalability of our approach. We compare all these networks' computation and storage overhead in Table 3, together with their classification error rates on ILSVRC-12.
Table 3. Comparison on the test-phase computation overhead (FLOPs), storage consumption (Bytes), and classification error rates (Top-1/5 Err.) of AlexNet, CaffeNet, CNN-S, and VGG-16.

| Model    | FLOPs    | Bytes   | Top-1 Err. | Top-5 Err. |
|----------|----------|---------|------------|------------|
| AlexNet  | 7.29e+8  | 2.44e+8 | 42.78%     | 19.74%     |
| CaffeNet | 7.27e+8  | 2.44e+8 | 42.53%     | 19.59%     |
| CNN-S    | 2.94e+9  | 4.12e+8 | 37.31%     | 15.82%     |
| VGG-16   | 1.55e+10 | 5.53e+8 | 28.89%     | 10.05%     |
# 5.2.1 Quantizing the Convolutional Layer
To begin with, we quantize the second convolutional layer of AlexNet, which is the most time-consuming layer during the test phase. In Table 4, we report the performance under several (C'_s, K) settings, comparing with two baseline methods, CPD [17] and GBD [18].
Table 4. Comparison on the speed-up rates and the increase of top-1/5 error rates for accelerating the second convolutional layer in AlexNet, with or without fine-tuning (FT). The hyper-parameters of Q-CNN, (C'_s, K), are listed in the "Para." column.
From Table 4, we discover that with a large speed-up rate (over 4×), the performance loss of both CPD and GBD becomes severe, especially before fine-tuning. The naive parameter quantization method also suffers from a similar problem. By incorporating the idea of error correction, our Q-CNN model achieves up to 6× speed-up with merely a 0.6% drop in accuracy, even without fine-tuning. The accuracy loss can be further reduced by fine-tuning the subsequent layers. Hence, it is more effective to minimize the estimation error of each layer's response than to minimize the quantization error of the network parameters.
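The two objectives contrasted here can be stated directly: naive quantization minimizes ||W - W_q||² over the weights alone, while error correction minimizes the response error ||Wᵀx - W_qᵀx||² over actual inputs x, which is the quantity the classification accuracy depends on. A toy sketch of the two quantities, using synthetic data of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
W  = rng.standard_normal((256, 64)).astype(np.float32)    # original weights
Wq = W + 0.05 * rng.standard_normal((256, 64)).astype(np.float32)  # a quantized approximation
X  = rng.standard_normal((1000, 256)).astype(np.float32)  # sampled layer inputs

weight_error   = np.sum((W - Wq) ** 2)                           # naive objective
response_error = np.mean(np.sum((X @ W - X @ Wq) ** 2, axis=1))  # error-corrected objective
print(weight_error, response_error)
# Error correction chooses codewords to shrink response_error directly.
```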
Next, we take one step further and attempt to speed up all the convolutional layers in AlexNet with Q-CNN (EC).
Table 5. Comparison on the speed-up/compression rates and the increase of top-1/5 error rates for accelerating all the convolutional layers in AlexNet and VGG-16.
We fix the quantization hyper-parameters (C'_s, K) across all layers. From Table 5, we observe that the loss in accuracy grows only mildly compared with the single-layer case. The speed-up rates reported here are consistently smaller than those in Table 4, since the acceleration effect is less significant for some layers (i.e. "conv 4" and "conv 5"). For AlexNet, our Q-CNN model (C'_s = 8, K = 128) can accelerate the computation of all the convolutional layers by a factor of 4.27×, while the increase in the top-1 and top-5 error rates is no more than 2.5%. After fine-tuning the remaining fully-connected layers, the performance loss can be further reduced to less than 1%.
In Table 5, we also report the comparison against LANR [31] on VGG-16. For a similar speed-up rate (4×), their approach outperforms ours in the top-5 classification error (an increase of 0.95% against 1.83%). After fine-tuning, the performance gap narrows to 0.35% against 0.45%. At the same time, our approach offers over 14× compression of the parameters in convolutional layers, much larger than their 2.7× compression². Therefore, our approach is effective in accelerating and compressing networks with many convolutional layers, with only minor performance loss.
# 5.2.2 Quantizing the Fully-connected Layer
For demonstration, we first compress the parameters in a single fully-connected layer. In CaffeNet, the first fully-connected layer possesses over 37 million parameters (9216 × 4096), more than 60% of the whole network's parameters. Our Q-CNN approach is adopted to quantize this layer, and the results are reported in Table 6. The performance loss of our Q-CNN model is negligible (within 0.4%), much smaller than that of the baseline methods (DPP and SVD). Furthermore, error correction is effective in preserving the classification accuracy, especially under a higher compression rate.
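The compression rates in Table 6 below follow from simple accounting: 32-bit weights are replaced by log2(K)-bit sub-codeword indices plus the float32 codebooks. A sketch of that arithmetic (`fc_compression_rate` is our own helper; it reproduces the 21.94× of the "3/16" rows):

```python
import math

def fc_compression_rate(C_s, C_t, sub_dim, K):
    """Compression of a C_s x C_t fully-connected layer: float32
    weights vs. sub-codeword indices plus the codebooks."""
    M = C_s / sub_dim                      # number of subspaces
    original = C_s * C_t * 32              # bits of the float32 weights
    indices  = M * C_t * math.log2(K)      # one log2(K)-bit index per subspace and output
    codebook = M * K * sub_dim * 32        # float32 sub-codewords
    return original / (indices + codebook)

# First fully-connected layer of CaffeNet with C'_s = 3, K = 16:
print(round(fc_compression_rate(9216, 4096, 3, 16), 2))  # -> 21.94
```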
Table 6. Comparison on the compression rates and the increase of top-1/5 error rates for compressing the first fully-connected layer in CaffeNet, without fine-tuning.

| Method     | Para. | Compression | Top-1 Err. ↑ | Top-5 Err. ↑ |
|------------|-------|-------------|--------------|--------------|
| DPP        | -     | 1.19×       | 0.16%        | -            |
| DPP        | -     | 1.47×       | 1.76%        | -            |
| DPP        | -     | 1.91×       | 4.08%        | -            |
| DPP        | -     | 2.75×       | 9.68%        | -            |
| SVD³       | -     | 1.38×       | 0.03%        | -0.03%       |
| SVD³       | -     | 2.77×       | 0.07%        | 0.07%        |
| SVD³       | -     | 5.54×       | 0.36%        | 0.19%        |
| SVD³       | -     | 11.08×      | 1.23%        | 0.86%        |
| Q-CNN      | 2/16  | 15.06×      | 0.19%        | 0.19%        |
| Q-CNN      | 3/16  | 21.94×      | 0.35%        | 0.28%        |
| Q-CNN      | 3/32  | 16.70×      | 0.18%        | 0.12%        |
| Q-CNN      | 4/32  | 21.33×      | 0.28%        | 0.16%        |
| Q-CNN (EC) | 2/16  | 15.06×      | 0.10%        | 0.07%        |
| Q-CNN (EC) | 3/16  | 21.94×      | 0.18%        | 0.03%        |
| Q-CNN (EC) | 3/32  | 16.70×      | 0.14%        | 0.11%        |
| Q-CNN (EC) | 4/32  | 21.33×      | 0.16%        | 0.12%        |
Now we evaluate our approach's performance for compressing all the fully-connected layers in CaffeNet, in Table 7. The third layer is actually the combination of 1000 classifiers, and is more critical to the classification accuracy. Hence, we adopt a much more fine-grained hyper-parameter setting (C'_s = 1, K = 16) for this layer. Although the speed-up effect no longer exists, we can still achieve around 8× compression for the last layer.
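The "around 8×" figure is immediate from the same accounting: with C'_s = 1 every scalar weight is its own sub-vector, and K = 16 makes every index 4 bits.

```python
import math

# C'_s = 1, K = 16: each weight becomes one 4-bit index, so the rate
# approaches 32 / log2(16) = 8x, minus a small codebook overhead.
print(32 / math.log2(16))  # -> 8.0
```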
Table 7. Comparison on the compression rates and the increase of top-1/5 error rates for compressing all the fully-connected layers in CaffeNet. Both SVD³ and DFC are fine-tuned, while Q-CNN and Q-CNN (EC) are not fine-tuned.

| Method     | Para. | Compression | Top-1 Err. ↑ | Top-5 Err. ↑ |
|------------|-------|-------------|--------------|--------------|
| SVD³       | -     | 1.26×       | 0.14%        | -            |
| SVD³       | -     | 2.52×       | 1.22%        | -            |
| DFC        | -     | 1.79×       | -0.66%       | -            |
| DFC        | -     | 3.58×       | 0.31%        | -            |
| Q-CNN      | 2/16  | 13.96×      | 0.28%        | 0.29%        |
| Q-CNN      | 3/16  | 19.14×      | 0.70%        | 0.47%        |
| Q-CNN      | 3/32  | 15.25×      | 0.44%        | 0.34%        |
| Q-CNN      | 4/32  | 18.71×      | 0.75%        | 0.59%        |
| Q-CNN (EC) | 2/16  | 13.96×      | 0.31%        | 0.30%        |
| Q-CNN (EC) | 3/16  | 19.14×      | 0.59%        | 0.47%        |
| Q-CNN (EC) | 3/32  | 15.25×      | 0.31%        | 0.27%        |
| Q-CNN (EC) | 4/32  | 18.71×      | 0.57%        | 0.39%        |
² The compression effect of their approach was not explicitly discussed in the paper; we estimate the compression rate based on their description.
³ In Table 6, SVD means replacing the weighting matrix with the multiplication of two low-rank matrices; in Table 7, SVD means fine-tuning the network after the low-rank matrix decomposition.
# 5.2.3 Quantizing the Whole Network
So far, we have evaluated the performance of CNN models with either convolutional or fully-connected layers quantized. Now we demonstrate the quantization of the whole network with a three-stage strategy. Firstly, we quantize all the convolutional layers with error correction, while the fully-connected layers remain untouched. Secondly, we fine-tune the fully-connected layers in the quantized network on the ILSVRC-12 training set to restore the classification accuracy. Finally, the fully-connected layers in the fine-tuned network are quantized with error correction. We report the performance of our Q-CNN models in Table 8.
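Schematically, the three stages compose as below; `quantize_layer` and `finetune_fc` are stand-ins for the real steps, not the paper's code:

```python
def quantize_layer(layer, sub_dim, K):
    """Placeholder for error-corrected product quantization of one layer."""
    layer["quantized"] = {"sub_dim": sub_dim, "K": K, "error_correction": True}

def finetune_fc(net, train_set):
    """Placeholder for fine-tuning the fully-connected layers on ILSVRC-12."""
    for layer in net["fc"]:
        layer["finetuned"] = True

def quantize_whole_network(net, train_set, conv_params=(8, 128), fc_params=(3, 32)):
    for layer in net["conv"]:          # stage 1: conv layers, with error correction
        quantize_layer(layer, *conv_params)
    finetune_fc(net, train_set)        # stage 2: restore accuracy via fine-tuning
    for layer in net["fc"]:            # stage 3: quantize the fine-tuned fc layers
        quantize_layer(layer, *fc_params)
    return net

net = quantize_whole_network({"conv": [{} for _ in range(5)],
                              "fc": [{} for _ in range(3)]}, train_set=None)
```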
Table 8. The speed-up/compression rates and the increase of top-1/5 error rates for the whole CNN model. Particularly, for the quantization of the third fully-connected layer in each network, we let C'_s = 1 and K = 16.

| Model    | Para. (Conv.) | Para. (FCnt.) | Speed-up | Compression | Top-1/5 Err. ↑ |
|----------|---------------|---------------|----------|-------------|----------------|
| AlexNet  | 8/128         | 3/32          | 4.05×    | 15.40×      | 1.38% / 0.84%  |
| AlexNet  | 8/128         | 4/32          | 4.15×    | 18.76×      | 1.46% / 0.97%  |
| CaffeNet | 8/128         | 3/32          | 4.04×    | 15.40×      | 1.43% / 0.99%  |
| CaffeNet | 8/128         | 4/32          | 4.14×    | 18.76×      | 1.54% / 1.12%  |
| CNN-S    | 8/128         | 3/32          | 5.69×    | 16.32×      | 1.48% / 0.81%  |
| CNN-S    | 8/128         | 4/32          | 5.78×    | 20.16×      | 1.64% / 0.85%  |
| VGG-16   | 6/128         | 3/32          | 4.05×    | 16.55×      | 1.22% / 0.53%  |
| VGG-16   | 6/128         | 4/32          | 4.06×    | 20.34×      | 1.35% / 0.58%  |
For the convolutional layers, we let C'_s = 8 and K = 128 for AlexNet, CaffeNet, and CNN-S, and let C'_s = 6 and K = 128 for VGG-16, to ensure a roughly 4~6× speed-up for each network. Then we vary the hyper-parameter settings of the fully-connected layers for different compression levels. For the former two networks, we achieve 18× compression with about a 1% loss in the top-5 classification accuracy. For CNN-S, we achieve a 5.78× speed-up and 20.16× compression, while the drop in top-5 classification accuracy is merely 0.85%. The result on VGG-16 is even more encouraging: with a 4.06× speed-up and 20.34× compression, the increase of the top-5 error rate is only 0.58%. Hence, our proposed Q-CNN framework can improve the efficiency of convolutional networks with minor performance loss, which is acceptable in many applications.
# 5.3. Results on Mobile Devices
We have developed an Android application to fulfill CNN-based image classification on mobile devices, based on our Q-CNN framework. The experiments are carried out on a Huawei Mate 7 smartphone, equipped with a 1.8 GHz Kirin 925 CPU. The test-phase computation is carried out on a single CPU core, without GPU acceleration.
In Table 9, we compare the computation efficiency and classification accuracy of the original and quantized CNN models. Our Q-CNN framework achieves a 3× speed-up for AlexNet, and a 4× speed-up for CNN-S. What's more, we compress the storage consumption by 20×, and the required run-time memory is only one quarter of the original model. At the same time, the loss in the top-5 classification accuracy is no more than 1%. Therefore, our proposed approach improves the run-time efficiency in multiple aspects, making the deployment of CNN models tractable on mobile platforms.
Table 9. Comparison on the time, storage, memory consumption, and top-5 classification error rates of the original and quantized AlexNet and CNN-S.

| Model   | Method | Time   | Storage  | Memory   | Top-5 Err. |
|---------|--------|--------|----------|----------|------------|
| AlexNet | CNN    | 2.93s  | 232.56MB | 264.74MB | 19.74%     |
| AlexNet | Q-CNN  | 0.95s  | 12.60MB  | 74.65MB  | 20.70%     |
| CNN-S   | CNN    | 10.58s | 392.57MB | 468.90MB | 15.82%     |
| CNN-S   | Q-CNN  | 2.61s  | 20.13MB  | 129.49MB | 16.68%     |
# 5.4. Theoretical vs. Realistic Speed-up
In Table 10, we compare the theoretical and realistic speed-up on AlexNet. The BLAS [29] library is used in Caffe [15] to accelerate the matrix multiplication in convolutional and fully-connected layers. However, it may not always be an option for mobile devices. Therefore, we measure the run-time speed under two settings, i.e. with BLAS enabled or disabled. The realistic speed-up is slightly lower with BLAS on, indicating that Q-CNN does not benefit as much from BLAS as CNN does. Other optimization techniques, e.g. SIMD, SSE, and AVX [4], may further improve our realistic speed-up, and shall be explored in the future.
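The theoretical column in Table 10 below is just the FLOP ratio, and the realistic columns are the measured time ratios:

```python
# Theoretical speed-up from the FLOP counts in Table 10:
print(7.29e8 / 1.75e8)   # ~4.17x; the table reports 4.15x, presumably from unrounded counts
# Realistic speed-up from the measured times:
print(321.10 / 75.62)    # 4.25x with BLAS off
print(167.79 / 55.35)    # 3.03x with BLAS on
```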
Table 10. Comparison on the theoretical and realistic speed-up on AlexNet (CPU only, single-threaded). Here we use the ATLAS library, which is the default BLAS choice in Caffe [15].

| Method   | FLOPs         | Time (ms), BLAS off | Time (ms), BLAS on |
|----------|---------------|---------------------|--------------------|
| CNN      | 7.29e+8       | 321.10              | 167.79⁴            |
| Q-CNN    | 1.75e+8       | 75.62               | 55.35              |
| Speed-up | 4.15× (theo.) | 4.25× (real.)       | 3.03× (real.)      |
# 6. Conclusion
In this paper, we propose a unified framework to simultaneously accelerate and compress convolutional neural networks. We quantize the network parameters to enable efficient test-phase computation. Extensive experiments are conducted on MNIST and ILSVRC-12, and our approach achieves outstanding speed-up and compression rates, with only negligible loss in classification accuracy.
# 7. Acknowledgement
This work was supported in part by the National Natural Science Foundation of China (Grant No. 61332016), and the 863 program (Grant No. 2014AA015105).
⁴ This is Caffe's run-time speed. The code for the other three settings is at https://github.com/jiaxiang-wu/quantized-cnn.
# References
[1] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In British Machine Vision Conference (BMVC), 2014.
[2] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In International Conference on Machine Learning (ICML), pages 2285-2294, 2015.
[3] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. High-performance neural networks for visual object classification. CoRR, abs/1102.0183, 2011.
[4] I. Corporation. Intel architecture instruction set extensions programming reference. Technical report, Intel Corporation, Feb 2016.
[5] M. Courbariaux, Y. Bengio, and J. David. Training deep neural networks with low precision multiplications. In International Conference on Learning Representations (ICLR), 2015.
[6] M. Denil, B. Shakibi, L. Dinh, M. A. Ranzato, and N. de Freitas. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems (NIPS), pages 2148-2156, 2013.
[7] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems (NIPS), pages 1269-1277, 2014.
[8] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. CoRR, abs/1503.02531, 2015.
[9] R. B. Girshick. Fast R-CNN. CoRR, abs/1504.08083, 2015.
[10] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 580-587, 2014.
[11] Y. Gong, L. Liu, M. Yang, and L. D. Bourdev. Compressing deep convolutional networks using vector quantization. CoRR, abs/1412.6115, 2014.
[12] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2015.
[13] M. Jaderberg, A. Vedaldi, and A. Zisserman. Speeding up convolutional neural networks with low rank expansions. In British Machine Vision Conference (BMVC), 2014.
[14] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 33(1):117-128, Jan 2011.
[15] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. CoRR, abs/1408.5093, 2014.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1106-1114, 2012.
[17] V. Lebedev, Y. Ganin, M. Rakhuba, I. V. Oseledets, and V. S. Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. In International Conference on Learning Representations (ICLR), 2015.
[18] V. Lebedev and V. S. Lempitsky. Fast ConvNets using group-wise brain damage. CoRR, abs/1506.02515, 2015.
[19] Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989.
[20] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[21] C. Leng, J. Wu, J. Cheng, X. Bai, and H. Lu. Online sketching hashing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2503-2511, 2015.
[22] C. Leng, J. Wu, J. Cheng, X. Zhang, and H. Lu. Hashing for distributed data. In International Conference on Machine Learning (ICML), pages 1642-1650, 2015.
[23] G. Levi and T. Hassner. Age and gender classification using convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 34-42, 2015.
[24] C. Li, Q. Liu, J. Liu, and H. Lu. Learning ordinal discriminative features for age estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2570-2577, 2012.
[25] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. CoRR, abs/1603.05279, 2016.
[26] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), pages 1-42, 2015.
[27] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.
[28] S. Srinivas and R. V. Babu. Data-free parameter pruning for deep neural networks. In British Machine Vision Conference (BMVC), pages 31.1-31.12, 2015.
[29] R. C. Whaley and A. Petitet. Minimizing development and maintenance costs in supporting persistently optimized BLAS. Software: Practice and Experience, 35(2):101-121, Feb 2005.
[30] Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. J. Smola, L. Song, and Z. Wang. Deep fried convnets. CoRR, abs/1412.7149, 2014.
[31] X. Zhang, J. Zou, K. He, and J. Sun. Accelerating very deep convolutional networks for classification and detection. CoRR, abs/1505.06798, 2015.
[32] X. Zhang, J. Zou, X. Ming, K. He, and J. Sun. Efficient and accurate approximations of nonlinear convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1984-1992, 2015.
# Appendix A: Additional Results
In the submission, we report the performance after quantizing all the convolutional layers in AlexNet, and quantizing all the fully-connected layers in CaffeNet. Here, we present experimental results for some other settings.
# Quantizing Convolutional Layers in CaffeNet
We quantize all the convolutional layers in CaffeNet, and the results are as demonstrated in Table 11. Furthermore, we fine-tune the quantized CNN model learned with error correction (C'_s = 8, K = 128), and the increases of the top-1/5 error rates are 1.15% and 0.75%, compared to the original CaffeNet.
Table 11. Comparison on the speed-up rates and the increase of top-1/5 error rates for accelerating all the convolutional layers in CaffeNet, without fine-tuning.

| Method     | Para. | Speed-up | Top-1 Err. ↑ | Top-5 Err. ↑ |
|------------|-------|----------|--------------|--------------|
| Q-CNN      | 4/64  | 3.32×    | 18.69%       | 16.73%       |
| Q-CNN      | 6/64  | 4.32×    | 32.84%       | 33.55%       |
| Q-CNN      | 6/128 | 3.71×    | 20.08%       | 18.31%       |
| Q-CNN      | 8/128 | 4.27×    | 35.48%       | 37.82%       |
| Q-CNN (EC) | 4/64  | 3.32×    | 1.22%        | 0.97%        |
| Q-CNN (EC) | 6/64  | 4.32×    | 2.44%        | 1.83%        |
| Q-CNN (EC) | 6/128 | 3.71×    | 1.57%        | 1.12%        |
| Q-CNN (EC) | 8/128 | 4.27×    | 2.30%        | 1.71%        |
# Quantizing Convolutional Layers in CNN-S
We quantize all the convolutional layers in CNN-S, and the results are as demonstrated in Table 12. Furthermore, we fine-tune the quantized CNN model learned with error correction (C'_s = 8, K = 128), and the increases of the top-1/5 error rates are 1.24% and 0.63%, compared to the original CNN-S.
Table 12. Comparison on the speed-up rates and the increase of top-1/5 error rates for accelerating all the convolutional layers in CNN-S, without fine-tuning.

| Method     | Para. | Speed-up | Top-1 Err. ↑ | Top-5 Err. ↑ |
|------------|-------|----------|--------------|--------------|
| Q-CNN      | 4/64  | 3.69×    | 19.87%       | 16.77%       |
| Q-CNN      | 6/64  | 5.17×    | 45.74%       | 48.67%       |
| Q-CNN      | 6/128 | 4.78×    | 27.86%       | 25.09%       |
| Q-CNN      | 8/128 | 5.92×    | 46.18%       | 50.26%       |
| Q-CNN (EC) | 4/64  | 3.69×    | 1.60%        | 0.92%        |
| Q-CNN (EC) | 6/64  | 5.17×    | 3.49%        | 2.32%        |
| Q-CNN (EC) | 6/128 | 4.78×    | 2.07%        | 1.32%        |
| Q-CNN (EC) | 8/128 | 5.92×    | 3.42%        | 2.17%        |
# Quantizing Fully-connected Layers in AlexNet
We quantize all the fully-connected layers in AlexNet, and the results are as demonstrated in Table 13.

Table 13. Comparison on the compression rates and the increase of top-1/5 error rates for compressing all the fully-connected layers in AlexNet, without fine-tuning.

| Method     | Para. | Compression | Top-1 Err. ↑ | Top-5 Err. ↑ |
|------------|-------|-------------|--------------|--------------|
| Q-CNN      | 2/16  | 13.96×      | 0.25%        | 0.27%        |
| Q-CNN      | 3/16  | 19.14×      | 0.77%        | 0.64%        |
| Q-CNN      | 3/32  | 15.25×      | 0.54%        | 0.33%        |
| Q-CNN      | 4/32  | 18.71×      | 0.71%        | 0.69%        |
| Q-CNN (EC) | 2/16  | 13.96×      | 0.14%        | 0.20%        |
| Q-CNN (EC) | 3/16  | 19.14×      | 0.40%        | 0.22%        |
| Q-CNN (EC) | 3/32  | 15.25×      | 0.40%        | 0.21%        |
| Q-CNN (EC) | 4/32  | 18.71×      | 0.46%        | 0.38%        |
# Quantizing Fully-connected Layers in CNN-S
We quantize all the fully-connected layers in CNN-S, and the results are as demonstrated in Table 14.

Table 14. Comparison on the compression rates and the increase of top-1/5 error rates for compressing all the fully-connected layers in CNN-S, without fine-tuning.

| Method     | Para.                  |
|------------|------------------------|
| Q-CNN      | 2/16, 3/16, 3/32, 4/32 |
| Q-CNN (EC) | 2/16, 3/16, 3/32, 4/32 |