As we know, quantum field theory is a powerful framework for understanding a huge range of phenomena in Nature, in areas as far apart as high energy physics and condensed matter physics. Although the underlying philosophies are different, the two fields share quantum field theory as their common language. In high energy physics the philosophy is reductionism, where the goal is to figure out the UV physics underlying our effective low energy IR physics. The standard model of particle physics is believed to be an effective low energy theory. To see what really happens in the UV, we are required to go beyond the standard model by reaching a higher energy scale. This is the reason why the LHC was built in Geneva, and also the reason for the plan to go to the Great Collider from the Great Wall in China. In condensed matter physics, on the other hand, the philosophy is emergence. Actually we have a theory of everything for condensed matter physics, namely QED, or the Schrodinger equation for electrons with the Coulomb interaction

Figure 1: The Penrose diagram for the global de Sitter space, where the planar de Sitter space associated with the observer located at the south pole is given by the shaded portion.
1601.00257#7
Modave Lectures on Applied AdS/CFT with Numerics
These lecture notes are intended to serve as an introduction to applied AdS/CFT with numerics for an audience of graduate students and others with little background in the subject. The presentation begins with a poor man's review of current status of quantum gravity, where AdS/CFT correspondence is believed to be the well formulated quantum gravity in the anti-de Sitter space. Then we present the basic ingredients in applied AdS/CFT and introduce the relevant numerics for solving differential equations into which the bulk dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take the zero temperature holographic superfluid as a concrete example for case study. In passing, we also present some new results, which include the numerical evidence as well as an elegant analytic proof for the equality between the superfluid density and particle density, namely $\rho_s=\rho$, and the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field theory for the sound speed in the large chemical potential limit.
http://arxiv.org/pdf/1601.00257
Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang
gr-qc, hep-th
typos corrected, clarifications made, JHEP style, 1+23 pages, 12 figures, Mathematica code available upon request
PoS Modave2015 (2016) 003
gr-qc
20160103
20160106
[ { "id": "1510.02804" } ]
among them. What condensed matter physicists are concerned with is how to engineer various low temperature IR fixed points, namely various phases, from such a known UV theory. Such a variety of phases gives rise to a man-made multiverse, which actually resonates with the landscape suggested by string theory.

On the other hand, general relativity tells us that gravity is geometry. Gravity is different; so subtle is gravity. The longstanding issue in fundamental physics is how to reconcile general relativity with quantum field theory. People like to give it a name, Quantum Gravity, although we have not fully succeeded along this lane. Here is a poor man's perspective1 on the current status of quantum gravity, depending on the asymptotic geometry of spacetime. The reason for this dependence is twofold. First, due to the existence of the Planck scale $l_p$, spacetime is doomed in the sense that one cannot define local field operators in a $d+1$ dimensional gravitational theory. Instead, the observables can live only on the boundary of spacetime. Second, it is the dependence on the asymptopia that embodies the background independence of quantum gravity.
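As a quick numerical aside (ours, not part of the original notes), the four dimensional Planck length $l_p = (\hbar G/c^3)^{1/2}$ that signals this breakdown of local operators can be evaluated directly:

```python
import math

# Four-dimensional Planck length l_p = sqrt(hbar*G/c^3) in SI units.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

l_p = math.sqrt(hbar * G / c**3)
print(f"l_p = {l_p:.3e} m")  # about 1.6e-35 m
```

Any putative local operator probing distances of this order would collapse into a black hole, which is the intuitive reason the observables retreat to the boundary.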
# 2.1 De Sitter space: Meta-observables

If the spacetime is asymptotically de Sitter as

$ds^2 = -dt^2 + l^2\cosh^2\frac{t}{l}\,d\Omega_d^2 \qquad (2.1)$

when $t \to \pm\infty$, then by the coordinate transformation $u = 2\tan^{-1}e^{\frac{t}{l}}$ the metric becomes

$ds^2 = \frac{l^2}{\sin^2 u}\left(du^2 + d\chi^2 + \sin^2\chi\, d\Omega_{d-1}^2\right) \qquad (2.2)$

1This is a poor man's perspective because we shall try our best not to touch upon string theory, although it is evident that this perspective is well shaped by string theory in a direct or indirect way throughout these lecture notes.

with $\chi$ the polar angle for the $d$-sphere. We plot the Penrose diagram for de Sitter space in Figure 1, whence both the past and future conformal infinity $\mathscr{I}^\mp$ are spacelike. As a result, any observer can only detect and influence a portion of the whole spacetime. Moreover, any point in $\mathscr{I}^+$ is causally connected by a null geodesic to its antipodal point in $\mathscr{I}^-$ for de Sitter. In view of this, Witten has proposed the meta-observables for quantum gravity in de Sitter space, namely
$\langle g_f|g_i\rangle = \int_{g_i}^{g_f}\mathcal{D}g\, e^{iS[g]} \qquad (2.3)$

with $g_f$ and $g_i$ a set of data specified on $\mathscr{I}^+$ and $\mathscr{I}^-$, respectively. Then one can construct the Hilbert space $\mathcal{H}_i$ at $\mathscr{I}^-$ for quantum gravity in de Sitter space with the inner product $(j,i) = \langle\Theta j|i\rangle$ defined by the CPT transformation $\Theta$. The Hilbert space $\mathcal{H}_f$ at $\mathscr{I}^+$ can be constructed in a similar fashion. At the perturbative level, the dimension of the Hilbert space for quantum gravity in de Sitter space is infinite, which is evident from the past-future singularity of the meta-correlation functions at those points connected by the aforementioned geodesics. But it is suspected that the non-perturbative dimension of the Hilbert space is finite. This is all one can say with such meta-observables[1].
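As a quick numerical sanity check of ours (not in the original notes), one can verify the coordinate transformation behind (2.2): with $l=1$ and $u = 2\tan^{-1}e^{t}$, the identity $\cosh t = 1/\sin u$ holds, which is exactly what turns the global metric (2.1) into the conformal form (2.2):

```python
import numpy as np

# With l = 1, check cosh(t) = 1/sin(u) under u = 2*arctan(e^t),
# the identity that maps the global metric (2.1) into the
# conformally flat form (2.2) with conformal factor 1/sin^2(u).
t = np.linspace(-5.0, 5.0, 101)
u = 2.0 * np.arctan(np.exp(t))          # u ranges over (0, pi)
assert np.allclose(np.cosh(t), 1.0 / np.sin(u), rtol=1e-10)
print("cosh(t) == 1/sin(u) on the whole grid")
```

The limits $t \to \mp\infty$ map to $u \to 0, \pi$, so the conformal infinities $\mathscr{I}^\mp$ sit at finite coordinate values, which is what makes the Penrose diagram of Figure 1 a finite square.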
However, there are also different and more optimistic perspectives. Among others, inspired by AdS/CFT, Strominger has proposed the dS/CFT correspondence. First, with $\mathscr{I}^+$ identified with $\mathscr{I}^-$ through the above null geodesics, the dual CFT lives only on one sphere rather than on two spheres. Second, instead of working with the global de Sitter space, the dS/CFT correspondence can be naturally formulated in the causal past of any given observer, where the bulk spacetime is the planar de Sitter space and the dual CFT lives on $\mathscr{I}^-$. For details, the readers are referred to Strominger's original paper as well as his Les Houches lectures[2, 3].
# 2.2 Minkowski space: S-Matrix program

The situation is much better if the spacetime is asymptotically flat. As the Penrose diagram for Minkowski space in Figure 2 shows, the conformal infinity is lightlike. In this case, the only observable is the scattering amplitude, abbreviated as S-Matrix, which connects the out states at $\mathscr{I}^+$ to the in states at $\mathscr{I}^-$2. One can claim to have a well defined quantum gravity in asymptotically flat space once a sensible recipe is made for the computation of the S-Matrix with gravitons. Actually, inspired by the BCFW recursion relation[4], there has been much progress over the last few years along this direction in the so called S-Matrix program, in which the scattering amplitude is constructed without a local Lagrangian, resonant with the non-locality of quantum gravity[5]. Traditionally, the S-Matrix is computed by Feynman diagram techniques, where the Feynman rules come from the local Lagrangian. But the computation becomes more and more complicated when the scattering process involves either more external legs or higher loops. While in the S-Matrix program the recipe for the
Figure 3: The Penrose diagram for the global anti-de Sitter space, where the conformal infinity $\mathscr{I}$ itself can be a spacetime on which the dynamics can live.

computation of the scattering amplitude, made out of the universal properties of the S-Matrix, such as Poincare or BMS symmetry, unitarity and analyticity, turns out to be far more efficient. It is expected that this ongoing S-Matrix program will eventually lead us towards a well formulated quantum gravity in asymptotically flat space.

# 2.3 Anti-de Sitter space: AdS/CFT correspondence

The best situation is for the spacetime which is asymptotically anti-de Sitter as

$ds^2 = \frac{l^2}{\cos^2\chi}\left(-dt^2 + d\chi^2 + \sin^2\chi\, d\Omega_{d-1}^2\right) \qquad (2.4)$
with $\chi \in [0, \frac{\pi}{2})$. As seen from the Penrose diagram for anti-de Sitter space in Figure 3, the conformal infinity $\mathscr{I}$ is timelike in this case, where we can have a well formulated quantum theory of gravity by the AdS/CFT correspondence[6, 7, 8]. Namely, the quantum gravity in the bulk AdS$_{d+1}$ can be holographically formulated in terms of a CFT$_d$ on the boundary without gravity, and vice versa. We shall elaborate on AdS/CFT in the subsequent section. Here we would like to mention one very interesting feature of AdS/CFT, that is to say, generically we have no local Lagrangian for the dual CFT, which somehow echoes the aforementioned S-Matrix program.

# 3. Applied AdS/CFT

# 3.1 What AdS/CFT is

To be a little bit more precise about what AdS/CFT is, let us first recall the very basic object in quantum field theory, namely the generating functional, which is defined as

$Z_d[J] = \int\mathcal{D}\psi\, e^{iS_d[\psi] + i\int d^dx\, J\mathcal{O}}. \qquad (3.1)$
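To make this definition concrete, here is a finite dimensional toy model of ours (not from the original notes): for a free Euclidean lattice field, the Gaussian integral gives $\ln Z[J] = \frac{1}{2}J^T M^{-1} J$ up to a $J$-independent constant, so functional derivatives with respect to the source reduce to ordinary partial derivatives and can even be taken by finite differences:

```python
import numpy as np

# Free (Gaussian) lattice field on N sites: S = (1/2) phi^T M phi,
# with M the discrete (-d^2/dx^2 + m^2) operator. Then
# ln Z[J] = (1/2) J^T M^{-1} J up to a constant, and the second
# derivative d^2 ln Z / dJ_i dJ_j is the two-point function (M^-1)_ij.
N, m2, eps = 8, 0.5, 1e-4
M = (2.0 + m2) * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
K = np.linalg.inv(M)  # exact propagator

def lnZ(J):
    return 0.5 * J @ K @ J

# Second-order finite difference of ln Z at J = 0 recovers K_ij.
i, j = 2, 5
e_i, e_j = np.eye(N)[i], np.eye(N)[j]
G_ij = (lnZ(eps * (e_i + e_j)) - lnZ(eps * e_i)
        - lnZ(eps * e_j) + lnZ(0 * e_i)) / eps**2
assert abs(G_ij - K[i, j]) < 1e-8
print("finite-difference two-point function matches M^{-1}")
```

The same differentiate-the-generating-functional logic applies whatever produces $Z[J]$, which is the reading of the holographic dictionary later on.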
Whence one can obtain the $n$-point correlation function for the operator $\mathcal{O}$ by taking the $n$-th functional derivative of the generating functional with respect to the source $J$. For example,

$\langle\mathcal{O}(x)\rangle = \frac{\delta Z_d}{i\delta J(x)}, \qquad (3.2)$

$\langle\mathcal{O}(x_1)\mathcal{O}(x_2)\rangle = \frac{\delta^2 Z_d}{i\delta J(x_1)\, i\delta J(x_2)}. \qquad (3.3)$

As we know, we can obtain such a generating functional by perturbative expansion using Feynman diagram techniques for a weakly coupled quantum field theory, but such a perturbative method obviously breaks down when the quantum field theory involved is strongly coupled, unless one can find its weak dual. AdS/CFT provides us with such a dual for a strongly coupled quantum field theory, namely a classical gravitational theory with one extra dimension. So now let us turn to general relativity, where the basic object is the action, given by

$S_{d+1} = \frac{1}{16\pi G}\int d^{d+1}x\,\sqrt{-g}\left(R + \frac{d(d-1)}{l^2} + \mathcal{L}_{matter}\right) \qquad (3.4)$
for AdS gravity. Here, for the present illustration and later usage, we would like to choose the Lagrangian for the matter fields as

$\mathcal{L}_{matter} = \frac{l^2}{Q^2}\left(-\frac{1}{4}F^{ab}F_{ab} - |D\Phi|^2 - m^2|\Phi|^2\right) \qquad (3.5)$

with $F = dA$, $D = \nabla - iA$ and $Q$ the charge of the complex scalar field. The variation of the action gives rise to the equations of motion

$G_{ab} - \frac{d(d-1)}{2l^2}g_{ab} = \frac{l^2}{Q^2}\left[F_{ac}F_b{}^c + 2D_a\Phi\overline{D_b\Phi} - \left(\frac{1}{4}F_{cd}F^{cd} + |D\Phi|^2 + m^2|\Phi|^2\right)g_{ab}\right], \qquad (3.6)$

$\nabla_a F^{ab} = i(\bar{\Phi}D^b\Phi - \Phi\overline{D^b\Phi}), \qquad (3.7)$

$D_a D^a\Phi - m^2\Phi = 0. \qquad (3.8)$
Note that the equations of motion are generically second order PDEs. So to extrapolate the bulk solution from the AdS boundary, one is required to specify a pair of boundary conditions for each bulk field at the conformal boundary of AdS, which can be read off from the asymptotic behavior of the bulk fields near the AdS boundary:

$ds^2 \to \frac{l^2}{z^2}\left[dz^2 + (\gamma_{\mu\nu} + t_{\mu\nu}z^d)dx^\mu dx^\nu\right], \qquad (3.9)$

$A_\mu \to a_\mu + b_\mu z^{d-2}, \qquad (3.10)$

$\Phi \to \phi_- z^{\Delta_-} + \phi_+ z^{\Delta_+} \qquad (3.11)$
with $\Delta_\pm = \frac{d}{2} \pm \sqrt{\frac{d^2}{4} + m^2l^2}$3. Namely, $(\gamma_{\mu\nu}, t_{\mu\nu})$ are the boundary data for the bulk metric field, $(a_\mu, b_\mu)$ for the bulk gauge field, and $(\phi_-, \phi_+)$ for the bulk scalar field. But such pairs usually lead to singular solutions deep in the bulk. To avoid these singular solutions, one can instead specify only one boundary condition from each pair, such as $(\gamma_{\mu\nu}, a_\mu, \phi_-)$. We denote these boundary data by $J$, a notation whose justification will become obvious later on. At the same time we also require regularity of the desired solution in the bulk. In this sense, the regular solution is uniquely determined by the boundary data $J$, and the on-shell action of the regular solution will be a functional of $J$. What AdS/CFT tells us is that this on-shell action in the bulk can be identified with the generating functional of the strongly coupled quantum field theory living on the boundary, i.e.,

$Z_d[J] = S_{d+1}[J], \qquad (3.12)$
where apparently $J$ has a dual meaning, serving not only as the source for the boundary quantum field theory but also as the boundary data for the bulk fields. In particular, $\gamma_{\mu\nu}$ sources the operator for the boundary energy momentum tensor, whose expectation value is given by (3.3) as $t_{\mu\nu}$; $a_\mu$ sources a global $U(1)$ conserved current operator, whose expectation value is given as $b_\mu$; and the expectation value of the operator dual to the source $\phi_-$ is given as $\phi_+$, up to a possible proportionality coefficient. The conformal dimensions of these dual operators can be read off from (3.9) by making the scaling transformation $(z, x^\mu) \to (\alpha z, \alpha x^\mu)$ as $d$, $d-1$, and $\Delta_+$ individually.

3Here we are working in the axial gauge for the bulk metric and gauge fields, which can always be achieved. In addition, although the mass squared is allowed to be negative in AdS, it cannot be below the BF bound $-\frac{d^2}{4l^2}$.
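As a small numerical illustration of ours (not from the original notes), consider a static, homogeneous probe scalar in pure AdS$_{d+1}$ in Poincaré coordinates with $l=1$. It obeys the Euler equation $z^2\Phi'' + (1-d)z\Phi' - m^2\Phi = 0$, whose exponents are precisely $\Delta_\pm$, so integrating from the interior toward the boundary reproduces the $z^{\Delta_+}$ falloff of (3.11); the code also checks the BF bound condition of footnote 3:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Probe scalar in AdS_{d+1} (Poincare coordinates, l = 1), static and
# homogeneous: z^2 Phi'' + (1 - d) z Phi' - m^2 Phi = 0, an Euler
# equation with exponents Delta_± = d/2 ± sqrt(d^2/4 + m^2 l^2).
d, m2 = 3, -2.0                     # a typical choice of mass in AdS_4
assert m2 >= -d**2 / 4              # BF bound: Delta_± stay real
Dm = d / 2 - np.sqrt(d**2 / 4 + m2) # Delta_- = 1 here
Dp = d / 2 + np.sqrt(d**2 / 4 + m2) # Delta_+ = 2 here

def rhs(z, y):
    phi, dphi = y
    return [dphi, ((d - 1) * z * dphi + m2 * phi) / z**2]

# Start on the pure Phi = z^Delta_+ branch at z = 1 and integrate
# toward the boundary z -> 0, then compare against the exact falloff.
sol = solve_ivp(rhs, [1.0, 0.01], [1.0, Dp], rtol=1e-10, atol=1e-12)
phi_end = sol.y[0, -1]
rel_err = abs(phi_end - 0.01**Dp) / 0.01**Dp
assert rel_err < 1e-3
print(f"Phi(0.01) = {phi_end:.6e}, z^Delta_+ = {0.01**Dp:.6e}")
```

In an actual holographic computation one solves the full coupled system (3.6)–(3.8) instead, but the near-boundary reading of $(\phi_-, \phi_+)$ works in exactly this way.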
Here is a caveat on the validity of (3.12). Although such a boundary/bulk duality is believed to hold in more general circumstances, (3.12) works for a large $N$ strongly coupled quantum field theory on the boundary, where $N$ and the coupling parameter of the dual quantum field theory are generically proportional to some powers of the relevant bulk scales, respectively. In order to capture the $\frac{1}{N}$ correction to the dual quantum field theory by holography, one is required to calculate the one-loop partition function on top of the classical background solution in the bulk. On the other hand, to see the finite coupling effect in the dual quantum field theory by holography, one is required to work with a higher derivative gravity theory in the bulk. But in what follows, for simplicity, we shall work exclusively with (3.12) in its applicability regime.
1601.00257#22
Modave Lectures on Applied AdS/CFT with Numerics
These lecture notes are intended to serve as an introduction to applied AdS/CFT with numerics for an audience of graduate students and others with little background in the subject. The presentation begins with a poor man's review of current status of quantum gravity, where AdS/CFT correspondence is believed to be the well formulated quantum gravity in the anti-de Sitter space. Then we present the basic ingredients in applied AdS/CFT and introduce the relevant numerics for solving differential equations into which the bulk dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take the zero temperature holographic superfluid as a concrete example for case study. In passing, we also present some new results, which include the numerical evidence as well as an elegant analytic proof for the equality between the superfluid density and particle density, namely $\rho_s=\rho$, and the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field theory for the sound speed in the large chemical potential limit.
http://arxiv.org/pdf/1601.00257
Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang
gr-qc, hep-th
typos corrected, clarifications made, JHEP style, 1+23 pages, 12 figures, Mathematica code available upon request
PoS Modave2015 (2016) 003
gr-qc
20160103
20160106
Among others, we would like to conclude this subsection with three important implications of AdS/CFT. First, a finite temperature quantum field theory at finite chemical potential is dual to a charged black hole in the bulk. Second, the entanglement entropy of the dual quantum field theory can be calculated by holography as the area of the bulk minimal surface anchored onto the entangling surface[11, 12, 13]. Third, the extra bulk dimension represents the renormalization group flow direction for the boundary quantum field theory, with the AdS boundary as the UV, although the renormalization scheme is supposed to be different from the conventional one implemented in quantum field theory4.

# 3.2 Why AdS/CFT is reliable

But why is AdS/CFT reliable? In fact, besides its explicit implementations in string theory, such as the duality between Type IIB string theory in AdS5 × S5 and N = 4 SYM theory on the four dimensional boundary, where some results can be computed on both sides and turn out to match each other, there exist many hints from the inside of general relativity indicating that gravity is holographic. Here we simply list some of them as follows.
• Bekenstein-Hawking’s black hole entropy formula $S_{BH} = \frac{A}{4l_p^{d-1}}$[14].

• Brown-Henneaux’s asymptotic symmetry analysis for three dimensional gravity[15], where the derived central charge $c = \frac{3l}{2G}$ successfully reproduces the black hole entropy by the Cardy formula for conformal field theory[16].

• Brown-York’s surface tensor formulation of quasi-local energy and conserved charges[17]. Once we are brave enough to declare that this surface tensor is not only relevant for the bulk gravity but also describes a certain system living on the boundary, we end up with the long wave limit of AdS/CFT, namely the gravity/fluid correspondence, which has been well tested[18].

On the other hand, we can also see how such an extra bulk dimension emerges from the quantum field theory perspective. In particular, inspired by Swingle’s seminal work on the connection between the MERA tensor network state for quantum critical systems and AdS space[19], Qi has recently proposed an exact holographic mapping to generate the bulk Hilbert space of the same dimension from the boundary Hilbert space[20], which echoes the aforementioned renormalization group flow implication of AdS/CFT.

4This implication is sometimes dubbed as RG = GR.
Keeping all of these in mind, we shall take AdS/CFT as a first principle and explore its various applications in what follows.

# 3.3 How useful AdS/CFT is

As alluded to above, AdS/CFT is naturally suited for us to address strongly coupled dynamics and non-equilibrium processes, by mapping the involved hard quantum many body problems onto classical few body problems. There are two approaches towards the construction of holographic models. One is called the top-down approach, where the microscopic content of the dual boundary theory is generically known because the construction originates in string theory. The other is called the bottom-up approach, which can be regarded as a kind of effective field theory with one extra dimension for the dual boundary theory.
By either approach, we can apply AdS/CFT to QCD, in particular to the quark-gluon plasma that QCD underlies, ending up with AdS/QCD[21, 22]. On the other hand, taking into account that there is a bunch of strongly coupled systems in condensed matter physics, such as high Tc superconductors, liquid Helium, and non-Fermi liquids, we can also apply AdS/CFT to condensed matter physics, ending up with AdS/CMT[23, 24, 25, 26, 27]. Note that the bulk dynamics eventually boils down to a set of differential equations, whose solutions are generically not amenable to an analytic treatment. So one of the central tasks in applied AdS/CFT is to find numerical solutions to differential equations. In the next section, we shall provide a basic introduction to the main numerical methods for solving differential equations in applied AdS/CFT.
# 4. Numerics for Solving Differential Equations

Roughly speaking, there are three numerical schemes to solve differential equations by transforming them into algebraic equations, namely the finite difference method, the finite element method, and the spectral method. According to our experience with the numerics in applied AdS/CFT, it is favorable to write a code from scratch for each problem you are faced with. In particular, the variant of the spectral method known as the pseudo-spectral method turns out to be the most efficient in solving differential equations along the space directions, where the Newton-Raphson iteration method is extensively employed if the resultant algebraic equations are non-linear. On the other hand, a finite difference method such as the Runge-Kutta method is usually used to deal with the dynamical evolution along the time direction. So now we would like to elaborate a little bit on the Newton-Raphson method, the pseudo-spectral method, and the Runge-Kutta method one by one.

Figure 4: Newton-Raphson iteration map is used to find the rightmost root for a non-linear algebraic equation.

# 4.1 Newton-Raphson method
To find the desired root of a given non-linear function f(x), we can start with a wisely guessed initial point x_k. Then, as shown in Figure 4, the Newton-Raphson iteration map takes us to the next point x_{k+1} as

x_{k+1} = x_k − f′(x_k)^{−1} f(x_k), (4.1)

which is supposed to be closer to the desired root. By a finite number of iterations, we eventually end up with a good approximation to the desired root. If we are required to find the root of a group of non-linear functions F(X), then the iteration map is given by

X_{k+1} = X_k − [(∂F/∂X)^{−1} F]|_{X_k}, (4.2)

where the formidable Jacobian can be tamed by the Taylor expansion trick, since the expansion coefficient of the linear term in the Taylor expansion F(X) = F(X_0) + (∂F/∂X)|_{X_0}(X − X_0) + · · · is simply the Jacobian.
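Since the iteration (4.2) is the workhorse behind all the static solutions below, here is a minimal sketch of it in Python (the lectures' own code is in Mathematica, available upon request; the example system, a circle intersected with a parabola, is our own choice):

```python
import numpy as np

def newton_raphson(F, J, X0, tol=1e-12, max_iter=50):
    """Solve F(X) = 0 by the iteration X_{k+1} = X_k - J(X_k)^{-1} F(X_k),
    cf. (4.2), where J(X) is the Jacobian dF/dX."""
    X = np.asarray(X0, dtype=float)
    for _ in range(max_iter):
        dX = np.linalg.solve(J(X), F(X))
        X = X - dX
        if np.max(np.abs(dX)) < tol:
            break
    return X

# Example (ours): intersect the circle x^2 + y^2 = 4 with the parabola y = x^2
F = lambda X: np.array([X[0]**2 + X[1]**2 - 4.0, X[1] - X[0]**2])
J = lambda X: np.array([[2.0 * X[0], 2.0 * X[1]], [-2.0 * X[0], 1.0]])
root = newton_raphson(F, J, [1.0, 1.0])
```

In practice the analytic Jacobian is exactly the linear-term coefficient of the Taylor expansion mentioned above; when it is too unwieldy, a finite difference approximation of it works as well.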
# 4.2 Pseudo-spectral method

As we know, we can expand an analytic function in terms of a set of appropriate spectral functions as

f(x) = Σ_{n=1}^{N} C_n T_n(x), (4.3)

with N some truncation number, depending on the numerical accuracy you want to achieve. Then the derivative of this function is given by

f′(x) = Σ_{n=1}^{N} C_n T′_n(x). (4.4)

Whence the derivatives at the collocation points can be obtained from the values of this function at these points by the following differentiation matrix as

f′(x_i) = Σ_j D_{ij} f(x_j), (4.5)

where the matrix D = T′T^{−1}, with T_{in} = T_n(x_i) and T′_{in} = T′_n(x_i). With this differentiation matrix, the differential equation in consideration can be massaged into a group of algebraic equations for us to solve for the unknowns f(x_i), by requiring both that the equation hold at the collocation points and that the prescribed boundary conditions be satisfied.
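For a concrete flavor of the construction D = T′T^{−1}, here is a sketch in Python with Chebyshev polynomials on the Chebyshev-Gauss-Lobatto collocation points (the test function exp(sin x) is our own choice):

```python
import numpy as np

N = 16
# Chebyshev-Gauss-Lobatto collocation points x_i = cos(pi i / N) on [-1, 1]
t = np.pi * np.arange(N + 1) / N
x = np.cos(t)
n = np.arange(N + 1)
# T[i, n] = T_n(x_i), using T_n(cos t) = cos(n t)
T = np.cos(np.outer(t, n))
# T_n'(x) = n U_{n-1}(x), with U_n(cos t) = sin((n+1) t) / sin(t)
with np.errstate(divide='ignore', invalid='ignore'):
    U = np.sin(np.outer(t, n + 1)) / np.sin(t)[:, None]
U[0] = n + 1                      # limiting values U_n(1) = n + 1
U[-1] = ((-1.0) ** n) * (n + 1)   # limiting values U_n(-1) = (-1)^n (n + 1)
Tp = np.zeros_like(T)
Tp[:, 1:] = n[1:] * U[:, :-1]     # Tp[i, n] = T_n'(x_i)
D = Tp @ np.linalg.inv(T)         # differentiation matrix, cf. (4.5)

# spot check on f(x) = exp(sin x), whose derivative is cos(x) exp(sin x)
f = np.exp(np.sin(x))
err = np.max(np.abs(D @ f - np.cos(x) * f))
```

For smooth functions the error here is already tiny at modest N, which is the exponential convergence pointed out below.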
This is the underlying idea of the pseudo-spectral method. Among others, we would like to point out two great advantages of the pseudo-spectral method compared to the finite difference method and the finite element method. First, one can find the interpolating function for f(x) by the built-in procedure as follows:

f(x) = Σ_{n,i} T_n(x) (T^{−1})_{ni} f(x_i). (4.6)

Second, the numerical error decays exponentially with the truncation number N, rather than following the power law decay of the other two methods.

# 4.3 Runge-Kutta method

As mentioned before, we should employ a finite difference method to march along the time direction. But before that, we are required to massage the involved differential equation into the following ordinary differential equation

ẏ = f(y, t), (4.7)

which is actually the key step for one to investigate the temporal evolution in applied AdS/CFT. Once this non-trivial step is achieved, there is a bunch of finite difference schemes available for one to move forward. Among others, here we simply present the classical fourth order Runge-Kutta method as follows
k_1 = f(y_i, t_i),
k_2 = f(y_i + (∆t/2) k_1, t_i + ∆t/2),
k_3 = f(y_i + (∆t/2) k_2, t_i + ∆t/2),
k_4 = f(y_i + ∆t k_3, t_i + ∆t),
t_{i+1} = t_i + ∆t,
y_{i+1} = y_i + (∆t/6)(k_1 + 2k_2 + 2k_3 + k_4), (4.8)

because it is user friendly and applicable to all the temporal evolution problems we have considered so far[28, 29, 30, 31, 32, 33]5.

# 5. Holographic Superfluid at Zero Temperature

In this section, we would like to take the zero temperature holographic superfluid as a concrete example to demonstrate how to apply AdS/CFT with numerics. In due course, not only shall we introduce some relevant concepts, but we shall also present some new results[34].
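As a quick illustration before we proceed, the scheme (4.8) amounts to just a few lines of Python (the test problem, a harmonic oscillator evolved for one full period, is our own choice rather than anything holographic):

```python
import numpy as np

def rk4_step(f, y, t, dt):
    """One step of the classical fourth order Runge-Kutta scheme, cf. (4.8)."""
    k1 = f(y, t)
    k2 = f(y + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(y + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(y + dt * k3, t + dt)
    return y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Harmonic oscillator y'' = -y in first order form, started at (y, y') = (1, 0)
f = lambda y, t: np.array([y[1], -y[0]])
y, t = np.array([1.0, 0.0]), 0.0
dt = 2.0 * np.pi / 1000
for _ in range(1000):
    y = rk4_step(f, y, t, dt)
    t += dt
# after one full period, y should have returned close to (1, 0)
```

Consistently with the footnote below, the deviation of the final state from the initial one scales like O(∆t⁴).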
The action for the simplest model of holographic superfluid is just given by (3.4). To make our life easier, we shall work in the probe limit, namely the back reaction of the matter fields onto the metric is neglected, which can be achieved by taking the large Q limit. Thus we can put the matter fields on top of the background which solves the vacuum Einstein equation with a negative cosmological constant Λ = −d(d−1)/(2l²). For simplicity, we shall focus only on the zero temperature holographic superfluid, which can be implemented by choosing the AdS soliton as the bulk geometry[35], i.e.,

ds² = (l²/z²)[−dt² + dx² + dz²/f(z) + f(z)dθ²]. (5.1)

Here f(z) = 1 − (z/z_0)^d, with z = z_0 the tip where our geometry caps off and z = 0 the AdS boundary. To guarantee a smooth geometry at the tip, we are required to impose the periodicity 4πz_0/d (that is, 4πz_0/3 for the d = 3 case considered below) onto the θ coordinate. The inverse of this periodicity, set by z_0, is usually interpreted as the confining scale for the dual boundary theory.
In what follows, we will take the units in which l = 1, 16πGQ² = 1, and z_0 = 1. In addition, we shall focus exclusively on the action of the matter fields, because the leading Q² contribution has been frozen by the above fixed background geometry.

# 5.1 Variation of action, Boundary terms, and Choice of ensemble

The variational principle gives rise to the equations of motion if and only if the boundary terms vanish in the variation of the action. For our model, the variation of the action is given by

δS = ∫ d^{d+1}x √−g [∇_aF^{ab} + i(ΦD^bΦ̄ − Φ̄D^bΦ)] δA_b − ∫ d^dx √−h n_aF^{ab} δA_b + {[∫ d^{d+1}x √−g (D_aD^a − m²)Φ̄ δΦ − ∫ d^dx √−h n_aD^aΦ̄ δΦ] + C.C.}. (5.2)

To make the boundary terms vanish, we can fix A_a and Φ on the boundary. Fixing A_a amounts to saying that we are working with the grand canonical ensemble. In order to work with the canonical ensemble, where √−h n_aF^{ab} is fixed instead, we are required to add the additional boundary term ∫ d^dx √−h n_aF^{ab}A_b to the action, which is essentially a Legendre transformation. On the other hand, fixing φ_− gives rise to the standard quantization. We can
also have an alternative quantization by fixing φ_+ when −d²/4 ≤ m² < −d²/4 + 1[37]. In what follows, we shall restrict our attention to the grand canonical ensemble and the standard quantization for the case of d = 3 and m² = −2, whereby ∆_− = 1 and ∆_+ = 2.

5It is worthwhile to keep in mind that the accumulated numerical error is of order O(∆t⁴) for this classical Runge-Kutta method.

# 5.2 Asymptotic expansion, Counter terms, and Holographic renormalization

What we care about is the on-shell action, which can generically be shown to develop an IR divergence in the bulk by the asymptotic expansion near the AdS boundary, corresponding to the UV divergence for the dual boundary theory. The procedure to make the on-shell action finite by adding some appropriate counter terms is called holographic renormalization[38]. For our case, the on-shell action is given by
S_{on-shell} = (1/2)[∫ d^{d+1}x √−g (∇_aF^{ab})A_b − ∫ d^dx √−h n_aF^{ab}A_b] + (1/2){[∫ d^{d+1}x √−g Φ̄(D_aD^a − m²)Φ − ∫ d^dx √−h n_aΦ̄D^aΦ] + C.C.}
= −(1/2)[∫ d^{d+1}x √−g i(ΦD^bΦ̄ − Φ̄D^bΦ)A_b + ∫ d^dx √−h n_aF^{ab}A_b] − (1/2)(∫ d^dx √−h n_aΦ̄D^aΦ + C.C.). (5.3)

By the asymptotic expansion in (3.10) and (3.11), the divergence comes only from the last two boundary terms and can be read off as φ̄_−φ_−/z. So the holographic renormalization can be readily achieved by adding the boundary term −∫ d³x √−h |Φ|² to the original action. Whence we have

⟨j^µ⟩ = δS_ren/δa_µ, ⟨O⟩ = δS_ren/δφ̄_−, (5.4)

where a_µ denotes the boundary value of the gauge field, j^µ corresponds to the conserved particle current, and the expectation value of the scalar operator O is interpreted as the condensate order parameter of the superfluid. If this scalar operator acquires a nonzero expectation value spontaneously in the situation where the source is turned off, the boundary system is driven into a superfluid phase.
# 5.3 Background solution, Free energy, and Phase transition With the assumption that the non-vanishing bulk matter fields (Φ = zφ, At, Ax) do not depend on the coordinate θ, the equations of motion can be explicitly written as 6Note that the outward normal vector is given by na = −z( ∂ ∂z )a. – 13 – 0 = ∂2 t φ + (z + A2 +3z2∂zφ + (z3 − 1)∂2 t Ax − ∂t∂xAt − i(φ∂x ¯φ − ¯φ∂xφ) + 2Axφ ¯φ + 3z2∂zAx + (z3 − 1)∂2 (5.6) 0 = ∂2 0 = (z3 − 1)∂2 0 = ∂t∂zAt + i(φ∂z ¯φ − ¯φ∂zφ) − ∂z∂xAx, z Ax, xAt + ∂t∂xAx + 2 ¯φφAt + i( ¯φ∂tφ − Ψ∂t ¯φ), z At + 3z2∂zAt − ∂2 (5.7)
where the third one is the constraint equation and the last one reduces to the conservation equation for the boundary current when evaluated at the AdS boundary, i.e.,
$$\partial_t\rho = -\partial_x j^x. \tag{5.9}$$
To specialize to the homogeneous phase diagram of our holographic model, we further make the following ansatz for the non-vanishing bulk matter fields
$$\phi = \phi(z), \quad A_t = A_t(z). \tag{5.10}$$
Then the equations of motion for the static solution reduce to
\begin{align}
0 &= 3z^2\partial_z\phi + (z^3-1)\partial_z^2\phi + (z - A_t^2)\phi, \tag{5.11}\\
0 &= 2A_t\phi\bar{\phi} + 3z^2\partial_zA_t + (z^3-1)\partial_z^2A_t, \tag{5.12}\\
0 &= \phi\partial_z\bar{\phi} - \bar{\phi}\partial_z\phi, \tag{5.13}
\end{align}
where the last equation implies that we can always choose a gauge to make $\phi$ real. It is not hard to see that the above equations of motion have a trivial solution
$$\phi = 0, \quad A_t = \mu, \tag{5.14}$$
which corresponds to the vacuum phase with zero particle density. On the other hand, to obtain the non-trivial solution dual to the superfluid phase, we are required to resort to the pseudo-spectral method. As a demonstration, we here plot the non-trivial profiles for $\phi$ and $A_t$ at $\mu = 2$ in Figure 5. The variation of the particle density and condensate with respect to the chemical potential is plotted in Figure 6, which indicates that the phase transition from the vacuum to a superfluid occurs at $\mu_c = 1.715$. It is noteworthy that such a phenomenon is reminiscent of the recently observed quantum critical behavior of ultra-cold cesium atoms in an optical lattice across the vacuum to superfluid transition by tuning the chemical potential [36]. Moreover, the compactified dimension in the AdS soliton background can be naturally identified with the reduced dimension in optical lattices by the very steep harmonic potential, as both mechanisms reduce the effective dimension of the system in consideration in the low energy regime. On the other hand, note that the particle
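The pseudo-spectral method invoked here represents fields on the Chebyshev-Gauss-Lobatto grid, on which differentiation acts as a dense matrix. As a minimal illustration of this basic tool (our own Python sketch, not the authors' Mathematica code; `cheb` is the standard Trefethen construction), the following builds the differentiation matrix mapped to $z \in [0,1]$ and checks its spectral accuracy on a smooth function:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x on [-1, 1]
    (Trefethen's classic construction); D @ f(x) approximates f'(x) spectrally."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)           # x[0] = 1, ..., x[N] = -1
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                        # negative-sum trick for the diagonal
    return D, x

def cheb_unit(N):
    """Grid z in [0, 1] (z[0] = 1 is the soliton tip, z[N] = 0 the AdS boundary)
    and the corresponding derivative matrix d/dz."""
    D, x = cheb(N)
    return 2.0 * D, (x + 1.0) / 2.0

# spectral accuracy check on a smooth function
Dz, z = cheb_unit(24)
f = np.exp(z) * np.sin(3 * z)
fp = np.exp(z) * (np.sin(3 * z) + 3 * np.cos(3 * z))
err = np.max(np.abs(Dz @ f - fp))
```

With a couple dozen grid points the derivative is already accurate to many digits, which is why such small matrices suffice for the bulk ODEs below.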
Figure 5: The bulk profile for the scalar field and the time component of the gauge field at the chemical potential $\mu = 2$.

Figure 6: The variation of particle density and condensate with respect to the chemical potential, where we see the second order quantum phase transition take place at $\mu_c = 1.715$.

a zero temperature superfluid where the normal fluid component should disappear. As we will show later on by the linear response theory, this is actually the case. But to make sure that Figure 6 represents the genuine phase diagram for our holographic model, we are required to check whether the corresponding free energy density is the lowest in the grand canonical ensemble. By holography, the free energy density can be obtained from the renormalized on-shell Lagrangian of matter fields as follows7
$$F = \frac{1}{2}\left[\int dz\,\sqrt{-g}\,i(\bar{\Phi}D^a\Phi - \Phi\overline{D^a\Phi})A_a - \sqrt{-h}\,n_aA_bF^{ab}\Big|_{z=0}\right] = -\frac{1}{2}\mu\rho + \int dz\,(A_t\phi)^2, \tag{5.15}$$
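The critical value quoted in Figure 6 can be cross-checked independently: at a second order transition the vacuum $A_t = \mu$ becomes marginally unstable, so $\mu_c^2$ should be the lowest eigenvalue of the static scalar equation (5.11) linearized about $\phi = 0$, i.e. $(z^3-1)\phi'' + 3z^2\phi' + z\phi = \mu^2\phi$ with the source-free condition $\phi(0) = 0$ and regularity at the tip imposed automatically by the degenerate collocation row at $z = 1$. The eigenvalue rephrasing and the code are ours; the sketch recovers a value close to $\mu_c = 1.715$.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

N = 40
D, x = cheb(N)
z = (x + 1.0) / 2.0                      # z[0] = 1 (tip), z[N] = 0 (AdS boundary)
Dz = 2.0 * D                             # d/dz on [0, 1]

# marginal-stability operator from (5.11) with A_t = mu:
#   (z^3 - 1) phi'' + 3 z^2 phi' + z phi = mu^2 phi
A = np.diag(z**3 - 1.0) @ (Dz @ Dz) + np.diag(3.0 * z**2) @ Dz + np.diag(z)
A = A[:N, :N]                            # delete the z = 0 node, imposing phi(0) = 0
lam = np.linalg.eigvals(A)
lam = lam[(np.abs(lam.imag) < 1e-6) & (lam.real > 0.0)].real
mu_c = float(np.sqrt(lam.min()))
```

The same collocation matrix, fed into a Newton iteration instead of an eigensolver, produces the full nonlinear background of Figure 5.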
Figure 7: The difference of the free energy density for the superfluid phase from that for the vacuum phase.

compared to the vacuum phase when the chemical potential is greater than the critical value. So we are done.

# 5.4 Linear response theory, Optical conductivity, and Superfluid density

Now let us set up the linear response theory for the later calculation of the optical conductivity of our holographic model. To achieve this, we first decompose the field $\phi$ into its real and imaginary parts as
$$\phi = \phi_r + i\phi_i, \tag{5.16}$$
and assume that the perturbation bulk fields take the following form
$$\delta\phi_r = \delta\phi_r(z)e^{-i\omega t + iqx}, \quad \delta\phi_i = \delta\phi_i(z)e^{-i\omega t + iqx}, \quad \delta A_t = \delta A_t(z)e^{-i\omega t + iqx}, \quad \delta A_x = \delta A_x(z)e^{-i\omega t + iqx}, \tag{5.17}$$
since the background solution is static and homogeneous. With this, the perturbation equations can be simplified as
\begin{align}
0 &= -\omega^2\delta\phi_r - 2i\omega A_t\delta\phi_i - 2A_t\phi_r\delta A_t + (z + q^2 - A_t^2)\delta\phi_r + 3z^2\partial_z\delta\phi_r + (z^3-1)\partial_z^2\delta\phi_r, \tag{5.18}\\
0 &= -\omega^2\delta\phi_i + 2i\omega A_t\delta\phi_r + i\omega\phi_r\delta A_t + iq\phi_r\delta A_x + (z + q^2 - A_t^2)\delta\phi_i + 3z^2\partial_z\delta\phi_i + (z^3-1)\partial_z^2\delta\phi_i, \tag{5.19}\\
0 &= -\omega^2\delta A_x - \omega q\,\delta A_t + 2\phi_r^2\delta A_x - 2iq\phi_r\delta\phi_i + 3z^2\partial_z\delta A_x + (z^3-1)\partial_z^2\delta A_x, \tag{5.20}\\
0 &= (z^3-1)\partial_z^2\delta A_t + 3z^2\partial_z\delta A_t + q^2\delta A_t + \omega q\,\delta A_x + 2\phi_r^2\delta A_t + 4A_t\phi_r\delta\phi_r + 2i\omega\phi_r\delta\phi_i, \tag{5.21}\\
0 &= -i\omega\partial_z\delta A_t - iq\partial_z\delta A_x - 2(\partial_z\phi_r\delta\phi_i - \phi_r\partial_z\delta\phi_i), \tag{5.22}
\end{align}
where we have used $\phi_i = 0$ for the background solution.

Note that the gauge transformation
$$A \to A + \nabla\theta, \quad \phi \to \phi e^{i\theta} \tag{5.23}$$
with
$$\theta = \frac{1}{i}\lambda e^{-i\omega t + iqx} \tag{5.24}$$
induces a spurious solution to the above perturbation equations as
$$\delta A_t = -\lambda\omega, \quad \delta A_x = \lambda q, \quad \delta\phi = \lambda\phi. \tag{5.25}$$
We can remove such a redundancy by requiring $\delta A_t = 0$ at the AdS boundary8. In addition, $\delta\phi$ will also be set to zero at the AdS boundary later on. On the other hand, taking into account the fact that the perturbation equation (5.22) will be automatically satisfied in the whole bulk once the other perturbation equations are satisfied9, we can forget about (5.22) from now on. That is to say, we can employ the pseudo-spectral method to obtain the desired numerical solution by combining the remaining perturbation equations with the aforementioned boundary conditions, as well as the other boundary conditions at the AdS boundary, depending on the specific problem we want to solve. In particular, to calculate the optical conductivity for our holographic model, we can simply focus on the $q = 0$ mode and further impose $\delta A_x = 1$ at the AdS boundary. Then the optical conductivity can be extracted by holography as
$$\sigma(\omega) = \frac{\partial_z\delta A_x|_{z=0}}{i\omega} \tag{5.26}$$
for any positive frequency $\omega$10. According to the perturbation equations, the whole calculation is much simplified because $\delta A_x$ decouples from the other perturbation bulk fields. We simply plot the imaginary part of the optical conductivity in Figure 8 for both the vacuum and superfluid phases, because the real part vanishes due to the reality of the perturbation equation and boundary condition for $\delta A_x$. As it should be, the DC conductivity vanishes for the vacuum phase, but diverges for the superfluid phase due to the $\frac{1}{\omega}$ behavior of the imaginary part of the optical conductivity, by the Kramers-Kronig relation
$$\mathrm{Im}[\sigma(\omega)] = -\frac{1}{\pi}\mathcal{P}\int_{-\infty}^{\infty}\frac{\mathrm{Re}[\sigma(\omega')]}{\omega' - \omega}d\omega'. \tag{5.27}$$
Furthermore, according to the hydrodynamic description of a superfluid, the superfluid density $\rho_s$ can be obtained by fitting this zero-frequency pole as $\mathrm{Im}[\sigma(\omega)] = \frac{\rho_s}{\mu\omega}$ [39, 40, 41]. As expected, our numerics shows that the resultant superfluid density is exactly the same as the particle density within
our numerical accuracy.

Figure 8: The left panel is the imaginary part of the optical conductivity for the vacuum phase, and the right panel is for the superfluid phase at $\mu = 6.5$.

The other poles correspond to the gapped normal modes for $\delta A_x$, which we are not interested in since we are focusing on the low energy physics. Let us come back to the equality between the particle density and the superfluid density. Although this numerical result is 100 percent reasonable from the physical perspective, it is highly non-trivial in the sense that the superfluid density comes from the linear response theory while the particle density is a quantity associated with the equilibrium state. So it is better to have an analytic understanding of this remarkable equality. Here we would like to develop an elegant proof of this equality by a boost trick. To this end, we first realize that $\rho_s = -\mu\partial_z\delta A_x|_{z=0}$ with $\omega = 0$. Such an $\omega = 0$ perturbation can actually be implemented by a boost
$$t = \frac{1}{\sqrt{1-v^2}}(t' - vx'), \quad x = \frac{1}{\sqrt{1-v^2}}(x' - vt') \tag{5.28}$$
acting on the superfluid phase. Note that the background metric is invariant under such a boost. As a result, we end up with a new non-trivial solution as follows
$$\phi' = \phi, \quad A'_t = \frac{1}{\sqrt{1-v^2}}A_t, \quad A'_x = -\frac{v}{\sqrt{1-v^2}}A_t. \tag{5.29}$$
We expand this solution up to the linear order in $v$ as
$$\phi' = \phi, \quad A'_t = A_t, \quad A'_x = -vA_t, \tag{5.30}$$
which means that the linear perturbation $\delta A_x$ is actually proportional to the background solution $A_t$. So we have $\rho_s = \rho$ immediately.

8The only exception is the $\omega = 0$ case, which can always be separately managed if necessary.
9This result comes from the following two facts. One is related to the Bianchi identity $0 = \nabla_a v^a = z^4\partial_\mu(\sqrt{-g}\,v^\mu)$, which holds if the rest of the equations of motion are satisfied. The other is special to our holographic model, in which the readers are encouraged to show that the $z$ component of the Maxwell equation turns out to be satisfied automatically at $z = 1$ if the rest of the equations hold there.
10Note that $\sigma(-\bar{\omega}) = \bar{\sigma}(\omega)$, so we focus only on the positive frequency here.
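To make the conductivity computation behind (5.26) and Figure 8 concrete, here is a sketch (ours, not the authors' Mathematica code) of the decoupled $\delta A_x$ problem at $q = 0$ in the vacuum phase ($\phi_r = 0$), where the answer can be checked analytically: integrating $((z^3-1)\partial_z\delta A_x)' = \omega^2\delta A_x$ once with regularity at $z = 1$ gives $\partial_z\delta A_x|_{z=0} \approx \omega^2$ for small $\omega$, i.e. $\sigma(\omega) \approx -i\omega$ in these conventions.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

def sigma_vacuum(omega, N=32):
    """sigma = dz(dAx)|_{z=0} / (i omega) in the vacuum phase (phi_r = 0, q = 0),
    with source dAx(z=0) = 1; regularity at z = 1 is the degenerate equation row."""
    D, x = cheb(N)
    z = (x + 1.0) / 2.0                    # z[0] = 1, z[N] = 0
    Dz = 2.0 * D
    M = (np.diag(z**3 - 1.0) @ (Dz @ Dz) + np.diag(3.0 * z**2) @ Dz
         - omega**2 * np.eye(N + 1)).astype(complex)
    rhs = np.zeros(N + 1, dtype=complex)
    M[N, :] = 0.0; M[N, N] = 1.0; rhs[N] = 1.0   # Dirichlet source at the AdS boundary
    u = np.linalg.solve(M, rhs)
    return (Dz @ u)[N] / (1j * omega)

sig = sigma_vacuum(0.1)
```

For the superfluid phase the only change is the extra potential term $2\phi_r^2\delta A_x$, with $\phi_r$ supplied by the background solver; the $\frac{1}{\omega}$ pole then appears in the imaginary part.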
Figure 9: The density plot of $\left|\frac{\det[\mathcal{L}'(\omega)]}{\det[\mathcal{L}(\omega)]}\right|$ with $q = 0.3$ for the superfluid phase at $\mu = 6.5$. The normal modes can be identified by the peaks, where the red one denotes the hydrodynamic normal mode $\omega_0 = 0.209$.

Figure 10: The spectral plot of $\ln|\delta\hat{\phi}_i(\omega, 1)|$ with $q = 0.3$ for the superfluid phase at $\mu = 6.5$, where the initial data are chosen as $\delta\phi_i = z$ with all the other perturbations turned off. The normal modes can be identified by the peaks, whose locations are the same as those obtained by the frequency domain analysis within our numerical accuracy.
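The $\det[\mathcal{L}(\omega)] = 0$ criterion visualized in Figure 9 can be illustrated on a toy problem with known normal modes, $u'' + \omega^2 u = 0$ on $[0,1]$ with Dirichlet ends, whose modes are $\omega_n = n\pi$. Since the determinant of a collocation matrix easily over- or underflows, the sketch below (our construction) locates the zeros via the smallest singular value of $\mathcal{L}(\omega)$, a standard numerically robust proxy:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

N = 16
D, x = cheb(N)
Dz2 = (2.0 * D) @ (2.0 * D)              # d^2/dz^2 on z in [0, 1]

def L(omega):
    """Collocation matrix for u'' + omega^2 u = 0 with u(0) = u(1) = 0."""
    M = Dz2 + omega**2 * np.eye(N + 1)
    M[0, :] = 0.0; M[0, 0] = 1.0         # u = 0 at z[0] = 1
    M[N, :] = 0.0; M[N, N] = 1.0         # u = 0 at z[N] = 0
    return M

# scan the real axis; a normal mode shows up as a dip of the smallest singular value
omegas = np.linspace(2.5, 3.5, 401)
smin = [np.linalg.svd(L(w), compute_uv=False)[-1] for w in omegas]
omega_star = float(omegas[int(np.argmin(smin))])
```

For the holographic problem, $\mathcal{L}(\omega)$ is the block matrix of (5.18)-(5.21) with the boundary rows replaced as described below, and the scan runs over the complex $\omega$ plane.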
# 5.5 Time domain analysis, Normal modes, and Sound speed

In what follows we shall use the linear response theory to calculate the speed of sound by focusing solely on the hydrodynamic spectrum of normal modes of the gapless Goldstone from the spontaneous symmetry breaking, which is obviously absent from the vacuum phase. As such, the perturbation fields are required to have Dirichlet boundary conditions at the AdS boundary. Then we cast the linear perturbation equations and boundary conditions into the form $\mathcal{L}(\omega)u = 0$, with $u$ the perturbation fields evaluated at the grid points by the pseudo-spectral method. The normal modes are obtained by the condition $\det[\mathcal{L}(\omega)] = 0$, which can be further identified by the density plot of $\left|\frac{\det[\mathcal{L}'(\omega)]}{\det[\mathcal{L}(\omega)]}\right|$, with the prime the derivative with respect to $\omega$. We demonstrate such a density plot in Figure 9, where the hydrodynamic mode is simply the closest mode to the origin, marked in red. Besides such a frequency domain

Figure 11: The dispersion relation for the gapless Goldstone mode in the superfluid phase at $\mu = 6.5$, where the sound speed $v_s = 0.697$ is obtained by fitting the long wave modes with $\omega_0 = v_s q$.
analysis of the spectrum of normal modes, there is an alternative called time domain analysis, which we would like to elaborate on below.

Figure 12: The variation of the sound speed with respect to the chemical potential. When the chemical potential is much larger than the confining scale, the conformality is restored and the sound speed approaches the predicted value $\frac{1}{\sqrt{2}}$.

We first cast the equations of motion into the following Hamiltonian formalism
\begin{align}
\partial_t\phi &= iA_t\phi + P, \tag{5.31}\\
\partial_tP &= iA_tP - (z + A_x^2 + i\partial_xA_x)\phi - 2iA_x\partial_x\phi + \partial_x^2\phi - 3z^2\partial_z\phi + (1-z^3)\partial_z^2\phi, \tag{5.32}\\
\partial_tA_x &= \Pi_x + \partial_xA_t, \tag{5.33}\\
\partial_t\Pi_x &= i(\phi\partial_x\bar{\phi} - \bar{\phi}\partial_x\phi) - 2A_x\phi\bar{\phi} - 3z^2\partial_zA_x + (1-z^3)\partial_z^2A_x, \tag{5.34}\\
\partial_t\partial_zA_t &= -i(\phi\partial_z\bar{\phi} - \bar{\phi}\partial_z\phi) + \partial_z\partial_xA_x, \tag{5.35}\\
0 &= (z^3-1)\partial_z^2A_t + 3z^2\partial_zA_t + \partial_x\Pi_x + i(\bar{\phi}P - \phi\bar{P}). \tag{5.36}
\end{align}
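Such a first-order-in-time system can be marched with any standard explicit integrator. A minimal classical fourth-order Runge-Kutta step (our generic sketch, not tied to the bulk equations) is checked below on a harmonic oscillator, whose $(\phi, P)$ structure the evolution equations above share:

```python
import numpy as np

def rk4_step(f, u, t, dt):
    """One classical fourth-order Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# check on the harmonic oscillator u = (phi, P): d(phi)/dt = P, dP/dt = -phi;
# after one full period t = 2*pi the state must return to itself
f = lambda t, u: np.array([u[1], -u[0]])
u = np.array([1.0, 0.0])
n, dt = 2000, 2 * np.pi / 2000
for i in range(n):
    u = rk4_step(f, u, i * dt, dt)
```

In the actual bulk evolution, `u` collects all perturbation fields on the grid and `f` applies the pseudo-spectral derivative matrices, with the $\delta A_t$ constraint solved at every step.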
The linearized perturbation equations on top of the superfluid phase are given by
\begin{align}
\partial_t\delta\phi_r &= -A_t\delta\phi_i + \delta P_r, \tag{5.37}\\
\partial_t\delta\phi_i &= \phi_r\delta A_t + A_t\delta\phi_r + \delta P_i, \tag{5.38}\\
\partial_t\delta P_r &= A_t\phi_r\delta A_t - A_t\delta P_i - (z + q^2)\delta\phi_r - 3z^2\partial_z\delta\phi_r + (1-z^3)\partial_z^2\delta\phi_r, \tag{5.39}\\
\partial_t\delta P_i &= -iq\phi_r\delta A_x + A_t\delta P_r - (z + q^2)\delta\phi_i - 3z^2\partial_z\delta\phi_i + (1-z^3)\partial_z^2\delta\phi_i, \tag{5.40}\\
\partial_t\delta A_x &= \delta\Pi_x + iq\,\delta A_t, \tag{5.41}\\
\partial_t\delta\Pi_x &= 2iq\phi_r\delta\phi_i - 2\phi_r^2\delta A_x - 3z^2\partial_z\delta A_x + (1-z^3)\partial_z^2\delta A_x, \tag{5.42}\\
0 &= (z^3-1)\partial_z^2\delta A_t + 3z^2\partial_z\delta A_t + iq\,\delta\Pi_x + 2A_t\phi_r\delta\phi_r - 2\phi_r\delta P_i, \tag{5.43}\\
\partial_t\partial_z\delta A_t &= 2\partial_z\phi_r\delta\phi_i - 2\phi_r\partial_z\delta\phi_i + iq\partial_z\delta A_x. \tag{5.44}
\end{align}
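Once the evolution data are in hand, the normal modes are read off as peaks of the Fourier transform of the time series, as in the spectral plot of Figure 10. A toy demonstration of that peak-identification step (the signal and its two frequencies are synthetic, standing in for the evolved $\delta\phi_i(t, z=1)$):

```python
import numpy as np

# toy time series with two "normal modes" at angular frequencies 0.7 and 1.9
dt, n = 0.1, 4096
t = dt * np.arange(n)
signal = np.sin(0.7 * t) + 0.5 * np.sin(1.9 * t)

# Fourier transform and peak identification
amp = np.abs(np.fft.rfft(signal))
omega = 2 * np.pi * np.fft.rfftfreq(n, d=dt)

peaks = []
for _ in range(2):
    k = int(np.argmax(amp))
    peaks.append(float(omega[k]))
    amp[max(0, k - 10):k + 10] = 0.0     # suppress this peak before the next search
peaks.sort()
```

The frequency resolution is set by the total evolution time, $\Delta\omega = 2\pi/T$, which is why long runs are needed to resolve the hydrodynamic mode cleanly.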
1601.00257#53
Modave Lectures on Applied AdS/CFT with Numerics
These lecture notes are intended to serve as an introduction to applied AdS/CFT with numerics for an audience of graduate students and others with little background in the subject. The presentation begins with a poor man's review of current status of quantum gravity, where AdS/CFT correspondence is believed to be the well formulated quantum gravity in the anti-de Sitter space. Then we present the basic ingredients in applied AdS/CFT and introduce the relevant numerics for solving differential equations into which the bulk dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take the zero temperature holographic superfluid as a concrete example for case study. In passing, we also present some new results, which include the numerical evidence as well as an elegant analytic proof for the equality between the superfluid density and particle density, namely $\rho_s=\rho$, and the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field theory for the sound speed in the large chemical potential limit.
http://arxiv.org/pdf/1601.00257
Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang
gr-qc, hep-th
typos corrected, clarifications made, JHEP style, 1+23 pages, 12 figures, Mathematica code available upon request
PoS Modave2015 (2016) 003
gr-qc
20160103
20160106
[ { "id": "1510.02804" } ]
1601.00257
54
As before, using the source-free boundary conditions for all the perturbation fields, we can obtain the temporal evolution of the perturbation fields for any given initial data by the Runge-Kutta method, where $\delta A_t$ is solved by the constraint equation (5.43). The normal modes can then be identified by the peaks in the Fourier transformation of the evolving data. We demonstrate such a spectral plot in Figure 10. As expected, such a time domain analysis gives rise to the same result for the locations of the normal modes as that by the frequency domain analysis. Then the dispersion relation for the gapless Goldstone can be obtained and plotted in Figure 11, whereby the sound speed $v_s$ can be obtained by the fitting formula $\omega_0 = v_s q$. As shown in Figure 12, the sound speed increases with the chemical potential and saturates to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field theory when the chemical potential is much larger than the confining scale [39, 40, 41], which is reasonable since it is believed that the conformality is restored in this limit.
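The fitting formula $\omega_0 = v_s q$ amounts to a one-parameter least-squares fit through the origin. The sketch below uses synthetic data (the value 0.697 and the small curvature coefficient are ours, for illustration only); the slight downward bias of the fitted slope shows why the fit should be restricted to the long wave modes:

```python
import numpy as np

# synthetic long-wave dispersion data mimicking Figure 11 at mu = 6.5:
# omega_0(q) = 0.697 q with a small toy curvature away from q -> 0
q = np.linspace(0.05, 0.3, 6)
w0 = 0.697 * q - 0.05 * q**2

# least-squares fit of the sound speed through the origin, omega_0 = v_s q
v_s = float(np.sum(q * w0) / np.sum(q * q))
```

Shrinking the fitting window toward $q \to 0$ removes the curvature-induced bias at the price of fewer data points.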
# 6. Concluding Remarks

Like any other unification in physics, the AdS/CFT correspondence has proven to be a unique tool for addressing various universal behaviors of near-equilibrium as well as far-from-equilibrium dynamics in a variety of strongly coupled systems, which would otherwise be hard to attack. In such applications, numerical computation has been playing an increasingly important role: not only can numerics leave us with conjectures to prove and patterns to understand analytically, but it can also bring us into regimes where analytic treatment is not available at all. In these lecture notes, we have touched only upon the very basics of the numerics in applied AdS/CFT. In addition, we work only within the probe limit in the concrete example we use to demonstrate how to apply AdS/CFT with numerics. The situation becomes somewhat more involved when the back reaction is taken into account. Regarding this, readers are referred to [42] to see how to obtain stationary inhomogeneous solutions to the fully back-reacted Einstein equation by the Einstein-DeTurck method. On the other hand, readers are recommended to consult [43] to see how to evolve the fully back-reacted dynamics, where, with a black hole as the initial data, it turns out that Eddington-like coordinates are preferred to Schwarzschild-like coordinates.
# Acknowledgments

H.Z. would like to thank the organizers of the Eleventh International Modave Summer School on Mathematical Physics held in Modave, Belgium, September 2015, where the lectures on which these notes are based were given. He is indebted to Nabil Iqbal for his valuable discussions at the summer school. H.Z. would also like to thank the organizers of the 2015 International School on Numerical Relativity and Gravitational Waves held in Daejeon, Korea, July 2015, where these lectures were geared to an audience mainly from the general relativity and gravity community. He is grateful to Keun-Young Kim, Kyung Kiu Kim, Miok Park, and Sang-Jin Sin for the enjoyable conversations during the school. H.Z. is also grateful to Ben Craps and Alex Sevrin for the fantastic infrastructure they provide at the HEP group of VUB and for the very freedom as well as various opportunities they offer to him. M.G. is partially supported by NSFC with Grant Nos. 11235003, 11375026 and NCET-12-0054.
C.N. is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2014R1A1A1003220) and the 2015 GIST Grant for the FARE Project (Further Advancement of Research and Education at GIST College). Y.T. is partially supported by NSFC with Grant No. 11475179. H.Z. is supported in part by the Belgian Federal Science Policy Office through the Interuniversity Attraction Pole P7/37, by FWO-Vlaanderen through the project G020714N, and by the Vrije Universiteit Brussel through the Strategic Research Program “High-Energy Physics”. He is also an individual FWO Fellow supported by 12G3515N.
# References

[1] E. Witten, arXiv:hep-th/0106109.
[2] A. Strominger, arXiv:hep-th/0106113.
[3] M. Spradlin, A. Strominger, and A. Volovich, arXiv:hep-th/0110007.
[4] R. Britto, F. Cachazo, B. Feng, and E. Witten, Phys. Rev. Lett. 94, 181602 (2005).
[5] N. Arkani-Hamed, F. Cachazo, and J. Kaplan, JHEP 1009, 016 (2010).
[6] J. Maldacena, Adv. Theor. Math. Phys. 2, 231 (1998).
[7] E. Witten, Adv. Theor. Math. Phys. 2, 253 (1998).
[8] S. Gubser, I. R. Klebanov, and A. M. Polyakov, Phys. Lett. B 428, 105 (1998).
[9] P. Breitenlohner and D. Z. Freedman, Annals Phys. 144, 249 (1982).
[10] P. Breitenlohner and D. Z. Freedman, Phys. Lett. B 115, 197 (1982).
[11] S. Ryu and T. Takayanagi, Phys. Rev. Lett. 96, 181602 (2006).
[12] V. E. Hubeny, M. Rangamani, and T. Takayanagi, JHEP 0707, 062 (2007).
[13] A. Lewkowycz and J. Maldacena, JHEP 08, 090 (2013).
[14] R. M. Wald, Living Rev. Rel. 4, 6 (2001).
[15] J. D. Brown and M. Henneaux, Commun. Math. Phys. 104, 207 (1986).
[16] A. Strominger, JHEP 02, 009 (1998).
[17] J. D. Brown and J. W. York, Phys. Rev. D 47, 1407 (1993).
[18] V. E. Hubeny, S. Minwalla, and M. Rangamani, arXiv:1107.5780.
[19] B. Swingle, Phys. Rev. D 86, 065007 (2012).
[20] X. L. Qi, arXiv:1309.6282.
[21] J. Casalderrey-Solana, H. Liu, D. Mateos, K. Rajagopal, and U. A. Wiedemann, arXiv:1101.0618.
[22] U. Gursoy, E. Kiritsis, L. Mazzanti, G. Michalogiorgakis, and F. Nitti, Lect. Notes Phys. 828, 79 (2011).
[23] S. A. Hartnoll, Class. Quant. Grav. 26, 224002 (2009).
[24] J. McGreevy, Adv. High Energy Phys. 2010, 723105 (2010).
[25] C. P. Herzog, J. Phys. A 42, 343001 (2009).
[26] G. T. Horowitz, arXiv:1002.1722.
[27] N. Iqbal, H. Liu, and M. Mezei, arXiv:1110.3814.
[28] W. J. Li, Y. Tian, and H. Zhang, JHEP 07, 030 (2013).
[29] N. Callebaut, B. Craps, F. Galli, D. C. Thompson, J. Vanhoof, J. Zaanen, and H. Zhang, JHEP 10, 172 (2014).
[30] B. Craps, E. J. Lindgren, A. Taliotis, J. Vanhoof, and H. Zhang, Phys. Rev. D 90, 086004 (2014).
[31] R. Li, Y. Tian, H. Zhang, and J. Zhao, Phys. Lett. B 750, 520 (2015).
[32] Y. Du, C. Niu, Y. Tian, and H. Zhang, JHEP 12, 018 (2015).
[33] Y. Du, S. Q. Lan, Y. Tian, and H. Zhang, JHEP 01, 016 (2016).
[34] M. Guo, S. Q. Lan, C. Niu, Y. Tian, and H. Zhang, to appear.
[35] T. Nishioka, S. Ryu, and T. Takayanagi, JHEP 1003, 131 (2010).
[36] X. Zhang, C. L. Hung, S. K. Tung, and C. Chin, Science 335, 1070 (2012).
[37] I. R. Klebanov and E. Witten, Nucl. Phys. B 556, 89 (1999).
[38] K. Skenderis, Class. Quant. Grav. 19, 5849 (2002).
[39] C. P. Herzog, P. K. Kovtun, and D. T. Son, Phys. Rev. D 79, 066002 (2009).
[40] A. Yarom, JHEP 0907, 070 (2009).
[41] C. P. Herzog and A. Yarom, Phys. Rev. D 80, 106002 (2009).
[42] O. J. C. Dias, J. E. Santos, and B. Way, arXiv:1510.02804.
[43] P. Chesler and L. G. Yaffe, JHEP 07, 086 (2014).
1512.06473
1
# Abstract

Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer’s response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 ∼ 6× speed-up and 15 ∼ 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.

[Figure 1: four bar-chart panels comparing the original and Q-CNN models on AlexNet and CNN-S: time consumption (s), storage consumption (MB), memory consumption (MB), and top-5 error rate (%).]
1512.06473#1
Quantized Convolutional Neural Networks for Mobile Devices
Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4~6x speed-up and 15~20x compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.
http://arxiv.org/pdf/1512.06473
Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, Jian Cheng
cs.CV
Accepted by the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016
null
cs.CV
20151221
20160516
[]
1512.06473
2
Figure 1. Comparison on the efficiency and classification accuracy between the original and quantized AlexNet [16] and CNN-S [1] on a Huawei® Mate 7 smartphone.

# 1. Introduction

In recent years, we have witnessed the great success of convolutional neural networks (CNN) [19] in a wide range of visual applications, including image classification [16, 27], object detection [10, 9], age estimation [24, 23], etc. This success mainly comes from deeper network architectures as well as the tremendous training data. However, as the network grows deeper, the model complexity is also increasing exponentially in both the training and testing stages, which leads to the very high demand in the computation ability. For instance, the 8-layer AlexNet [16] involves 60M parameters and requires over 729M FLOPs1 to classify a single image. Although the training stage can be offline carried out on high performance clusters with GPU acceleration, the testing computation cost may be unaffordable for common personal computers and mobile devices. Due to the limited computation ability and memory space, mobile devices are almost intractable to run deep convolutional networks. Therefore, it is crucial to accelerate the computation and compress the memory consumption for CNN models.
For most CNNs, convolutional layers are the most time-consuming part, while fully-connected layers involve massive network parameters. Due to the intrinsic difference between them, existing works usually focus on improving the efficiency for either convolutional layers or fully-connected layers. In [7, 13, 32, 31, 18, 17], low-rank approximation or tensor decomposition is adopted to speed-up convolutional layers. On the other hand, parameter compression in fully-connected layers is explored in [3, 7, 11, 30, 2, 12, 28]. Overall, the above-mentioned algorithms are able to achieve faster speed or less storage. However, few of them can achieve significant acceleration and compression simultaneously for the whole network.

In this paper, we propose a unified framework for convolutional networks, namely Quantized CNN (Q-CNN), to simultaneously accelerate and compress CNN models with

1FLOPs: number of FLoating-point OPerations required to classify one image with the convolutional network.
only minor performance degradation. With network parameters quantized, the response of both convolutional and fully-connected layers can be efficiently estimated via the approximate inner product computation. We minimize the estimation error of each layer’s response during parameter quantization, which can better preserve the model performance. In order to suppress the accumulative error while quantizing multiple layers, an effective training scheme is introduced to take previous estimation error into consideration. Our Q-CNN model enables fast test-phase computation, and the storage and memory consumption are also significantly reduced.
We evaluate our Q-CNN framework for image classification on two benchmarks, MNIST [20] and ILSVRC-12 [26]. For MNIST, our Q-CNN approach achieves over 12× compression for two neural networks (no convolution), with lower accuracy loss than several baseline methods. For ILSVRC-12, we attempt to improve the test-phase efficiency of four convolutional networks: AlexNet [16], CaffeNet [15], CNN-S [1], and VGG-16 [27]. Generally, Q-CNN achieves 4× acceleration and 15× compression (sometimes higher) for each network, with less than 1% drop in the top-5 classification accuracy. Moreover, we implement the quantized CNN model on mobile devices, and dramatically improve the test-phase efficiency, as depicted in Figure 1. The main contributions of this paper can be summarized as follows:

• We propose a unified Q-CNN framework to accelerate and compress convolutional networks. We demonstrate that better quantization can be learned by minimizing the estimation error of each layer’s response.

• We propose an effective training scheme to suppress the accumulative error while quantizing the whole convolutional network.
• Our Q-CNN framework achieves 4 ∼ 6× speed-up and 15 ∼ 20× compression, while the classification accuracy loss is within one percentage. Moreover, the quantized CNN model can be implemented on mobile devices and classify an image within one second.

# 2. Preliminary

During the test phase of convolutional networks, the computation overhead is dominated by convolutional layers; meanwhile, the majority of network parameters are stored in fully-connected layers. Therefore, for better test-phase efficiency, it is critical to speed-up the convolution computation and compress parameters in fully-connected layers.

Our observation is that the forward-passing process of both convolutional and fully-connected layers is dominated by the computation of inner products. More formally, we consider a convolutional layer with input feature maps S ∈ R^{d_s×d_s×C_s} and response feature maps T ∈ R^{d_t×d_t×C_t}, where d_s, d_t are the spatial sizes and C_s, C_t are the number of feature map channels. The response at the 2-D spatial position p_t in the c_t-th response feature map is computed as:

$$T_{p_t}(c_t) = \sum_{(p_k, p_s)} \langle W_{c_t, p_k}, S_{p_s} \rangle \qquad (1)$$
where W_{c_t} ∈ R^{d_k×d_k×C_s} is the c_t-th convolutional kernel and d_k is the kernel size. We use p_s and p_k to denote the 2-D spatial positions in the input feature maps and convolutional kernels, and both W_{c_t,p_k} and S_{p_s} are C_s-dimensional vectors. The layer response is the sum of inner products at all positions within the d_k × d_k receptive field in the input feature maps. Similarly, for a fully-connected layer, we have:

$$T(c_t) = \langle W_{c_t}, S \rangle \qquad (2)$$

where S ∈ R^{C_s} and T ∈ R^{C_t} are the layer input and layer response, respectively, and W_{c_t} ∈ R^{C_s} is the weighting vector for the c_t-th neuron of this layer.

Product quantization [14] is widely used in approximate nearest neighbor search, demonstrating better performance than hashing-based methods [21, 22]. The idea is to decompose the feature space as the Cartesian product of multiple subspaces, and then learn sub-codebooks for each subspace. A vector is represented by the concatenation of sub-codewords for efficient distance computation and storage.
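Equations (1) and (2) say that a layer's response is nothing but a collection of inner products. The following NumPy sketch makes this explicit; it is didactic and unoptimized, and the array layout and function names are our own choices, not from the paper.

```python
import numpy as np

def conv_response(S, W):
    """Eq. (1): at each output position, sum the channel-wise inner
    products over the d_k x d_k receptive field.
    S: (d_s, d_s, C_s) input maps; W: (C_t, d_k, d_k, C_s) kernels.
    Valid convolution, stride 1."""
    ds, _, Cs = S.shape
    Ct, dk, _, _ = W.shape
    dt = ds - dk + 1
    T = np.zeros((dt, dt, Ct))
    for ct in range(Ct):
        for i in range(dt):
            for j in range(dt):
                patch = S[i:i + dk, j:j + dk, :]
                # sum over (p_k, p_s) of <W_{ct,pk}, S_{ps}>
                T[i, j, ct] = np.sum(W[ct] * patch)
    return T

def fc_response(S, W):
    """Eq. (2): T(c_t) = <W_{c_t}, S> for a fully-connected layer.
    S: (C_s,) input; W: (C_t, C_s) weighting matrix."""
    return W @ S
```

fc_response is just a matrix-vector product; conv_response loops over output positions purely to mirror the notation of Eq. (1).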
Quantized Convolutional Neural Networks for Mobile Devices
In this paper, we leverage product quantization to implement efficient inner product computation. Let us consider the inner product between $x, y \in \mathbb{R}^D$. First, both $x$ and $y$ are split into $M$ sub-vectors, denoted as $x^{(m)}$ and $y^{(m)}$. Afterwards, each $x^{(m)}$ is quantized with a sub-codeword from the $m$-th sub-codebook, and then we have:

$$\langle y, x \rangle = \sum_m \langle y^{(m)}, x^{(m)} \rangle \approx \sum_m \langle y^{(m)}, c^{(m)}_{k_m} \rangle \qquad (3)$$

which transforms the $O(D)$ inner product computation into $M$ addition operations ($M \le D$), provided that the inner products between each sub-vector $y^{(m)}$ and all the sub-codewords in the $m$-th sub-codebook have been computed in advance.
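The lookup-table trick behind Eq. (3) can be sketched as follows (NumPy; the dimensions and the random sub-codebooks are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
D_dim, M, K = 12, 4, 8            # vector dim, subspaces, sub-codewords per subspace
d = D_dim // M                    # sub-vector dimension

x = rng.standard_normal(D_dim)
y = rng.standard_normal(D_dim)
codebooks = rng.standard_normal((M, K, d))   # one sub-codebook per subspace

# Quantize each sub-vector of x to its nearest sub-codeword.
x_sub = x.reshape(M, d)
assign = np.array([np.argmin(np.linalg.norm(codebooks[m] - x_sub[m], axis=1))
                   for m in range(M)])

# Pre-compute <y^(m), c> for every sub-codeword: an M x K lookup table.
y_sub = y.reshape(M, d)
table = np.einsum('md,mkd->mk', y_sub, codebooks)

# Eq. (3): the approximate inner product is just M table lookups and additions.
approx = sum(table[m, assign[m]] for m in range(M))
exact = float(np.dot(y, x))
# approx tracks exact only as well as the codebooks cover the data
# (here they are random, so the match is rough).
```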
Quantization-based approaches have been explored in several works [11, 2, 12]. These approaches mostly focus on compressing parameters in fully-connected layers [11, 2], and none of them can provide acceleration for the test-phase computation. Furthermore, [11, 12] require the network parameters to be re-constructed during the test phase, which limits the compression to disk storage instead of memory consumption. On the contrary, our approach offers simultaneous acceleration and compression for both convolutional and fully-connected layers, and can reduce the run-time memory consumption dramatically.

# 3. Quantized CNN

In this section, we present our approach for accelerating and compressing convolutional networks. Firstly, we introduce an efficient test-phase computation process with the network parameters quantized. Secondly, we demonstrate that better quantization can be learned by directly minimizing the estimation error of each layer's response. Finally, we analyze the computation complexity of our quantized CNN model.

# 3.1. Quantizing the Fully-connected Layer

For a fully-connected layer, we denote its weighting matrix as $W \in \mathbb{R}^{C_s \times C_t}$, where $C_s$ and $C_t$ are the dimensions of the layer input and response, respectively. The weighting vector $W_{c_t}$ is the $c_t$-th column vector in $W$.
We evenly split the $C_s$-dimensional space (where $W_{c_t}$ lies) into $M$ subspaces, each of $C_s' = C_s / M$ dimensions. Each $W_{c_t}$ is then decomposed into $M$ sub-vectors, denoted as $W^{(m)}_{c_t}$. A sub-codebook can be learned for each subspace after gathering all the sub-vectors within this subspace. Formally, for the $m$-th subspace, we optimize:

$$\min_{D^{(m)}, B^{(m)}} \left\| D^{(m)} B^{(m)} - W^{(m)} \right\|_F^2 \quad \text{s.t.}\ D^{(m)} \in \mathbb{R}^{C_s' \times K},\ B^{(m)} \in \{0, 1\}^{K \times C_t} \qquad (4)$$

where $W^{(m)} \in \mathbb{R}^{C_s' \times C_t}$ consists of the $m$-th sub-vectors of all weighting vectors. The sub-codebook $D^{(m)}$ contains $K$ sub-codewords, and each column in $B^{(m)}$ is an indicator vector (only one non-zero entry), specifying which sub-codeword is used to quantize the corresponding sub-vector. The optimization can be solved via k-means clustering. The layer response is then approximately computed as:

$$T(c_t) = \sum_m \langle W^{(m)}_{c_t}, S^{(m)} \rangle \approx \sum_m \langle D^{(m)} B^{(m)}_{c_t}, S^{(m)} \rangle = \sum_m \langle D^{(m)}_{k_m(c_t)}, S^{(m)} \rangle \qquad (5)$$
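A minimal sketch of solving Eq. (4) with plain k-means (Lloyd's algorithm, NumPy only; the function name and sizes are our own, and for simplicity assignments are stored as index vectors rather than the 0/1 matrices $B^{(m)}$):

```python
import numpy as np

def learn_subcodebooks(W, M, K, iters=20, seed=0):
    """Approximately solve Eq. (4) per subspace with k-means.
    W: (C_s, C_t) weighting matrix. Returns codebooks of shape (M, K, C_s/M)
    and assignments of shape (M, C_t), the index form of B^(m)."""
    rng = np.random.default_rng(seed)
    Cs, Ct = W.shape
    d = Cs // M
    codebooks = np.empty((M, K, d))
    assign = np.empty((M, Ct), dtype=int)
    for m in range(M):
        X = W[m * d:(m + 1) * d, :].T            # (C_t, d): sub-vectors of this subspace
        C = X[rng.choice(Ct, K, replace=False)]  # initialize centers from the data
        for _ in range(iters):
            a = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
            for k in range(K):
                if np.any(a == k):               # keep the old center if a cluster empties
                    C[k] = X[a == k].mean(0)
        codebooks[m], assign[m] = C, a
    return codebooks, assign
```

Each column of the learned $B^{(m)}$ would have its single non-zero entry at the row given by `assign[m]`.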
where $B^{(m)}_{c_t}$ is the $c_t$-th column vector in $B^{(m)}$, and $S^{(m)}$ is the $m$-th sub-vector of the layer input. $k_m(c_t)$ is the index of the sub-codeword used to quantize the sub-vector $W^{(m)}_{c_t}$. In Figure 2, we depict the parameter quantization and test-phase computation process of the fully-connected layer. By decomposing the weighting matrix into $M$ sub-matrices, $M$ sub-codebooks can be learned, one per subspace. During the test phase, the layer input is split into $M$ sub-vectors, denoted as $S^{(m)}$. For each subspace, we compute the inner products between $S^{(m)}$ and every sub-codeword in $D^{(m)}$, and store the results in a look-up table. Afterwards, only $M$ addition operations are required to compute each response. As a result, the overall time complexity can be reduced from $O(C_s C_t)$ to $O(C_s K + C_t M)$. On the other hand, only sub-codebooks and quantization indices need to be stored, which can dramatically reduce the storage consumption.
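The test-phase procedure just described (build an $M \times K$ inner-product table, then do $M$ additions per output neuron) might look like this NumPy sketch (`qfc_forward` and its argument layout are our own assumptions):

```python
import numpy as np

def qfc_forward(S, codebooks, assign):
    """Approximate fully-connected response per Eq. (5).
    S: (C_s,) layer input; codebooks: (M, K, C_s/M) sub-codebooks;
    assign: (M, C_t) sub-codeword index per subspace and output neuron."""
    M, K, d = codebooks.shape
    table = np.einsum('md,mkd->mk', S.reshape(M, d), codebooks)  # O(C_s K) pre-computation
    return table[np.arange(M)[:, None], assign].sum(axis=0)      # O(C_t M) additions
```

When the sub-codebooks reproduce the weight sub-vectors exactly, the result matches $W^T S$; otherwise it is the quantized approximation.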
Figure 2. The parameter quantization and test-phase computation process of the fully-connected layer.

# 3.2. Quantizing the Convolutional Layer

Unlike the 1-D weighting vector in the fully-connected layer, each convolutional kernel is a 3-dimensional tensor: $W_{c_t} \in \mathbb{R}^{d_k \times d_k \times C_s}$. Before quantization, we need to determine how to split it into sub-vectors, i.e. to which dimension the subspace splitting is applied. During the test phase, the input feature maps are traversed by each convolutional kernel with a sliding window in the spatial domain. Since these sliding windows are partially overlapped, we split each convolutional kernel along the dimension of feature map channels, so that the pre-computed inner products can be re-used at multiple spatial locations. Specifically, we learn the quantization in each subspace by:

$$\min_{D^{(m)}, \{B^{(m)}_{p_k}\}} \sum_{p_k} \left\| D^{(m)} B^{(m)}_{p_k} - W^{(m)}_{p_k} \right\|_F^2 \quad \text{s.t.}\ D^{(m)} \in \mathbb{R}^{C_s' \times K},\ B^{(m)}_{p_k} \in \{0, 1\}^{K \times C_t} \qquad (6)$$
where $W^{(m)}_{p_k} \in \mathbb{R}^{C_s' \times C_t}$ contains the $m$-th sub-vectors of all convolutional kernels at position $p_k$. The optimization can also be solved by k-means clustering in each subspace. With the convolutional kernels quantized, we approximately compute the response feature maps by:

$$T_{p_t}(c_t) = \sum_{(p_k, p_s)} \sum_m \langle W^{(m)}_{c_t, p_k}, S^{(m)}_{p_s} \rangle \approx \sum_{(p_k, p_s)} \sum_m \langle D^{(m)} B^{(m)}_{c_t, p_k}, S^{(m)}_{p_s} \rangle = \sum_{(p_k, p_s)} \sum_m \langle D^{(m)}_{k_m(c_t, p_k)}, S^{(m)}_{p_s} \rangle \qquad (7)$$

where $S^{(m)}_{p_s}$ is the $m$-th sub-vector at position $p_s$ in the input feature maps, and $k_m(c_t, p_k)$ is the index of the sub-codeword used to quantize the $m$-th sub-vector at position $p_k$ in the $c_t$-th convolutional kernel.
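A naive sketch of Eq. (7) (stride 1, no padding, plain loops; `qconv_forward` and the argument layout are our own assumptions): the inner products are pre-computed once per input position and shared by all overlapping sliding windows:

```python
import numpy as np

def qconv_forward(S, codebooks, assign):
    """Approximate convolutional response per Eq. (7).
    S: (H, W, C_s) input feature maps; codebooks: (M, K, C_s/M);
    assign: (d_k, d_k, C_t, M) sub-codeword index per kernel position."""
    M, K, d = codebooks.shape
    d_k, _, C_t, _ = assign.shape
    H, W_ = S.shape[:2]
    # Lookup table of inner products at every spatial position / subspace / codeword.
    table = np.einsum('hwmd,mkd->hwmk', S.reshape(H, W_, M, d), codebooks)
    out = np.zeros((H - d_k + 1, W_ - d_k + 1, C_t))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for u in range(d_k):
                for v in range(d_k):
                    # M additions per (kernel position, output channel), re-using the table.
                    out[i, j] += table[i + u, j + v, np.arange(M), assign[u, v]].sum(-1)
    return out
```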
# 3.3. Quantization with Error Correction

So far, we have presented an intuitive approach to quantize parameters and improve the test-phase efficiency of convolutional networks. However, there are still two critical drawbacks. First, minimizing the quantization error of model parameters does not necessarily give the optimal quantized network in terms of classification accuracy. In contrast, minimizing the estimation error of each layer's response is more closely related to the network's classification performance. Second, the quantization of one layer is independent of the others, which may lead to the accumulation of error when quantizing multiple layers. The estimation error of the network's final response is very likely to be quickly accumulated, since the error introduced by the previous quantized layers will also affect the following layers.

To overcome these two limitations, we introduce the idea of error correction into the quantization of network parameters. This improved quantization approach directly minimizes the estimation error of the response at each layer, and can compensate for the error introduced by previous layers. With the error correction scheme, we can quantize the network with much less performance degradation than the original quantization method.

# 3.3.1 Error Correction for the Fully-connected Layer
Suppose we have $N$ images to learn the quantization of a fully-connected layer, and the layer input and response of image $I_n$ are denoted as $S_n$ and $T_n$. In order to minimize the estimation error of the layer response, we optimize:

$$\min_{\{D^{(m)}\}, \{B^{(m)}\}} \sum_n \left\| T_n - \sum_m (D^{(m)} B^{(m)})^T S^{(m)}_n \right\|_F^2 \qquad (8)$$

where the first term in the Frobenius norm is the desired layer response, and the second term is the approximated layer response computed via the quantized parameters.

A block coordinate descent approach can be applied to minimize this objective function. For the $m$-th subspace, its residual error is defined as:

$$R^{(m)}_n = T_n - \sum_{m' \neq m} (D^{(m')} B^{(m')})^T S^{(m')}_n \qquad (9)$$

and then we attempt to minimize the residual error of this subspace, that is:

$$\min_{D^{(m)}, B^{(m)}} \sum_n \left\| R^{(m)}_n - (D^{(m)} B^{(m)})^T S^{(m)}_n \right\|_F^2 \qquad (10)$$
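This block coordinate descent can be sketched end-to-end as below (NumPy; the function name, the index-vector form of $B^{(m)}$, and the column-averaging reformulation of the least-squares step are our own; the two inner steps correspond to the sub-codebook and assignment updates described next):

```python
import numpy as np

def ec_quantize_fc(S, T, codebooks, assign, outer_iters=5):
    """Block coordinate descent for Eqs. (9)-(12).
    S: (N, C_s) layer inputs; T: (N, C_t) target responses;
    codebooks: (M, K, d) sub-codebooks; assign: (M, C_t) index form of B^(m)."""
    M, K, d = codebooks.shape
    N, Ct = T.shape
    S_sub = S.reshape(N, M, d)
    contrib = lambda m: S_sub[:, m] @ codebooks[m, assign[m]].T  # (D^(m) B^(m))^T S^(m)

    for _ in range(outer_iters):
        for m in range(M):
            R = T - sum(contrib(j) for j in range(M) if j != m)  # residual, Eq. (9)
            # Update the sub-codebook: per-codeword least squares; averaging the
            # residual columns assigned to codeword k yields the same normal equations.
            for k in range(K):
                cols = np.flatnonzero(assign[m] == k)
                if cols.size:
                    codebooks[m, k] = np.linalg.lstsq(
                        S_sub[:, m], R[:, cols].mean(axis=1), rcond=None)[0]
            # Update the assignment: best sub-codeword per output column.
            scores = S_sub[:, m] @ codebooks[m].T                        # (N, K)
            err = ((R[:, None, :] - scores[:, :, None]) ** 2).sum(axis=0)  # (K, C_t)
            assign[m] = err.argmin(axis=0)
    return codebooks, assign
```

Each step exactly minimizes the residual objective for its block, so the overall estimation error is non-increasing across sweeps.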
The above optimization can be solved by alternatively updating the sub-codebook and the sub-codeword assignment.

Update $D^{(m)}$. We fix the sub-codeword assignment $B^{(m)}$, and define $L_k = \{c_t \mid B^{(m)}(k, c_t) = 1\}$. The optimization in (10) can then be re-formulated as:

$$\min_{\{D^{(m)}_k\}} \sum_{n, k} \sum_{c_t \in L_k} \left[ R^{(m)}_n(c_t) - D^{(m)T}_k S^{(m)}_n \right]^2 \qquad (11)$$

which implies that the optimization over one sub-codeword does not affect the other sub-codewords. Hence, for each sub-codeword, we construct a least-squares problem from (11) to update it.

Update $B^{(m)}$. With the sub-codebook $D^{(m)}$ fixed, it is easy to discover that the optimization of each column in $B^{(m)}$ is mutually independent. For the $c_t$-th column, its optimal sub-codeword assignment is given by:
$$k^*_m(c_t) = \arg\min_k \sum_n \left[ R^{(m)}_n(c_t) - D^{(m)T}_k S^{(m)}_n \right]^2 \qquad (12)$$

# 3.3.2 Error Correction for the Convolutional Layer

We adopt a similar idea to minimize the estimation error of the convolutional layer's response feature maps, that is:

$$\min_{\{D^{(m)}\}, \{B^{(m)}_{p_k}\}} \sum_{n, p_t} \left\| T_{n, p_t} - \sum_{(p_k, p_s)} \sum_m (D^{(m)} B^{(m)}_{p_k})^T S^{(m)}_{n, p_s} \right\|_F^2 \qquad (13)$$

This optimization can also be solved by block coordinate descent. More details on solving this optimization can be found in the supplementary material.

# 3.3.3 Error Correction for Multiple Layers

The above quantization method can be sequentially applied to each layer in the CNN model. One concern is that the estimation error of the layer response caused by the previous layers will be accumulated and affect the quantization of the following layers. Here, we propose an effective training scheme to address this issue.
We consider the quantization of a specific layer, assuming its previous layers have already been quantized. The optimization of parameter quantization is based on the layer input and response of a group of training images. To quantize this layer, we take the layer input in the quantized network as $\{S_n\}$, and the layer response in the original network (not quantized) as $\{T_n\}$ in Eq. (8) and (13). In this way, the optimization is guided by the actual input in the quantized network and the desired response in the original network. The accumulative error introduced by the previous layers is explicitly taken into consideration during optimization. In consequence, this training scheme can effectively suppress the accumulative error for the quantization of multiple layers.

Another possible solution is to adopt back-propagation to jointly update the sub-codebooks and sub-codeword assignments in all quantized layers. However, since the sub-codeword assignments are discrete, the gradient-based optimization can be quite difficult, if not entirely impossible. Therefore, back-propagation is not adopted here, but could be a promising extension for future work.
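The multi-layer training scheme reads naturally as the following schematic loop (hypothetical `forward` / `quantize_layer` interfaces; this is a sketch of the data flow, not the paper's implementation):

```python
def quantize_network(layers, images, quantize_layer):
    """Quantize layers sequentially: inputs come from the already-quantized
    network, targets from the original full-precision network."""
    quantized = []
    x_q = images    # activations in the quantized network ({S_n} in Eq. 8/13)
    x_fp = images   # activations in the original network
    for layer in layers:
        t_fp = layer.forward(x_fp)                  # desired responses {T_n}
        q_layer = quantize_layer(layer, x_q, t_fp)  # fit Eq. (8) or (13)
        quantized.append(q_layer)
        x_q = q_layer.forward(x_q)  # accumulated quantization error carried forward
        x_fp = t_fp
    return quantized
```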
# 3.4. Computation Complexity

Now we analyze the test-phase computation complexity of convolutional and fully-connected layers, with or without parameter quantization. For our proposed Q-CNN model, the forward pass through each layer mainly consists of two procedures: pre-computation of inner products, and approximate computation of the layer response. Both sub-codebooks and sub-codeword assignments are stored for the test-phase computation. We report the detailed comparison of the computation and storage overhead in Table 1.

Table 1. Comparison of the computation and storage overhead of convolutional and fully-connected layers.

|       | Layer | CNN                   | Q-CNN                                        |
|-------|-------|-----------------------|----------------------------------------------|
| FLOPs | Conv. | $d_t^2 C_t d_k^2 C_s$ | $d_s^2 C_s K + d_t^2 C_t d_k^2 M$            |
| FLOPs | FCnt. | $C_s C_t$             | $C_s K + C_t M$                              |
| Bytes | Conv. | $4 d_k^2 C_s C_t$     | $4 C_s K + \frac{1}{8} d_k^2 M C_t \log_2 K$ |
| Bytes | FCnt. | $4 C_s C_t$           | $4 C_s K + \frac{1}{8} M C_t \log_2 K$       |

Here $d_s$ and $d_t$ denote the spatial sizes of the input and output feature maps, respectively.
As we can see from Table 1, the reduction in the computation and storage overhead largely depends on two hyper-parameters, $M$ (the number of subspaces) and $K$ (the number of sub-codewords per subspace). Large values of $M$ and $K$ lead to more fine-grained quantization, but are less efficient in computation and storage consumption. In practice, we can vary these two parameters to balance the trade-off between the test-phase efficiency and the accuracy loss of the quantized CNN model.

# 4. Related Work
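To get a feel for this trade-off, one can plug the fully-connected formulas from Table 1 into a few lines of Python (the layer sizes and the $M$, $K$ setting below are illustrative guesses, not the paper's configuration):

```python
import math

Cs, Ct = 9216, 4096        # hypothetical FC layer dimensions
M, K = Cs // 4, 128        # 4-dimensional sub-vectors, 128 sub-codewords each

flops_cnn = Cs * Ct                  # dense inner products
flops_qcnn = Cs * K + Ct * M         # table pre-computation + M additions per neuron
bytes_cnn = 4 * Cs * Ct                              # float32 weights
bytes_qcnn = 4 * Cs * K + M * Ct * math.log2(K) / 8  # codebooks + log2(K)-bit indices

print(f"speed-up:    {flops_cnn / flops_qcnn:.1f}x")
print(f"compression: {bytes_cnn / bytes_qcnn:.1f}x")
```

Raising $K$ shrinks the approximation error but inflates both terms of the Q-CNN cost, which is exactly the balance discussed above.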
# 4. Related Work There have been a few attempts in accelerating the test- phase computation of convolutional networks, and many are inspired from the low-rank decomposition. Denton et al. [7] presented a series of low-rank decomposition designs for convolutional kernels. Similarly, CP-decomposition was adopted in [17] to transform a convolutional layer into mul- tiple layers with lower complexity. Zhang et al. [32, 31] considered the subsequent nonlinear units while learning the low-rank decomposition. [18] applied group-wise prun- ing to the convolutional tensor to decompose it into the mul- tiplications of thinned dense matrices. Recently, fixed-point based approaches are explored in [5, 25]. By representing the connection weights (or even network activations) with fixed-point numbers, the computation can greatly benefit from hardware acceleration.
Another parallel research trend is to compress parameters in fully-connected layers. Ciresan et al. [3] randomly remove connections to reduce network parameters. Matrix factorization was adopted in [6, 7] to decompose the weighting matrix into two low-rank matrices, which demonstrated that significant redundancy does exist in network parameters. Hinton et al. [8] proposed to use dark knowledge (the response of a well-trained network) to guide the training of a much smaller network, which was superior to direct training. By exploring the similarity among neurons, Srinivas et al. [28] proposed a systematic way to remove redundant neurons instead of network connections. In [30], multiple fully-connected layers were replaced by a single "Fastfood" layer, which can be trained in an end-to-end style with convolutional layers. Chen et al. [2] randomly grouped connection weights into hash buckets, and then fine-tuned the network with back-propagation. [12] combined pruning, quantization, and Huffman coding to achieve a higher compression rate. Gong et al. [11] adopted vector quantization to compress the weighting matrix, which is actually a special case of our approach (applying Q-CNN without error correction to fully-connected layers only).
Quantized Convolutional Neural Networks for Mobile Devices
# 5. Experiments

In this section, we evaluate our quantized CNN framework on two image classification benchmarks, MNIST [20] and ILSVRC-12 [26]. For the acceleration of convolutional layers, we compare with:

• CPD [17]: CP-Decomposition;
• GBD [18]: Group-wise Brain Damage;
• LANR [31]: Low-rank Approximation of Non-linear Responses;

and for the compression of fully-connected layers, we compare with the following approaches:

• RER [3]: Random Edge Removal;
• LRD [6]: Low-Rank Decomposition;
• DK [8]: Dark Knowledge;
• HashNet [2]: Hashed Neural Nets;
• DPP [28]: Data-free Parameter Pruning;
• SVD [7]: Singular Value Decomposition;
• DFC [30]: Deep Fried Convnets.

For all the above baselines, we use their reported results under the same settings for fair comparison. We report the theoretical speed-up for more consistent results, since the realistic speed-up may be affected by various factors, e.g. CPU, cache, and RAM. We compare the theoretical and realistic speed-up in Section 5.4, and discuss the effect of adopting the BLAS library for acceleration.
Our approaches are denoted as “Q-CNN” and “Q-CNN (EC)”, where the latter adopts error correction while the former does not. We implement the optimization process of parameter quantization in MATLAB, and fine-tune the resulting network with Caffe [15]. Additional results of our approach can be found in the supplementary material.

# 5.1. Results on MNIST

The MNIST dataset contains 70k images of hand-written digits, 60k used for training and 10k for testing. To evaluate the compression performance, we pre-train two neural networks, one 3-layer and the other 5-layer, where each hidden layer contains 1000 units. Different compression techniques are then adopted to compress these two networks, and the results are depicted in Table 2.

Table 2. Comparison on the compression rates and classification error on MNIST, based on a 3-layer network (784-1000-10) and a 5-layer network (784-1000-1000-1000-10).
In our Q-CNN framework, the trade-off between accuracy and efficiency is controlled by M (the number of subspaces) and K (the number of sub-codewords in each subspace). Since M = Cs/C′s, we tune (C′s, K) to adjust the quantization precision. In Table 2, we set the hyper-parameters as C′s = 4 and K = 32. From Table 2, we observe that our Q-CNN (EC) approach offers higher compression rates with less performance degradation than all baselines for both networks. The error correction scheme is effective in reducing the accuracy loss, especially for deeper networks (5-layer). Also, we find the performance of both Q-CNN and Q-CNN (EC) quite stable, as the standard deviation of five random runs is merely 0.05%. Therefore, we report the single-run performance in the remaining experiments.
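To make the (M, K) trade-off concrete, the following is a minimal sketch of product-quantizing a weight matrix: split the input dimension into M subspaces of size C′s and run plain k-means with K sub-codewords in each (no error correction). The function names and this NumPy implementation are ours for illustration, not the paper's code.

```python
import numpy as np

def product_quantize(W, sub_dim, K, iters=15, seed=0):
    """Product-quantize a (d, n) weight matrix: split the d input dimensions
    into M = d // sub_dim subspaces and run plain k-means with K codewords
    on the n sub-vectors of each subspace."""
    rng = np.random.default_rng(seed)
    d, n = W.shape
    assert d % sub_dim == 0 and n >= K
    codebooks, codes = [], []
    for m in range(d // sub_dim):
        S = W[m * sub_dim:(m + 1) * sub_dim].T         # (n, sub_dim) sub-vectors
        C = S[rng.choice(n, K, replace=False)].copy()  # init from data points
        for _ in range(iters):                         # Lloyd iterations
            a = np.argmin(((S[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
            for k in range(K):
                if np.any(a == k):
                    C[k] = S[a == k].mean(axis=0)      # centroid update
        codebooks.append(C)
        codes.append(a)
    return codebooks, codes

def reconstruct(codebooks, codes):
    """Rebuild the quantized approximation of W from codebooks and indices."""
    return np.vstack([cb[a].T for cb, a in zip(codebooks, codes)])
```

With C′s playing the role of `sub_dim`, only M sub-codebooks plus log2(K) index bits per sub-vector need to be stored, which is where the compression comes from.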
# 5.2. Results on ILSVRC-12

The ILSVRC-12 benchmark consists of over one million training images drawn from 1000 categories, and a disjoint validation set of 50k images. We report both the top-1 and top-5 classification error rates on the validation set, using single-view testing (central patch only).

We demonstrate our approach on four convolutional networks: AlexNet [16], CaffeNet [15], CNN-S [1], and VGG-16 [27]. The first two models have been adopted in several related works, and are therefore included for comparison. CNN-S and VGG-16 use either a wider or deeper structure for better classification accuracy, and are included here to prove the scalability of our approach. We compare all these networks' computation and storage overhead in Table 3, together with their classification error rates on ILSVRC-12.

Table 3. Comparison on the test-phase computation overhead (FLOPs), storage consumption (Bytes), and classification error rates (Top-1/5 Err.) of AlexNet, CaffeNet, CNN-S, and VGG-16.

Model     FLOPs     Bytes     Top-1 Err.  Top-5 Err.
AlexNet   7.29e+8   2.44e+8   42.78%      19.74%
CaffeNet  7.27e+8   2.44e+8   42.53%      19.59%
CNN-S     2.94e+9   4.12e+8   37.31%      15.82%
VGG-16    1.55e+10  5.53e+8   28.89%      10.05%

# 5.2.1 Quantizing the Convolutional Layer

To begin with, we quantize the second convolutional layer of AlexNet, which is the most time-consuming layer during the test-phase. In Table 4, we report the performance under several (C′s, K) settings, comparing with two baseline methods, CPD [17] and GBD [18].

Table 4. Comparison on the speed-up rates and the increase of top-1/5 error rates for accelerating the second convolutional layer in AlexNet, with or without fine-tuning (FT). The hyper-parameters of Q-CNN, C′s …
From Table 4, we discover that with a large speed-up rate (over 4×), the performance loss of both CPD and GBD becomes severe, especially before fine-tuning. The naive parameter quantization method also suffers from a similar problem. By incorporating the idea of error correction, our Q-CNN model achieves up to 6× speed-up with merely a 0.6% drop in accuracy, even without fine-tuning. The accuracy loss can be further reduced by fine-tuning the subsequent layers. Hence, it is more effective to minimize the estimation error of each layer's response than to minimize the quantization error of the network parameters.

Next, we take one step further and attempt to speed-up all the convolutional layers in AlexNet with Q-CNN (EC).

Table 5. Comparison on the speed-up/compression rates and the increase of top-1/5 error rates for accelerating all the convolutional layers in AlexNet and VGG-16.
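The contrast between minimizing the weight error and minimizing the response error can be illustrated with a toy single-subspace quantizer for a fully-connected layer: given sample inputs X, codeword assignments are made under the X-weighted metric ‖X(w_j − c_k)‖² rather than plain Euclidean distance. This is our simplified illustration of the error-correction idea, not the paper's actual solver (which handles multiple subspaces jointly).

```python
import numpy as np

def response_error(X, W, C, a):
    """Frobenius error of the layer response, ||X W - X W_hat||_F,
    where W_hat's j-th column is the codeword assigned to column j."""
    return np.linalg.norm(X @ (W - C[:, a]))

def error_corrected_quantize(X, W, C0, a0, iters=10):
    """Alternate between (i) assigning each weight column to the codeword
    minimizing the response error under the Gram metric X^T X, and
    (ii) updating each codeword to its cluster mean (the least-squares
    optimum for fixed assignments). Each step can only decrease the
    response error, so it improves on the k-means starting point."""
    G = X.T @ X
    C, a = C0.copy(), a0.copy()
    for _ in range(iters):
        diff = W[:, None, :] - C[:, :, None]               # (d, K, n)
        dist = np.einsum('dkn,de,ekn->kn', diff, G, diff)  # response-metric distances
        a = dist.argmin(axis=0)
        for k in range(C.shape[1]):
            if np.any(a == k):
                C[:, k] = W[:, a == k].mean(axis=1)
    return C, a
```

Starting from an ordinary (weight-error) k-means solution, this procedure monotonically reduces ‖XW − XŴ‖_F, mirroring the observation that response error, not weight error, is the quantity worth minimizing.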
We fix the quantization hyper-parameters (C′s, K) across all layers. From Table 5, we observe that the loss in accuracy grows more mildly than in the single-layer case. The speed-up rates reported here are consistently smaller than those in Table 4, since the acceleration effect is less significant for some layers (i.e. “conv 4” and “conv 5”). For AlexNet, our Q-CNN model (C′s = 8, K = 128) can accelerate the computation of all the convolutional layers by a factor of 4.27×, while the increase in the top-1 and top-5 error rates is no more than 2.5%. After fine-tuning the remaining fully-connected layers, the performance loss can be further reduced to less than 1%.
In Table 5, we also report the comparison against LANR [31] on VGG-16. For a similar speed-up rate (4×), their approach outperforms ours in the top-5 classification error (an increase of 0.95% against 1.83%). After fine-tuning, the performance gap is narrowed down to 0.35% against 0.45%. At the same time, our approach offers over 14× compression of the parameters in convolutional layers, much larger than their 2.7× compression2. Therefore, our approach is effective in accelerating and compressing networks with many convolutional layers, with only minor performance loss.

# 5.2.2 Quantizing the Fully-connected Layer

For demonstration, we first compress the parameters in a single fully-connected layer. In CaffeNet, the first fully-connected layer possesses over 37 million parameters (9216 × 4096), more than 60% of the whole network's parameters. Our Q-CNN approach is adopted to quantize this layer, and the results are reported in Table 6. The performance loss of our Q-CNN model is negligible (within 0.4%), which is much smaller than that of the baseline methods (DPP and SVD). Furthermore, error correction is effective in preserving the classification accuracy, especially under a higher compression rate.
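The “37 million” and “more than 60%” figures follow directly from the layer shape and the network size in Table 3 (2.44e+8 bytes at 4 bytes per single-precision parameter):

```python
params_fc6 = 9216 * 4096                    # first fully-connected layer of CaffeNet
total_params = 2.44e8 / 4                   # Table 3: 2.44e+8 bytes, 4 bytes per float
print(params_fc6)                           # 37748736 -> "over 37 million"
print(round(params_fc6 / total_params, 3))  # ~0.619 -> "more than 60%"
```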
Table 6. Comparison on the compression rates and the increase of top-1/5 error rates for compressing the first fully-connected layer in CaffeNet, without fine-tuning.

Method      Para.  Compression  Top-1 Err. ↑  Top-5 Err. ↑
DPP         -      1.19×        0.16%         -
DPP         -      1.47×        1.76%         -
DPP         -      1.91×        4.08%         -
DPP         -      2.75×        9.68%         -
SVD         -      1.38×        0.03%         -0.03%
SVD         -      2.77×        0.07%         0.07%
SVD         -      5.54×        0.36%         0.19%
SVD         -      11.08×       1.23%         0.86%
Q-CNN       2/16   15.06×       0.19%         0.19%
Q-CNN       3/16   21.94×       0.35%         0.28%
Q-CNN       3/32   16.70×       0.18%         0.12%
Q-CNN       4/32   21.33×       0.28%         0.16%
Q-CNN (EC)  2/16   15.06×       0.10%         0.07%
Q-CNN (EC)  3/16   21.94×       0.18%         0.03%
Q-CNN (EC)  3/32   16.70×       0.14%         0.11%
Q-CNN (EC)  4/32   21.33×       0.16%         0.12%
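The Q-CNN compression rates in Table 6 can be reproduced from a simple storage model: M = d/C′s codebooks of K sub-codewords in single precision, plus one log2(K)-bit index per sub-vector per output column. This accounting is our reading of the scheme, but it matches all four reported settings for the 9216 × 4096 layer:

```python
import math

def fc_compression_rate(d, n, sub_dim, K, bits_float=32):
    """Compression rate of product-quantizing a d x n weight matrix:
    original storage over (codebooks + index bits)."""
    M = d // sub_dim                          # number of subspaces
    original = d * n * bits_float             # dense single-precision weights
    codebooks = M * K * sub_dim * bits_float  # M codebooks of K sub-codewords
    indices = M * n * math.log2(K)            # one index per sub-vector per column
    return original / (codebooks + indices)

for sub_dim, K in [(2, 16), (3, 16), (3, 32), (4, 32)]:
    print(sub_dim, K, round(fc_compression_rate(9216, 4096, sub_dim, K), 2))
# -> 15.06, 21.94, 16.7, 21.33, matching the Q-CNN rows of Table 6
```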
Now we evaluate our approach's performance for compressing all the fully-connected layers in CaffeNet in Table 7. The third layer is actually the combination of 1000 classifiers, and is more critical to the classification accuracy. Hence, we adopt a much more fine-grained hyper-parameter setting (C′s = 1, K = 16) for this layer. Although the speed-up effect no longer exists, we can still achieve around 8× compression for the last layer.

Table 7. Comparison on the compression rates and the increase of top-1/5 error rates for compressing all the fully-connected layers in CaffeNet. Both SVD and DFC are fine-tuned, while Q-CNN and Q-CNN (EC) are not fine-tuned.

Method      Para.  Compression  Top-1 Err. ↑  Top-5 Err. ↑
SVD         -      1.26×        0.14%         -
SVD         -      2.52×        1.22%         -
DFC         -      1.79×        -0.66%        -
DFC         -      3.58×        0.31%         -
Q-CNN       2/16   13.96×       0.28%         0.29%
Q-CNN       3/16   19.14×       0.70%         0.47%
Q-CNN       3/32   15.25×       0.44%         0.34%
Q-CNN       4/32   18.71×       0.75%         0.59%
Q-CNN (EC)  2/16   13.96×       0.31%         0.30%
Q-CNN (EC)  3/16   19.14×       0.59%         0.47%
Q-CNN (EC)  3/32   15.25×       0.31%         0.27%
Q-CNN (EC)  4/32   18.71×       0.57%         0.39%
2 The compression effect of their approach was not explicitly discussed in the paper; we estimate the compression rate based on their description.
3 In Table 6, SVD means replacing the weighting matrix with the multiplication of two low-rank matrices; in Table 7, SVD means fine-tuning the network after the low-rank matrix decomposition.

# 5.2.3 Quantizing the Whole Network

So far, we have evaluated the performance of CNN models with either convolutional or fully-connected layers quantized. Now we demonstrate the quantization of the whole network with a three-stage strategy. Firstly, we quantize all the convolutional layers with error correction, while fully-connected layers remain untouched. Secondly, we fine-tune the fully-connected layers in the quantized network with the ILSVRC-12 training set to restore the classification accuracy. Finally, the fully-connected layers in the fine-tuned network are quantized with error correction. We report the performance of our Q-CNN models in Table 8.

Table 8. The speed-up/compression rates and the increase of top-1/5 error rates for the whole CNN model. Particularly, for the quantization of the third fully-connected layer in each network, we let C′s = 1 and K = 16.

Model     Conv.  FCnt.  Speed-up  Compression  Top-1/5 Err. ↑
AlexNet   8/128  3/32   4.05×     15.40×       1.38% / 0.84%
AlexNet   8/128  4/32   4.15×     18.76×       1.46% / 0.97%
CaffeNet  8/128  3/32   4.04×     15.40×       1.43% / 0.99%
CaffeNet  8/128  4/32   4.14×     18.76×       1.54% / 1.12%
CNN-S     8/128  3/32   5.69×     16.32×       1.48% / 0.81%
CNN-S     8/128  4/32   5.78×     20.16×       1.64% / 0.85%
VGG-16    6/128  3/32   4.05×     16.55×       1.22% / 0.53%
VGG-16    6/128  4/32   4.06×     20.34×       1.35% / 0.58%
We let C′s = 8 and K = 128 for AlexNet, CaffeNet, and CNN-S, and C′s = 6 and K = 128 for VGG-16, to ensure a roughly 4~6× speed-up for each network. Then we vary the hyper-parameter settings in the fully-connected layers for different compression levels. For the former two networks, we achieve 18× compression with about 1% loss in the top-5 classification accuracy. For CNN-S, we achieve 5.78× speed-up and 20.16× compression, while the top-5 classification accuracy drop is merely 0.85%. The result on VGG-16 is even more encouraging: with 4.06× speed-up and 20.34× compression, the increase in the top-5 error rate is only 0.58%. Hence, our proposed Q-CNN framework can improve the efficiency of convolutional networks with minor performance loss, which is acceptable in many applications.
# 5.3. Results on Mobile Devices

We have developed an Android application to fulfill CNN-based image classification on mobile devices, based on our Q-CNN framework. The experiments are carried out on a Huawei® Mate 7 smartphone, equipped with a 1.8GHz Kirin 925 CPU. The test-phase computation is carried out on a single CPU core, without GPU acceleration.

In Table 9, we compare the computation efficiency and classification accuracy of the original and quantized CNN models. Our Q-CNN framework achieves 3× speed-up for AlexNet, and 4× speed-up for CNN-S. What's more, we compress the storage consumption by 20×, and the required run-time memory is only one quarter of the original model. At the same time, the loss in the top-5 classification accuracy is no more than 1%. Therefore, our proposed approach improves the run-time efficiency in multiple aspects, making the deployment of CNN models tractable on mobile platforms.

Table 9. Comparison on the time, storage, memory consumption, and top-5 classification error rates of the original and quantized AlexNet and CNN-S.

Model    Method  Time    Storage   Memory    Top-5 Err.
AlexNet  CNN     2.93s   232.56MB  264.74MB  19.74%
AlexNet  Q-CNN   0.95s   12.60MB   74.65MB   20.70%
CNN-S    CNN     10.58s  392.57MB  468.90MB  15.82%
CNN-S    Q-CNN   2.61s   20.13MB   129.49MB  16.68%

# 5.4. Theoretical vs. Realistic Speed-up

In Table 10, we compare the theoretical and realistic speed-up on AlexNet. The BLAS [29] library is used in Caffe [15] to accelerate the matrix multiplication in convolutional and fully-connected layers. However, it may not always be an option for mobile devices. Therefore, we measure the run-time speed under two settings, i.e. with BLAS enabled or disabled. The realistic speed-up is slightly lower with BLAS on, indicating that Q-CNN does not benefit as much from BLAS as CNN does. Other optimization techniques, e.g. SIMD, SSE, and AVX [4], may further improve our realistic speed-up, and shall be explored in the future.
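As a sanity check, the headline ratios in Tables 9 and 10 follow directly from the raw AlexNet measurements:

```python
# Raw AlexNet numbers from Table 9 (mobile) and Table 10 (FLOPs / BLAS off).
time_cnn, time_qcnn = 2.93, 0.95            # seconds per image
storage_cnn, storage_qcnn = 232.56, 12.60   # MB on disk
memory_cnn, memory_qcnn = 264.74, 74.65     # MB at run time
flops_cnn, flops_qcnn = 7.29e8, 1.75e8      # test-phase FLOPs

print(round(time_cnn / time_qcnn, 2))        # 3.08 -> the "3x" speed-up
print(round(storage_cnn / storage_qcnn, 1))  # 18.5 -> roughly the "20x" compression
print(round(memory_cnn / memory_qcnn, 2))    # 3.55 -> about one quarter of the memory
print(round(flops_cnn / flops_qcnn, 2))      # 4.17 -> close to the reported 4.15x
```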
Quantized Convolutional Neural Networks for Mobile Devices
Table 10. Comparison on the theoretical and realistic speed-up on AlexNet (CPU only, single-threaded). Here we use the ATLAS library, which is the default BLAS choice in Caffe [15].

| BLAS | CNN Time (ms) | Q-CNN Time (ms) | CNN FLOPs | Q-CNN FLOPs | Theo. Speed-up | Real. Speed-up |
|------|---------------|-----------------|-----------|-------------|----------------|----------------|
| Off  | 321.10        | 75.62           | 7.29e+8   | 1.75e+8     | 4.15×          | 4.25×          |
| On   | 167.79 [a]    | 55.35           | 7.29e+8   | 1.75e+8     | 4.15×          | 3.03×          |

[a] This is Caffe's run-time speed. The code for the other three settings is on https://github.com/jiaxiang-wu/quantized-cnn.

# 6. Conclusion

In this paper, we propose a unified framework to simultaneously accelerate and compress convolutional neural networks. We quantize network parameters to enable efficient test-phase computation. Extensive experiments are conducted on MNIST and ILSVRC-12, and our approach achieves outstanding speed-up and compression rates, with only negligible loss in the classification accuracy.

# 7. Acknowledgement

This work was supported in part by National Natural Science Foundation of China (Grant No. 61332016), and 863 program (Grant No. 2014AA015105).
# References

[1] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In British Machine Vision Conference (BMVC), 2014. 1, 2, 6

[2] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In International Conference on Machine Learning (ICML), pages 2285–2294, 2015. 1, 2, 5, 6

[3] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. High-performance neural networks for visual object classification. CoRR, abs/1102.0183, 2011. 1, 5, 6

[4] I. Corporation. Intel architecture instruction set extensions programming reference. Technical report, Intel Corporation, Feb 2016. 8

[5] M. Courbariaux, Y. Bengio, and J. David. Training deep neural networks with low precision multiplications. In International Conference on Learning Representations (ICLR), 2015. 5
[6] M. Denil, B. Shakibi, L. Dinh, M. A. Ranzato, and N. de Freitas. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems (NIPS), pages 2148–2156, 2013. 5, 6

[7] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems (NIPS), pages 1269–1277, 2014. 1, 5

[8] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. CoRR, abs/1503.02531, 2015. 5, 6

[9] R. B. Girshick. Fast R-CNN. CoRR, abs/1504.08083, 2015. 1

[10] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 580–587, 2014. 1
[11] Y. Gong, L. Liu, M. Yang, and L. D. Bourdev. Compressing deep convolutional networks using vector quantization. CoRR, abs/1412.6115, 2014. 1, 2, 5, 7

[12] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. CoRR, abs/1510.00149, 2015. 1, 2, 5

[13] M. Jaderberg, A. Vedaldi, and A. Zisserman. Speeding up convolutional neural networks with low rank expansions. In British Machine Vision Conference (BMVC), 2014. 1

[14] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 33(1):117–128, Jan 2011. 2
[15] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. CoRR, abs/1408.5093, 2014. 2, 6, 8

[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1106–1114, 2012. 1, 2, 6

[17] V. Lebedev, Y. Ganin, M. Rakhuba, I. V. Oseledets, and V. S. Lempitsky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition. In International Conference on Learning Representations (ICLR), 2015. 1, 5, 6

[18] V. Lebedev and V. S. Lempitsky. Fast convnets using group-wise brain damage. CoRR, abs/1506.02515, 2015. 1, 5, 6
[19] Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989. 1

[20] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. 2, 5

[21] C. Leng, J. Wu, J. Cheng, X. Bai, and H. Lu. Online sketching hashing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2503–2511, 2015. 2

[22] C. Leng, J. Wu, J. Cheng, X. Zhang, and H. Lu. Hashing for distributed data. In International Conference on Machine Learning (ICML), pages 1642–1650, 2015. 2
[23] G. Levi and T. Hassner. Age and gender classification using convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 34–42, 2015. 1

[24] C. Li, Q. Liu, J. Liu, and H. Lu. Learning ordinal discriminative features for age estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2570–2577, 2012. 1

[25] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. CoRR, abs/1603.05279, 2016. 5

[26] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), pages 1–42, 2015. 2, 5
[27] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015. 1, 2, 6

[28] S. Srinivas and R. V. Babu. Data-free parameter pruning for deep neural networks. In British Machine Vision Conference (BMVC), pages 31.1–31.12, 2015. 1, 5

[29] R. C. Whaley and A. Petitet. Minimizing development and maintenance costs in supporting persistently optimized BLAS. Software: Practice and Experience, 35(2):101–121, Feb 2005. 8

[30] Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. J. Smola, L. Song, and Z. Wang. Deep fried convnets. CoRR, abs/1412.7149, 2014. 1, 5

[31] X. Zhang, J. Zou, K. He, and J. Sun. Accelerating very deep convolutional networks for classification and detection. CoRR, abs/1505.06798, 2015. 1, 5, 7
[32] X. Zhang, J. Zou, X. Ming, K. He, and J. Sun. Efficient and accurate approximations of nonlinear convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1984–1992, 2015. 1, 5

# Appendix A: Additional Results

In the submission, we report the performance after quantizing all the convolutional layers in AlexNet, and quantizing all the fully-connected layers in CaffeNet. Here, we present experimental results for some other settings.

# Quantizing Convolutional Layers in CaffeNet

We quantize all the convolutional layers in CaffeNet, and the results are as demonstrated in Table 11. Furthermore, we fine-tune the quantized CNN model learned with error correction (C′_s = 8, K = 128), and the increase of top-1/5 error rates are 1.15% and 0.75%, compared to the original CaffeNet.

Table 11. Comparison on the speed-up rates and the increase of top-1/5 error rates for accelerating all the convolutional layers in CaffeNet, without fine-tuning.
| Method | Para. (C′_s/K) | Speed-up | Top-1 Err. ↑ | Top-5 Err. ↑ |
|--------|-------|----------|--------------|--------------|
| Q-CNN | 4/64 | 3.32× | 18.69% | 16.73% |
| Q-CNN | 6/64 | 4.32× | 32.84% | 33.55% |
| Q-CNN | 6/128 | 3.71× | 20.08% | 18.31% |
| Q-CNN | 8/128 | 4.27× | 35.48% | 37.82% |
| Q-CNN (EC) | 4/64 | 3.32× | 1.22% | 0.97% |
| Q-CNN (EC) | 6/64 | 4.32× | 2.44% | 1.83% |
| Q-CNN (EC) | 6/128 | 3.71× | 1.57% | 1.12% |
| Q-CNN (EC) | 8/128 | 4.27× | 2.30% | 1.71% |

# Quantizing Convolutional Layers in CNN-S

We quantize all the convolutional layers in CNN-S, and the results are as demonstrated in Table 12. Furthermore, we fine-tune the quantized CNN model learned with error correction (C′_s = 8, K = 128), and the increase of top-1/5 error rates are 1.24% and 0.63%, compared to the original CNN-S.

Table 12. Comparison on the speed-up rates and the increase of top-1/5 error rates for accelerating all the convolutional layers in CNN-S, without fine-tuning.
| Method | Para. (C′_s/K) | Speed-up | Top-1 Err. ↑ | Top-5 Err. ↑ |
|--------|-------|----------|--------------|--------------|
| Q-CNN | 4/64 | 3.69× | 19.87% | 16.77% |
| Q-CNN | 6/64 | 5.17× | 45.74% | 48.67% |
| Q-CNN | 6/128 | 4.78× | 27.86% | 25.09% |
| Q-CNN | 8/128 | 5.92× | 46.18% | 50.26% |
| Q-CNN (EC) | 4/64 | 3.69× | 1.60% | 0.92% |
| Q-CNN (EC) | 6/64 | 5.17× | 3.49% | 2.32% |
| Q-CNN (EC) | 6/128 | 4.78× | 2.07% | 1.32% |
| Q-CNN (EC) | 8/128 | 5.92× | 3.42% | 2.17% |

# Quantizing Fully-connected Layers in AlexNet

We quantize all the fully-connected layers in AlexNet, and the results are as demonstrated in Table 13.

# Quantizing Fully-connected Layers in CNN-S

We quantize all the fully-connected layers in CNN-S, and the results are as demonstrated in Table 14.
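The appendix tables are parameterized by C′_s and K, which (per the fine-tuning settings quoted above, e.g. C′_s = 8, K = 128 for "8/128") we read as the sub-vector dimension and the codebook size. A toy product-quantization sketch in the spirit of [14] shows how these two knobs trade accuracy for compression; it learns a plain k-means codebook per sub-space and omits the paper's response-error minimization and error correction, and the function names are ours:

```python
import numpy as np

def quantize_pq(W, sub_dim, K, rng=np.random.default_rng(0)):
    """Product-quantize weight matrix W (d_in x d_out): split rows into
    sub-vectors of length sub_dim, learn a K-word codebook per sub-space
    with a few Lloyd (k-means) iterations, store only codebooks + indices."""
    d_in, d_out = W.shape
    assert d_in % sub_dim == 0 and d_out >= K
    codebooks, assignments = [], []
    for s in range(d_in // sub_dim):
        block = W[s * sub_dim:(s + 1) * sub_dim, :].T        # d_out sub-vectors
        centers = block[rng.choice(d_out, K, replace=False)]  # init from data
        for _ in range(10):                                   # Lloyd iterations
            d2 = ((block[:, None, :] - centers[None]) ** 2).sum(-1)
            idx = d2.argmin(1)                                # nearest codeword
            for k in range(K):
                if (idx == k).any():
                    centers[k] = block[idx == k].mean(0)      # recenter
        codebooks.append(centers)
        assignments.append(idx)
    return codebooks, assignments

def reconstruct(codebooks, assignments, sub_dim):
    # Rebuild an approximate weight matrix from codebooks and indices.
    parts = [cb[idx].T for cb, idx in zip(codebooks, assignments)]
    return np.concatenate(parts, axis=0)

W = np.random.default_rng(1).normal(size=(32, 256)).astype(np.float32)
cbs, asg = quantize_pq(W, sub_dim=4, K=16)
W_hat = reconstruct(cbs, asg, sub_dim=4)
```

Larger sub_dim means fewer sub-spaces (more compression, more error); larger K means a richer codebook (less error, less speed-up), matching the trend in the tables. Storage drops from d_in × d_out floats to K × sub_dim floats per sub-space plus one log2(K)-bit index per sub-vector.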
Table 13. Comparison on the compression rates and the increase of top-1/5 error rates for compressing all the fully-connected layers in AlexNet, without fine-tuning.

| Method | Para. (C′_s/K) | Compression | Top-1 Err. ↑ | Top-5 Err. ↑ |
|--------|-------|-------------|--------------|--------------|
| Q-CNN | 2/16 | 13.96× | 0.25% | 0.27% |
| Q-CNN | 3/16 | 19.14× | 0.77% | 0.64% |
| Q-CNN | 3/32 | 15.25× | 0.54% | 0.33% |
| Q-CNN | 4/32 | 18.71× | 0.71% | 0.69% |
| Q-CNN (EC) | 2/16 | 13.96× | 0.14% | 0.20% |
| Q-CNN (EC) | 3/16 | 19.14× | 0.40% | 0.22% |
| Q-CNN (EC) | 3/32 | 15.25× | 0.40% | 0.21% |
| Q-CNN (EC) | 4/32 | 18.71× | 0.46% | 0.38% |

Table 14. Comparison on the compression rates and the increase of top-1/5 error rates for compressing all the fully-connected layers in CNN-S, without fine-tuning. Para. 2/16 3/16 3/32 4/32 2/16 3/16 3/32 4/32