2306.04346
|
Quantum Electronic Circuits for Multicritical Ising Models
|
Multicritical Ising models and their perturbations are paradigmatic models of
statistical mechanics. In two space-time dimensions, these models provide a
fertile testbed for investigation of numerous non-perturbative problems in
strongly-interacting quantum field theories. In this work, analog
superconducting quantum electronic circuit simulators are described for the
realization of these multicritical Ising models. The latter arise as
perturbations of the quantum sine-Gordon model with $p$-fold degenerate minima,
$p =2, 3,4,\ldots$. The corresponding quantum circuits are constructed with
Josephson junctions with $\cos(n\phi + \delta_n)$ potential with $1\leq n\leq
p$ and $\delta_n\in[-\pi,\pi]$. The simplest case, $p = 2$, corresponds to the
quantum Ising model and can be realized using conventional Josephson junctions
and the so-called $0-\pi$ qubits. The lattice models for the Ising and
tricritical Ising models are analyzed numerically using the density matrix
renormalization group technique. Evidence for the multicritical phenomena are
obtained from computation of entanglement entropy of a subsystem and
correlation functions of relevant lattice operators. The proposed quantum
circuits provide a systematic approach for controlled numerical and
experimental investigation of a wide range of non-perturbative phenomena
occurring in low-dimensional quantum field theories.
|
Ananda Roy
|
2023-06-07T11:24:43Z
|
http://arxiv.org/abs/2306.04346v1
|
# Quantum Electronic Circuits for Multicritical Ising Models
###### Abstract
Multicritical Ising models and their perturbations are paradigmatic models of statistical mechanics. In two space-time dimensions, these models provide a fertile testbed for investigation of numerous non-perturbative problems in strongly-interacting quantum field theories. In this work, analog superconducting quantum electronic circuit simulators are described for the realization of these multicritical Ising models. The latter arise as perturbations of the quantum sine-Gordon model with \(p\)-fold degenerate minima, \(p=2,3,4,\dots\). The corresponding quantum circuits are constructed with Josephson junctions with \(\cos(n\phi+\delta_{n})\) potential with \(1\leq n\leq p\) and \(\delta_{n}\in[-\pi,\pi]\). The simplest case, \(p=2\), corresponds to the quantum Ising model and can be realized using conventional Josephson junctions and the so-called \(0-\pi\) qubits. The lattice models for the Ising and tricritical Ising models are analyzed numerically using the density matrix renormalization group technique. Evidence for the multicritical phenomena is obtained from computation of entanglement entropy of a subsystem and correlation functions of relevant lattice operators. The proposed quantum circuits provide a systematic approach for controlled numerical and experimental investigation of a wide range of non-perturbative phenomena occurring in low-dimensional quantum field theories.
## I Introduction
Quantum simulation [1; 2] is an indispensable technique for investigation of strongly-interacting quantum field theories (QFTs) [3; 4]. With the advent of noisy intermediate-scale quantum simulators and algorithms, gate-based digital quantum simulation has been used to investigate lattice models for a wide range of non-perturbative QFT problems. These include simulation of quantum many-body dynamics [5], topological phase-transitions [6] and confinement in perturbed Ising models [7; 8; 9]. However, given the number of available qubits and the coherence properties of existing quantum simulators, generalization of the aforementioned simulation protocols to investigate generic QFTs with thousands, potentially millions, of qubits remains a daunting challenge in the near term. Such large system sizes are necessary for most QFT problems since the convergence of the lattice model to the scaling limit is usually slow, examples being QFTs describing quantum critical points and their vicinities.
Analog quantum simulation [10; 11; 12; 13; 14] provides a near-term, more tractable alternative to the aforementioned digital approach. This is particularly relevant for the investigation of those QFT problems which require probing properties at longer length-scales than those permitted using current digital quantum simulators. Indeed, analog simulation has had considerable success in probing complex quantum many-body systems with simulators based on trapped atoms [15; 16; 17; 18; 19; 20], trapped ions [21; 22] and superconducting quantum electronic circuits (QECs) [23; 24; 25; 26]. In this work, we focus on QEC-based quantum simulators in two space-time dimensions. These QEC simulators rely on the robust, tunable, dispersive Josephson nonlinearity to give rise to strongly-interacting nonlinear QFTs [27; 28]. In fact, arrays with thousands of _quantum_ Josephson junctions have already been investigated experimentally [24; 25].
In a QEC array, the fundamental lattice degree of freedom is the superconducting phase at a lattice site. This provides a convenient starting point for discretization of a wide range of bosonic QFTs realizable as perturbations of the free boson QFT. The latter occurs naturally as a long-wavelength description of a one-dimensional Josephson junction array in the limit of large Cooper-pair tunneling strength [29; 30]. In the continuum limit, the bosonic field arises from the corresponding lattice superconducting phase after coarse-graining. This approach has been utilized to give rise to the quantum sine-Gordon (sG) model [27], a non-integrable, two-frequency sG model [28] and a multi-field generalization of the sG model [14].
Figure 1: Quantum circuit scheme for realization of multicritical Ising models. The latter occur as infrared fixed points of the renormalization group flow trajectory (black arrows) starting from the free compactified boson QFT in the ultraviolet. The blue arrows indicate the logic of the circuit construction. The corresponding unit cells of the quantum circuit are shown. In all cases, the horizontal link contains a Josephson junction (junction energy \(E_{J}\) and junction capacitance \(C_{J}\)). The node flux at the \(i^{\text{th}}\) site is indicated. The circuit element on the vertical link determines the nature of the QFT. The latter element is a capacitor [\(\cos(p\phi)\) Josephson junction] for the free boson model [sine-Gordon model with \(p\) degenerate minima]. A parallel circuit of \(\cos(n\phi)\) Josephson junctions with \(n=1,2,\dots,p\) on the vertical link realizes the \(p\)-critical Ising model. The \(p-1\) phase-differences between the different circuit elements, denoted by \(\delta_{n}\), \(n=1,\dots,p-1\), are selected depending on the specific model.
In contrast to the earlier proposals which considered perturbations of the free boson QFT that lead to flows to strongly-interacting massive QFTs, this work describes QEC simulators that realize quantum critical points of varying universality classes. In particular, we analyze QEC simulators which, in the scaling limit, are described by multicritical Ising models. The latter are diagonal, unitary, minimal models of conformal field theories [31]. These have played a central role in the understanding of two dimensional, critical, classical statistical mechanics models [32] and are the starting point for systematic analysis of perturbed, integrable or otherwise, conformal field theories [33; 34]. Furthermore, stacks of such critical models, with appropriate couplings, give rise to a large class of topological phases [35; 36] relevant for topological quantum computation [37].
It is well-known that these multicritical Ising QFTs arise in the scaling limit of restricted solid-on-solid (RSOS) models [38]. The corresponding quantum Hamiltonians, owing to their integrable construction, have been analyzed extensively using Bethe ansatz [39]. The tricritical Ising model has been shown to occur also in the Blume-Capel model [40; 41] and in interacting chains of Majorana zero modes [42; 43]. However, unlike RSOS models, there is no clear path towards generalization of the Blume-Capel model or the Majorana chains to generic multicritical Ising models. The goal of this work is to present QEC lattices that have the same versatility as the RSOS models while having the merit of being potentially realizable in an experiment.
Starting with the QEC lattice model for the quantum sG field theory with \(p\)-fold degenerate minima, \(p=2,3,\ldots\), perturbations of the form \(\cos(n\phi+\delta_{n})\) are systematically added. Here, \(1\leq n<p\) and \(\delta_{n}\in[-\pi,\pi]\). As shown in this work, these lattice models give rise to multicritical Ising models upon appropriate choice of parameters. In contrast to the RSOS models, the proposed lattice models are non-integrable, even though they give rise to the integrable multicritical Ising QFTs in the scaling limit. Due to their non-integrable nature, these lattice models are analyzed numerically, using the density matrix renormalization group (DMRG) technique [44].
Note that similar non-integrable lattice models could be conceived starting with the XYZ spin chain regularization of the quantum sG model [45; 46]. In comparison to the models proposed in this work which use only nearest-neighbor interactions, the generalizations of the XYZ chain would require longer-range interactions between the spins. Furthermore, the generalized XYZ models suffer from larger corrections to scaling compared to the proposed models (see Ref. [27] for a numerical demonstration). As a result, QEC circuits are a more suitable platform for realization of the perturbed sG models considered in this work.
The article is organized as follows. Sec. II describes the general scheme for realizing arbitrary multicritical Ising models. Secs. III and IV are devoted to the Ising and the tricritical Ising models respectively. Sec. V provides a concluding summary and outlook.
## II General scheme
The main idea behind the QEC realization of multicritical Ising models is based on the well-known notion that the diagonal, unitary minimal models of conformal field theories arise as multicritical points of an effective Ginzburg-Landau action [47]. To arrive at this effective action, we consider perturbations of the euclidean quantum sG action with \(p\)-fold degenerate minima describing the scalar field \(\varphi\):
\[\mathcal{A} =\int d^{2}x\left[\frac{1}{16\pi}(\partial_{\nu}\varphi)^{2}-2\mu \cos(\beta\varphi)\right]\] \[\quad-\sum_{n=1}^{p-1}2\lambda_{n}\int d^{2}x\cos\left(\frac{n \beta\varphi}{p}+\delta_{n}\right), \tag{1}\]
where \(\mu,\lambda_{n}\)-s are coupling constants and \(\delta_{n}\)-s are suitably chosen phases. The case \(\lambda_{n}=0\ \forall\ n\) corresponds to the ordinary sG model with \(p\)-fold degenerate minima with coupling constant \(\beta\), where we consider the case \(0\leq\beta^{2}\leq 1\). Appropriate choices of \(\{\lambda_{n},\delta_{n}\}\) induce a flow to the quantum critical points of the multicritical Ising universality class. The latter are characterized by the central charges
\[c_{p}=1-\frac{6}{(p+1)(p+2)},\ p=2,3,\ldots. \tag{2}\]
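As a quick check of Eq. (2) (a trivial sketch, not part of the original analysis), the first few members of the series reproduce the familiar central charges:

```python
# Quick check of Eq. (2): central charges of the first few multicritical Ising models.
for p in (2, 3, 4, 5):
    c = 1 - 6 / ((p + 1) * (p + 2))
    print(p, c)   # 2 -> 0.5 (Ising), 3 -> 0.7 (tricritical), 4 -> 0.8 (tetracritical), ...
```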
The perturbed sG action of Eq. (1) describes the scaling limit of the QEC lattice shown in Fig. 1 (top right). Each unit cell has a Josephson junction on the horizontal link with junction energy, \(E_{J}\), and junction capacitance, \(C_{J}\). The vertical link contains the most crucial circuit element. When the latter is an ordinary capacitor with capacitance \(C_{g}\), the QEC array can be in a superconducting phase [29; 30]. This happens when \(E_{J}>E_{c}\), where \(E_{c}=(2e)^{2}/2C_{g}\)[48]. The long-wavelength properties of the array are described by the free, compactified boson QFT. Choosing the circuit element on the vertical link to be a \(\cos(p\phi)\) Josephson junction gives rise to the quantum sG model with \(p\)-fold degenerate minima (see Ref. [28] for the analysis of \(p=2\)). Finally, the parallel circuit configuration shown in Fig. 1 (top right) gives rise to the perturbed sG model of Eq. (1) in the scaling limit.
The Hamiltonian of the QEC array with \(L\) sites and
periodic boundary conditions is given by
\[H =E_{c}\sum_{j=1}^{L}n_{j}^{2}+\epsilon E_{c}\sum_{j=1}^{L}n_{j}n_{j+1}\] \[\quad-E_{g}\sum_{j=1}^{L}n_{j}-E_{J}\sum_{j=1}^{L}\cos(\phi_{j}- \phi_{j+1})\] \[\quad-\sum_{n=1}^{p}\sum_{j=1}^{L}E_{J_{n}}\cos(n\phi_{j}+\delta_ {n}), \tag{3}\]
where \(\delta_{p}\) is set to \(0\). Here, \(n_{j}\), the excess number of Cooper-pairs on each island, and \(\phi_{k}\), the node-flux, are canonically conjugate satisfying: \([n_{j},\mathrm{e}^{\pm\mathrm{i}\phi_{k}}]=\pm\delta_{jk}\mathrm{e}^{\pm \mathrm{i}\phi_{k}}\). The nearest-neighbor interaction proportional to \(\epsilon\leq 1\) arises due to the capacitance \(C_{J}\)[30]. In this work, we chose \(\epsilon=0.2\) and \(E_{g}/E_{c}=1.2\)[27; 28], but similar results could be obtained for other choices. Furthermore, the numerical computations were performed by choosing \(E_{c}=1.0\). The third and fourth terms arise from a gate voltage at each node, taken to be uniform, and from the coherent tunneling of Cooper-pairs, respectively. The last term in Eq. (3) arises from the parallel circuit arrangement shown in Fig. 1 (top right). Setting \(E_{J_{n}}=0\) for \(1\leq n\leq p\) and for \(1\leq n\leq p-1\) in Eq. (3) gives rise, in the scaling limit, to the free boson model and to the quantum sG model with \(p\)-fold degenerate minima, respectively. In general, tuning the \(E_{J_{n}}\)-s and \(\delta_{n}\)-s leads to the QEC array being in a gapless state with the universality class of the critical point determined by \(p\). The Ising and tricritical Ising cases are described below.
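For concreteness, the on-site operators entering Eq. (3) can be written explicitly in a truncated charge basis of the kind used for the DMRG simulations. The sketch below is an illustrative reconstruction, not the paper's code; the truncation \(n_{\max}=8\) is an assumption chosen so that the local dimension is 17, as quoted later in the text.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's code): on-site operators of Eq. (3)
# in a truncated charge basis |m>, m = -nmax..nmax. nmax = 8 gives local dimension 17.
nmax = 8
dim = 2 * nmax + 1
m = np.arange(-nmax, nmax + 1)

n_op = np.diag(m.astype(float))              # excess Cooper-pair number n_j
e_iphi = np.diag(np.ones(dim - 1), k=-1)     # e^{i phi_j}: raises the charge, |m> -> |m+1>

def cos_nphi(n, delta=0.0):
    """cos(n*phi_j + delta_n) = (e^{i delta} e^{i n phi} + h.c.)/2 in the truncated basis."""
    shift = np.linalg.matrix_power(e_iphi, n)
    return 0.5 * (np.exp(1j * delta) * shift + np.exp(-1j * delta) * shift.conj().T)

# The commutation relation quoted below Eq. (3), [n, e^{i phi}] = e^{i phi},
# holds exactly even in the truncated basis:
assert np.allclose(n_op @ e_iphi - e_iphi @ n_op, e_iphi)

# Example: the p = 2 vertical-link potential  -E_J1 cos(phi + pi/2) - E_J2 cos(2 phi)
E_J1, E_J2 = 0.1, 0.175
onsite = -E_J1 * cos_nphi(1, np.pi / 2) - E_J2 * cos_nphi(2)
```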
## III The Ising model
The simplest realization of the Ising model from a perturbed sG model is obtained by choosing \(p=2\) and \(\delta_{1}=\pi/2\). For the continuum model, this phase-transition has been analyzed using form-factors [49], semi-classical methods [50] and truncated conformal space approaches [51]. The existence of the Ising critical point can be straightforwardly inferred already from the classical potential. The latter is obtained by rescaling the fields \(\tilde{\varphi}=\beta\varphi\) in the limit \(\beta\to 0\). The potential is \(V(\tilde{\varphi})=-2\mu\cos\tilde{\varphi}-2\lambda_{1}\sin(\tilde{\varphi}/2)\). For \(\lambda_{1}<4\mu\), \(V(\tilde{\varphi})\) has two degenerate minima, characteristic of the ferromagnetic phase of the Ising model. The minima occur at \(\tilde{\varphi}_{0}\) and \(2\pi-\tilde{\varphi}_{0}\), where \(\tilde{\varphi}_{0}\) is a classical field minimum. The latter two values are related by a \(\mathbb{Z}_{2}\) symmetry operator (see below for the construction of this operator for the QEC model). These two minima coalesce at \(\lambda_{1}=4\mu\), indicating an Ising-type phase-transition. Further increase of \(\lambda_{1}/\mu\) results in a non-degenerate potential minimum, as expected in the paramagnetic phase of the Ising model. Obviously, the actual value of the ratio \(\lambda_{1}/\mu\) at which the phase-transition occurs is different for finite \(\beta^{2}\) (see Fig. 3 for DMRG results).
The corresponding circuit Hamiltonian to realize the Ising model is obtained by setting \(p=2,\delta_{1}=\pi/2\) in Eq. (3). The circuit element on the vertical link is a parallel circuit of a conventional Josephson junction (junction energy \(E_{J_{1}}\)) and a \(\cos 2\phi\) Josephson junction (also known as the \(0-\pi\) qubit [52; 53; 54; 55; 56; 57; 58]) with a magnetic flux threading the loop [59]. The different couplings are chosen as follows. First, \(E_{J}/E_{c}\) and \(E_{g}/E_{c}\) are chosen such that the QEC array is in the superconducting (free boson) phase when \(E_{J_{n}}=0\)\(\forall n\). Subsequently, the parameters \(E_{J_{1}},E_{J_{2}}\) are chosen to give rise to the perturbed sG model in the scaling limit. The sG coupling is given by: \(\beta^{2}=K/2\), where \(K\) is the Luttinger parameter of the free boson theory. The relationship between the lattice and the continuum parameters can be summarized as (see Supplementary Materials of Ref. [28] for details):
\[\mu=C\ E_{J_{2}}E_{c}^{1-2\beta^{2}},\ \lambda_{1}=C^{\prime}\ E_{J_{1}}E_{c}^{1 -\beta^{2}/2}, \tag{4}\]
where \(C,C^{\prime}\) are non-universal functions of \(\beta^{2}\), which can, in principle, be determined numerically. Note that the \(\mathbb{Z}_{2}\) symmetry for the potential term [the last term of Eq. (3)] is \(\phi_{j}\rightarrow\pi-\phi_{j}\), \(j=1,\ldots,L\). The corresponding symmetry operator that performs the transformation \(\phi_{j}\rightarrow\pi-\phi_{j}\) is:
\[\mathcal{O}=\left(\prod_{j=1}^{L}\mathrm{e}^{\mathrm{i}\pi n_{j}}\right) \mathcal{C}, \tag{5}\]
where \(\mathcal{C}\) is the charge-conjugation operator that acts as
\[\mathcal{C}\mathrm{e}^{\pm\mathrm{i}\phi_{j}}\mathcal{C}=\mathrm{e}^{\mp \mathrm{i}\phi_{j}},\ \mathcal{C}n_{j}\mathcal{C}=-n_{j}. \tag{6}\]
Using Eq. (6), it is straightforward to show that \(\mathcal{O}\) is indeed a symmetry of the potential term. Notice however that it is not a symmetry of the lattice Hamiltonian unless \(E_{g}=0\). Nevertheless, as will be numerically demonstrated below, the symmetry associated with \(\mathcal{O}\) emerges in the scaling limit also for \(E_{g}\neq 0\). In the scaling limit, the QEC lattice operators that correspond to the primary fields of the Ising model are:
\[\sigma\sim\cos\phi_{j}+\ldots,\ \varepsilon\sim\sin\phi_{j}+\ldots \tag{7}\]
where the dots correspond to subleading corrections. The two fields have scaling dimensions \(1/8\) and \(1\) respectively. This is verified below in the numerical analysis. Furthermore, a Jordan-Wigner mapping from the Ising spin chain to the free-fermion Hamiltonian leads to identification of the lattice fermion operators as \((\prod_{k<j}\mathrm{e}^{\mathrm{i}\pi n_{k}})\mathrm{e}^{\mathrm{i}\phi_{j}}\). It is straightforward to check the fermionic nature of these nonlocal operators using the commutation relation of \(\phi_{j}\) and \(n_{k}\) given below Eq. (3). Notice that these nonlocal operators are the soliton creation operators of the parent sine-Gordon model with two-fold degenerate minima. These soliton-creation operators have Lorentz spin \(1/2\) and thus, also exhibit fermionic statistics [60].
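The fermionic character of these string operators can be checked directly in the same truncated charge basis; the following minimal two-site sketch (illustrative only, with the same assumed truncation as in the sketch after Eq. (3)) verifies that they anticommute:

```python
import numpy as np

# Minimal two-site check (illustrative) that the string operators
# (prod_{k<j} e^{i pi n_k}) e^{i phi_j} anticommute like fermions.
nmax = 8
dim = 2 * nmax + 1
m = np.arange(-nmax, nmax + 1)
e_iphi = np.diag(np.ones(dim - 1), k=-1)       # e^{i phi}
string = np.diag(np.exp(1j * np.pi * m))       # e^{i pi n}
I = np.eye(dim)

psi1 = np.kron(e_iphi, I)                      # psi_1 = e^{i phi_1}
psi2 = np.kron(string, e_iphi)                 # psi_2 = e^{i pi n_1} e^{i phi_2}

assert np.allclose(psi1 @ psi2 + psi2 @ psi1, 0.0)   # {psi_1, psi_2} = 0
```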
To unambiguously obtain the critical point, entanglement entropy, \(S\), was computed using DMRG for a QEC
array of \(L=128\) sites with periodic boundary conditions (see Fig. 2 for lattice parameters). At the critical point, standard results of conformal field theory predict [61]
\[S(r)=\frac{c}{3}\ln\left[\frac{L}{\pi a}\sin\frac{\pi r}{L}\right]+S_{0}, \tag{8}\]
where \(a\) is the lattice spacing and \(S_{0}\) is some non-universal constant. Here, \(c\) is the central charge and equals \(1/2\) [set \(p=2\) in Eq. (2)]. The obtained value of \(c\) is close to \(1/2\) [see Fig. 2(a)] with the discrepancy arising due to rather strong finite-size effects. This was verified by simulating different system sizes \(L=40,64\), and \(128\). As an additional check of the numerical results, the scaling of the ground-state energy is computed with system size. For periodic boundary conditions, this obeys [31]
\[E=E_{0}L-\frac{\pi cv}{6L}+o(1/L), \tag{9}\]
where \(v\) is the 'Fermi' velocity. Since the additional sine and cosine perturbations do not renormalize \(v\), the ratio of \(vc\) computed for the Ising and the free-boson critical points should be \(1/2\). This is verified in Fig. 2(b). Note that simulating the QEC lattice Hamiltonian requires manipulating a local Hilbert space of dimension \(17\) at each site (see Supplementary Materials of Ref. [28] for details), which made analysis of larger system sizes rather challenging. The single-site DMRG implementation of the TeNPy package was used throughout this work.
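In practice, the central charge is obtained by fitting Eq. (8) to the computed entanglement entropies. A minimal sketch of such a fit is shown below; the arrays stand in for actual DMRG output and are filled with synthetic placeholder values here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch (illustrative only) of extracting c by fitting Eq. (8).
# r_vals / S_vals stand in for DMRG data on a periodic chain; here they are
# synthetic placeholders generated with c = 1/2.
L = 128

def calabrese_cardy(r, c, S0):
    # Eq. (8); the lattice spacing is absorbed into the non-universal constant S0
    return (c / 3.0) * np.log((L / np.pi) * np.sin(np.pi * r / L)) + S0

r_vals = np.arange(8, L - 7)
S_vals = calabrese_cardy(r_vals, 0.5, 0.7)

(c_fit, S0_fit), _ = curve_fit(calabrese_cardy, r_vals, S_vals, p0=(1.0, 0.0))
print(f"fitted central charge c = {c_fit:.3f}")   # ~0.5 at the Ising critical point
```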
At the critical point, the correlation functions of lattice operators \(\cos\phi_{j}\) and \(\sin\phi_{j}\) are computed using infinite DMRG. These correlation functions are algebraic:
\[\langle\cos\phi_{j}\cos\phi_{j+r}\rangle\propto\frac{1}{r^{2\Delta_{\sigma}}}, \ \langle\sin\phi_{j}\sin\phi_{j+r}\rangle\propto\frac{1}{r^{2\Delta_{\epsilon}}}. \tag{10}\]
The obtained results for the above correlation functions are shown in Fig. 2(c,d). The small discrepancy between the obtained values of \(\Delta_{\sigma},\Delta_{\epsilon}\) and the predicted ones is due to the uncertainty in the location of the critical point as well as finite entanglement truncation [62].
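The scaling dimensions in Eq. (10) can be extracted from a log-log fit of the correlators; a minimal sketch with synthetic placeholder data (not the actual infinite-DMRG output) is:

```python
import numpy as np

# Minimal sketch (illustrative only): extract a scaling dimension from Eq. (10) by a
# log-log fit. `corr` stands in for <cos(phi_j) cos(phi_{j+r})>; here it is a
# synthetic power law with Delta_sigma = 1/8.
r = np.arange(4, 200)
corr = 0.3 * r ** (-2 * 0.125)

slope, _ = np.polyfit(np.log(r), np.log(corr), 1)
print("Delta =", -slope / 2)   # ~0.125
```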
Fig. 3 shows the phase-diagram around the Ising transition as a function of a dimensionless variable \(\eta=E_{J_{1}}/E_{J_{2}}^{\alpha}\) and \(\beta^{2}\). Here, \(\alpha=(1-\beta^{2}/4)/(1-\beta^{2})\). The values of \(E_{J}/E_{c}\) corresponding to the different choices of \(\beta^{2}\) are shown on the right y-axis. For a given choice of \(\beta^{2}\), increasing \(\eta\) induces the Ising phase-transition from the ferromagnetic to the paramagnetic phase (see also Fig. 12 of Ref. [51] for a truncated conformal space analysis performed for \(\beta^{2}<1/2\)). The location of the critical point was obtained by sweeping the coupling ratio \(E_{J_{1}}/E_{J_{2}}\) for different choices of \(E_{J}/E_{c}\) for a periodic chain of \(L=128\) sites and computing the central charge as in Fig. 2.
Figure 2: DMRG results for the Ising transition. The different couplings were chosen as shown on the top left panel. (a) A QEC array of \(L=128\) sites with periodic boundary conditions was analyzed. The entanglement entropy, \(S\), as a function of subsystem size \(r\) exhibits a characteristic logarithmic dependence [Eq. (8)]. The obtained central charge is close to the expected value of \(1/2\) [set \(p=2\) in Eq. (2)]. The discrepancy with the expected value is due to finite-size effects. (b) The product of the central charge and the ‘Fermi’ velocity is determined from the scaling of the Casimir energy [Eq. (9)]. The ratio of this product is close to \(1/2\) as expected. (c, d) Infinite DMRG results for the algebraic decay of the lattice operators corresponding to the Ising field operators \(\sigma,\varepsilon\) [see Eq. (10)]. The small discrepancy with the expected values of \((\Delta_{\sigma},\Delta_{\epsilon})=(1/8,1)\) is due to the difficulty of locating the exact critical point as well as finite truncation errors occurring in the infinite DMRG simulation.
Figure 3: DMRG results for the phase-diagram associated with the Ising transition as a function of the dimensionless coupling \(\eta=E_{J_{1}}/E_{J_{2}}^{\alpha}\) and \(\beta^{2}\). Here, \(\alpha=(1-\beta^{2}/4)/(1-\beta^{2})\). The results were obtained for a periodic QEC array with \(L=128\) and \(E_{J_{2}}/E_{c}=0.175\). The solid markers correspond to the numerically obtained location of the phase-transition, while the black dashed line is a guide to the eye. The magenta dotted line corresponds to the free-fermion point (\(\beta^{2}=1/2\)) of the sG model. For a fixed \(\beta^{2}\), increasing \(\eta\) induces an Ising phase-transition from the ferromagnetic to the paramagnetic phase. The locations of the critical points are determined with a precision of \(E_{J_{1}}/E_{J_{2}}=0.001\) (the corresponding error bars are too small to be visible).
With the aforementioned phase-diagram determined, it is straightforward to consider the different perturbations of the Ising critical point. The 'thermal' perturbation has been analyzed here and involves tuning the ratio \(E_{J_{1}}/E_{J_{2}}\) away from the Ising critical point. The 'magnetic' perturbation [34] could be analyzed by adding a 'longitudinal field' to the Hamiltonian of Eq. (3). This involves adding an extra \(-\sum_{j=1}^{L}\cos\phi_{j}\) term to the QEC Hamiltonian. As a simple application, consider the case of the free-boson and the Ising models with boundary fields [63, 64]. First, in the QEC Hamiltonian [Eq. (3)], with \(E_{J_{n}}=0\ \forall n\), boundary fields \(-E_{J_{b}}\cos\phi_{j}\) are turned on at sites \(j=1,L\). In the continuum limit, this corresponds to the boundary sine-Gordon model, with a boundary potential \(\cos\beta\varphi/2\) added to the euclidean action of Eq. (1). The boundary potential induces a flow from the free to the fixed boundary condition. The change in the boundary condition manifests itself in a change in the boundary entropy [65]. The latter can be measured by computing the change in the subleading \(O(1)\) term in the entanglement entropy. The change for the free-boson case is well-known and given by \(-(\ln\beta^{2})/2\) (see, for example, Refs. [48, 66]). The DMRG results are shown in Fig. 4, left panel for \(\beta^{2}\approx 0.326\). Next, upon turning on \(E_{J_{1}},E_{J_{2}}\) to the values which realize the critical Ising model in the bulk while keeping all other parameters the same, the QEC array realizes the Ising model with or without a longitudinal boundary field. The corresponding change in boundary entropy is \((\ln 2)/2\)[65]. The DMRG results are shown in Fig. 4, right panel.
## IV The tricritical Ising model
The tricritical Ising model is the next in the series of models that can be realized with QECs. Consider the case \(p=3\) and \(\delta_{1}=0\), \(\delta_{2}=\pi\) in Eqs. (1, 3). For \(E_{J_{1}}=E_{J_{2}}=0\), the lattice model of Eq. (3) realizes the sine-Gordon model with three degenerate minima. Similar analysis as in the Ising case leads to \(\beta^{2}=9K/8\), where \(K\) is the Luttinger parameter of the parent free boson theory. First, consider the classical potential:
\[V(\tilde{\varphi})=-2\mu\cos\tilde{\varphi}-2\lambda_{1}\cos\frac{\tilde{ \varphi}}{3}+2\lambda_{2}\cos\frac{2\tilde{\varphi}}{3}. \tag{11}\]
Straightforward computation yields a critical Ising line
\[\lambda_{2}=\frac{\lambda_{1}}{4}+\frac{9\mu}{4} \tag{12}\]
terminating at a tricritical Ising point for \(\lambda_{1}=15\mu,\lambda_{2}=6\mu\). The phase-transition turns first order after the tricritical point. A pictorial depiction of the change in the potential landscape for this model can be found in Fig. 1 of Ref. [67]. The actual tricritical point for the quantum Hamiltonian is located numerically using DMRG (see below).
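The quoted critical line and tricritical values follow from demanding that the quadratic and quartic terms of \(V(\tilde{\varphi})\) about \(\tilde{\varphi}=0\) vanish (odd derivatives vanish automatically since the potential is even). A short symbolic check, included here only as an illustration, reproduces Eq. (12) and the point \(\lambda_{1}=15\mu\), \(\lambda_{2}=6\mu\):

```python
import sympy as sp

# Illustrative symbolic check of the classical analysis: expand V(phi) of Eq. (11)
# about phi = 0 and demand that the quadratic term (Ising line) and also the
# quartic term (tricritical point) vanish.
phi, mu, l1, l2 = sp.symbols('phi mu lambda1 lambda2', positive=True)
V = -2*mu*sp.cos(phi) - 2*l1*sp.cos(phi/3) + 2*l2*sp.cos(2*phi/3)

V2 = sp.diff(V, phi, 2).subs(phi, 0)
V4 = sp.diff(V, phi, 4).subs(phi, 0)

print(sp.solve(V2, l2)[0])            # lambda1/4 + 9*mu/4, i.e. the Ising line of Eq. (12)
print(sp.solve([V2, V4], [l1, l2]))   # {lambda1: 15*mu, lambda2: 6*mu}, the tricritical point
```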
In the Ginzburg-Landau formulation, the six primary fields of the tricritical Ising model can be identified with various (normal ordered) powers of a field \(\Phi\) [with Kac label (2,2)] [31]. Two of the six fields are odd under the \(\mathbb{Z}_{2}\) symmetry in the tricritical Ising model associated with the transformation \(\Phi\rightarrow-\Phi\), while the others are even (see Sec. 6.1 of Ref. [68] for a recent summary). For the quantum circuit model, this translates to the symmetry of the lattice operator \(\phi_{j}\) under the transformation \(\phi_{j}\rightarrow-\phi_{j}\). Note that this is different from the \(\mathbb{Z}_{2}\) symmetry in the Ising case [see around Eq. (5)]. In the current model, the \(\mathbb{Z}_{2}\)-symmetry operator is simply the charge conjugation operator [Eq. (6)].
The lattice operators corresponding to the two \(\mathbb{Z}_{2}\)-odd fields of the tricritical Ising model are given by
\[\sigma \sim\sum_{k=1,2,\ldots}c_{k}\sin(k\phi_{j})+\ldots,\] \[\sigma^{\prime} \sim\sum_{k=1,2,\ldots}c^{\prime}_{k}\sin(k\phi_{j})+\ldots, \tag{13}\]
while the same for the \(\mathbb{Z}_{2}\)-even fields are
\[\varepsilon \sim\sum_{k=1,2,\ldots}d_{k}\cos(k\phi_{j})+\ldots,\] \[\varepsilon^{\prime} \sim\sum_{k=1,2,\ldots}d_{k}^{\prime}\cos(k\phi_{j})+\ldots,\] \[\varepsilon^{\prime\prime} \sim\sum_{k=1,2,\ldots}d_{k}^{\prime\prime}\cos(k\phi_{j})+\ldots. \tag{14}\]
Here \(c_{k},c_{k}^{\prime}\), \(d_{k},d_{k}^{\prime},d_{k}^{\prime\prime}\) are non-universal lattice-dependent coefficients and the dots indicate subleading contributions.
Figure 4: DMRG results for the boundary renormalization group flows for the free boson (left) and the Ising (right) models. The parameters, \(E_{J},E_{g}\) were chosen as in Fig. 2. In contrast to the rest of the paper, in this computation, the interaction terms of Eq. (3) between sites \(L\) and \(1\) are absent. Choosing \(E_{J_{n}}=0\ \forall n\) realizes the free boson model with free boundary condition with \(\beta^{2}\approx 0.326\). The corresponding entanglement entropy is shown in dark green for different sizes of the subsystem. To realize the fixed boundary condition, we chose \(E_{J_{b}}/E_{c}=1\) in the numerical simulations. The corresponding entanglement entropies are shown in maroon. The change in the boundary entropy in the infrared is obtained by taking the difference between the two curves at the center of the chain. The obtained and expected results are shown. Now, turning on the couplings \(E_{J_{1}},E_{J_{2}}\) throughout the array realizes the Ising model [see Fig. 2]. This Hamiltonian without (with) the same boundary potential realizes the Ising model with free (fixed) boundary condition. The corresponding entanglement entropies, together with the change in boundary entropy associated with the change in boundary condition, are shown in the right panel.
The DMRG analysis for this model is computationally more challenging due to the larger local Hilbert space dimension that needs to be manipulated to avoid truncation errors. In contrast to the Ising case, the local Hilbert space dimension was truncated to 27 and system sizes between 44 and 80 were simulated. Fig. 5 presents the results obtained using DMRG for \(\beta^{2}\approx 0.376\) (similar results were obtained for \(\beta^{2}\approx 0.481\) which are not shown for brevity). The left panel shows the location of the Ising phase-transition line (red dashes). The orange triangle indicates the location of the tricritical point. The central charge at the critical point was computed in two ways. First, the scaling of the entanglement entropy [Eq. (8)] is shown on the top right panel for \(L=80\). The obtained result is close to the expected value of 7/10. The discrepancy is due to finite-size effects. This was verified by simulating system sizes from \(L=44\) to 80 in steps of 2. Second, the scaling of the Casimir energy with system size [Eq. (9)] is computed and compared with that for the free-boson. As in the Ising case, the ratio of the product \(vc\) for the tricritical Ising to the free boson case should yield the central charge of the tricritical Ising model. This is computed to be \(\approx 0.681\), which is close to the expected value. The discrepancy is due to the slow convergence of the energy with increasing bond-dimension. With larger scale computations, the precision of these computations could be improved. The Ising transition line continues further as \(E_{J_{2}}/E_{c}\) is increased, but the numerical simulations were restricted to the region shown in Fig. 5.
The correlation functions of the primary fields can be verified by choosing the coefficients in Eqs. (13, 14) such that the resultant correlation function yields a scaling exponent that is close to the predictions for the model. Notice that the renormalization group flow from the tricritical Ising to the Ising critical point (left panel of Fig. 5) is induced by changing the couplings \(E_{J_{1}}\) and \(E_{J_{2}}\). This corresponds to perturbing the lattice Hamiltonian at the tricritical point by a superposition of operators \(\cos\phi_{j}\) and \(\cos 2\phi_{j}\). This is compatible with the identification of the field \(\varepsilon^{\prime}\) [with Kac label (1,3)] in Eq. (14), which is known to induce this flow. Finally, we note that in addition to the listed primary fields, the tricritical Ising model contains supersymmetric fields. The latter could be constructed for the quantum circuit by considering the fermionic operators built out of \(\mathrm{e}^{\mathrm{i}\alpha_{1}\pi n_{j}},\mathrm{e}^{\mathrm{i}\alpha_{2} \phi_{k}}\) by appropriately choosing \(\alpha_{1},\alpha_{2}\). We leave a more detailed analysis of the supersymmetric fields for a later work.
## V Summary and outlook
In summary, a set of QEC lattices are described to realize multicritical Ising models in two space-time dimensions. The QEC lattices are based on circuit elements which are generalizations of ordinary Josephson junctions and give rise to potentials of the form \(\cos(p\phi)\), \(p\in\mathbb{N}\). The elements for \(p=1,2\) are well-known with the elements for \(p>2\) being straightforwardly realizable using recursive application of the scheme of Ref. [55] with the elements for \(p=1,2\). Starting with the QEC realization of the quantum sine-Gordon model with \(p\)-fold degenerate minima, systematic perturbations are constructed using QEC elements to give rise to the multicritical Ising models. The cases of the Ising and the tricritical Ising models were analyzed. The next model in the series is the tetracritical Ising model corresponding to \(p=4\). The corresponding phases should be chosen as:
\[\delta_{1}=-\frac{\pi}{4},\delta_{2}=\frac{\pi}{2},\delta_{3}=-\frac{3\pi}{4}. \tag{15}\]
Figure 5: DMRG results for the tricritical Ising point. The sine-Gordon coupling was chosen to be \(\beta^{2}\approx 0.376\). (Left) The tricritical Ising point (filled triangle) located at the end of an Ising critical line (horizontal line markers). The critical points are located by computing the scaling of the entanglement entropy \(S\) as a function of the subsystem size \(r\) [Eq. (8)]. (Top right) Scaling of the entanglement entropy at the tricritical point for a system size \(L=80\). The obtained value of the central charge is close to the expected value of 0.7 for the tricritical Ising field theory. Similar to the Ising case (see Fig. 2), the discrepancy is due to finite-size effects. This was checked by computing the central charge for \(L=44\) to 80 in steps of 2. (Bottom right) The scaling of the Casimir energy as a function of \(1/L\) [see Eq. (9)]. The obtained central charge is close to the expected value. The discrepancy occurs due to the slow convergence of energy with increasing bond-dimension. Note that the precision of the simulation is lower than the Ising case due to the larger local Hilbert space dimension (27 instead of 17). The critical points were located with an accuracy of 0.002 (the Ising markers should not be confused with error bars).
The location of the tetracritical point can, in principle, be obtained by tuning the three couplings: \(E_{J_{n}}\), \(n=1,2,3\).
In this way, quantum circuits can be used to systematically probe bulk and boundary perturbations of the multicritical Ising models venturing beyond the usually analyzed case of perturbed free boson theory [69; 70] (see also Ref. [71] for a recent work). Further generalizations can give rise to topological/perfectly-transmissive defects of conformal field theories [72; 73; 74; 75]. These defects commute with the generators of the conformal transformations and are deeply intertwined with the symmetries of the theory; for the Ising case, see Ref. [76] for the spectrum of the lattice Hamiltonian and Refs. [77; 78] for the entanglement properties of the ground state. Despite their importance in conformal field theories and 2+1D topological quantum field theories [79], no systematic scheme is currently available for realizing these topological defects in a physical system. Their realization with QECs together with computation of transport signatures of the topological defects would serve as a crucial step towards solving this problem. Finally, stacks of multicritical Ising chains with appropriate couplings can give rise to topologically ordered phases [35; 36] which are a precious resource for the realization of topological quantum computation [37]. This amounts to stacking the one-dimensional chains of Fig. 1 with suitable interactions and can lead to a systematic scheme for realization of topological matter with quantum circuits. We hope to return to some of these questions in the future.
Before concluding, we note that the current experimental works [24; 25] (see Refs. [80; 81] for related theoretical works) have been performed on systems where disorder plays a dominant role. It is conceivable that engineering a clean enough system with a suitable number of Josephson junctions will permit investigation of the multicritical phenomena described in this work (see Ref. [82] for a related experimental work).
## Acknowledgements
Discussions with Michael Levin, Sergei Lukyanov and Hubert Saleur are gratefully acknowledged. AR was supported by a grant from the Simons Foundation (825876, TDN).
|
2304.05621
|
Hydrodynamic aggregation of membrane inclusions due to non-Newtonian
surface rheology
|
Biological membranes are self-assembled complex fluid interfaces that host
proteins, molecular motors and other macromolecules essential for cellular
function. These membranes have a distinct in-plane fluid response with a
surface viscosity that has been well characterized. The resulting quasi-2D
fluid dynamical problem describes the motion of embedded proteins or particles.
However, the viscous response of biological membranes is often non-Newtonian:
in particular, the surface shear viscosity of phospholipids that comprise the
membrane depends strongly on the surface pressure. We use the Lorentz
reciprocal theorem to extract the effective long-ranged hydrodynamic
interaction among membrane inclusions that arises due to such non-trivial
rheology. We show that the corrective force that emerges ties back to the
interplay between membrane flow and non-constant viscosity, which suggests a
mechanism for biologically favorable protein aggregation within membranes. We
quantify and describe the mechanism for such a large-scale concentration
instability using a mean-field model. Finally, we employ numerical simulations
to demonstrate the formation of hexatic crystals due to the effective
hydrodynamic interactions within the membrane.
|
Vishnu Vig, Harishankar Manikantan
|
2023-04-12T05:39:32Z
|
http://arxiv.org/abs/2304.05621v1
|
# Hydrodynamic Aggregation of Membrane Inclusions due to Non-Newtonian Surface Rheology
###### Abstract
Biological membranes are self-assembled complex fluid interfaces that host proteins, molecular motors and other macromolecules essential for cellular function. These membranes have a distinct in-plane fluid response with a surface viscosity that has been well characterized. The resulting quasi-2D fluid dynamical problem describes the motion of embedded proteins or particles. However, the viscous response of biological membranes is often non-Newtonian: in particular, the surface shear viscosity of phospholipids that comprise the membrane depends strongly on the surface pressure. We use the Lorentz reciprocal theorem to extract the effective long-ranged hydrodynamic interaction among membrane inclusions that arises due to such non-trivial rheology. We show that the corrective force that emerges ties back to the interplay between membrane flow and non-constant viscosity, which suggests a mechanism for biologically favorable protein aggregation within membranes. We quantify and describe the mechanism for such a large-scale concentration instability using a mean-field model. Finally, we employ numerical simulations to demonstrate the formation of hexatic crystals due to the effective hydrodynamic interactions within the membrane.
## I Introduction
Lipids and other surface-active macromolecules play critical roles in biological interfaces. Membranes of eukaryotic cells are self-assembled phospholipid bilayers [1; 2]. These complex systems display rich dynamics due to their inherent quasi-2D viscous nature that emerges from the coupling to the surrounding bulk phases [3; 4]. Structural and rheological properties of fluid membranes have been well characterized in the past decades, conceptually driven by connecting in-plane protein diffusion to membrane fluidity. Surface viscosities are now known for single and multi-component lipid monolayers [5; 6; 7], phospholipid bilayers [8; 9; 10], protein-laden interfaces [11], and polymeric multilayers [12]. Simultaneously, continuum fluid dynamical models have been widely successful in describing the isolated and cooperative motion of disk-like [13; 14; 15], rod-like [16; 17; 18], active [19; 20; 21], and polymeric [22] inclusions in simple 2D Newtonian membranes.
However, the rheology of real membranes is rarely Newtonian. Insoluble surfactant species often phase separate to form 2D dispersions or 'rafts' of condensed liquid crystalline phases suspended in a liquid expanded disordered phase [23; 24; 25]. Unlike 3D fluids, surfactants are much easier to compress and even a facile in-plane deformation triggers significant changes in the rheological signature of the lipid interface. The transport of surfactant molecules between coexisting phases as well as the morphology and interlocking of stiffer domains contribute to a strongly nonlinear surface rheological response. Insoluble surfactant layers exhibit jamming and yield stress behavior [5], surface viscoelasticity [26], surface shear thinning [27], and surface-concentration-dependent surface rheology [7; 28]. While a lot of the intuition of 3D fluid mechanics can be ported to the analysis of 2D viscous manifolds, the nature of momentum transport in interfacial systems and the novel rheological relations present unique challenges.
In this work, we aim to theoretically investigate and numerically demonstrate the role of a class of non-Newtonian membrane behavior on driven or anchored trans-membrane proteins or protein-associated domains that play critical roles in cell motility and structure [29; 30]. These membrane inclusions are anchored to and driven relative to the membrane by the intracellular network of actin and microtubules, or by external motor proteins such as kinesin or dynein [1]. We will specifically target the nontrivial hydrodynamic interactions expected to arise due to surface-pressure-dependent surface viscosity of lipids. Despite this strong nonlinearity, recent mathematical efforts have demonstrated the qualitatively new phenomena that emerge in thin interfacial gaps [31], in pore-spanning monolayers or membranes [32], and in the resulting stability and dynamics of drops containing such surfactants [33; 34]. These past works point to kinematic symmetry breaking within the 2D interfacial layer, leading to trajectories of probe particles, lipid rafts, or naturally occurring membrane inclusions that are not expected in Newtonian systems. Yet, the exact nature of the hydrodynamic interactions between pairs of driven inclusions representing cytoskeletal anchors, membrane motors, and adhesion junction proteins remains a mystery. We derive analytic results for such hydrodynamic interactions in this work, shedding light on the effective inter-particle attraction or repulsion between interfacially driven particles. Collectively, such interactions can lead to large-scale aggregation, which might be biologically favorable in immune response, locomotion, and tubulation [30]. Past work on membrane inclusions has demonstrated long-ranged interactions and assembly purely due to protein activity [20; 21], membrane curvature [35; 36], and inclusion size mismatch [30]. Building on our analysis of pair dynamics, we demonstrate here that non-Newtonian surface rheological response can also lead to aggregation and packing of '2D suspensions' of driven or anchored inclusions on the membrane.
This paper is organized as follows: we first evaluate the motion of a single particle driven in a rheologically complex membrane in Sec. II. In this section, we formulate the fluid dynamical problem of a disk-like particle embedded in the membrane and driven by an external force while subject to the 2D flow in the plane of the membrane. We will develop a perturbative solution for weakly non-Newtonian behavior, derive the corrected velocity of the particle, and physically rationalize the result. In Sec. III, we extend the study to collective behavior via membrane hydrodynamic interactions due to '2D suspensions' of particles driven in the plane of a membrane. We propose a mean-field description to quantify the instability that leads to large-scale aggregation, develop a Langevin description that allows us to simulate multi-particle dynamics, and explore the hexatic order that emerges from crystalline aggregation due to attractive hydrodynamic interactions. We conclude with a discussion of the relevance of these results to real biological systems, and connections to other problems where viscosity depends on pressure.
## II Single particle dynamics in non-Newtonian membranes
### Problem formulation
The geometry of our fluid dynamical system is shown in Fig. 1. A bulk fluid phase of viscosity \(\eta\) representing intracellular space underlies a membrane of potentially non-constant surface shear viscosity \(\eta_{s}\). Fluid flow in the bulk phase with velocity \(\mathbf{v}\) is governed by the Navier-Stokes equation, with a no-slip condition to match the membrane velocity \(\mathbf{u}\). Additionally, the stress jump at the interface describes conservation of 2D momentum [37; 38]:
\[\rho_{s}\frac{D\mathbf{u}_{s}}{Dt}=\mathbf{\nabla}_{s}\cdot\mathbf{\sigma}_{s}-\eta \left.\frac{\partial\mathbf{v}}{\partial z}\right|_{z=0}, \tag{1}\]
where we have assumed a flat membrane along the \(x\)-\(y\) plane for simplicity, and \(D/Dt\) is the material derivative. \(\mathbf{\nabla}_{s}=\mathbf{I}_{s}\cdot\mathbf{\nabla}\) is the surface gradient operator with \(\mathbf{I}_{s}=\mathbf{I}-\hat{\mathbf{n}}\hat{\mathbf{n}}\) the surface identity tensor on a plane with unit normal \(\hat{\mathbf{n}}\). Equation (1) is readily generalized to bulk fluids on either side of the membrane: the results that follow are qualitatively unchanged and are only modified by a prefactor in that case.
Equation (1) is a 2D Cauchy equation for a species of density \(\rho_{s}\) and 2D stress tensor \(\mathbf{\sigma}_{s}\) that is forced via viscous tractions from the adjacent bulk phase(s). Alternatively, we may interpret Eq. (1) as a boundary condition for the Navier-Stokes equation governing 3D fluid field \(\mathbf{v}\). The 2D stress tensor \(\mathbf{\sigma}_{s}\) may be decomposed into isotropic and deviatoric parts:
\[\mathbf{\sigma}_{s}=-\Pi\mathbf{I}_{s}+\mathbf{\tau}_{s}, \tag{2}\]
where \(\Pi\) is the 2D pressure and the deviatoric stress tensor \(\mathbf{\tau}_{s}\) is prescribed by a constitutive relation. For instance,
\[\mathbf{\tau}_{s}=\eta_{s}\left[\mathbf{\nabla}_{s}\mathbf{u}\cdot\mathbf{I}_{s }+\mathbf{I}_{s}\cdot(\mathbf{\nabla}_{s}\mathbf{u})^{\mathrm{T}}\right] \tag{3}\]
in the simple case of a 2D Newtonian membrane. Underlying such a constitutive relation is the approximation that the interface is 2D incompressible. In other words, we assume that the surface flow rearranges to that of a 2D incompressible material so long as Marangoni flows (driven by gradients in the surface pressure \(\Pi\)) are generated faster than the rate of in-plane compression. This is almost always the case for insoluble phospholipids that make up most biological membranes [38; 39; 40]. Then, neglecting inertia in these overdamped systems, Eq. (1) simplifies to
\[\eta\left.\frac{\partial\mathbf{v}}{\partial z}\right|_{z=0}=-\mathbf{\nabla}_{s} \Pi+\eta_{s}\nabla^{2}\mathbf{u}_{s},\quad\mathbf{\nabla}_{s}\cdot\mathbf{u}_{s} =0. \tag{4}\]
Equation (4) governs momentum and mass conservation of an insoluble incompressible Newtonian membrane with constant surface shear viscosity. A wide range of theoretical studies spanning decades [3; 4; 13; 14] describing in-plane motion of particles in Newtonian membranes start with Eq. (4).
However, we intend to explore the role of non-constant \(\eta_{s}\) on membrane inclusion dynamics. Specifically, we wish to account for surface-pressure-dependent surface viscosity of lipids. In what follows, we assume that surface pressure enforces incompressibility and \(\Pi\)-dependent viscosities are incorporated by treating the interfacial momentum equation (4) as a generalized Newtonian model. This is analogous to dropping the equation of state for density in the incompressible Navier-Stokes equation: the thermodynamic quantity \(\Pi\) plays the role of the 'mechanical' pressure within this approximation. The validity of such an approximation is discussed in detail in past works [31] and this model has been widely successful in studying these complex systems [32; 33; 34].
Such a description of the non-constant surface shear viscosity also relates directly to typical experiments where the surface viscosity is measured as a function of surface pressure. \(\eta_{s}\) changes by orders of magnitude over a range of \(\Pi\) commonly accessible in experiments, unlike 3D fluids whose viscosity only changes under extreme pressures, if at all. For example, the phospholipid dipalmitoylphosphatidylcholine (DPPC) is a major constituent of cell membranes and forms stable monolayers at an air-water interface that undergo a liquid expanded (LE) to liquid condensed (LC) phase transition at \(\Pi\sim 8\,\mathrm{mN/m}\) at room temperature [23]. Above this critical surface pressure, the monolayer viscosity increases exponentially with surface pressure [28]. Similar '\(\Pi\)-thickening' behavior has been observed in certain fatty acids like nonadecanoic (\(\mathrm{C_{19}}\)) acid, heneicosanoic (\(\mathrm{C_{21}}\)) acid and behenic (\(\mathrm{C_{22}}\)) acid [41]. Conversely, phase transitions associated with chain-tilt lead to molecules shearing past each other: as a result \(\eta_{s}\) decreases exponentially with increasing \(\Pi\) from 10 to 20 mN/m in the '\(\Pi\)-thinning' surfactant eicosanol [42].
Figure 1: Illustration of a disk-like inclusion representing a transmembrane protein embedded in a phospholipid membrane. The inclusion may be driven by intracellular forces exerted by the cytoskeleton, or by extracellular forces due to motor protein activity during transport or at adhesion junctions. The fluid dynamical problem is equivalent to an anchored immobile inclusion within a flowing membrane. Forces driving such inclusions can generate in-plane hydrodynamic disturbance flows and interactions.
The simplest constitutive relation for \(\eta_{s}(\Pi)\) follows free-area models of surface viscosity [41; 43] to give an exponential relation:
\[\eta_{s}(\Pi)=\eta_{s}^{0}\,e^{(\Pi-\Pi_{\infty})/\Pi_{\mathrm{c}}}, \tag{5}\]
where \(\Pi_{\mathrm{c}}\) is the characteristic surface pressure change required to produce a noticeable change in \(\eta_{s}\), and \(\eta_{s}^{0}\) is a reference or unperturbed viscosity at a reference pressure \(\Pi_{\infty}\). Setting \(\Pi_{\mathrm{c}}\to\infty\) retrieves the Newtonian limit of constant surface viscosity. Equation (5) accommodates both \(\Pi\)-thickening (\(\Pi_{\mathrm{c}}>0\)) and \(\Pi\)-thinning (\(\Pi_{\mathrm{c}}<0\)) surfactants. Plugging Eq. (5) into the Cauchy momentum equation and simplifying in the overdamped limit then gives
\[-\mathbf{\nabla}_{s}\Pi+\mathbf{\nabla}_{s}\cdot\left[\eta_{s}(\Pi)\left(\mathbf{\nabla}_ {s}\mathbf{\mathrm{u}}_{s}+\mathbf{\nabla}_{s}\mathbf{\mathrm{u}}_{s}^{T}\right)\right]= \eta\frac{\partial\mathbf{\mathrm{v}}}{\partial z}\bigg{|}_{z=0}, \tag{6}\]
which along with the incompressibility condition \(\mathbf{\nabla}_{s}\cdot\mathbf{\mathrm{u}}_{s}=0\) governs our system.
### Non-dimensionalization and perturbation expansion
Scaling velocities and lengths over characteristic values \(U\) and \(a\) gives a characteristic surface pressure \(\Pi_{0}=\eta_{s}^{0}U/a\), and the dimensionless surface momentum equation becomes:
\[-\mathbf{\nabla}_{s}\tilde{\Pi}+\mathbf{\nabla}_{s}\cdot\left[\tilde{\eta}_{s}(\tilde{ \Pi})\left(\mathbf{\nabla}_{s}\mathbf{\mathrm{u}}_{s}+\mathbf{\nabla}_{s}\mathbf{\mathrm{u}}_ {s}^{T}\right)\right]=\frac{1}{Bq}\frac{\partial\mathbf{\mathrm{v}}}{\partial z} \bigg{|}_{z=0}. \tag{7}\]
In problems where a characteristic external force \(F\) (instead of a velocity \(U\)) is specified, we can equivalently define \(U=F/\eta_{s}^{0}\) or \(\Pi_{0}=F/a\). The dimensionless surface viscosity is \(\tilde{\eta}_{s}=\eta_{s}/\eta_{s}^{0}=e^{\Pi/\Pi_{\mathrm{c}}}\) and the Boussinesq number
\[Bq=\frac{\eta_{s}}{\eta a} \tag{8}\]
compares surface viscous stresses to subphase viscous stresses. Equation (7) is essentially a 2D Stokes equation with non-constant viscosity forced by the traction from the bulk phase. Indeed, the solution of the corresponding Newtonian problem at large \(Bq\) limits to 2D Stokes flow. Such a solution is classically known to be singular at large distances (due to the 'Stokes paradox' [38; 44]). In the case of membranes or other viscous interfaces, traction from the bulk ultimately catches up to interfacial viscous stresses over the Saffman-Delbruck length [3; 38] and regularizes the singularity. To keep the analysis tractable, we will work in the large-\(Bq\) limit, and assume that the Saffman-Delbruck length is sufficiently large so that we can safely ignore the forcing term on the RHS of Eq. (7). We will appeal later to the physically realistic situation of a finite-size membrane to regularize the 2D Stokes description as described by Saffman & Delbruck [3].
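As a rough orientation (the numbers below are representative values assumed for illustration, not taken from this paper), a protein-sized inclusion in a phospholipid membrane sits comfortably in the large-\(Bq\) regime, with the Saffman-Delbruck crossover length far exceeding the inclusion size:

```python
# Order-of-magnitude estimate (assumed representative values, not from this paper):
# Bq of Eq. (8) and the Saffman-Delbruck length (~ eta_s/eta up to O(1) factors).
eta_bulk = 1.0e-3   # Pa s, water-like subphase viscosity
eta_s = 1.0e-9      # Pa s m, representative phospholipid surface shear viscosity
a = 2.0e-9          # m, radius of a protein-sized inclusion

Bq = eta_s / (eta_bulk * a)
l_SD = eta_s / eta_bulk

print(f"Bq ~ {Bq:.0e}")                    # ~5e+05 >> 1
print(f"l_SD ~ {l_SD * 1e6:.1f} microns")  # ~1.0 microns >> a
```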
In what follows, we drop the hats on dimensionless variables. The dimensionless governing equations are
\[\mathbf{\nabla}_{s}\cdot\mathbf{\sigma}=-\mathbf{\nabla}_{s}\Pi+\mathbf{\nabla}_{s}\cdot[\eta _{s}(\Pi)\mathbf{S}]=0,\quad\mathbf{\nabla}_{\mathbf{s}}\cdot\mathbf{\mathrm{u}}=0, \tag{9}\]
where \(\mathbf{S}=\mathbf{\nabla}_{s}\mathbf{\mathrm{u}}_{s}+\mathbf{\nabla}_{s}\mathbf{\mathrm{u}}_ {s}^{T}\) is twice the conventionally defined rate-of-strain tensor. Equation (9) is heavily nonlinear due to the dependence on \(\eta_{s}(\Pi)\). To make analytical progress, we take a perturbative approach for large \(\Pi_{\mathrm{c}}\) or small departures of \(\eta_{s}(\Pi)\) from \(\eta_{s}^{0}\):
\[\eta_{s}(\Pi)=1+\frac{\Pi}{\Pi_{\mathrm{c}}}+\mathcal{O}(\Pi_{\mathrm{c}}^{-2} )=1+\beta\,\frac{\Pi}{\Pi_{0}}+\mathcal{O}(\beta^{2}), \tag{10}\]
where
\[\beta=\frac{\Pi_{0}}{\Pi_{\mathrm{c}}}=\frac{\eta_{s}^{0}U}{\Pi_{\mathrm{c}}a} =\frac{F}{\Pi_{\mathrm{c}}a}, \tag{11}\]
is a dimensionless parameter that is small for weak pressure dependence of viscosity. Note that \(\beta\) is positive for \(\Pi\)-thickening surfactants and negative for \(\Pi\)-thinning ones.
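The linearization in Eq. (10) is simply the first-order Taylor expansion of the exponential constitutive law; a one-line symbolic check (illustrative only) is:

```python
import sympy as sp

# Illustrative check of Eq. (10): first-order expansion of the exponential law.
Pi, Pi_c = sp.symbols('Pi Pi_c', positive=True)
print(sp.series(sp.exp(Pi / Pi_c), Pi, 0, 2))   # 1 + Pi/Pi_c + O(Pi**2)
```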
The momentum equation then expands to
\[\mathbf{\nabla}_{s}\cdot\mathbf{\sigma}=-\mathbf{\nabla}_{s}\Pi+\mathbf{\nabla}_{s}\cdot \mathbf{S}+\beta\mathbf{\nabla}_{s}\cdot[\Pi\mathbf{S}]+\mathcal{O}(\beta^{2}). \tag{12}\]
In the weakly non-linear regime when \(\beta\ll 1\), we asymptotically treat the interfacial velocity and surface pressure fields as regular expansions in \(\beta\):
\[\mathbf{\mathrm{u}} =\mathbf{\mathrm{u}}^{(0)}+\beta\mathbf{\mathrm{u}}^{(1)}+\mathcal{O}( \beta^{2}), \tag{13a}\] \[\Pi =\Pi^{(0)}+\beta\Pi^{(1)}+\mathcal{O}(\beta^{2}), \tag{13b}\]
so that the corresponding dimensionless stress tensor can be written as:
\[\mathbf{\sigma} =\mathbf{\sigma}^{(0)}+\beta\mathbf{\sigma}^{(1)}+\mathcal{O}(\beta^{2}), \tag{14a}\] \[\mathbf{\sigma}^{(0)} =-\Pi^{(0)}\mathbf{\mathrm{I}}+\mathbf{S}^{(0)},\] (14b) \[\mathbf{\sigma}^{(1)} =-\Pi^{(1)}\mathbf{\mathrm{I}}+\mathbf{S}^{(1)}+\Pi^{(0)}\mathbf{S}^{ (0)}. \tag{14c}\]
We note that the non-Newtonian term in the \(\mathcal{O}(\beta)\) stress tensor \(\mathbf{\sigma}^{(1)}\) depends on the leading-order solution. The momentum and mass conservation equations become:
\[\mathbf{\nabla}_{s}\cdot\mathbf{\sigma}^{(0)} =\mathbf{\nabla}_{s}\cdot\mathbf{\mathrm{u}}^{(0)}=0 \tag{15a}\] \[\mathbf{\nabla}_{s}\cdot\mathbf{\sigma}^{(1)} =\mathbf{\nabla}_{s}\cdot\mathbf{\mathrm{u}}^{(1)}=0 \tag{15b}\]
We will first solve the leading-order Newtonian problem for a disk placed in arbitrary background surface flow. Then, we use the resulting stress field as the heterogeneity in the governing equation in \(\mathcal{O}(\beta)\) to obtain the correction at \(\mathcal{O}(\beta)\).
### Isolated disk driven in a background flow
In building towards a description of the coupled dynamics of multiple disks, we first examine the Newtonian response of a single disk driven by an external force while placed in a background flow. We hope to then port the intuition as well as mathematical results to the case of a dilute 2D suspension of interacting disks by summing over the pair-wise hydrodynamic interactions. Such a Newtonian solution describes the leading order flow corresponding to \(\mathbf{\sigma}^{(0)}\), \(\mathbf{u}^{(0)}\), and \(\Pi^{(0)}\).
We analyze the translation of an isolated particle driven by a force \(\mathbf{F}_{p}\) in the non-Newtonian interface with an imposed linear velocity field given by:
\[\mathbf{V}(\mathbf{x})=\mathbf{V}_{0}+\mathbf{x}\cdot\mathbf{\Gamma}, \tag{16}\]
where \(\mathbf{\Gamma}=\mathbf{\nabla}_{s}\mathbf{V}\) is the imposed background velocity gradient. Let \(q(\mathbf{x})\) and \(\mathbf{A}=\mathbf{\nabla}_{s}\mathbf{V}+(\mathbf{\nabla}_{s}\mathbf{V})^{T}\) be the surface pressure and surface rate-of-strain fields associated with the imposed background surface flow. Meanwhile, we set \(\Pi\) and \(\mathbf{S}\) to be the pressure and rate of strain generated by the forced translation of the disk under the applied external force \(\mathbf{F}_{p}\).
The stress fields at the leading and perturbed order, following Eq. (14), then take the forms
\[\mathbf{\sigma}^{(0)} =-(\Pi^{(0)}+q)\mathbf{I}_{s}+(\mathbf{S}^{(0)}+\mathbf{A}), \tag{17}\] \[\mathbf{\sigma}^{(1)} =-\Pi^{(1)}\mathbf{I}_{s}+\mathbf{S}^{(1)}+(\Pi^{(0)}+q)(\mathbf{ S}^{(0)}+\mathbf{A}). \tag{18}\]
It is advantageous in what follows to have disturbance fields decay to zero far from the particle. So, we define disturbance flow variables \(\mathbf{\hat{u}}\), \(\hat{\Pi}\), and \(\hat{\mathbf{\sigma}}\) by subtracting imposed background flow variables from the total flow variables:
\[\mathbf{\hat{u}}(\mathbf{x}) =\mathbf{u}(\mathbf{x})-\mathbf{v}(\mathbf{x}), \tag{19a}\] \[\hat{\Pi}(\mathbf{x}) =\Pi_{t}-q(\mathbf{x}),\] (19b) \[\hat{\mathbf{\sigma}}(\mathbf{x}) =\mathbf{\sigma}(\mathbf{x})-\mathbf{\tau}(\mathbf{x}), \tag{19c}\]
where \(\mathbf{\tau}=-q\mathbf{I}_{s}+\mathbf{A}+\beta q\mathbf{A}\). These disturbance variables will also satisfy the momentum conservation and continuity equations and can be asymptotically expanded as a regular expansion in \(\beta\):
\[\mathbf{\hat{u}} =\mathbf{\hat{u}}^{(0)}+\beta\mathbf{\hat{u}}^{(1)}+\ldots \tag{20a}\] \[\hat{\Pi} =\hat{\Pi}^{(0)}+\beta\hat{\Pi}^{(1)}+\ldots\] (20b) \[\hat{\mathbf{\sigma}} =\hat{\mathbf{\sigma}}^{(0)}+\beta\hat{\mathbf{\sigma}}^{(1)}+\ldots \tag{20c}\]
The \(\mathcal{O}(1)\) disturbance problem corresponds to the linear Newtonian solution and can be written as the sum of disturbance velocity due to a translating disk and disturbance velocity due to the presence of a disk in an ambient flow. Using standard singularity methods of 2D Stokes flow [45; 46], this velocity field can be shown to be
\[\mathbf{\hat{u}}^{(0)}(\mathbf{x}) =\left[2\left(-\ln\left(r\right)\mathbf{I}+\frac{\mathbf{x}\mathbf{x}}{r^{2}}\right)+\left(\frac{\mathbf{I}}{r^{2}}-\frac{2\mathbf{x}\mathbf{x}}{r^{4}}\right)\right]\cdot\frac{\mathbf{F}_{p}}{4\pi} \tag{21}\] \[\qquad\qquad-\left[\frac{\mathbf{x}\mathbf{x}\mathbf{x}}{r^{4}}+\frac{1}{2}\left(\frac{\mathbf{I}\mathbf{x}}{r^{4}}-\frac{2\mathbf{x}\mathbf{x}\mathbf{x}}{r^{6}}\right)\right]:\mathbf{A}.\]
The terms proportional to \(\mathbf{F}_{p}\) represent flow due to forced translation, whereas those proportional to the imposed velocity gradient via \(\mathbf{A}\) are disturbances to background flow due to the presence of the disk. Note that the leading term due to the external force, corresponding to the 2D stokeslet, has a logarithmic singularity that is tied to the Stokes paradox mentioned previously. In reality, flow in the plane of the membrane is governed by 3D hydrodynamics (\(\mathbf{\hat{u}}^{(0)}\propto 1/r\)) beyond the Saffman-Delbruck length, decaying to zero as the viscous traction from the subphase becomes dominant in the momentum balance. We will safely assume that the membrane is finite and within the momentum crossover or Saffman-Delbruck length, and this justifies the use of Eq. (21) in capturing membrane hydrodynamics when \(Bq\gg 1\).
The leading-order pressure disturbances also enter the \(\mathcal{O}(\beta)\) governing equation. Like with the velocity, the Newtonian response can be written as a sum of contributions due to external force and imposed background velocity:
\[\hat{\Pi}^{(0)}(\mathbf{x})=\left[\frac{4\mathbf{x}}{r^{2}}\right]\cdot\frac{\mathbf{F}_{p}}{4\pi}-\left[\frac{2\mathbf{x}\mathbf{x}}{r^{4}}\right]:\mathbf{A}. \tag{22}\]
Force and torque balance (\(\mathbf{F}_{\text{net}}=\mathbf{F}_{p}\) and \(\mathbf{T}_{\text{net}}=\mathbf{0}\)) with these Newtonian fields gives the translational velocity \(\mathbf{U}_{p}=\mathbf{V}_{0}+\mathbf{F}_{p}/4\pi\) and rotational velocity \(\mathbf{\Omega}_{p}=\mathbf{\nabla}_{s}\times\mathbf{V}\) at leading order.
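For reference, the Newtonian disturbance fields above are simple enough to evaluate numerically. The sketch below is an illustrative transcription of Eqs. (21)-(22) in dimensionless variables (disk radius set to one); the function name, the index ordering of the third-rank terms, and the sample values are our own choices rather than part of the original analysis.

```python
import numpy as np

def newtonian_disturbance(x, F_p, A):
    """O(1) disturbance velocity and surface pressure of a unit disk, Eqs. (21)-(22).

    x   : (2,) field point with |x| >= 1 (lengths in units of the disk radius)
    F_p : (2,) dimensionless applied in-plane force
    A   : (2, 2) background rate-of-strain tensor (symmetric, traceless)
    """
    r2 = x @ x
    r = np.sqrt(r2)
    I = np.eye(2)
    xx = np.outer(x, x)

    # force-driven part: 2D stokeslet plus its finite-size (potential dipole) correction
    u_force = (2.0 * (-np.log(r) * I + xx / r2)
               + (I / r2 - 2.0 * xx / r2**2)) @ (F_p / (4.0 * np.pi))

    # strain-driven part: disturbance generated by the rigid disk held in the linear flow
    xxx = np.einsum('i,j,k->ijk', x, x, x)
    Ix = np.einsum('ij,k->ijk', I, x)
    T = xxx / r2**2 + 0.5 * (Ix / r2**2 - 2.0 * xxx / r2**3)
    u_strain = -np.einsum('ijk,jk->i', T, A)

    # leading-order disturbance surface pressure, Eq. (22)
    Pi_hat = (4.0 * x / r2) @ (F_p / (4.0 * np.pi)) - np.einsum('ij,ij->', 2.0 * xx / r2**2, A)

    return u_force + u_strain, Pi_hat

# leading-order kinematics of the driven disk (force and torque balance)
F_p = np.array([1.0, 0.0])
V_0 = np.zeros(2)
U_p0 = V_0 + F_p / (4.0 * np.pi)
```

A quick consistency check: at the disk surface the strain-driven part reduces to \(-\tfrac{1}{2}\mathbf{A}\cdot\mathbf{x}\) and the force-driven part to \(\mathbf{F}_{p}/4\pi\), so the total velocity there is a rigid-body motion, as required by the no-slip condition.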
### Reciprocal theorem: non-Newtonian problem
With the leading-order solution now known, we can write the non-Newtonian stress tensor at \(\mathcal{O}(\beta)\) as
\[\hat{\mathbf{\sigma}}^{(1)}=-\Pi^{(1)}\mathbf{I}+\mathbf{S}^{(1)}+(\Pi^{(0)}+q)( \mathbf{S}^{(0)}+\mathbf{A})-(q\mathbf{A}), \tag{23}\]
The effect of pressure-dependent surface viscosity could yield a non-zero \(\mathbf{\hat{F}}^{(1)}\) at \(\mathcal{O}(\beta)\), given by:
\[\mathbf{\hat{F}}^{(1)}=\int\mathbf{\hat{n}}\cdot\hat{\mathbf{\sigma}}^{(1)}dl \tag{24}\]
where \(\mathbf{\hat{n}}\) is the normal vector pointing from the disk into the 2D fluid and \(l\) denotes the boundary of the domain. Since we have set up the disturbance velocity field to decay far from the disk, this boundary is simply the perimeter of the disk. Without loss of generality, we choose to constrain the particle to its leading-order motion and determine the force or torque generated at \(\mathcal{O}(\beta)\), i.e., \(\mathbf{U}_{p}^{(1)}=\mathbf{\Omega}_{p}^{(1)}=\mathbf{0}\). The governing equations at this order, Eq. (15b), thus satisfy the boundary conditions
\[\mathbf{\hat{u}}^{(1)}=\mathbf{0}\quad\text{for}\quad|\mathbf{x}| \leq 1, \tag{25}\] \[\mathbf{\hat{u}}^{(1)}\rightarrow\mathbf{0}\quad\text{as}\quad| \mathbf{x}|\rightarrow\infty. \tag{26}\]
Evaluating the non-linear effects on the disk involves solving the \(\mathcal{O}(\beta)\) problem for \(\mathbf{\hat{u}}^{(1)}\). This is an inhomogeneous Stokes equation that is extremely difficult to solve analytically. Instead, we make use of the Lorentz reciprocal theorem [47] to get around solving for the full flow and pressure fields using standard methods. This framework is
well-established, and has previously been employed to evaluate perturbative solutions to several classes of weakly non-Newtonian problems involving viscoelasticity [32; 48; 49].
To set up the reciprocal theorem, we first need an auxiliary solution \(\{\Pi_{\text{aux}},\,\mathbf{u}_{\text{aux}},\,\boldsymbol{\sigma}_{\text{aux}}\}\) that satisfies the homogeneous Stokes problem. We choose the solution to the translation of a disk in a 2D Newtonian interface with a velocity \(\mathbf{U}_{\text{aux}}\). This solution is standard and is readily given by the singularity solutions corresponding to steady translation (i.e., the parts containing \(\mathbf{F}_{p}\)) in Eqs. (21) and (22), with \(\mathbf{U}_{\text{aux}}\) replacing \(\mathbf{F}_{p}/4\pi\) in both cases. This Newtonian problem satisfies \(\boldsymbol{\nabla}_{s}\cdot\boldsymbol{\sigma}_{\text{aux}}=\mathbf{0}\).
The reciprocal relation can be derived starting with the following vector relations:
\[\boldsymbol{\nabla}_{s}\cdot\left(\boldsymbol{\sigma}_{\text{aux }}\cdot\hat{\mathbf{u}}^{(1)}\right) =\boldsymbol{\sigma}_{\text{aux}}:\boldsymbol{\nabla}\hat{ \mathbf{u}}^{(1)}, \tag{27a}\] \[\boldsymbol{\nabla}_{s}\cdot\left(\hat{\boldsymbol{\sigma}}^{(1)} \cdot\mathbf{u}_{\text{aux}}\right) =\hat{\boldsymbol{\sigma}}^{(1)}:\boldsymbol{\nabla}\mathbf{u}_{ \text{aux}}. \tag{27b}\]
Subtracting Eq. (27b) from Eq. (27a) and integrating over the entire 2D fluid domain gives:
\[\begin{split}\int\boldsymbol{\nabla}_{s}\cdot&\left( \boldsymbol{\sigma}_{\text{aux}}\cdot\hat{\mathbf{u}}^{(1)}\right)- \boldsymbol{\nabla}_{s}\cdot\left(\hat{\boldsymbol{\sigma}}^{(1)}\cdot \mathbf{u}_{\text{aux}}\right)dS\\ &=\int\boldsymbol{\sigma}_{\text{aux}}:\boldsymbol{\nabla}\hat{ \mathbf{u}}^{(1)}-\hat{\boldsymbol{\sigma}}^{(1)}:\boldsymbol{\nabla}\mathbf{ u}_{\text{aux}}\ dS.\end{split} \tag{28}\]
The LHS of Eq. (28) can be simplified using the divergence theorem and force balance to give
\[\begin{split}-\int\hat{\mathbf{n}}\cdot\boldsymbol{\sigma}_{\text{aux}}\cdot\hat{\mathbf{u}}^{(1)}\,\,dl+\int\hat{\mathbf{n}}\cdot\hat{\boldsymbol{\sigma}}^{(1)}\cdot\mathbf{u}_{\text{aux}}\,\,dl\\ =-\mathbf{F}_{\text{aux}}\cdot\mathbf{U}^{(1)}+\mathbf{F}^{(1)}\cdot\mathbf{U}_{\text{aux}}.\end{split} \tag{29}\]
Since we constrain the particle to its leading-order motion, the particle velocity at \(\mathcal{O}(\beta)\) vanishes: \(\mathbf{U}^{(1)}=\mathbf{0}\). Then, using Eq. (29) for the LHS in Eq. (28) and simplifying gives
\[\begin{split}\mathbf{F}^{(1)}\cdot\mathbf{U}_{\text{aux}}=-\int \left[(\Pi^{(0)}+q)(\mathbf{S}^{(0)}+\mathbf{A})-q\mathbf{A}\right]:\boldsymbol {\nabla}_{s}\mathbf{u}_{\text{aux}}\,\,dS.\end{split} \tag{30}\]
Importantly, Eq. (30) uses only the zeroth order (Newtonian) solution to solve a first order (non-Newtonian) problem.
As we are interested in far-field effects in dilute suspensions, we seek to incorporate the leading-order effect of the disturbance velocity due to other disks as the imposed background flow. Over the lengthscale of a particle, this disturbance is primarily a unidirectional shear flow (so that \(q(\mathbf{x})=0\)). With this assumption, the integration in Eq. (30) is still cumbersome but can be performed analytically. Additionally, integrals of the leading-order flow field involve the logarithm that diverges at large distances as noted before. We therefore restrict our fluid domain to a (dimensionless) lateral membrane size \(R\) that is smaller than or comparable to the Saffman-Delbruck length: this corresponds to the finite-membrane-size resolution of the Stokes paradox described by Saffman & Delbruck [3]. With the velocity and pressure fields of the auxiliary Newtonian problem fully known, we finally get:
\[\mathbf{F}^{(1)}=\zeta\mathbf{F}_{p}\cdot\mathbf{A}, \tag{31a}\] \[\zeta=4\left[\ln{(R)}-\frac{1}{4}-\frac{3}{R^{2}}+\frac{11}{4R^{4}}-\frac{1}{R ^{6}}\right], \tag{31b}\]
where all quantities are dimensionless. The logarithmic dependence on membrane size that is characteristic of a 2D calculation of a driven object (unless otherwise regularized by bulk fluid momentum at large distances) shows up here and dominates the nonlinear response.
The force that arises due to surface-pressure-dependent surface viscosity becomes evident if we revert to dimensional variables. Putting everything together, the leading-order dimensional force on a driven particle in an ambient linear flow field in this 2D fluid is
\[\mathbf{F}_{\text{tot}}=\mathbf{F}_{p}+\frac{\zeta\eta_{s}^{0}}{\Pi_{c}} \mathbf{F}_{p}\cdot\mathbf{A}+\mathcal{O}(\beta^{2}). \tag{32}\]
This is the main result of this paper. As expected, the non-Newtonian correction vanishes as \(\Pi_{c}\to\infty\). More useful in many-particle simulations or continuum descriptions (Sec. III) is the net translational velocity resulting from such a force. Using the leading-order translational mobility \(\mathbf{M}=\mathbf{I}_{s}/4\pi\eta_{s}^{0}=M_{0}\mathbf{I}_{s}\), and acknowledging the uniform component \(\mathbf{V}_{0}\) of the background velocity, we find
\[\mathbf{U}_{p}=\mathbf{V}_{0}+M_{0}\left[\mathbf{I}_{s}+\frac{\zeta\eta_{s}^{ 0}}{\Pi_{c}}\mathbf{A}\right]\cdot\mathbf{F}_{p}+\mathcal{O}(\beta^{2}). \tag{33}\]
Notably, the correction proportional to \(\mathbf{A}\cdot\mathbf{F}_{p}\) is not necessarily along the direction of the driving force \(\mathbf{F}_{p}\), thus setting the stage for lateral motion and concentration fluctuations in large-scale 'suspensions' of trans-membrane inclusions.
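As a compact numerical summary of Eqs. (31)-(33), the following sketch evaluates the geometric prefactor \(\zeta(R)\) and the resulting particle velocity; the function names and the illustrative parameter values (\(\Pi_{c}\), \(R\), \(\dot{\gamma}\)) are placeholders of our own choosing.

```python
import numpy as np

def zeta(R):
    """Geometric prefactor of Eq. (31b); R is the dimensionless membrane size
    (in disk radii), assumed to lie within the Saffman-Delbruck length."""
    return 4.0 * (np.log(R) - 0.25 - 3.0 / R**2 + 11.0 / (4.0 * R**4) - 1.0 / R**6)

def particle_velocity(F_p, A, V_0, eta_s0=1.0, Pi_c=10.0, R=100.0):
    """Net translational velocity of a driven disk, Eq. (33)."""
    M0 = 1.0 / (4.0 * np.pi * eta_s0)            # leading-order mobility
    correction = (zeta(R) * eta_s0 / Pi_c) * A   # non-Newtonian O(beta) term
    return V_0 + M0 * (np.eye(2) + correction) @ F_p

# worked example of Eqs. (34)-(35): force along x in a simple shear flow
gamma_dot = 0.5
A = gamma_dot * np.array([[0.0, 1.0], [1.0, 0.0]])
U_p = particle_velocity(F_p=np.array([1.0, 0.0]), A=A, V_0=np.zeros(2))
# U_p acquires a transverse (y) component for finite Pi_c; it vanishes as Pi_c -> infinity
```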
### Physical interpretation and direction of motion
We will first aim to mechanistically understand the direction of motion due to the non-Newtonian effect. A simple case is a background shear flow that is along the direction of the driving force (Fig. 2). This geometry is particularly relevant as it depicts the dominant effect of disturbance fields from neighboring particles, and is a common setup in classic studies of 3D sedimenting suspension mechanics [48; 50]. In our case, we will take \(\hat{\mathbf{e}}_{x}\) to be the direction along which a disk embedded within a membrane is driven with an external horizontal force, so that \(\mathbf{F}_{p}=F_{p}\hat{\mathbf{e}}_{x}\). Simultaneously, the 2D fluid is subject to an in-plane shear flow \(\mathbf{v}(\mathbf{x})=\dot{\gamma}\,y\,\hat{\mathbf{e}}_{x}\) centered at the particle, so that the background strain tensor is
\[\mathbf{A}=\boldsymbol{\nabla}_{s}\mathbf{v}+\boldsymbol{\nabla}_{s}\mathbf{v}^{T}=\dot{\gamma}(\hat{\mathbf{e}}_{x}\hat{\mathbf{e}}_{y}+\hat{\mathbf{e}}_{y}\hat{\mathbf{e}}_{x}). \tag{34}\]
Then, the resulting net velocity following Eq. (33) is
\[\mathbf{U}_{p}=M_{0}F_{p}\hat{\mathbf{e}}_{x}+\frac{\zeta\eta_{s}^{0}\dot{\gamma}M_{0}F_{p}}{\Pi_{c}}\hat{\mathbf{e}}_{y}. \tag{35}\]
For \(\Pi\)-thickening suspensions, \(\Pi_{c}>0\), and so the disk drifts in the \(y\) direction in addition to being driven as expected in the \(x\) direction by the applied force.
The direction of the non-Newtonian force can be gleaned by examining the nature of the pressure perturbation and disturbance flow around the disk. A shear flow can be locally decomposed into an extension and a rotation. The extension associated with the supplied shear is shown in Fig. 2(c). In a Newtonian medium, such an extensional flow does not generate a net force on the particle as surface viscous stresses around the perimeter of the particle cancel out. In a pressure-thickening fluid, however, the dipolar surface pressure field generates an associated surface viscosity field \(\eta_{s}(\Pi)\) that makes the 2D medium more viscous ahead of the disk in the direction of \(\mathbf{F}_{p}\) and less viscous behind it. The resulting surface viscous stresses are asymmetric around the particle, leading to a net force due to the extensional disturbance flow. The local surface viscous stresses are qualitatively illustrated in Fig. 2(c), which suggests that the resulting net force is to the right in this configuration.
This intuitive understanding of the interplay between extension due to background flow and local viscosity variations helps build up to more sophisticated models for coordinated behavior of collections of proteins or particles embedded within a membrane. In the following section, we take the mathematical result for the corrective force \(\mathbf{F}^{(1)}\) and the mechanistic picture drawn here to quantify such collective dynamics using mean-field kinetic models and particle simulations.
## III Collective dynamics: concentration instability and crystallization
### Mean-field model for dilute suspensions
While the model for a single particle in a weakly non-Newtonian membrane is analytically tractable, the collective behavior of large-scale '2D suspensions' is much more complicated. Nevertheless, the direction of the corrective force derived above suggests a mechanism for large-scale aggregation of particles driven in a membrane. Simply put, the disturbance flow associated with the motion of each inclusion drives a local shear flow around every other inclusion. Cross-streamline motion within such a flow must directly lead to effective hydrodynamic drift towards regions of higher concentration of particles (Fig. 3). We intend to describe the concentration instability that results from such interactions.
We will first turn to a continuum description to glean physical features and stabilizing mechanisms of such an instability. We will follow the classic approach of Koch and Shaqfeh [50] in studying 3D suspensions of sedimenting rods, which has since been widely adapted to examine viscoelastic fluids [48], active suspensions [51], and flexible fibers [52].
We begin with a continuous variable \(c(\mathbf{x},t)\) that describes the concentration of particles embedded within this 2D medium at position \(\mathbf{x}\) at time \(t\). We will use \(n\) to describe the mean number density (per unit area) on an interface of area \(A\) such that
\[\frac{1}{A}\int c(\mathbf{x},t)\,dA=n. \tag{36}\]
Then, the evolution of local concentration is described by a conservation equation
\[\frac{\partial c}{\partial t}+\mathbf{\nabla}_{s}\cdot(\dot{\mathbf{x}}\,c)=D\nabla_{s}^{2}c, \tag{37}\]
where \(\dot{\mathbf{x}}\) is the flux velocity and \(D\) is a diffusivity that may represent Brownian fluctuations or hydrodynamic diffusion. We assume that the diffusivity is constant and isotropic, although such an assumption can be relaxed without qualitative changes to the conclusions drawn below. The flux velocity can be written as a combination of the background velocity field and the response of the particle to the applied force following Eq. (33):
\[\dot{\mathbf{x}}=\mathbf{u}(\mathbf{x})+M_{0}\left[\mathbf{I}_{s}+\frac{\zeta\eta_{s}^{0}}{\Pi_{c}}\left(\mathbf{\nabla}_{s}\mathbf{u}+\mathbf{\nabla}_{s}\mathbf{u}^{T}\right)\right]\cdot\mathbf{F}_{p}. \tag{38}\]
Written this way, the conservation equation (37) captures the Newtonian response of each particle to the external force (via the term \(M_{0}\mathbf{F}_{p}\)), the non-Newtonian correction due to the weak pressure dependence (via the term proportional to \(\zeta\)), and the disturbance field due to the presence of neighboring particles at the location of each particle (via the background velocity field \(\mathbf{u}(\mathbf{x})\)). This disturbance fluid velocity field is still unknown, and obeys
\[-\mathbf{\nabla}_{s}\Pi+\eta_{s}^{0}\nabla_{s}^{2}\mathbf{u}+\frac{\eta_{s}^{0}} {\Pi_{c}}\mathbf{\nabla}_{s}\cdot(\Pi\mathbf{S})+\mathbf{F}_{p}[c(\mathbf{x})-n] =\mathbf{0}, \tag{39}\]
Figure 2: Physical mechanism leading to transverse migration in a driven particle placed in shear flow within the membrane as shown in (a). Background shear as shown in (b) sets up an extensional flow as shown (c). The external force \(\mathbf{F}_{p}\) on the particle also sets up pressure fields that lead to increase (in red) in viscosity ahead of the particle and decrease (in green) behind the particle. The resulting in-plane viscous stresses are proportional to the local viscosity and are asymmetric as indicated with the solid arrows. This results in a net force \(\mathbf{F}^{(1)}\) to the right in this particular configuration.
which is the 2D momentum equation perturbed by the external force at the location of the particles.
We will follow past works [48] to first argue that the nonlinear term in Eq. (39) is sub-dominant in determining the flux velocity if the suspension is dilute and concentration fluctuations are small. We will stay in the regime of small perturbations in local concentration away from the uniform value \(n\), so that \(c(\mathbf{x},t)=n+\varepsilon c^{\prime}(\mathbf{x},t)\), with \(|\varepsilon|\ll 1\) and \(c^{\prime}(\mathbf{x},t)=\mathcal{O}(n)\). We will also expand \(\mathbf{u}\) as a perturbation in \(\beta\), so \(\mathbf{u}=\mathbf{u}^{(0)}+\beta\mathbf{u}^{(1)}+\mathcal{O}(\beta^{2})\) as in previous sections. The leading-order disturbance velocity \(\mathbf{u}^{(0)}\) then has characteristic magnitude \(U_{0}\sim\varepsilon F_{p}nL^{2}/\eta_{s}^{0}\), where \(L\) is the suspension length scale (typically, the extent of the membrane or interface). The perturbative correction \(\mathbf{u}^{(1)}\), by design, has characteristic scale \(U_{1}\sim\beta U_{0}\). We can compare this disturbance velocity field correction to the direct contribution due to the non-Newtonian effect obtained in Eq. (33), which scales like \(U_{c}\sim\beta F_{p}/\eta_{s}^{0}\). So, the nonlinear term in Eq. (39) is relevant only if \(U_{1}\) is comparable to \(U_{c}\). However, \(U_{1}/U_{c}\sim U_{0}\eta_{s}^{0}/F_{p}\sim\varepsilon nL^{2}\). In a dilute 2D suspension, the area fraction of particles is small (\(nL^{2}\ll 1\)) and the concentration perturbation amplitude is also small (\(\varepsilon\ll 1\)). The non-Newtonian correction in Eq. (39) can therefore be safely neglected, and the disturbance field due to concentration fluctuations follows
\[\mathbf{\nabla}_{s}\Pi=\eta_{S}^{0}\mathbf{\nabla}_{s}^{2}\mathbf{u}+\mathbf{F}_{p}[c (\mathbf{x})-n], \tag{40}\]
accurate to \(\mathcal{O}(\varepsilon\beta)\).
We wish to examine the linear stability of such a system to perturbations in concentration away from the uniform state of \(c(\mathbf{x})=n\). The corresponding base state fluid velocity and surface pressure fields are \(\mathbf{u}=\mathbf{0}\) and \(\Pi(\mathbf{x})=\Pi_{0}\), respectively. Perturbing the concentration as \(c=n+\varepsilon c^{\prime}(\mathbf{x},t)\) where \(c^{\prime}=\mathcal{O}(n)\) generates associated fluid fields \(\mathbf{u}=\varepsilon\mathbf{u}^{\prime}(\mathbf{x},t)\) and \(\Pi=\Pi_{0}+\varepsilon\Pi^{\prime}(\mathbf{x},t)\). Using these perturbed fields in the conservation equation (37) and linearizing gives the following form at \(\mathcal{O}(\varepsilon)\):
\[\frac{\partial c^{\prime}}{\partial t}+\frac{nM_{0}\zeta\eta_{S}^{0}}{\Pi_{c} }\nabla_{s}^{2}\mathbf{u}^{\prime}\cdot\mathbf{F}_{p}+M_{0}\mathbf{F}_{p} \cdot\mathbf{\nabla}_{s}c^{\prime}=D\nabla_{s}^{2}c^{\prime}. \tag{41}\]
For stability analysis, we will consider normal modes of the form \(c^{\prime}=\tilde{c}(\mathbf{k})\exp[i\mathbf{k}\cdot\mathbf{x}+\sigma t]\) and \(\mathbf{u}^{\prime}=\tilde{\mathbf{u}}(\mathbf{k})\exp[i\mathbf{k}\cdot\mathbf{x}+\sigma t]\) with a 2D wavevector \(\mathbf{k}\). The forced Stokes equation can be solved by applying Fourier transforms and using standard methods of projecting perpendicular to the pressure term [48; 38; 53] to eliminate \(\tilde{\mathbf{u}}(\mathbf{k})\) in favor of \(\tilde{c}(\mathbf{k})\):
\[\tilde{\mathbf{u}}(\mathbf{k})=\frac{1}{\eta_{S}^{0}k^{2}}\left(\mathbf{I}_{s }-\hat{\mathbf{k}}\hat{\mathbf{k}}\right)\cdot\mathbf{F}_{p}\tilde{c}( \mathbf{k}), \tag{42}\]
where \(k=|\mathbf{k}|\), and \(\hat{\mathbf{k}}=\mathbf{k}/k\) is a unit wavevector. Using these Fourier coefficients in the linearized conservation equation (41) and simplifying gives
\[\begin{split}\mathbf{\sigma}\,\tilde{c}(\mathbf{k})-\frac{nM_{0} \zeta}{\Pi_{c}}\mathbf{F}_{p}\cdot\left(\mathbf{I}_{s}-\hat{\mathbf{k}}\hat{ \mathbf{k}}\right)\cdot\mathbf{F}_{p}\,\tilde{c}(\mathbf{k})\\ +ikM_{0}\mathbf{F}_{p}\cdot\mathbf{k}\,\tilde{c}(\mathbf{k})+k^{ 2}D\tilde{c}(\mathbf{k})=0.\end{split} \tag{43}\]
Eliminating \(\tilde{c}(\mathbf{k})\) and defining \(\theta=\cos^{-1}(\hat{\mathbf{e}}_{x}\cdot\hat{\mathbf{k}})\) as the angle between the applied external force (taken to be along the \(x\) direction, without loss of generality) and the wave vector, we obtain the real part of \(\mathbf{\sigma}\) representing the growth rate of concentration fluctuations:
\[\mathbf{\sigma}_{\mathrm{R}}=\mathrm{Re}[\mathbf{\sigma}]=\frac{nM_{0}\zeta F_{P}^{2} }{\Pi_{c}}\sin^{2}\theta-k^{2}D. \tag{44}\]
Positive \(\mathbf{\sigma}_{\mathrm{R}}\) indicates an exponentially growing perturbation based on the form of the normal modes. As \(\zeta\), \(\Pi_{c}\) and \(M_{0}\) are all positive, the system is linearly unstable to sufficiently long-wavelength concentration fluctuations so long as \(\theta\) is non-zero.
Figure 4 shows the dispersion relation from Eq. (44) made dimensionless by scaling the growth rate over \(nM_{0}\zeta F_{p}^{2}/\Pi_{c}\), the wave number over \(\sqrt{n}\) and the diffusion constant over \(M_{0}\zeta F_{p}^{2}/\Pi_{c}\). Such a driven suspension is evidently unstable to long-wavelength perturbations. Indeed, the growth rate is always largest at \(k=0\), corresponding to concentration fluctuations that span the system size. Perturbations in the direction perpendicular to the driving force (so that \(\theta=\pi/2\)) are the most destabilizing, analogous to classic theories of sedimentation stability in 3D suspensions [50; 51; 52]. The only stabilizing mechanism is diffusion, that suppresses large wave numbers (as shown in Fig. 4b). As expected, the instability does not occur in the Newtonian limit as \(\Pi_{c}\rightarrow\infty\).
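The dispersion relation (44) is easy to explore numerically. The sketch below tabulates the growth rate for a transverse and a longitudinal perturbation; all parameter values are illustrative placeholders rather than fitted quantities.

```python
import numpy as np

def growth_rate(k, theta, n=0.01, M0=1.0, zeta=10.0, F_p=1.0, Pi_c=10.0, D=1e-3):
    """Real part of the growth rate of concentration fluctuations, Eq. (44)."""
    return (n * M0 * zeta * F_p**2 / Pi_c) * np.sin(theta)**2 - D * k**2

k = np.linspace(0.0, 5.0, 200)
sigma_perp = growth_rate(k, np.pi / 2.0)   # most destabilizing: wavevector normal to the force
sigma_par = growth_rate(k, 0.0)            # purely diffusive decay along the force

# wavenumber above which diffusion stabilizes the transverse perturbation (for these defaults)
k_cutoff = np.sqrt((0.01 * 1.0 * 10.0 * 1.0**2 / 10.0) / 1e-3)
```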
The associated mechanism is illustrated in Fig. 3. A perturbation that increases local particle concentration drives a shear flow that interacts with the non-uniform viscosity field around each particle to generate a non-Newtonian correction to the drift velocity as shown in previous sections (see Fig. 2). In a suspension, this drift draws particles towards regions of high concentration. This magnifies the concentration fluctuation and the instability grows.
Figure 3: Mechanism for suspension instability. A local increase in particle concentration drives a shear flow that interacts with the non-uniform viscosity field around each particle in a manner that draws particles towards regions of high concentration. This magnifies the concentration fluctuation and the instability grows.
### Langevin description
The mean-field model above describes short-term linear growth of concentration fluctuations and suggests a mechanism for aggregation. In what follows, we will demonstrate the long-term effect of the nonlinear force on collections of membrane inclusions using a Langevin description. We will consider \(N\) disks embedded in a membrane with an infinite sub-phase, where each disk translates with a self velocity due to an applied force \(\mathbf{F}_{p}\). This could be due to anchoring proteins on a membrane, or equivalently due to externally applied electrical, thermal, or optical forces on particles embedded within a complex interface. We assume that the suspension is sufficiently dilute such that pairwise interactions can be added to obtain the net force on each disk. The trajectory of each disk is then determined by the Newtonian membrane hydrodynamic disturbance field due to the other \(N-1\) disks, the correction due to the non-Newtonian nature of the membrane as derived above, and Brownian fluctuations.
The corresponding Langevin equation for disk \(i\) reads:
\[\frac{\partial\mathbf{x}_{i}}{\partial t}=M_{0}F_{p}\hat{\mathbf{e}}_{x}+ \mathbf{u}_{i}^{\mathrm{Br}}+\sum_{i\neq j}^{N}\left[\mathbf{u}_{j}^{d}( \mathbf{x}_{i})+\mathbf{u}_{ij}^{\mathrm{St}}+M_{0}\beta\mathbf{F}_{ij}\right]. \tag{45}\]
The first term on the RHS describes the translational response of disk \(i\) to the applied force. We will take \(\mathbf{F}_{p}=F_{p}\hat{\mathbf{e}}_{x}\) to be constant and along the \(x\) direction. The second term describes Brownian fluctuations, which in general depend on the collective hydrodynamic mobility of the system [14]. Owing to the diluteness of our system, we only account for the local mobility of each disk. This simplification decouples the fluctuation-dissipation relation of each disk from the rest so that the translational Brownian velocity satisfies:
\[\langle\mathbf{u}^{\mathrm{Br}}(t)\rangle=0,\quad\langle\mathbf{u}^{\mathrm{ Br}}(t)\mathbf{u}^{\mathrm{Br}}(t^{\prime})\rangle=2k_{B}T\mathbf{M}\delta(t-t^{ \prime}), \tag{46}\]
where \(\mathbf{M}=M_{0}\mathbf{I}_{s}=\mathbf{I}_{s}/4\pi\eta_{s}^{0}\) is the Newtonian translational mobility of a disk. Using \(U=F_{p}/\eta_{s}^{0}\) and \(a/U\) as the characteristic scales for velocity and time, we obtain the dimensionless Brownian velocity from Eq. (46) as
\[\mathbf{u}_{i}^{\mathrm{Br}}=\sqrt{\frac{\tilde{T}}{2\pi\Delta\tilde{t}}}\;\mathbf{w},\qquad\tilde{T}=\frac{k_{B}T}{F_{p}a}, \tag{47}\]
where \(\Delta\tilde{t}\) is the dimensionless time step and \(\tilde{T}\) is a dimensionless temperature. The white noise vector \(\mathbf{w}\) is populated by random numbers sampled from a normal distribution of mean 0 and variance 1 such that the fluctuation-dissipation relation Eq. (46) is satisfied.
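A minimal sampler for this Brownian term, consistent with Eqs. (46)-(47), might look as follows; the generator seed and array shapes are incidental choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_velocity(T_tilde, dt_tilde, n_disks):
    """Dimensionless Brownian velocities with statistics given by Eqs. (46)-(47)."""
    w = rng.standard_normal((n_disks, 2))               # zero mean, unit variance
    return np.sqrt(T_tilde / (2.0 * np.pi * dt_tilde)) * w
```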
The next term in the Langevin equation describes leading-order disturbance field at the location of disk \(i\) due to an external force on neighboring disk \(j\):
\[\mathbf{u}_{j}^{d}(\mathbf{x}_{i})=\mathbf{G}_{ij}(\mathbf{x}_{i}-\mathbf{x} _{j})\cdot\mathbf{F}_{p}. \tag{48}\]
Here, \(\mathbf{G}_{ij}\) is the Green's function or 2D stokeslet for the Newtonian problem. This tensor is readily obtained from the term corresponding to \(\mathbf{F}_{p}\) in Eq. (21).
Although the suspension is dilute and particles are initially far from each other, fluctuations due to pair-wise addition of attractive hydrodynamic forces can bring disks within contact distance of each other. To prevent overlap, we add a soft repulsion between disks at short range. These repulsive excluded volume interactions are accounted for via a steric velocity
\[\mathbf{u}_{ij}^{\mathrm{St}}=U_{s}\frac{e^{-\alpha(r-a)}}{1+e^{-\alpha(r-a)}}\,\hat{\mathbf{r}}_{ij}, \tag{49}\]
where \(r=|\mathbf{x}_{i}-\mathbf{x}_{j}|\), and which acts along the unit vector \(\hat{\mathbf{r}}_{ij}=(\mathbf{x}_{i}-\mathbf{x}_{j})/r\) that connects disk \(i\) to disk \(j\). This soft repulsion decays exponentially over a length scale \(\alpha^{-1}=\mathcal{O}(a)\) and \(U_{s}\) is the contact velocity.
The final term in the Langevin description is the non-Newtonian correction from Eq. (33) obtained via the reciprocal calculation in section II. This term also acts pair-wise as the local background velocity gradient felt by disk \(i\) is that generated by the disturbance velocity associated with disk \(j\). We time integrate the Langevin equation numerically using standard Brownian dynamics methods established for similar systems [20; 21].
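To make the time integration concrete, a minimal explicit Euler-Maruyama step for Eq. (45) is sketched below. It sums the stokeslet disturbance of Eq. (48), the steric velocity of Eq. (49), the non-Newtonian pair correction built from Eqs. (50)-(51) and (31), and the Brownian term of Eq. (47); prefactor conventions follow the dimensionless forms written in the text, and all numerical parameter values are illustrative rather than those used in our production runs.

```python
import numpy as np

def langevin_step(X, F_p, dt, beta=0.1, zeta_R=10.0, T_tilde=1e-3,
                  U_s=1.0, alpha=2.0, rng=None):
    """One explicit Euler-Maruyama step of the pairwise Langevin model, Eq. (45).

    X   : (N, 2) dimensionless disk positions (disk radius = 1)
    F_p : (2,) dimensionless driving force shared by all disks
    """
    if rng is None:
        rng = np.random.default_rng()
    N = X.shape[0]
    I = np.eye(2)
    M0 = 1.0 / (4.0 * np.pi)

    # self-driven translation plus Brownian velocity, Eq. (47)
    V = np.tile(M0 * F_p, (N, 1))
    V += np.sqrt(T_tilde / (2.0 * np.pi * dt)) * rng.standard_normal((N, 2))

    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            x = X[i] - X[j]
            r2 = x @ x
            r = np.sqrt(r2)
            xx = np.outer(x, x)

            # Newtonian disturbance of disk j evaluated at disk i, Eq. (48)
            G = (2.0 * (-np.log(r) * I + xx / r2) + (I / r2 - 2.0 * xx / r2**2)) / (4.0 * np.pi)
            V[i] += G @ F_p

            # short-ranged steric repulsion, Eq. (49)
            V[i] += U_s * np.exp(-alpha * (r - 1.0)) / (1.0 + np.exp(-alpha * (r - 1.0))) * (x / r)

            # non-Newtonian pair correction: M0 * beta * zeta * F_p . A_j(x_i) . F_p
            FdotX = F_p @ x
            F1 = zeta_R * (FdotX * F_p / r2 - 2.0 * FdotX**2 * x / r2**2)
            V[i] += M0 * beta * F1

    return X + dt * V
```

In practice the pair loop would be vectorized and supplemented with the domain bookkeeping appropriate to the membrane geometry; the sketch only fixes the order in which the physical terms enter the update.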
### Effective hydrodynamic interactions and chaining
Before discussing long-time features and hydrodynamic crystallization, we first take a closer look at the effective hydrodynamic interactions in large-scale suspensions to understand the mechanism of particle aggregation. The interplay between the local fluid extension and the non-uniform viscosity field around each particle dictates the effective hydrodynamic interactions that drive relative particle motion. Fig. 5(a) borrows from the classic work of Koch and Shaqfeh [50] to show local extension around a particle that is placed near a point force (corresponding to a reference particle) at the center. Each surrounding particle represents a possible location of a membrane anchor protein in the disturbance field induced by the reference particle. The orientation of the extensional field changes based on the location of the particle, but always acts to draw fluid towards the reference point force in the upper half and to draw fluid away from it in the lower half.
Recalling that inclusions are also driven downward by external forces, the local viscosity field around each inclusion is simultaneously weakly perturbed. The particles experience a relatively higher viscosity ahead of them. The effective hydrodynamic force, or the resulting drift velocity, can then be determined by evaluating the asymmetric surface viscous stresses around each particle. Again, such extensional flows would not generate a net force on disks in a Newtonian system (akin to spheres in 3D fluids [50]): the non-uniform viscosity around each inclusion breaks this symmetry.
We can readily evaluate the form of such an interaction. The flow field set up by the reference particle (the dashed lines in Fig. 5a) is given by the term proportional to the force in Eq. (21). For illustration, we consider only the leading-order disturbance field corresponding to a 2D stokeslet or a point force. The correction due to the interplay between the external forces and the strain rate corresponding to such a flow at a location \(\mathbf{x}\) is given, following Eq. (31), by
\[\mathbf{F}^{(1)}=\zeta\mathbf{F}_{p}\cdot\mathbf{A}=\zeta\mathbf{F}_{p}\cdot\left[\frac{\mathbf{I}\,\mathbf{x}}{r^{2}}-\frac{2\mathbf{x}\mathbf{x}\mathbf{x}}{r^{4}}\right]\cdot\mathbf{F}_{p}, \tag{50}\]
where \(r=|\mathbf{x}|\). Taking \(\mathbf{F}_{p}=F_{p}\hat{\mathbf{e}}_{x}\) and denoting the radial unit vector originating at the reference particle as \(\hat{\mathbf{e}}_{r}\), Eq. (50) becomes
\[\mathbf{F}^{(1)}=\zeta F_{p}^{2}\left(\frac{\cos\theta}{r}\hat{\mathbf{e}}_{x} -\frac{2\cos^{2}\theta}{r}\hat{\mathbf{e}}_{r}\right), \tag{51}\]
where now \(\cos\theta=\hat{\mathbf{e}}_{x}\cdot\hat{\mathbf{e}}_{r}\).
This force leads to a velocity \(M_{0}\mathbf{F}^{(1)}\) at the location \(\mathbf{x}\), which can be interpreted as an effective hydrodynamic interaction due to pressure-dependent viscosity. \(M_{0}\) is the isotropic mobility of a disk, and the direction of the velocity follows that of \(\mathbf{F}^{(1)}\). This correction to the Newtonian-flow field around each particle, following Eq. (51), is shown in Fig. 5(b). As expected, this correction vanishes in the horizontal line where \(\theta=\pi/2\), where the extensional field due to a stokeslet is zero. The correction is non-zero at all other orientations and maintains a front-back symmetry relative to \(\hat{\mathbf{e}}_{x}\). Notably, the direction of \(\mathbf{F}^{(1)}\) is such that it acts to draw neighboring particles into chains along the direction of external force.
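For visualizing the interaction of Fig. 5(b), the drift \(M_{0}\mathbf{F}^{(1)}\) of Eq. (51) can be tabulated directly; the parameter values below are again placeholders.

```python
import numpy as np

def pair_drift(r, theta, zeta=10.0, F_p=1.0):
    """Drift velocity M0 * F^(1) at polar position (r, theta) relative to a
    reference particle forced along +x, following Eq. (51)."""
    M0 = 1.0 / (4.0 * np.pi)
    e_x = np.array([1.0, 0.0])
    e_r = np.array([np.cos(theta), np.sin(theta)])
    F1 = zeta * F_p**2 * (np.cos(theta) / r * e_x - 2.0 * np.cos(theta)**2 / r * e_r)
    return M0 * F1

# the correction vanishes at theta = pi/2 and pulls neighbours toward the line of the force:
v_ahead = pair_drift(3.0, 0.0)       # points along -x, back toward the reference particle
v_behind = pair_drift(3.0, np.pi)    # points along +x, toward the reference particle
```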
Such a mechanism of chaining is clearly seen in Fig. 5(c) in two-particle simulations following the Langevin description from the preceding section. While a pair of particles in a 2D Newtonian fluid would maintain relative separation and orientation as a they translate in response to an external force,
Figure 5: (a) Local extension field around particles due to the flow generated by a reference particle at the center. (b) Effective non-Newtonian interaction due to the interplay between local extension and non-uniform viscosity reveals a propensity to chain particles along the direction of the applied force. (c) Snapshots at equal intervals of time of a pair of particles translating in response to a downward force, simulated using the Langevin description. Dashed unfilled particles are the Newtonian reference case, which maintain relative separation and orientation. Solid filled circles account for pressure-dependent viscosity, which draws them together to chain up along the direction of the force.
the non-Newtonian correction derived here causes them to line up along the direction of the external force. We therefore expect particles to form chains. By analogy with molecular systems that display a long-ranged attraction and short-range repulsion, we expect clustering and aggregation when a large number of particles are present.
Indeed, our simulations show that chains form initially but eventually give way to larger-scale stable clusters (Fig. 6) with increasing \(\beta\). Aggregates are stable and do not disintegrate for long simulation times (as observed up to \(\tilde{t}=120\) in units of time made dimensionless by the characteristic time \(a\eta_{s}^{0}/F_{p}\), Fig. 6). We see crystalline order emerge as expected from analogous problems with 'hydrosteric' interactions [21], i.e., long-ranged attraction and short-range repulsive interactions. Previous works have examined aggregation of driven transmembrane proteins due to membrane curvature [35; 36] and inclusion size mismatch [30]. Our simulations show that stable hexatic clusters form as a result of the hydrodynamic attraction, physically representing a collection of membrane anchor points crystallizing for a potential biological advantage.
### Aggregation and hexatic order
We will use particle pair proximity and resulting packing order to quantify the degree of aggregation in this system. A straightforward metric is the \(n\)-th packing order parameter:
\[\Psi_{n}^{j}=\frac{1}{n_{j}}\sum_{k}e^{in\theta_{kj}}. \tag{52}\]
\(\Psi_{n}^{j}\) measures the orientation and packing order around particle \(j\), where \(n_{j}\) is the number of nearest neighbors and \(\theta_{kj}\) is the angle between the line joining disks \(k,j\) and the \(x\)-axis. The average local \(\langle|\Psi_{n}|\rangle\) order parameter quantifies the packing order, with 0 corresponding to an unordered aggregate and 1 representing a perfect \(n\)-th order aggregate. The local order parameter tells us whether each particle in an aggregate forms an \(n\)-th order lattice with its nearest neighbors. Note that the _global_ hexatic order parameter \(|\langle\Psi_{n}\rangle|\) can indicate system-spanning crystalline structure [21]. Our simulations are non-periodic and have a finite number of particles, and we confirm that no qualitative difference exists between the local and global order parameters for systems at the scales shown.
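A direct way to evaluate Eq. (52) from simulation snapshots is sketched below; the nearest-neighbour criterion (a fixed distance cutoff) is one reasonable choice among several and is our own assumption.

```python
import numpy as np

def local_order(X, n=6, r_cut=2.5):
    """Average local bond-orientational order <|Psi_n|> of Eq. (52).

    X     : (N, 2) particle positions (disk radius = 1)
    r_cut : distance cutoff defining nearest neighbours
    """
    N = X.shape[0]
    psi = np.zeros(N, dtype=complex)
    for j in range(N):
        d = X - X[j]
        r = np.hypot(d[:, 0], d[:, 1])
        nbrs = np.where((r > 0.0) & (r < r_cut))[0]
        if nbrs.size == 0:
            continue
        theta_kj = np.arctan2(d[nbrs, 1], d[nbrs, 0])   # angle to the x-axis
        psi[j] = np.mean(np.exp(1j * n * theta_kj))
    return np.mean(np.abs(psi))
```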
Given the 2D nature of our systems and the short-ranged repulsion, we always see hexatic (\(n=6\)) order formation for non-zero \(\beta\). This is comparable to other works on 2D non-linear hydrodynamics [21; 54; 55] that have shown that rigid inclusions rotating on a membrane form stable hexagonal crystals. The effective attractive potential in our case is orientation-dependent (Fig. 5b) and arises due to surface-pressure dependent rheology. As expected, disks self-assemble faster with increasing \(\beta\) (Fig. 7) corresponding to a stronger role of pressure-dependent surface viscosity.
We can also use the evolution of \(\langle|\Psi_{6}|\rangle\) to quantitatively describe the rate of assembly, and qualitatively describe the transition towards hexatic order. Hexagonal order of the disk crystals plateaus with time to attain a quasi-steady value (Fig. 8a). The rate at which this plateau is reached, as well the magnitude of the plateau are clear indicators of departure from Newtonian behavior. The transition from disorder to crystalline order due to \(\Pi\)-dependent \(\eta_{s}\) is evident in Fig. 8(b).
Finally, note that strong thermal fluctuations can melt crystals and restabilize the suspension towards a uniform concentration. We have set the effective temperature \(\tilde{T}=k_{B}T/F_{p}a=0.001\) in the results shown here, which corresponds to \(F_{p}=\mathcal{O}(\mathrm{pN})\) for particles or membrane inclusions in the size range of \(a=\mathcal{O}(100\ \mathrm{nm})\). This is in the range of forces exerted on membranes by biological activity such as intracellular pinning and cytoskeleton-associated growth and locomotive forces [56; 29]. Larger forces or bigger particles would only accentuate the effect, and we have tested that the crystallization transition is qualitatively unchanged upon increasing thermal noise up to \(\tilde{T}=0.01\). Increasing thermal fluctuations within this biologically relevant range slightly decreases the plateau value of \(\langle|\Psi_{6}|\rangle\); the rate at which the plateau value is reached for all \(\beta\), however, remains unchanged.
Figure 6: Time series of a 2D suspension of disks driven by an external (downward) force leading to hydrodynamic aggregation. Chains form initially which eventually give way to large scale hexatically ordered crystals. Here, \(\beta=0.1\) and \(\tilde{T}=0.001\).
Figure 7: A stronger pressure-dependence of viscosity (smaller \(\Pi_{c}\) or larger \(\beta\)) leads to faster aggregation and crystallization. Snapshots taken at \(\tilde{t}=25\).
## IV Conclusion
Collective motility, aggregation, and crystallization of membrane proteins has distinct biological advantage in immune response, locomotion, and tubulation [29; 30; 35]. Previous works have examined aggregation that arises due to protein activity [20; 21], membrane curvature [35; 36], and inclusion size mismatch [30]. We have shown that a commonly measured class of non-Newtonian membrane viscosity can have the same effect, and that these non-trivial fluid dynamics can play critical roles in membrane organization principles [24; 25].
We have made several approximations to keep this complicated problem tractable. Neglecting the coupling to the bulk phase will introduce quantitative changes to the corrective force, especially for large membranes where 3D fluid stresses catch up over the Saffman-Delbruck crossover length. We expect this change to be minimal for finite membrane sizes or crowded systems, as the bulk flow effect is screened at much shorter length scales, and so the qualitative picture remains unchanged. A more careful analysis of the coupling of a non-Newtonian membrane to a 3D fluid phase is left as future work. We have also chosen to develop a minimal point-particle-based Langevin approach for simulations. Our objective was to demonstrate the long-time dynamics with a minimal simulation model. We expect that high-fidelity computations will confirm these dynamics, and also go beyond the weakly nonlinear regime. We also recognize that not all inclusions within a membrane are driven: a real membrane hosts a collection of anchored, driven, active, and passive inclusions. Nor are inclusions disk-like. While shape does not significantly alter long-ranged membrane hydrodynamics, the near-field steric effects and packing order will shift to accommodate inclusion shape. Nonetheless, we expect that the physical intuition gained in this study will port over to future analyses of combinations of mobile and immobile inclusions.
We end by noting that the mathematical framework and physical picture that emerges in this work is generalizable to a broader class of problems beyond lipids with pressure-dependent viscosity. The formulation using the reciprocal theorem is fairly general and can be extended to other common non-Newtonian constitutive relations. Force corrections due to shear-thinning [49] and viscoelastic effects [48] follow their 3D analogs. The mean-field model can also be readily adapted to any form of the corrective force, opening a rich avenue of problems that can be viewed through the lens of this framework. Further, this model can be applied to a broader class of 3D fluids with pressure-dependent viscosity, such as those that appear in piezoviscous problems [57], high-pressure polymer melts [58], geophysical flows [59], and crude oil mixtures [60].
**Conflict of Interest:** The authors have no conflicts to disclose.
**Acknowledgments:** We thank Ronald J. Phillips and Gregory H. Miller for insightful feedback on this work.
|
2307.03700
|
Complete metrics with constant fractional higher order $Q$-curvature on
the punctured sphere
|
This manuscript is devoted to constructing complete metrics with constant
higher fractional curvature on punctured spheres with finitely many isolated
singularities. Analytically, this problem is reduced to constructing singular
solutions for a conformally invariant integro-differential equation that
generalizes the critical GJMS problem. Our proof follows the earlier
construction in Ao {\it et al.} \cite{MR3694645}, based on a gluing method,
which we briefly describe. Our main contribution is to provide a unified
approach for fractional and higher order cases. This method relies on proving
Fredholm properties for the linearized operator around a suitably chosen
approximate solution. The main challenge in our approach is that the solutions
to the related blow-up limit problem near isolated singularities need to be
fully classified; hence we are not allowed to use a simplified ODE method. To
overcome this issue, we approximate solutions near each isolated singularity by
a family of half-bubble tower solutions. Then, we reduce our problem to solving
an (infinite-dimensional) Toda-type system arising from the interaction between
the bubble towers at each isolated singularity. Finally, we prove that this
system's solvability is equivalent to the existence of a balanced
configuration.
|
João Henrique Andrade, Juncheng Wei, Zikai Ye
|
2023-07-07T16:22:09Z
|
http://arxiv.org/abs/2307.03700v1
|
# Complete metrics with constant fractional higher order \(Q\)-curvature on the punctured sphere
###### Abstract.
This manuscript is devoted to constructing complete metrics with constant higher fractional curvature on punctured spheres with finitely many isolated singularities. Analytically, this problem is reduced to constructing singular solutions for a conformally invariant integro-differential equation that generalizes the critical GJMS problem. Our proof follows the earlier construction in Ao _et al._[36], based on a gluing method, which we briefly describe. Our main contribution is to provide a unified approach for fractional and higher order cases. This method relies on proving Fredholm properties for the linearized operator around a suitably chosen approximate solution. The main challenge in our approach is that the solutions to the related blow-up limit problem near isolated singularities need to be fully classified; hence we are not allowed to use a simplified ODE method. To overcome this issue, we approximate solutions near each isolated singularity by a family of half-bubble tower solutions. Then, we reduce our problem to solving an (infinite-dimensional) Toda-type system arising from the interaction between the bubble towers at each isolated singularity. Finally, we prove that this system's solvability is equivalent to the existence of a balanced configuration.
Key words and phrases:Fractional poly-Laplacian, Higher order PDEs, GJMS operators, Critical exponent, Gluing technique, Toda systems 2020 Mathematics Subject Classification: 35J60, 35B09, 35J30, 35B40, 35R11 This work was partially supported by Sao Paulo Research Foundation (FAPESP) #2020/07566-3 and #2021/15139-0 and Natural Sciences and Engineering Research Council of Canada (NSERC).
## 1. Introduction
The problem of constructing complete metrics on punctured spheres with prescribed fractional higher order curvature is longstanding in differential geometry. In [32], Graham, Jenne, Mason, and Sparling constructed conformally covariant differential operators \(P_{2m}(g)\) on a given compact \(n\)-dimensional Riemannian manifold \((M^{n},g)\) for any \(m\in\mathbb{N}\) such that the leading order term of \(P_{2m}(g)\) is \((-\Delta_{g})^{m}\) with \(n>2m\). One can then construct the associated \(Q\)-curvature of order \(2m\) by \(Q_{2m}(g)=P_{2m}(g)(1)\). When \(m=1\), one recovers the conformal Laplacian
\[P_{2}(g)=-\Delta_{g}+\frac{n-2}{4(n-1)}R_{g}\quad\text{with}\quad Q_{2}(g)= \frac{n-2}{4(n-1)}R_{g},\]
where \(\Delta_{g}\) is the Laplace-Beltrami operator of \(g\) and \(R_{g}\) is its scalar curvature. We also refer to [3, Appendix A] for the explicit formulae for \(P_{2}(g)\), \(P_{4}(g)\) and \(P_{6}(g)\). Subsequently, Graham and Zworski [33] and Chang and Gonzalez [22] extended these definitions in the case the background metric is the round metric on the sphere to obtain (nonlocal) operators \(P_{2\sigma}(g)\) of any order \(\sigma\in(0,\frac{n}{2})\) as well as its corresponding \(Q\)-curvature. Once again, the leading order part of \(P_{2\sigma}(g)\) is \((-\Delta_{g})^{\sigma}\), understood as the principal value of a singular integral operator.
Nevertheless, the expressions for \(P_{2\sigma}(g)\) and \(Q_{2\sigma}(g)\) for a general \(\sigma\in\mathbb{R}_{+}\) are far more complicated. Namely, the fractional curvature \(Q_{2\sigma}(g)\) is defined from the conformal fractional Laplacian \(P_{2\sigma}(g)\) as \(Q_{2\sigma}(g)=P_{2\sigma}(g)(1)\). It is a nonlocal version of the scalar curvature (corresponding to the local case \(\sigma=1\)). The conformal higher order fractional Laplacian \(P_{2\sigma}(g)\) is a (nonlocal) pseudo-differential operator of order \(2\sigma\), which can be constructed from scattering theory on the conformal infinity \(M^{n}\) of a conformally compact Einstein manifold \((X^{n+1},g^{+})\) as a generalized Dirichlet-to-Neumann operator for the eigenvalue problem
\[-\Delta_{g^{+}}U-\frac{(n+2\sigma)^{2}}{4}U=0\quad\text{ in }\quad X,\]
where \(U\in\mathcal{C}^{\infty}(X)\) is the respective extension of \(u\in\mathcal{C}^{\infty}(M)\). This construction is a natural one from the point of view of the AdS/CFT correspondence in Physics, also known as Maldacena's duality [45]. We refer the reader to [1, 55] for more details.
In this manuscript, we restrict ourselves to the \(n\)-dimensional sphere \(\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\), where \(n>2\sigma\) and \(\sigma\in(1,+\infty)\), equipped with the standard round metric \(g_{0}\), which is given by the pullback of the usual Euclidean metric \(\delta\) under the stereographic projection \(\Pi:\mathbb{S}^{n}\setminus\{\mathrm{e}_{1}\}\to\mathbb{R}^{n}\setminus\{0\}\) with \(\mathrm{e}_{1}=(1,0,\ldots,0)\in\mathbb{S}^{n}\) denoting its north pole. For any \(k\in\mathbb{R}\) with \(0\leqslant k\leqslant n\), we seek complete metrics on \(\mathbb{S}^{n}\setminus\Lambda^{k}\) of the form \(g=\mathrm{u}^{4/(n-2\sigma)}g_{0}\), where \(\Lambda\subset\mathbb{S}^{n}\) is such that \(\#\Lambda=N\). In order for \(g\) to be complete on \(\mathbb{S}^{n}\setminus\Lambda\), one has to impose \(\liminf_{\mathrm{d}(p,\Lambda)\to 0}\mathrm{u}(p)=+\infty\). Also, we prescribe the resulting metric to have constant \(Q_{2\sigma}\)-curvature, which we normalize to be
\[Q_{n,\sigma}=Q_{2\sigma}(g_{0})=\Gamma\left(\frac{n+2\sigma}{2}\right)\Gamma\left(\frac{n-2\sigma}{2}\right)^{-1},\]
where \(\Gamma(z)=\int_{0}^{\infty}\tau^{z-1}e^{-\tau}\mathrm{d}\tau\) is the standard Gamma function.
Let us now introduce some standard terminology. For any \(\sigma\in(1,+\infty]\) with \(n>2\sigma\) and \(N\geqslant 2\), we respectively denote by
\[\mathcal{M}_{2\sigma,\Lambda}(g_{0})=\{g\in[g_{0}]:g\text{ is complete on }\mathbb{S}^{n}\setminus\Lambda\text{ and }Q_{2\sigma}(g)\equiv Q_{n,\sigma}\}\]
and
\[\mathcal{M}_{2\sigma,N}(g_{0})=\{g\in[g_{0}]:g\text{ is complete on }\mathbb{S}^{n} \setminus\Lambda\text{ with }\#\Lambda=N\text{ and }Q_{2\sigma}(g)\equiv Q_{n,\sigma}\} \tag{1}\]
the marked and unmarked moduli spaces of complete constant higher order fractional \(Q\)-curvature metrics with isolated singularities. We also denote by \(\operatorname{sing}(g)=\Lambda\) its respective singular set.
In this fashion, our main theorem in this paper is the following
**Theorem 1**.: _Let \(\sigma\in(1,+\infty)\) with \(n>2\sigma\). For any configuration \(\Lambda\subset\mathbb{S}^{n}\) such that \(\#\Lambda=N\) with \(N\geqslant 2\), there exists a metric \(g\in\mathcal{M}_{2\sigma,N}(g_{0})\) satisfying \(\operatorname{sing}(g)=\Lambda\) and is unmarked nondegenerate. For a generic set of \(\Lambda=\{p_{1},\ldots,p_{N}\}\), this solution is marked nondegenerate, and for such a metric \((p_{1},\ldots,p_{N},\varepsilon_{1},\ldots,\varepsilon_{N})\in\mathbb{R}^{N(n+ 1)}\) constitute a full set of coordinates in \(\mathcal{M}_{2\sigma,N}(g_{0})\) near \(g_{0}\). In particular, one has \(\mathcal{M}_{2\sigma,N}(g_{0})\neq\varnothing\)._
Let us derive an analytical formulation for our main result. The family of higher order fractional curvatures transform nicely under a conformal change. Indeed, for any \(\bar{g}\in[g]\), one has
\[Q_{2\sigma}(\bar{g})=\frac{2}{n-2\sigma}u^{-\frac{n+2\sigma}{n-2\sigma}}P_{2 \sigma}(g_{0})u,\]
where \(P_{2\sigma}(g_{0}):\mathcal{C}^{\infty}(M)\to\mathcal{C}^{\infty}(M)\) is the fractional higher order GJMS operator on the sphere
\[P_{2\sigma}(g_{0}):=\Gamma\left(\sqrt{-\Delta_{g_{0}}+\frac{(n-1)^{2}}{4}}+\sigma+\frac{1}{2}\right)\Gamma\left(\sqrt{-\Delta_{g_{0}}+\frac{(n-1)^{2}}{4}}-\sigma+\frac{1}{2}\right)^{-1},\]
where \(\Delta_{g_{0}}\) is the Laplace-Beltrami operator and \([g]=\{\bar{g}=u^{4/(n-2\sigma)}g:u\in\mathcal{C}^{\infty}_{+}(M)\}\) is the conformal class of \(g\), where \(u\in\mathcal{C}^{\infty}_{+}(M)\) if and only if \(u\in\mathcal{C}^{\infty}(M)\) and \(u>0\). Furthermore, one has the transformation law
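For the reader's convenience, the operator above acts diagonally on spherical harmonics: on the degree-\(k\) eigenspace, \(\sqrt{-\Delta_{g_{0}}+\frac{(n-1)^{2}}{4}}\) acts as \(k+\frac{n-1}{2}\), so \(P_{2\sigma}(g_{0})\) acts as \(\Gamma(k+\frac{n}{2}+\sigma)\Gamma(k+\frac{n}{2}-\sigma)^{-1}\). The short sketch below tabulates these eigenvalues numerically; the chosen values of \(n\), \(\sigma\), and \(k\) are for illustration only.

```python
from math import gamma

def gjms_eigenvalue(k, n, sigma):
    """Eigenvalue of P_{2 sigma}(g_0) on degree-k spherical harmonics of S^n,
    obtained by substituting B = k + (n - 1)/2 into the Gamma-quotient above."""
    B = k + (n - 1) / 2.0
    return gamma(B + 0.5 + sigma) / gamma(B + 0.5 - sigma)

# sanity checks with illustrative values: k = 0 recovers Q_{n, sigma}, and
# sigma = 1 reproduces the conformal Laplacian spectrum k(k + n - 1) + n(n - 2)/4
n, sigma, k = 7, 1.5, 2
print(gjms_eigenvalue(0, n, sigma))
print(gjms_eigenvalue(k, n, 1.0), k * (k + n - 1) + n * (n - 2) / 4.0)
```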
\[P_{2\sigma}(g)\phi=\mathrm{u}^{-\frac{n+2\sigma}{n-2\sigma}}P_{2\sigma}(g_{0}) (\mathrm{u}\phi)\quad\text{for all}\quad\phi\in\mathcal{C}^{\infty}(\mathbb{S}^ {n}\setminus\Lambda),\]
which means that GJMS operators are conformally covariant. Hence, finding conformal complete metrics \(g=\mathrm{u}^{4/(n-2\sigma)}g_{0}\) with prescribed curvature \(Q_{2\sigma}(g)=Q_{n,\sigma}\) on \(\mathbb{S}^{n}\setminus\Lambda\) is equivalent to finding smooth positive solutions \(\mathrm{u}\in\mathcal{C}^{\infty}(\mathbb{S}^{n}\setminus\Lambda)\) to the nonlocal higher order geometric PDE
\[\begin{cases}P_{2\sigma}(g_{0})\mathrm{u}=c_{n,\sigma}\mathrm{u}^{\frac{n+2\sigma}{n-2\sigma}}\quad\text{on}\quad\mathbb{S}^{n}\setminus\Lambda,\\ \liminf_{\mathrm{d}(p,\Lambda)\to 0}\mathrm{u}(p)=+\infty,\end{cases}\] ( \[\mathcal{Q}_{2\sigma,\Lambda,g_{0}}\] )
where \(c_{n,\sigma}>0\) is a normalizing constant and \(\operatorname{sing}(\mathrm{u}):=\Lambda\) denotes the singular set.
Next, it will be convenient to transfer the PDE \((\mathcal{Q}_{2\sigma,\Lambda,g_{0}})\) to Euclidean space, which we can do using the standard stereographic projection. In these coordinates, our conformal metric takes the form \(g=\mathrm{u}^{4/(n-2\sigma)}g_{0}=(\mathrm{u}\cdot u_{\mathrm{sph}})^{4/(n-2 \sigma)}\delta\). Thus, \(u\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus\Sigma)\) given by \(u=\mathrm{u}\cdot u_{\mathrm{sph}}\) is a positive singular solution to \((\mathcal{Q}_{2\sigma,\Sigma})\). As a notational shorthand, we adopt the convention that \(\mathrm{u}\) refers to a conformal factor relating the metric \(g\) to the round metric, _i.e._\(g=\mathrm{u}^{4/(n-2\sigma)}g_{0}\), while \(u\) refers to a conformal factor relating the metric \(g\) to the Euclidean metric, _i.e._\(g=u^{4/(n-2\sigma)}\delta\), with the two related by \(u=\mathrm{u}\cdot u_{\mathrm{sph}}\). Hence, we aim to construct positive singular solutions \(u\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus\Sigma)\) to the following higher order fractional Yamabe equation with prescribed isolated singularities
\[\begin{cases}(-\Delta)^{\sigma}u=f_{\sigma}(u)\quad\text{in}\quad\mathbb{R}^{ n}\setminus\Sigma,\\ u(x)=\mathcal{O}(|x|^{2\sigma-n})\quad\text{as}\quad|x|\to+\infty,\end{cases}\] ( \[\mathcal{Q}_{2\sigma,\Sigma}\] )
where \(\sigma\in(1,+\infty)\) with \(n>2\sigma\). The subset \(\operatorname{sing}(u):=\Sigma\subset\mathbb{R}^{n}\) is called the singular set, which is assumed to be \(\Sigma=\{x_{1},\cdots,x_{N}\}\) for some \(N\in\mathbb{N}\) and such that
\[\liminf_{\mathrm{d}(x,\Sigma)\to 0}u(x)=+\infty.\]
We are interested in fast-decaying solutions; thus, we assume the decay condition \(\lim_{|x|\to+\infty}u(x)=0\). The operator on the left-hand side of \((\mathcal{Q}_{2\sigma,\Sigma})\) is the so-called higher order fractional Laplacian, which is defined as
\[(-\Delta)^{\sigma}:=(-\Delta)^{s}\circ(-\Delta)^{m},\]
where \(m:=[\sigma]\) and \(s:=\sigma-[\sigma]\).
Here \((-\Delta)^{m}=(-\Delta)\circ\cdots\circ(-\Delta)\) denotes the poly-Laplacian and \((-\Delta)^{s}\) denotes the fractional Laplacian defined as
\[(-\Delta)^{s}u(x):=\mathrm{p.v.}\int_{\mathbb{R}^{n}}\mathcal{K}_{s}(x-y)[u(x)-u (y)]\mathrm{d}y,\]
where \(\mathcal{K}_{s}:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) is a singular potential given by
\[\mathcal{K}_{s}(x-y):=\kappa_{n,s}|x-y|^{-(n+2s)} \tag{2}\]
with
\[\kappa_{n,s}=\pi^{-\frac{n}{2}}2^{2s}s\Gamma\left(\frac{n}{2}+s\right)\Gamma(1 -s)^{-1}.\]
The nonlinearity \(f_{\sigma}:\mathbb{R}\to\mathbb{R}\) on the right-hand side of \((\mathcal{Q}_{2\sigma,\Sigma})\) is given by
\[f_{\sigma}(\xi)=c_{n,\sigma}|\xi|^{\frac{n+2\sigma}{n-2\sigma}},\]
where
\[c_{n,\sigma}:=2^{2\sigma}\Gamma\left(\frac{n+2\sigma}{4}\right)^{2}\Gamma \left(\frac{n-2\sigma}{4}\right)^{-2}.\]
is a normalizing constant. We remark that this nonlinearity has critical growth in the sense of the Sobolev embedding \(H^{\sigma}(\mathbb{R}^{n})\hookrightarrow L^{2^{*}_{\sigma}}(\mathbb{R}^{n})\), where \(2^{*}_{\sigma}:=\frac{2n}{n-2\sigma}\).
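The normalizing constants entering \((\mathcal{Q}_{2\sigma,\Sigma})\) are explicit Gamma-function expressions and can be evaluated directly; the sketch below simply transcribes the formulae for \(\kappa_{n,s}\), \(c_{n,\sigma}\) and \(Q_{n,\sigma}\) above, with illustrative values of \(n\) and \(\sigma\).

```python
from math import gamma, pi

def kappa(n, s):
    """Normalizing constant of the fractional-Laplacian kernel, Eq. (2)."""
    return pi ** (-n / 2.0) * 2.0 ** (2.0 * s) * s * gamma(n / 2.0 + s) / gamma(1.0 - s)

def c(n, sigma):
    """Constant c_{n, sigma} multiplying the critical nonlinearity f_sigma."""
    return 2.0 ** (2.0 * sigma) * (gamma((n + 2.0 * sigma) / 4.0) / gamma((n - 2.0 * sigma) / 4.0)) ** 2

def Q(n, sigma):
    """Normalized Q-curvature Q_{n, sigma} of the round sphere."""
    return gamma((n + 2.0 * sigma) / 2.0) / gamma((n - 2.0 * sigma) / 2.0)

# example: n = 7, sigma = 1.5, so m = [sigma] = 1 and s = sigma - m = 1/2
n, sigma = 7, 1.5
m, s = int(sigma), sigma - int(sigma)
print(kappa(n, s), c(n, sigma), Q(n, sigma))
```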
Our main result in this manuscript extends these results to the remaining cases. Our proof is based on the unified approach given by Ao _et al._[36] and Jin and Xiong [37] to prove the existence of solutions to our integral equation, and it can be stated as follows
**Theorem 2**.: _Let \(\sigma\in(1,+\infty]\) with \(n>2\sigma\). For any configuration \(\Sigma=\{x_{1},\ldots,x_{N}\}\) with \(N\geqslant 2\), one can find a smooth positive singular solution to \((\mathcal{Q}_{2\sigma,\Sigma})\) such that \(\mathrm{sing}(u)=\Sigma\)._
In [37], the authors use a dual representation and maximization methods to study the existence of Emden-Fowler solutions in the range \(\sigma\in(0,\frac{n}{2})\). Although this representation is enough to prove the existence of blow-up limit solutions by direct maximization methods, it is unsuitable for our gluing technique. This paper follows the approach in [36] with the dual equation \((\mathcal{Q}^{\prime}_{2\sigma,\Sigma})\). Nevertheless, we need to give an alternative proof to describe the local behavior near each isolated singularity in terms of the bubble tower solution (see Lemma 4.10). This alternative proof is the main feature of this paper since it enables us to extend the techniques in [36] to integral equations that cannot be realized as the dual formulation of a differential equation, which is undoubtedly of independent interest.
Instead, we notice that \((\mathcal{Q}_{2\sigma,\Sigma})\) has a dual counterpart, which is given by
\[\begin{cases}u=(-\Delta)^{-\sigma}(f_{\sigma}\circ u)\quad\text{in}\quad \mathbb{R}^{n}\setminus\Sigma,\\ u(x)=\mathcal{O}(|x|^{2\sigma-n})\quad\text{as}\quad|x|\to+\infty,\end{cases}\] ( \[\mathcal{Q}^{\prime}_{2\sigma,\Sigma}\] )
where \((-\Delta)^{-\sigma}\) denotes the inverse operator of the standard higher order fractional Laplacian, namely
\[(-\Delta)^{-\sigma}f_{\sigma}(u(x)):=(\mathcal{R}_{\sigma}*f_{\sigma}(u))(x)= \mathrm{p.v.}\int_{\mathbb{R}^{n}}\mathcal{R}_{\sigma}(x-y)f_{\sigma}(u(y)) \mathrm{d}y,\]
where \(\mathcal{R}_{\sigma}:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) is the Riesz potential given by
\[\mathcal{R}_{\sigma}(x-y):=C_{n,\sigma}|x-y|^{-(n-2\sigma)} \tag{3}\]
with \(C_{n,\sigma}>0\) a normalizing constant. Our starting point in this paper will be to prove that \((\mathcal{Q}_{2\sigma,\Sigma})\) and \((\mathcal{Q}^{\prime}_{2\sigma,\Sigma})\) are equivalent (see Proposition 3.3).
When \(\sigma\in\mathbb{N}\) is an integer, that is, \(\sigma=m\), Eq. (\(\mathcal{Q}_{2\sigma,\Sigma}\)) becomes the poly-harmonic equation
\[\begin{cases}(-\Delta)^{m}u=f_{m}(u)\quad\text{in}\quad\mathbb{R}^{n}\setminus \Sigma,\\ u(x)=\mathcal{O}(|x|^{2\sigma-n})\quad\text{as}\quad|x|\to+\infty.\end{cases}\] ( \[\mathcal{P}_{2m,\Sigma}\] )
The most natural case of (\(\mathcal{P}_{2m,\Sigma}\)) is when \(m=1\). In this situation, this equation becomes the classical Lane-Emden equation. On this subject, Mazzeo and Pacard [46, Theorem 2] relied on a gluing technique via ODE theory to prove an existence theorem. Furthermore, when \(\sigma\in(0,1)\), we arrive at
\[\begin{cases}(-\Delta)^{s}u=f_{s}(u)\quad\text{in}\quad\mathbb{R}^{n}\setminus \Sigma,\\ u(x)=\mathcal{O}(|x|^{2\sigma-n})\quad\text{as}\quad|x|\to+\infty.\end{cases}\] ( \[\mathcal{F}_{2s,\Sigma}\] )
Recently, Ao _et al._[8, Theorem 1.1] extended the earlier existence results for this case. Their construction is substantially different from the previous one and relies on the concept of a bubble tower (or Half-Dancer solutions). We can summarize these results in the following statement
**Theorem A**.: _Let \(\sigma\in(0,1]\) with \(n>2\sigma\). For any configuration \(\Sigma=\{x_{1},\ldots,x_{N}\}\) with \(N\geqslant 2\), one can find a smooth positive singular solution to (\(\mathcal{Q}_{2\sigma,\Sigma}\)) such that \(\operatorname{sing}(u)=\Sigma\)._
Let us briefly explain our strategy for the proof. It is based on Schoen's tactic [53], which consists in finding an explicit infinite set of functions that span an approximate nullspace, such that the linearized nonlinear nonlocal operator around this infinite-dimensional family of solutions is invertible on its orthogonal complement. He first solves the equation on the complement. Then he provides a set of balancing conditions to ensure that the solution to this restricted problem is a solution to the original problem. This method was recently extended to fractional operators [6].
This technique differs substantially from the one in [46]. In their construction, the authors obtain a one-parameter family of solutions which blows up quickly enough near the singular set. These solutions are different in spirit from the ones in the nonlocal case, since blow-up limit Delaunay solutions for the scalar curvature problem are classified in [18] and depend only on two parameters. Then, by linearizing the problem around these families of solutions, the resulting linear operator is proved to be surjective on some reasonable space of functions, at least when the neck size parameter is sufficiently small. A standard iteration argument may be used to obtain an exact solution to (\(\mathcal{Q}_{2\sigma,\Sigma}\)) with a suitable blow-up rate. This strategy derives from its connections with the earlier constructions of CMC surfaces with Delaunay-type ends [48]. Compared with the fractional case \(\sigma\in(0,1)\), the main difference in our strategy is the proof of the refined asymptotics near half-bubble tower solutions, which holds in a more general setting.
Using this approach, we can perturb each bubble within the tower separately and construct a bubble tower at each singularity, which serves as an appropriate approximate solution to (\(\mathcal{Q}_{2\sigma,\Sigma}\)). However, it is essential to note that the linearization around this approximate solution is not injective, as there is an infinite-dimensional kernel. As a result, an infinite-dimensional Lyapunov-Schmidt reduction procedure is employed. This approach is similar to Kapouleas' CMC construction [38], which Malchiodi adapted in [44] to produce new entire solutions for a semilinear equation with a subcritical exponent; these differ from the well-known spike solutions since they decay to zero when moving away from three half lines and do not tend to zero at infinity. For this, he constructed a half-Dancer solution along each half-line. Hence, to solve the original problem from the perturbed one, an infinite-dimensional system of Toda-type needs to be solved, which arises from studying the interactions between the different bubbles in the tower. The strongest interactions occur at the zero-mode level and turn into some compatibility conditions (see Definition 5.5). In this fashion, a configuration satisfying such conditions is called balanced; this is related to the well-known balancing properties enjoyed by the sum of the Pohozaev invariants. Nonetheless, the remaining interactions can be made small and are dealt with through a fixed-point argument.
These compatibility conditions do not restrict the location of the singularity points but only affect the Delaunay parameter (neck size) at each end. We also note that due to the heavy influence of the underlying geometry, the first compatibility condition is similar to the ones found in [47] for the local case \(\sigma=1\). However, the rest of the configuration depends on the Toda-type system. In the local setting, a similar procedure to remove the resonances of the linearized problem was considered in [10], but the Toda-type system was finite-dimensional in their case.
On the technical level, our strategy is to employ the gluing method and the Lyapunov-Schmidt reduction method. First, we find a suitable approximate solution: a perturbation of the sum of half-Delaunay solutions with a singularity at each puncture. Then, we use the reduction method to find a perturbed solution that satisfies the associated linearized problem with the right-hand side given by some Lagrangian multiplier containing the approximate kernels of the linearized operator. This family of kernels spans an infinite-dimensional set called the approximate null space. The last step is to determine the infinite-dimensional set of free parameters such that all the coefficients of the projection onto the approximate null space vanish. This problem is reduced to the solvability of some infinite-dimensional Toda system around each singular point. A fundamental property in the proof is to have a sufficiently good approximate solution (a half-Dancer) so that all the estimates are exponentially decreasing in terms of the bubble tower parameter. A fixed point argument in suitable weighted sequence spaces then solves the problem of adjusting the parameters so that all these coefficients vanish.
We remark that instead of relying on the well-known extension problem for the fractional Laplacian [17], we are inspired by the approach of Delatorre _et al._[26], which rewrites the fractional Laplacian in radial coordinates in terms of a new integro-differential operator in logarithmic cylindrical coordinates. In our case, such an extension does not exist in general. We emphasize that our proof is written solely in the dual formulation, and it can be extended to general integral equations not arising as the dual representation of a differential equation.
In light of the seminal result of Mazzeo, Pollack, and Uhlenbeck [49] (see also [41]), it is natural to wonder if the marked moduli space in (1) can be furnished with more structure. It is believed that the result below should hold
**Conjecture 1**.: _Let \(\sigma\in(0,+\infty]\) with \(n>2\sigma\). For any singular set \(\Lambda\subset\mathbb{S}^{n}\) such that \(\#\Lambda=N\) with \(N\geqslant 2\), the marked moduli space of complete constant higher order fractional singular \(Q\)-curvature metrics on the punctured round sphere \(\mathcal{M}_{2\sigma,N}(g_{0})\) is an analytic manifold with formal dimension equal to the number of isolated singularities, that is, \(\dim(\mathcal{M}_{2\sigma,N}(g_{0}))=N\)._
Another possible development is to study the case in which the singular set is a disjoint union of smooth submanifolds with possibly distinct positive Hausdorff dimensions. In this situation, it would be interesting to prove that the moduli space defined in (1) is still non-empty and, in strong contrast with the case of isolated singularities, is infinite-dimensional; this will be the topic of a forthcoming paper.
Let us explain this case in more detail. It is well-known that the character of the analysis required to prove the existence of solutions when \(R(g)<0\), which dates back to the work of Loewner and Nirenberg [43] (see also [29, 11]), is fundamentally different from the positive scalar curvature case. Therefore, most of the literature is concentrated on the positive scalar curvature case \(R(g)>0\). In this setting, for a solution to exist one needs to impose some necessary conditions on the dimension. It would be even more challenging to construct solutions to \((\mathcal{Q}_{2\sigma,\Sigma})\) with infinitely many isolated singularities, for instance, on the lattice \(\Sigma=\mathbb{Z}^{n}\). The existence of weak solutions with higher-dimensional singular set for the singular Yamabe equation has been studied by Mazzeo and Smale [50] and by Mazzeo and Pacard [46] for the scalar curvature case, as well as by Hyder and Sire [35] for the (fourth order) \(Q\)-curvature metrics and by Ao _et al._[6] for the (fractional order) \(Q\)-curvature metrics, based on the construction of entire solutions from [7].
More generally, such solutions may be constructed on an arbitrary compact manifold \((M^{n},g)\) of nonnegative scalar curvature \(R(g)\geqslant 0\) whenever the singular set is a finite disjoint union of submanifolds with positive bounded Hausdorff dimension, which we describe as follows. Given \(\sigma\in(0,+\infty]\) with \(n>2\sigma\) and \(N\geqslant 2\), we let \(\Lambda\subset\mathbb{S}^{n}\) be a finite disjoint union of submanifolds \(\Lambda=\Lambda^{0}\cup\Lambda^{+}\), where \(\Lambda_{+}=\cup_{\ell=1}^{d}\Lambda_{\ell}^{k_{\ell}}\) with \(k_{\ell}:=\dim_{\mathcal{H}}(\Lambda_{\ell})\) denoting its Hausdorff dimension. Furthermore, we denote by
\[\mathcal{M}^{k}_{2\sigma,\Lambda}(g_{0})=\left\{g\in[g_{0}]:g\text{ is complete on }\mathbb{S}^{n}\setminus\Lambda^{k}\text{ and }Q_{2\sigma}(g)\equiv Q_{n,\sigma}\right\}\]
the moduli space of complete constant higher order fractional \(Q\)-curvature metrics with higher dimensional singularities. Notice that we simply denote \(\mathcal{M}^{0}_{2\sigma,\Lambda}(g_{0})=\mathcal{M}_{2\sigma,\Lambda}(g_{0})\).
To summarize this discussion, we have the following statement
**Theorem B**.: _Let \(\sigma\in(0,+\infty]\) with \(n>2\sigma\). Assume that \(\Lambda=\Lambda^{0}\cup\Lambda^{+}\) is a finite disjoint union of submanifolds satisfying \(\Lambda^{0}=\varnothing\) and \(\Lambda_{+}=\cup_{\ell=1}^{d}\Lambda_{\ell}^{k_{\ell}}\) with \(0<k_{\ell}<\frac{n-2\sigma}{2}\). Then, there exists a metric \(g\in\mathcal{M}^{k}_{2\sigma,\Lambda}(g_{0})\) such that \(\operatorname{sing}(g)=\Lambda\). In particular, one has \(\mathcal{M}^{k}_{2\sigma,\Lambda}(g_{0})\neq\varnothing\) and it is an infinite-dimensional analytic manifold._
With this statement in mind, it would be natural to prove a similar result, as below
**Conjecture 2**.: _Let \(\sigma\in(0,+\infty]\) with \(n>2\sigma\). Assume that \(\Lambda=\Lambda^{0}\cup\Lambda^{+}\) such that \(\#\Lambda^{0}=N\) with \(N\geqslant 2\) and \(\Lambda_{+}=\cup_{\ell=1}^{d}\Lambda_{\ell}^{k_{\ell}}\) with \(0<k_{\ell}<\frac{n-2\sigma}{2}\). Then, there exists a metric \(g\in\mathcal{M}^{k}_{2\sigma,\Lambda}(g_{0})\) satisfying \(\operatorname{sing}(g)=\Lambda\). In particular, one has \(\mathcal{M}^{k}_{2\sigma,\Lambda}(g_{0})\neq\varnothing\) and it is an infinite-dimensional analytic manifold._
As usual, we need to prove the existence of positive solutions to the GJMS equation in the conformally flat case \(\mathbb{S}^{n}\setminus\mathbb{S}^{k}\simeq\mathbb{R}^{n}\setminus\mathbb{R}^{k}\) with fast decay away from the singular set. Moreover, the dimension estimate above is sharp in the same sense as in Gonzalez, Mazzeo, and Sire [31]. Namely, if a complete metric blows up at a smooth \(k\)-dimensional submanifold and is polyhomogeneous, then \(k\in\mathbb{R}_{+}\) must satisfy the restriction above. All the analysis for this type of equation comes from its conformal properties, which produce a geometric interpretation of scattering theory and conformally covariant operators. One exploits the conformal equivalence \(\mathbb{R}^{n}\setminus\mathbb{R}^{k}\simeq\mathbb{S}^{n-k-1}\times\mathbb{H}^{k+1}\); in this setting \(\mathbb{R}^{n+1}_{+}\) is replaced by anti-de Sitter (AdS) space, but the arguments run in parallel. In the same direction but with another flavor, we quote the multiplicity results in [4, 12, 13, 14, 21], which also exploit this conformal invariance and are based on a topological bifurcation technique; this is believed to be true in the much broader case of conformally variational invariants (cf. [19, 20]).
One could extend this construction in a more geometric direction to non-round metrics. On this subject, we cite [52, 15] for non-flat gluing constructions for the constant curvature equation. Recently, in [2], a similar gluing construction is used to prove existence results for fourth order constant \(Q\)-curvature nondegenerate metrics with suitable growth conditions on the Weyl tensor.
We now describe the plan for the rest of the paper. In Section 2, we establish some terminology that will be used throughout the paper. In Section 3, we prove the dual representation formula relating \((\mathcal{Q}_{2\sigma,\Sigma})\) with \((\mathcal{Q}^{\prime}_{2\sigma,\Sigma})\). In Section 4, we classify Delaunay-type solutions as bubble towers. In Section 5, we provide balancing equations. Next, we define balanced configuration parameters and admissible perturbation sequences. We use this to define approximate solutions and prove some estimates for the linearized operator around this approximating family. In Section 6, we summarize some estimates involving the coefficients of the projection on the approximate null space. In Section 7, we reduce the proof of Theorem 2 to solving an infinite dimensional Toda system. We prove that under admissibility conditions, this system can be solved. In Appendix A, we recall some estimates concerning the interaction between two spherical solutions with different centers and radii.
## 2. Notations
We establish some notations and definitions that we will use frequently throughout the text for easy reference.
* \(m:=\lfloor\sigma\rfloor\) is the integer part of \(\sigma\), that is, the greatest integer that does not exceed \(\sigma\);
* \(s:=\{\sigma\}\) is the fractional part of \(\sigma\), that is, \(s:=\sigma-\lfloor\sigma\rfloor\);
* \(0<\xi,\nu,\zeta_{1}\ll 1\) are small constants;
* \(C>0\) is a universal constant that may vary from line to line and even in the same line.
* \(a_{1}\lesssim a_{2}\) if \(a_{1}\leqslant Ca_{2}\), \(a_{1}\gtrsim a_{2}\) if \(a_{1}\geqslant Ca_{2}\), and \(a_{1}\simeq a_{2}\) if \(a_{1}\lesssim a_{2}\) and \(a_{1}\gtrsim a_{2}\).
* \(u=\mathcal{O}(f)\) as \(x\to x_{0}\) for \(x_{0}\in\mathbb{R}\cup\{\pm\infty\}\), if \(\limsup_{x\to x_{0}}(u/f)(x)<\infty\) is the Big-O notation;
* \(u=o(f)\) as \(x\to x_{0}\) for \(x_{0}\in\mathbb{R}\cup\{\pm\infty\}\), if \(\lim_{x\to x_{0}}(u/f)(x)=0\) is the little-o notation;
* \(u\simeq\widetilde{u}\), if \(u=\mathcal{O}(\widetilde{u})\) and \(\widetilde{u}=\mathcal{O}(u)\) as \(x\to x_{0}\) for \(x_{0}\in\mathbb{R}\cup\{\pm\infty\}\);
* \(\mathcal{C}^{j,\alpha}(\mathbb{R}^{n})\), where \(j\in\mathbb{N}\) and \(\alpha\in(0,1)\), is the Holder space; we simply denote \(\mathcal{C}^{j}(\mathbb{R}^{n})\) if \(\alpha=0\).
* \(W^{m,q}(\mathbb{R}^{n})\) is the classical Sobolev space, where \(m\in\mathbb{N}\) and \(q\in[1,+\infty]\); when \(m=0\) we simply denote \(L^{q}(\mathbb{R}^{n})\), and when \(q=2\) we simply denote \(H^{m}(\mathbb{R}^{n})\);
* \(\mathcal{C}^{2\sigma}(\mathbb{R}^{n})=\mathcal{C}^{2m,2s}(\mathbb{R}^{n})\) is the classical Holder space;
* \(\gamma_{\sigma}=\frac{n-2\sigma}{2}\) is the Fowler rescaling exponent with \(\gamma_{\sigma}^{\prime}=\frac{n+2\sigma}{2}\) its dual;
* \(2_{\sigma}^{*}=\frac{2n}{n-2\sigma}\) is the critical Sobolev exponent with \(2_{\sigma}^{\prime\prime}=\frac{2n}{n+2\sigma}\) its dual;
* \(A_{1},A_{2}>0,A_{3}<0\) are constants defined by (A.1), (A.2), and (A.3), respectively;
* \(\mathcal{I}_{\infty}:=\{1,\ldots,N\}\times\mathbb{N}\times\{0,\ldots,n\} \simeq\ell^{\infty}(\mathbb{R}^{(n+1)N})\) is the total index set;
* \(\boldsymbol{p}:=(p_{1},\ldots,p_{N})\in\mathbb{S}^{nN}\) with \(\Lambda:=\{p_{1},\ldots,p_{N}\}\subset\mathbb{S}^{n}\);
* \(\boldsymbol{x}:=(x_{1},\ldots,x_{N})\in\mathbb{R}^{nN}\) with \(\Sigma:=\{x_{1},\ldots,x_{N}\}\subset\mathbb{R}^{n}\);
* \(\boldsymbol{L}=(L_{1},\ldots,L_{N})\in\mathbb{R}^{N}_{+}\) is a vector of periods such that \(|\boldsymbol{L}|\gg 1\) is large enough arising from Proposition 5.3. Equivalently, \(\boldsymbol{\varepsilon}=(\varepsilon_{1},\ldots,\varepsilon_{N})\in\mathbb{ R}^{N}_{+}\) is a vector of necksizes such that \(0<|\boldsymbol{\varepsilon}|\ll 1\) is small enough;
* \((\boldsymbol{x},\boldsymbol{L})\in\mathbb{R}^{(n+1)N}\) are the moduli space parameters and \(\Upsilon_{\mathrm{conf}}:\mathbb{R}^{(n+1)N}\to\mathbb{R}^{(n+2)N}\) is the configuration map;
* \(\boldsymbol{q}=(q_{1},\ldots,q_{N})\in\mathbb{R}^{N}_{+}\) is a vector of comparable periods such that \(|\boldsymbol{q}|\gg 1\) is also large enough and satisfy (5.10), \(\boldsymbol{R}=(R^{1},\ldots,R^{N})\in\mathbb{R}^{N}\) and \(\boldsymbol{a}_{0}=(a_{0}^{1},\ldots,a_{0}^{N})\in\mathbb{R}^{nN}\) are the deformation parameters;
* \((\boldsymbol{q},\boldsymbol{a}_{0},\boldsymbol{R})\in\mathbb{R}^{(n+2)N}\) are the configuration parameters and \(\Upsilon_{\mathrm{per}}:\mathbb{R}^{(n+2)N}\to\ell_{\tau}^{\infty}(\mathbb{R} ^{(n+1)N})\) is the configuration map;
* \((\boldsymbol{q}^{b},\boldsymbol{a}_{0}^{b},\boldsymbol{R}^{b})\in\mathrm{ Bal}_{\sigma}(\Sigma)\) denotes a balanced configuration, that is, it satisfies \((\mathscr{B}_{1})\) and \((\mathscr{B}_{2})\).
* \((\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\in\ell_{\tau}^{\infty}(\mathbb{ R}^{(n+1)N})\) (or \((\boldsymbol{a}_{j},\boldsymbol{r}_{j})\in\ell_{\tau}^{\infty}(\mathbb{R}^{(n+1)N})\)) are the perturbation sequences and \(\Upsilon_{\mathrm{per}}:\mathbb{R}^{(n+2)N}\to\ell_{\tau}^{\infty}(\mathbb{ R}^{(n+1)N})\) is the perturbation map;
* \((\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\) denotes the admissible perturbation sequences, that is, it satisfies \((\mathscr{A}_{0})\) and \((\mathscr{A}_{1})\); equivalently \(\Upsilon_{\mathrm{per}}^{-1}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\in \mathrm{Bal}_{\sigma}(\Sigma)\);
* \(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_ {j})}\in\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)\) denotes a Delaunay solution with associated error denoted by \(\phi_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_ {j})}\in\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)\) and \(\Upsilon_{\mathrm{sol}}:\ell_{\tau}^{\infty}(\mathbb{R}^{(n+1)N})\to\mathcal{C}_ {*,\tau}(\mathbb{R}^{n}\setminus\Sigma)\) is the solution map;
* \(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_ {j})}\in\mathrm{Apx}_{\sigma}(\Sigma)\) is an approximate solution, that is, \(\Upsilon_{\mathrm{sol}}^{-1}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L}, \boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})})\in\mathrm{Adm}_{\sigma}(\Sigma)\);
* \(\{Z_{j\ell}^{i}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\}_{(i,j,\ell)\in \mathcal{I}_{\infty}}\subset\mathcal{C}^{0}(\mathbb{R}^{n}\setminus\Sigma)\) is the associated family of approximating normalized cokernels;
* \(\{c_{j\ell}^{i}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\}_{(i,j,\ell)\in \mathcal{I}_{\infty}}\subset\mathcal{C}^{0}(\mathbb{R}^{n}\setminus\Sigma)\) is the associated family of coefficients of the projections on approximating normalized cokernels;
* \(\{\beta_{j\ell}^{i}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\}_{(i,j,\ell)\in \mathcal{I}_{\infty}}\subset\mathcal{C}^{0}(\mathbb{R}^{n}\setminus\Sigma)\) is the associated family of projections on approximating normalized cokernels.
## 3. Dual representation formula
This section shows that our equation and its dual are equivalent. We rely on the removable singularity result from [9, Theorem 1.1]. We also refer to [51, Proposition 4.1] for the integer cases \(\sigma\in\mathbb{N}\). In what follows, we consider the space
\[L_{s}(\mathbb{R}^{n}):=\left\{u\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n}):\int_{ \mathbb{R}^{n}}\frac{|u(x)|}{1+|x|^{n+2s}}\mathrm{d}x<+\infty\right\}\]
with \(s\in(0,1)\).
We first introduce the notion of distributional solutions to \((\mathcal{Q}_{2\sigma,\Sigma})\).
**Definition 3.1**.: _Let \(\sigma\in\mathbb{R}_{+}\) and \(n>2\sigma\). We say that a smooth solution \(u\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\Sigma)\cap L^{1}_{\mathrm{ loc}}(\mathbb{R}^{n})\) to \((\mathcal{Q}_{2\sigma,\Sigma})\) is a solution in the distributional sense to \((\mathcal{Q}_{2\sigma,\Sigma})\) if the equality below holds_
\[\int_{\mathbb{R}^{n}}u(-\Delta)^{\sigma}\varphi\mathrm{d}x=\int_{\mathbb{R}^{ n}}f_{\sigma}(u)\varphi\mathrm{d}x\quad\mathrm{in}\quad\mathbb{R}^{n}\setminus\Sigma \tag{3.1}\]
_for all \(\varphi\in\mathcal{C}^{\infty}_{c}(\mathbb{R}^{n}\setminus\Sigma)\)._
**Remark 3.2**.: _One can check that smooth solutions to \((\mathcal{Q}_{2\sigma,\Sigma})\) are indeed distributional solutions._
We need the following auxiliary result to prove the equivalence, which is a combination of [9, Theorem 1.1 and Lemma 5.4].
**Lemma A**.: _Let \(\sigma\in\mathbb{R}_{+}\) and \(n>2\sigma\). If \(u\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\Sigma)\cap L^{1}_{\mathrm{ loc}}(\mathbb{R}^{n})\) is a distributional solution to \((\mathcal{Q}_{2\sigma,\Sigma})\), then \(f_{\sigma}\circ u\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\) and \(u\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\) is a distributional solution in \(\mathbb{R}^{n}\), that is, the distributional equation (3.1) holds. Moreover, one has_
\[\int_{\mathbb{R}^{n}}\frac{f_{\sigma}(u(x))}{1+|x|^{n-2\sigma}}\mathrm{d}x<+\infty. \tag{3.2}\]
_Consequently, the function \(w\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus\Sigma)\) defined as_
\[w(x):=\int_{\mathbb{R}^{n}}\mathcal{R}_{\sigma}(x-y)f_{\sigma}(u(y))\mathrm{d}y \tag{3.3}\]
_is well-defined and belongs to \(L_{s}(\mathbb{R}^{n})\) for every \(s>0\)._
Finally, we also recall the Liouville theorem from [34, Lemma 2.4].
**Lemma B**.: _Let \(\sigma\in\mathbb{R}_{+}\) and \(n>2\sigma\). Assume that \(w\in L_{s}(\mathbb{R}^{n})\) for some \(s\geqslant 0\) and_
\[(-\Delta)^{\sigma}w=0\quad\mathrm{in}\quad\mathbb{R}^{n},\]
_for some \(\sigma\geqslant s\). Then, one has that \(w\) is a polynomial of degree at most \(\lfloor 2s\rfloor\)._
With the lemmas above, we have our main result in this section.
**Proposition 3.3**.: _Let \(\sigma\in\mathbb{R}_{+}\) and \(n>2\sigma\). It holds that \((\mathcal{Q}_{2\sigma,\Sigma})\) and \((\mathcal{Q}^{\prime}_{2\sigma,\Sigma})\) are equivalent._
Proof of Proposition 3.3.: Let \(u\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus\Sigma)\) be a positive singular fast-decaying solution to \((\mathcal{Q}_{2\sigma,\Sigma})\). From (3.2), we have that \(w\in L_{s}(\mathbb{R}^{n})\) for every \(s>0\), \(s\neq 2\sigma\). Hence, if we define \(\widehat{w}=u-w\), then \(\widehat{w}\in L_{s}(\mathbb{R}^{n})\) for all \(s>0\) with \(s\neq 2\sigma\). In addition, since \((-\Delta)^{\sigma}\widehat{w}=0\) in \(\mathbb{R}^{n}\), we conclude that \(\widehat{w}\) is a polynomial of degree at most \(2m\), thanks to the Liouville theorem in Lemma B. Recall that we are considering solutions satisfying \(\lim_{|x|\to+\infty}u(x)=0\). Consequently, \(\widehat{w}\equiv 0\), and the dual representation holds.
## 4. Delaunay-type solutions
This section is devoted to the construction of solutions for the case of a single isolated singularity, that is, \(\Sigma=\{0\}\). We are inspired by [37], which is an adaptation of the earlier constructions in [8, 26, 27] for the cases \(\sigma\in(0,1)\) and \(\sigma=1\).
### Integral Emden-Fowler coordinates
As a matter of fact, when \(\Sigma=\{0\}\), Eq. \((\mathcal{Q}_{2\sigma,\Sigma})\) can be rewritten as
\[\begin{cases}(-\Delta)^{\sigma}u=f_{\sigma}(u)\quad\text{in}\quad\mathbb{R}^{ n}\setminus\{0\},\\ \lim_{|x|\to+\infty}u(x)=0,\end{cases}\] ( \[\mathcal{Q}_{2\sigma,\infty}\] )
or into its dual form
\[\begin{cases}u=(-\Delta)^{-\sigma}(f_{\sigma}\circ u)\quad\text{in}\quad \mathbb{R}^{n}\setminus\{0\}\\ \lim_{|x|\to+\infty}u(x)=0.\end{cases}\] ( \[\mathcal{Q}^{\prime}_{2\sigma,\infty}\] )
It is straightforward to see from Proposition 3.3 that \((\mathcal{Q}_{2\sigma,\infty})\) and \((\mathcal{Q}^{\prime}_{2\sigma,\infty})\) are equivalent.
**Remark 4.1**.: _For any \(\sigma\in(1,+\infty]\) and \(n>2\sigma\), there are two distinguished solutions to \((\mathcal{Q}^{\prime}_{2\sigma,\infty})\), which we describe as follows:_
1. _The cylindrical solution_ \[u_{\rm cyl}(|x|)=a_{n,\sigma}|x|^{-\gamma_{\sigma}},\] (4.1) _which is singular at the origin._
2. _The standard spherical solution_ (_also known as "bubble" solution_)__ \[u_{\rm sph}(|x|)=\left(\frac{2}{1+\left|x\right|^{2}}\right)^{\gamma_{\sigma}},\] (4.2) _which is non-singular at the origin._
We remark that all non-singular solutions to the blow-up limit problem were classified in [24]; they are given by deformations of the standard bubble solution. This reflects the invariance of equation \((\mathcal{Q}^{\prime}_{2\sigma,\infty})\) under translations and dilations.
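For the reader's convenience, let us record the elementary (and purely formal) scaling computation behind this invariance; it only uses the homogeneity \((-\Delta)^{\sigma}[u(\lambda\cdot)]=\lambda^{2\sigma}[(-\Delta)^{\sigma}u](\lambda\cdot)\). If \(u\) solves \((\mathcal{Q}_{2\sigma,\infty})\) and \(u_{\lambda}(x):=\lambda^{\gamma_{\sigma}}u(\lambda x)\) for \(\lambda>0\), then

\[(-\Delta)^{\sigma}u_{\lambda}(x)=\lambda^{\gamma_{\sigma}+2\sigma}[(-\Delta)^{\sigma}u](\lambda x)=c_{n,\sigma}\lambda^{\frac{n+2\sigma}{2}}|u(\lambda x)|^{\frac{n+2\sigma}{n-2\sigma}}=f_{\sigma}(u_{\lambda}(x)),\]

since \(\gamma_{\sigma}\frac{n+2\sigma}{n-2\sigma}=\frac{n+2\sigma}{2}=\gamma_{\sigma}+2\sigma\); together with translations, this produces the family \(U_{\lambda,x_{0}}\) below.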
**Proposition A**.: _Let \(\sigma\in(1,+\infty]\) and \(n>2\sigma\). If \(u\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n})\) is a positive smooth non-singular solution to \((\mathcal{Q}^{\prime}_{2\sigma,\infty})\), then there exist \(\lambda>0\) and \(x_{0}\in\mathbb{R}^{n}\) such that_
\[u\equiv U_{\lambda,x_{0}}, \tag{4.3}\]
_where_
\[U_{\lambda,x_{0}}(x)=\left(\frac{2\lambda}{\lambda^{2}+\left|x-x_{0}\right|^{ 2}}\right)^{\gamma_{\sigma}} \tag{4.4}\]
_This family of solutions will be called spherical or bubble solutions._
The problem of classifying the complete set of positive smooth singular solutions to \((\mathcal{Q}^{\prime}_{2\sigma,\infty})\) is much more challenging and has only been accomplished in a few cases. On this subject, Chen, Li, and Ou proved that all solutions are radially symmetric with respect to the origin. In addition, Jin and Xiong [37] only proved the existence of such a solution by a direct maximization method. Furthermore, they also studied the local asymptotic behavior of positive singular solutions to
\[(-\Delta)^{\sigma}u=f_{\sigma}(u)\quad\text{in}\quad B_{R}^{*},\] ( \[\mathcal{Q}_{2\sigma,R}\] )
or into its dual form
\[u=(-\Delta)^{-\sigma}(f_{\sigma}\circ u)\quad\text{in}\quad B_{R}^{*},\] ( \[\mathcal{Q}^{\prime}_{2\sigma,R}\] )
where \(B_{R}^{*}\subset\mathbb{R}^{n}\setminus\{0\}\) given by \(B_{R}^{*}=B_{R}(0)\setminus\{0\}\) is the punctured ball of radius \(R>0\).
To study this class of equations, we define an important change of variables that turns \((\mathcal{Q}_{2\sigma,\infty})\) into an integral one-dimensional problem.
**Definition 4.2**.: _Let \(\sigma\in(1,+\infty]\) and \(n>2\sigma\). We define the integral Emden-Fowler change of variables \((\)or cylindrical logarithmic coordinates\()\) given by_
\[\mathfrak{F}_{\sigma}:\mathcal{C}_{c}^{\infty}(B_{R}^{*})\to\mathcal{C}_{c}^{ \infty}(\mathcal{C}_{L})\quad\text{given by}\quad\mathfrak{F}_{\sigma}(u)=e^{- \gamma_{\sigma}t}u(e^{-t},\theta), \tag{4.5}\]
_where \(t=-\ln|x|\), \(\theta=x/|x|\), \(\mathcal{C}_{L}:=(L,+\infty)\) with \(L=-\ln R\), and \(\gamma_{\sigma}:=\frac{n-2\sigma}{2}\). The inverse of this isomorphism is_
\[(\mathfrak{F}_{\sigma})^{-1}:\mathcal{C}_{c}^{\infty}(\mathcal{C}_{L})\to \mathcal{C}_{c}^{\infty}(B_{R}^{*})\quad\text{given by}\quad(\mathfrak{F}_{ \sigma})^{-1}\left(v\right)=|x|^{\gamma_{\sigma}}v(-\ln|x|,\theta). \tag{4.6}\]
_The quantity \(\gamma_{\sigma}>0\) will be referred to as the Fowler rescaling exponent. From now on, we denote \(v(t,\theta):=\mathfrak{F}_{\sigma}(u(x))\) and, conversely, \(u(x):=(\mathfrak{F}_{\sigma})^{-1}(v(t,\theta))\)._
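As a quick consistency check of this normalization, using only (4.1) and (4.5) together with \(|x|=e^{-t}\), the cylindrical solution is mapped to a constant:

\[\mathfrak{F}_{\sigma}(u_{\rm cyl})(t)=e^{-\gamma_{\sigma}t}\,a_{n,\sigma}\,(e^{-t})^{-\gamma_{\sigma}}=a_{n,\sigma},\]

in agreement with the cylindrical solution \(v_{\rm cyl}\equiv a_{n,\sigma}\) appearing in Remark 4.5 below.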
Using this change of variables, Eq. \((\mathcal{Q}_{2\sigma,R})\) can be reformulated as the following problem in cylindrical coordinates
\[\begin{cases}(-\Delta)_{\mathrm{cyl}}^{\sigma}v=f_{\sigma}(v)\quad\text{in} \quad\mathcal{C}_{L},\\ \lim_{t\to+\infty}v(t)=0.\end{cases}\] ( \[\mathcal{O}_{2\sigma,L}\] )
Here \((-\Delta)_{\mathrm{cyl}}^{\sigma}:\mathcal{C}^{2\sigma}(\mathcal{C}_{L})\to \mathcal{C}^{0}(\mathcal{C}_{L})\) is the higher-order operator given by
\[(-\Delta)_{\mathrm{cyl}}^{\sigma}:=(-\Delta)_{\mathrm{cyl}}^{s}\circ(-\Delta) _{\mathrm{cyl}}^{m}, \tag{4.7}\]
where \((-\Delta)_{\mathrm{cyl}}^{m}\) and \((-\Delta)_{\mathrm{cyl}}^{s}\) denote the cylindrical poly-Laplacian and the fractional Laplacian, respectively, defined as
\[(-\Delta)_{\mathrm{cyl}}^{m}:=\sum_{\ell=0}^{2m}\sum_{j=0}^{2m}K_{2m,j}^{(\ell )}\partial_{t}^{(j)}(-\Delta_{\theta})^{\ell},\]
where \(K_{2m,j}^{(\ell)}=K_{2m,j}^{(\ell)}(n)>0\) for \(j,\ell\in\{0,\dots,2m\}\) are dimensional constants, and
\[(-\Delta)_{\mathrm{cyl}}^{s}v(t,\theta):=\int_{-L}^{+L}\widehat{\mathcal{K}}_ {\sigma}(t-\tau,\theta-\varsigma)[v(t,\theta)-v(\tau,\varsigma)]\mathrm{d} \tau\mathrm{d}\varsigma,\]
where \(\mathcal{K}_{\sigma,\mathrm{cyl}}:\mathcal{C}_{L}\times\mathcal{C}_{L}\to \mathbb{R}\) is the kernel (2) written in Emden-Fowler coordinates. As usual, the dual form of this equation is given by
\[\begin{cases}v=(-\Delta)_{\mathrm{cyl}}^{-\sigma}(f_{\sigma}\circ v)\quad \text{in}\quad\mathcal{C}_{L},\\ \lim_{t\to+\infty}v(t)=0.\end{cases}\] ( \[\mathcal{O}_{2\sigma,L}^{\prime}\] )
Here \((-\Delta)_{\mathrm{cyl}}^{-\sigma}\) is the integral linear operator defined by
\[(-\Delta)_{\mathrm{cyl}}^{-\sigma}(f_{\sigma}\circ v)(t,\theta):=(\widehat{ \mathcal{R}}_{\sigma}*(f_{\sigma}\circ v))(t,\theta)=\int_{-\infty}^{+\infty} \widehat{\mathcal{R}}_{\sigma}(t-\tau,\theta-\varsigma)f_{\sigma}(v(\tau, \varsigma))\mathrm{d}\tau,\]
where \(\widehat{\mathcal{R}}_{\sigma}:\mathcal{C}_{L}\times\mathcal{C}_{L}\to \mathbb{R}\) is the Riesz kernel (3) written in Emden-Fowler coordinates. Henceforth, we keep the notation \(\mathcal{K}_{\sigma,\mathrm{cyl}}=\widehat{\mathcal{K}}_{\sigma}\) and \(\mathcal{R}_{\sigma,\mathrm{cyl}}=\widehat{\mathcal{R}}_{\sigma}\) for the sake of simplicity.
**Remark 4.3**.: _Notice that \((-\Delta)_{\mathrm{cyl}}^{-\sigma}\) is an abuse of notation, which we keep for simplicity. In the geometric language, this change of variables corresponds to a restriction of the conformal diffeomorphism between the entire cylinder and the punctured space. In other words, one has_
\[(-\Delta)_{\mathrm{cyl}}^{-\sigma}=P_{2\sigma}(g_{\mathrm{cyl}}),\]
_where \(g_{\mathrm{cyl}}=\mathrm{d}t^{2}+\mathrm{d}\theta^{2}\) stands for the cylindrical metric and \(\mathrm{d}\theta=e^{-2t}\delta\), where \(\delta\) is the standard flat metric._
Notice that in the blow-up limit situation (\(R=+\infty\)), solutions to (\(\mathcal{Q}_{2\sigma,\infty}\)) are rotationally invariant, that is, \(u(x)=u(r)\) with \(r=|x|\). Using this change of variables, Eq. (\(\mathcal{Q}_{2\sigma,\infty}\)) can be reformulated as the following one-dimensional problem
\[\begin{cases}(-\Delta)^{\sigma}_{\rm cyl}v=f_{\sigma}(v)\quad\text{in}\quad \mathbb{R},\\ \lim_{t\to+\infty}v(t)=0.\end{cases}\] ( \[\mathcal{O}_{2\sigma,\infty}\] )
Here \((-\Delta)^{\sigma}_{\rm cyl}\) represents the higher-order operator (written in Emden-Fowler coordinates (4.5)), namely
\[(-\Delta)^{\sigma}_{\rm cyl}v(t):=\int_{-\infty}^{+\infty}\widehat{\mathcal{K }}_{\sigma}(t-\tau)[v(t)-v(\tau)]\mathrm{d}\tau, \tag{4.8}\]
where \(\widehat{\mathcal{K}}_{\sigma}:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) is a kernel given by
\[\widehat{\mathcal{K}}_{\sigma}(t)=2^{-\gamma^{\prime}_{\sigma}}\int_{ \mathbb{S}^{n-1}}|\cosh(t)-\langle\theta,\tau\rangle|^{-\gamma^{\prime}_{ \sigma}}\mathrm{d}\tau=\int_{\mathbb{S}^{n-1}}e^{-\gamma^{\prime}_{\sigma}t }\left(1+e^{-2t}-2e^{-t}\langle\theta,\tau\rangle\right)^{-\gamma^{\prime}_{ \sigma}}\mathrm{d}\tau. \tag{4.9}\]
As before, the dual form of this equation is given by
\[\begin{cases}v=(-\Delta)^{-\sigma}_{\rm cyl}(f_{\sigma}\circ v)\quad\text{in} \quad\mathbb{R},\\ \lim_{t\to+\infty}v(t)=0.\end{cases}\] ( \[\mathcal{O}^{\prime}_{2\sigma,\infty}\] )
Here \((-\Delta)^{-\sigma}_{\rm cyl}\) is the integral linear operator defined by
\[(-\Delta)^{-\sigma}_{\rm cyl}(f_{\sigma}\circ v)(t):=(\widehat{\mathcal{R}}_{ \sigma}*(f_{\sigma}\circ v))(t)=\int_{-\infty}^{+\infty}\widehat{\mathcal{R}} _{\sigma}(t-\tau)f_{\sigma}(v(\tau))\mathrm{d}\tau,\]
where \(\widehat{\mathcal{R}}_{\sigma}:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) is a kernel given by
\[\widehat{\mathcal{R}}_{\sigma}(t)=2^{-\gamma_{\sigma}}\omega_{n-2}\int_{-1}^ {1}\left(1-\zeta_{1}^{2}\right)^{\frac{n-3}{2}}\lvert\cosh(t)-\zeta_{1} \rvert^{-\gamma_{\sigma}}\mathrm{d}\zeta_{1}. \tag{4.10}\]
**Remark 4.4**.: _It is possible to express this kernel in terms of hypergeometric functions. We also observe that \(\widehat{\mathcal{R}}_{\sigma}(\xi)\sim 1\) is bounded and Holder continuous, whereas \(\widehat{\mathcal{K}}_{\sigma}(\xi)\sim|\xi|^{1-2s}\) when \(\sigma\in(1,+\infty)\). Furthermore, they behave qualitatively as_
\[\widehat{\mathcal{K}}_{\sigma}(\xi)\sim e^{-\gamma^{\prime}_{\sigma}|\xi|} \quad\text{as}\quad|\xi|\to+\infty \tag{4.11}\]
_and_
\[\widehat{\mathcal{R}}_{\sigma}(\xi)\sim e^{-\gamma_{\sigma}|\xi|}\quad\text{as }\quad|\xi|\to+\infty, \tag{4.12}\]
_where \(\xi:=|t-\tau|\). We refer to [8, 37] for proof of these facts._
Using this new formulation, one has the following
**Remark 4.5**.: _As before, there are two distinguished solutions to (\(\mathcal{O}^{\prime}_{2\sigma,\infty}\)), which we describe as follows:_
* _The cylindrical solution, which is_ \[v_{\rm cyl}(t)\equiv a_{n,\sigma},\] _where_ \(v_{\rm cyl}=\mathfrak{F}_{\sigma}(u_{\rm cyl})\in\mathcal{C}^{2\sigma}( \mathbb{R})\) _with_ \(u_{\rm cyl}\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\{0\})\) _given by (_4.1_)._
* _The standard spherical solution (also known as the "bubble" solution), which is_ \[v_{\rm sph}(t)=\cosh(t)^{-\gamma_{\sigma}},\] (4.13) _where_ \(v_{\rm sph}=\mathfrak{F}_{\sigma}(u_{\rm sph})\in\mathcal{C}^{2\sigma}( \mathbb{R})\) _with_ \(u_{\rm sph}\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\{0\})\) _given by (_4.2_)._
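For completeness, the expression (4.13) follows from a direct computation with (4.2) and (4.5): writing \(|x|=e^{-t}\),

\[\mathfrak{F}_{\sigma}(u_{\rm sph})(t)=e^{-\gamma_{\sigma}t}\left(\frac{2}{1+e^{-2t}}\right)^{\gamma_{\sigma}}=\left(\frac{2}{e^{t}+e^{-t}}\right)^{\gamma_{\sigma}}=\cosh(t)^{-\gamma_{\sigma}},\]

which in particular decays like \(e^{-\gamma_{\sigma}|t|}\) as \(|t|\to+\infty\).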
### Asymptotic classification of Delaunay-type solutions
Now we prove the existence of even solutions to \((\mathcal{O}^{\prime}_{2\sigma,L})\) with large periods which are close to the standard bubble tower solution given by (5.5) in a suitable weighted Holder norm.
First, for the standard bubble solution, we have the following nondegeneracy result, which is based on [42, Lemma 5.1] and [25, Lemma 5.1]. In our situation, this is proved in [39, Lemma A.1]. Nevertheless, we include a sketch of the proof in Appendix B for completeness.
**Lemma 4.6**.: _Let \(\sigma\in(1,\infty)\) and \(n>2\sigma\). The standard bubble solution \(u_{\rm sph}\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n})\) given by (4.2) satisfying \((\mathcal{Q}_{2\sigma,\infty})\) is nondegenerate, in the sense that the space of bounded solutions to the linearized equation_
\[\phi-(-\Delta)^{-\sigma}(f^{\prime}_{\sigma}(u_{\rm sph})\phi)=0\quad\text{in} \quad\mathbb{R}^{n} \tag{4.14}\]
_is spanned by the functions_
\[\gamma_{\sigma}u_{\rm sph}+x\cdot\nabla u_{\rm sph}\quad\text{ and }\quad \partial_{x_{i}}u_{\rm sph}\quad\text{for}\quad i\in\{1,\ldots,n\}.\]
Proof.: See Appendix B.
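For orientation, these kernel elements arise from differentiating the family of bubble solutions \(U_{\lambda,x_{0}}\) in (4.4) with respect to its parameters; a direct computation with (4.4) gives

\[\partial_{\lambda}U_{\lambda,0}\big|_{\lambda=1}=-\left(\gamma_{\sigma}u_{\rm sph}+x\cdot\nabla u_{\rm sph}\right)\quad\text{and}\quad\partial_{x_{0}^{i}}U_{1,x_{0}}\big|_{x_{0}=0}=-\partial_{x_{i}}u_{\rm sph}\quad\text{for}\quad i\in\{1,\ldots,n\}.\]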
One can also reformulate the last result as follows
**Lemma 4.7**.: _Let \(\sigma\in(1,\infty)\) and \(n>2\sigma\). The standard bubble solution \(v_{\rm sph}\in\mathcal{C}^{2\sigma}(\mathbb{R})\) given by (4.13) satisfying \((\mathcal{O}^{\prime}_{2\sigma,\infty})\) is nondegenerate in the sense that all bounded solutions of the linearized equation_
\[\psi-(-\Delta)^{-\sigma}_{\rm cyl}(f^{\prime}_{\sigma}(v_{\rm sph})\psi)=0 \quad\text{in}\quad\mathbb{R}\]
_are multiples of \(\partial_{t}v_{\rm sph}\), the generator of the family of translations \(v_{\rm sph}(\cdot-T)\), \(T\in\mathbb{R}\)._
Proof.: It follows by undoing the Emden-Fowler change of variables in (4.5).
Second, we restrict ourselves to the open interval \((-L,L)\) equipped with Dirichlet boundary conditions. In what follows, we fix \(L\in\mathbb{N}\). For \(j\in\mathbb{N}\) and \(\alpha\in(0,1)\), we denote by \(\mathcal{C}^{j,\alpha}_{L}(\mathbb{R})\) the classical Holder space \(\mathcal{C}^{j,\alpha}(\mathbb{R})\) restricted to \(2L\)-periodic functions on the open interval \((-L,L)\). For \(\alpha=0\), we simply denote \(\mathcal{C}^{j}_{L}(\mathbb{R})\). For \(\ell\in\mathbb{N}\) and \(q\in[1,+\infty]\), we will keep the notation \(W^{\ell,q}_{L}(\mathbb{R})\) for the classical Sobolev space \(W^{\ell,q}(\mathbb{R})\) restricted to \(2L\)-periodic functions on the open interval \((-L,L)\). For \(q=2\), we simply denote \(H^{\ell}_{L}(\mathbb{R})\).
To seek \(2L\)-periodic solutions, we consider the following periodic problem
\[\begin{cases}v=(-\Delta)^{-\sigma,L}_{\rm cyl}(f_{\sigma}\circ v)\quad\text{ in}\quad\mathbb{R},\\ \lim_{t\to+\infty}v(t)=0,\end{cases}\] ( \[\mathcal{O}^{\prime}_{2\sigma,L}\] )
where \((-\Delta)^{-\sigma,L}_{\rm cyl}:\mathcal{C}^{0}_{L}(\mathbb{R})\to\mathcal{C} ^{2\sigma}_{L}(\mathbb{R})\) is the integral periodic linear operator defined by
\[(-\Delta)^{-\sigma,L}_{\rm cyl}(f_{\sigma}\circ v)(t):={\rm p.v.}\int_{-L}^{L} f_{\sigma}(v(\tau))\widehat{\mathcal{R}}_{\sigma,L}(t-\tau){\rm d}\tau.\]
For this, we shall work with the norm given by
\[\|v\|_{H^{\sigma}_{L}(\mathbb{R})}:=\left([v^{(m)}]_{L^{s}_{L}(\mathbb{R})}+ \sum_{\ell=0}^{m}\|v^{(\ell)}\|^{2}_{L^{2}_{L}(\mathbb{R})}\right)^{1/2},\]
where
\[[v^{(m)}]_{L^{s}_{L}(\mathbb{R})}:=\int_{-L}^{L}\int_{-L}^{L}[v^{(m)}(t)-v^{(m )}(\tau)]^{2}\widehat{\mathcal{K}}_{s,L}(t-\tau){\rm d}\tau{\rm d}t.\]
We also define the following higher-order functional space
\[H^{\sigma}_{L}(\mathbb{R})=\{v\in\mathcal{C}^{2\sigma}_{L}(\mathbb{R}):\ \|v\|_{H^{ \sigma}_{L}(\mathbb{R})}<\infty\}.\]
Furthermore, taking into account evenness and periodicity, we set
\[H^{\sigma}_{L,*}(\mathbb{R})=\{v\in H^{\sigma}_{L}(\mathbb{R}):v(t)=v(-t)\text{ and }v(t+2L)=v(t)\text{ for all }t\in\mathbb{R}\}.\]
Likewise, taking into consideration the boundary conditions, we set
\[H^{\sigma}_{L,0}(\mathbb{R})=\{v\in H^{\sigma}_{L}(\mathbb{R}):v^{(\ell)}(-L)=v ^{(\ell)}(L)=0\text{ for }\ell\in\{1,\ldots,m\}\}.\]
Finally, a suitable space to work in is
\[H^{\sigma}_{L,0,*}(\mathbb{R})=H^{\sigma}_{L,0}(\mathbb{R})\cap H^{\sigma}_{L, *}(\mathbb{R}).\]
Here \(\widehat{\mathcal{K}}_{s,L}:(-L,L)\times(-L,L)\to\mathbb{R}\) given by
\[\widehat{\mathcal{K}}_{s,L}(t-\tau)=\sum_{j\in\mathbb{Z}}\widehat{\mathcal{K} }_{s}(t-\tau-jL) \tag{4.15}\]
is a periodic kernel, where \(\widehat{\mathcal{K}}_{s}:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) is defined as in (4.9).
Now we recall a standard fractional Sobolev embedding into Holder spaces from [28, Theorem 8.2].
**Lemma C**.: _Let \(s\in(0,1)\) and \(n>2s\). Assume that \(p\in[1,+\infty)\). Then, there exists a constant \(C>0\), depending on \(s\) and \(p\), such that_
\[\|v\|_{\mathcal{C}^{0,\alpha}_{L}(\mathbb{R})}\leqslant C\left(\|v\|_{L^{p}_{ L}(\mathbb{R})}^{p}+\int_{-L}^{L}\int_{-L}^{L}\frac{|v(t)-v(\tau)|^{p}}{|t-\tau|^{1+ sp}}\mathrm{d}t\mathrm{d}\tau\right)^{\frac{1}{p}} \tag{4.16}\]
_for any \(v\in L^{p}_{L}(\mathbb{R})\), where \(\alpha=s-\frac{1}{p}\)._
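As a sample instance of Lemma C (the exponents below are chosen only for illustration), taking \(p=2\) and \(s=3/4\) gives \(\alpha=1/4\) and

\[\|v\|_{\mathcal{C}^{0,1/4}_{L}(\mathbb{R})}\leqslant C\left(\|v\|_{L^{2}_{L}(\mathbb{R})}^{2}+\int_{-L}^{L}\int_{-L}^{L}\frac{|v(t)-v(\tau)|^{2}}{|t-\tau|^{5/2}}\mathrm{d}t\mathrm{d}\tau\right)^{\frac{1}{2}}.\]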
**Lemma 4.8**.: _Let \(\sigma\in(1,+\infty)\) with \(n>2\sigma\). Assume that \(p\in[1,+\infty)\) is such that \(\sigma-\frac{1}{p}\notin\mathbb{Z}\). Then, there exists a constant \(C>0\), depending on \(\sigma\) and \(p\), such that_
\[\|v\|_{\mathcal{C}^{\ell,\alpha}_{L}(\mathbb{R})}\leqslant C\left(\|v\|_{W^{m, p}_{L}(\mathbb{R})}^{p}+\int_{-L}^{L}\int_{-L}^{L}\frac{|v^{(m)}(t)-v^{(m)}(\tau)| ^{p}}{|t-\tau|^{1+sp}}\mathrm{d}t\mathrm{d}\tau\right)^{\frac{1}{p}} \tag{4.17}\]
_for any \(v\in W^{\sigma,p}_{L}(\mathbb{R})\), where \(\ell=\lfloor\sigma-\frac{1}{p}\rfloor\) and \(\alpha=\sigma-\frac{1}{p}-\lfloor\sigma-\frac{1}{p}\rfloor\)._
Proof.: It is a direct consequence of Lemma C by using a standard induction argument.
We also need the following strong maximum principle.
**Lemma 4.9**.: _Let \(\sigma\in(1,+\infty]\) and \(n>2\sigma\). If \(v\in H^{\sigma}_{L}(\mathbb{R})\cap\mathcal{C}^{0}(\mathbb{R})\) is a nonnegative solution to \((\mathcal{O}^{\prime}_{2\sigma,L})\), then either \(v>0\) or \(v\equiv 0\)._
Proof.: Indeed, since \(v\geqslant 0\), it follows that
\[v=(-\Delta)^{-\sigma}_{\mathrm{cyl}}(f_{\sigma}\circ v)\geqslant 0. \tag{4.18}\]
Assume that there exists a point \(t_{0}\in\mathbb{R}\) with \(v(t_{0})=0\). Then, evaluating (4.18) at \(t_{0}\),
\[0=v(t_{0})=\mathrm{p.v.}\int_{-\infty}^{+\infty}\widehat{\mathcal{R}}_{\sigma}(t_{0}-\tau)f_{\sigma}(v(\tau))\mathrm{d}\tau.\]
Since the kernel \(\widehat{\mathcal{R}}_{\sigma}\) is positive and \(f_{\sigma}(v)\geqslant 0\), this forces \(f_{\sigma}(v(\tau))=0\) for almost every \(\tau\in\mathbb{R}\), and hence, again by (4.18), \(v\equiv 0\).
Now we have the most important lemma in this section
**Lemma 4.10**.: _Let \(\sigma\in(1,+\infty)\) and \(n>2\sigma\). For any \(L\gg 1\) sufficiently large, there exist a sequence of periods \((L_{j})\in\ell^{\infty}(\mathbb{R}_{+})\), an error function \(\psi_{(0,L_{j})}\in H^{\sigma}_{L}(\mathbb{R})\) and a unique positive even solution \(\bar{v}_{(0,L_{j})}\in H^{\sigma}_{L}(\mathbb{R})\) to the following periodic boundary value problem_
\[\begin{cases}v=(-\Delta)^{-\sigma,L}_{\rm{cyl}}(f_{\sigma}\circ v)\quad\text{ in }\quad(-L,L),\\ v^{(\ell)}(-L)=v^{(\ell)}(L)=0\quad\text{ for }\quad\ell=1,3,\ldots,2m-1, \end{cases}\] ( \[\overline{\mathcal{O}}^{\prime}_{2\sigma,L}\] )
_which satisfy_
\[\bar{v}_{(0,L_{j})}(t)=\widehat{V}^{+}_{(0,L_{j})}(t)+\psi_{(0,L_{j})}(t)\]
_and_
\[\|\psi_{(0,L_{j})}\|_{H^{\sigma}(\mathbb{R})}\to 0\quad\text{as}\quad L\to+\infty,\]
_where \(\widehat{V}^{+}_{(0,L_{j})}=\sum_{j\in\mathbb{Z}}V_{(0,L_{j})}(t)\) with \(V_{(0,L_{j})}(t)=\cosh(t-L_{j})^{-\gamma_{\sigma}}\) and \(L_{j}=2jL\) for \(j\in\mathbb{Z}\) is the standard bubble tower solution \((\)see Definition 5.2\()\). Moreover, we have the following Holder estimate_
\[\|\psi_{(0,L_{j})}\|_{\mathcal{C}^{2\sigma}_{L}(\mathbb{R})}\lesssim e^{- \gamma_{\sigma}L(1+\xi)} \tag{4.19}\]
_for some \(\alpha\in(0,1)\) and \(\xi>0\) independent of the period \(L\gg 1\) large._
Proof.: First, by symmetry, the standard bubble tower \(\widehat{V}^{+}_{(0,L_{j})}\in\mathcal{C}^{2\sigma}_{L}(\mathbb{R})\) given by (5.2) satisfies the boundary conditions at \(t=\pm L\), that is, \(\widehat{V}^{+}_{(0,L_{j})}\in H^{\sigma}_{L,0}(\mathbb{R})\). Now writing \(v=\widehat{V}^{+}_{(0,L_{j})}+\psi\), we can reformulate \((\mathcal{O}^{\prime}_{2\sigma,L})\) as
\[\mathscr{N}_{\sigma,L}(\widehat{V}^{+}_{(0,L_{j})}+\psi)=0\quad\text{in} \quad\mathbb{R},\]
where
\[\mathscr{N}_{\sigma,L}(v):=v-(-\Delta)^{-\sigma,L}_{\rm{cyl}}(f_{\sigma}\circ v). \tag{4.20}\]
From now on, let us fix the notation
\[\mathscr{N}_{\sigma}(0,L_{j})(\psi):=\mathscr{N}_{\sigma,L}(\widehat{V}^{+}_{ (0,L_{j})}+\psi)\]
Next, by linearizing this functional around the standard bubble tower solution, we find
\[\mathscr{L}_{\sigma}(0,L_{j})(\psi)=\mathscr{E}_{\sigma}(0,L_{j})(\widehat{V} ^{+}_{(0,L_{j})})+\mathscr{S}_{\sigma}(0,L_{j})(\psi), \tag{4.21}\]
where \(\mathscr{L}_{\sigma}(0,L_{j}):H^{\sigma}_{L}(\mathbb{R})\to H^{\sigma}_{L}( \mathbb{R})\) defined as \(\mathscr{L}_{\sigma}(0,L_{j}):={\rm d}\mathscr{N}_{\sigma}[\widehat{V}^{+}_{( 0,L_{j})}]\) satisfies
\[\mathscr{L}_{\sigma}(0,L_{j})(\psi):=\psi-\mathscr{K}_{\sigma}(0,L_{j})(\psi), \tag{4.22}\]
where
\[\mathscr{K}_{\sigma}(0,L_{j})(\psi):=(-\Delta)^{-\sigma}(f^{\prime}_{\sigma} \circ\widehat{V}^{+}_{(0,L_{j})})\psi=\int_{-L}^{L}f^{\prime}_{\sigma}( \widehat{V}^{+}_{(0,L_{j})})\psi\widehat{\mathcal{R}}_{\sigma,L}(t-\tau){\rm d}\tau \tag{4.23}\]
represents the derivative of the nonlinear functional (4.20) at the standard bubble tower solution (5.2). Also, the superlinear term \(\mathscr{S}_{\sigma}(0,L_{j}):H^{\sigma}_{L}(\mathbb{R})\to H^{\sigma}_{L}( \mathbb{R})\) is given by
\[\mathscr{S}_{\sigma}(0,L_{j})(\psi)=\int_{-L}^{L}\left[f_{\sigma}(\widehat{V} ^{+}_{(0,L_{j})}+\psi)-f_{\sigma}(\widehat{V}^{+}_{(0,L_{j})})-f^{\prime}_{ \sigma}(\widehat{V}^{+}_{(0,L_{j})})\psi\right]\widehat{\mathcal{R}}_{\sigma,L }(t-\tau){\rm d}\tau.\]
Finally, the remainder error term is given by
\[\mathscr{E}_{\sigma}(0,L_{j})(\widehat{V}^{+}_{(0,L_{j})})=\int_{-L}^{L} \left[f_{\sigma}\left(\sum_{j\in\mathbb{Z}}V_{(0,L_{j})}(\tau)\right)-\sum_{j \in\mathbb{Z}}f_{\sigma}\left(V_{(0,L_{j})}(\tau)\right)\right]\widehat{ \mathcal{R}}_{\sigma,L}(t-\tau){\rm d}\tau, \tag{4.24}\]
which represents the error made when approximating a solution by the standard bubble tower solution (5.2).
To apply classical Fredholm theory we need to prove the following claim:
**Claim 1:** The operator \(\mathscr{K}_{\sigma}(0,L_{j}):H^{\sigma}_{L}(\mathbb{R})\to H^{\sigma}_{L}( \mathbb{R})\) defined in (4.23) is bounded and compact and satisfies
\[\|\mathscr{K}_{\sigma}(0,L_{j})(\psi)\|_{H^{\sigma}_{L}(\mathbb{R})}\lesssim\| \psi\|_{L^{2}_{L}(\mathbb{R})} \tag{4.25}\]
uniformly on \(L\gg 1\) large.
Initially, we prove that the operator is bounded and compact. Indeed, by its definition, \(\widehat{\mathcal{K}}_{\sigma,L}\in\mathcal{C}^{j,\alpha}(\mathbb{R})\) for any \(j\in\{0,\dots,m\}\) and some \(\alpha\in(0,1)\). Moreover, for each \(j\), we have
\[\left|\frac{\mathrm{d}^{j}}{\mathrm{d}t^{j}}\widehat{\mathcal{K}}_{\sigma,L}( t)\right|\lesssim e^{-c_{j}|t|} \tag{4.26}\]
uniformly on \(L\gg 1\) large. Similarly, we have
\[|\widehat{V}^{+}_{(0,L_{j})}|\lesssim e^{-c|t|}\]
uniformly on \(L\gg 1\) large. By Holder's inequality, it follows directly
\[\left\|\mathscr{K}_{\sigma}(0,L_{j})(\psi)^{(j)}\right\|_{L^{2}_{L}(\mathbb{R })}\lesssim\|\psi\|_{L^{2}_{L}(\mathbb{R})}\quad\text{for all}\quad j\in\{0, \dots,m\}\]
uniformly on \(L\gg 1\) large.
Also, using the Holder continuity of \((\widehat{\mathcal{K}}_{\sigma,L})^{(m)}\in\mathcal{C}^{0,\alpha+s}(\mathbb{ R})\), we have
\[\left|\mathscr{K}_{\sigma}(0,L_{j})(\psi)^{(m)}(\tau)-\mathscr{K }_{\sigma}(0,L_{j})(\psi)^{(m)}(t)\right|\] \[\lesssim\int_{-L}^{L}f^{\prime}_{\sigma}(\widehat{V}^{+}_{(0,L_{j })}(\xi))\left|\frac{\mathrm{d}^{m}}{\mathrm{d}\xi^{m}}\left(\widehat{ \mathcal{K}}_{\sigma,L}(\tau-\xi)-\widehat{\mathcal{K}}_{\sigma,L}(t-\xi) \right)\right||\psi(\xi)|\mathrm{d}\xi\] \[\lesssim\int_{-L}^{L}|\psi(\xi)||t-\tau|^{\alpha}\mathrm{d}\xi.\]
Thus, by using the asymptotic behavior of the kernel near the origin given by (4.11) and (4.26) combined with the last inequality, we obtain
\[\left[\mathscr{K}_{\sigma}(0,L_{j})(\psi)\right]_{L^{s}_{L}( \mathbb{R})} =\int_{-L}^{L}\left|\mathscr{K}_{\sigma}(0,L_{j})(\psi)^{(m)}(\tau )-\mathscr{K}_{\sigma}(0,L_{j})(\psi)^{(m)}(t)\right|^{2}\widehat{\mathcal{K} }_{\sigma,L}(t-\tau)\mathrm{d}\tau\mathrm{d}t\] \[\lesssim\|\psi\|_{L^{2}_{L}(\mathbb{R})}\,,\]
uniformly on \(L\gg 1\) large, which proves (4.25). In conclusion, by compact embedding, the desired conclusion holds for the map \(\mathscr{K}_{\sigma}(0,L_{j}):H^{\sigma}_{L}(\mathbb{R})\to H^{\sigma}_{L}( \mathbb{R})\).
The proof of the first claim is now finished.
Second, in order to apply the Fredholm alternative and conclude that for any \(h\in L^{2}_{L}(\mathbb{R})\) there exists a unique solution \(\psi_{(0,L_{j})}\in H^{\sigma}_{L}(\mathbb{R})\) to the linear inhomogeneous problem
\[\psi_{(0,L_{j})}-\mathscr{K}_{\sigma}(0,L_{j})(\psi_{(0,L_{j})})=h\quad\text{ in}\quad(-L,L).\]
One needs to prove the uniqueness result below:
**Claim 2:** The linear homogeneous equation
\[\psi-\mathscr{K}_{\sigma}(0,L_{j})(\psi)=0\quad\text{in}\quad(-L,L)\]
admits only zero solutions in \(L^{2}_{L}(\mathbb{R})\).
As a matter of fact, note that the equation above with Holder's inequality yields directly that
\[\|\psi\|_{L^{\infty}_{L}(\mathbb{R})}\lesssim\|\psi\|_{L^{2}_{L}(\mathbb{R})}\,. \tag{4.27}\]
Next, we use the nondegeneracy of the standard bubble solution in Lemma 4.6 to conclude that \(\psi\equiv 0\), and thus we prove Claim 2.
Lastly, by a standard fixed-point argument, there exists a unique solution \(\psi_{(0,L_{j})}\in H^{\sigma}_{L}(\mathbb{R})\) to (4.21) satisfying the estimate
\[\|\psi_{(0,L_{j})}\|_{H^{\sigma}_{L,0}(\mathbb{R})}\lesssim\|\mathscr{E}_{\sigma }(0,L_{j})(\widehat{V}^{+}_{(0,L_{j})})\|_{L^{2}_{L}(\mathbb{R})}, \tag{4.28}\]
Hence, to conclude the proof, we are left to obtain estimates for the right-hand side of the last inequality.
This is the content of our third claim.
**Claim 3:** It holds that \(\|\psi\|_{H^{\sigma}_{L,0}(\mathbb{R})}\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\) for some \(\xi>0\) uniformly on \(L\gg 1\) large. In fact, using (5.2) it follows
\[\mathscr{E}_{\sigma}(0,L_{j})(\widehat{V}^{+}_{(0,L_{j})})=c_{n,\sigma}\int_{- L}^{L}\left[\left(\sum_{j\in\mathbb{Z}}V_{(0,L_{j})}(\tau)\right)^{\frac{n+2 \sigma}{n-2\sigma}}-\left(\sum_{j\in\mathbb{Z}}V_{(0,L_{j})}(\tau)^{\frac{n+2 \sigma}{n-2\sigma}}\right)\right]\widehat{\mathcal{R}}_{\sigma,L}(t-\tau) \mathrm{d}\tau.\]
Since by symmetry, we have \(V_{(0,-L_{j})}(t)\leqslant V_{(0,L_{j})}(t)\) for \(t\geqslant 0\), it holds
\[|\mathscr{E}_{\sigma}(0,L_{j})(\widehat{V}^{+}_{(0,L_{j})})| \lesssim\int_{-L}^{L}\left(V^{\frac{4\sigma}{n-2\sigma}}_{(0, \infty)}\sum_{j\in\mathbb{Z}^{*}}V_{(0,L_{j})}(\tau)+\sum_{j\in\mathbb{Z}^{*}} V_{(0,L_{j})}(\tau)^{\frac{n+2\sigma}{n-2\sigma}}\right)\widehat{\mathcal{R}}_{ \sigma,L}(t-\tau)\mathrm{d}\tau \tag{4.29}\] \[\lesssim\sum_{j\in\mathbb{Z}^{*}}\int_{-L}^{L}V_{(0,\infty)}(\tau )^{\frac{4\sigma}{n-2\sigma}}V_{(0,L_{j})}(\tau)\widehat{\mathcal{R}}_{\sigma,L}(t-\tau)\mathrm{d}\tau+\sum_{j\in\mathbb{Z}^{*}}\int_{-L}^{L}V_{(0,L_{j})}( \tau)^{\frac{n+2\sigma}{n-2\sigma}}\widehat{\mathcal{R}}_{\sigma,L}(t-\tau) \mathrm{d}\tau.\]
From (4.29), we find
\[\int_{-L}^{L}|\mathscr{E}_{\sigma}(0,L_{j})(\widehat{V}^{+}_{(0,L _{j})})|^{2}\mathrm{d}t\] \[\lesssim\int_{-L}^{L}\left(\sum_{j\in\mathbb{Z}^{*}}\int_{-L}^{L} V_{(0,\infty)}(\tau)^{\frac{8\sigma}{n-2\sigma}}V_{(0,L_{j})}(\tau)^{2}\widehat{ \mathcal{R}}_{\sigma,L}(t-\tau)^{2}\mathrm{d}\tau+\sum_{j\in\mathbb{Z}^{*}} \int_{-L}^{L}V_{(0,L_{j})}(\tau)^{\frac{2n+4\sigma}{n-2\sigma}}\widehat{ \mathcal{R}}_{\sigma,L}(t-\tau)^{2}\mathrm{d}\tau\right)\mathrm{d}t\] \[\lesssim\sum_{j\in\mathbb{Z}^{*}}\int_{-L}^{L}\int_{-L}^{L}V_{(0, \infty)}(\tau)^{\frac{8\sigma}{n-2\sigma}}V_{(0,L_{j})}(\tau)^{2}\widehat{ \mathcal{R}}_{\sigma,L}(t-\tau)^{2}\mathrm{d}\tau\mathrm{d}t+\sum_{j\in \mathbb{Z}^{*}}\int_{-L}^{L}\int_{-L}^{L}V_{(0,L_{j})}(\tau)^{\frac{2n+4\sigma} {n-2\sigma}}\widehat{\mathcal{R}}_{\sigma,L}(t-\tau)^{2}\mathrm{d}\tau\] \[=:I_{1}+I_{2}. \tag{4.30}\]
To estimate these two terms, we fix \(\alpha\in(0,1)\) and subdivide \(\mathbb{R}=\{|t|\leqslant\alpha L\}\cup\{|t|\geqslant\alpha L\}\). Then, we use the exponential decay of the standard bubble solution from Proposition B to obtain
\[\sum_{j\in\mathbb{Z}^{*}}V_{(0,L_{j})}\lesssim e^{-\gamma_{\sigma}L(2-\alpha)} \quad\text{and}\quad\sum_{j\in\mathbb{Z}^{*}}V_{(0,L_{j})}\lesssim e^{-\gamma _{\sigma}L}. \tag{4.31}\]
Hence, by substituting in (4.31) into the first term in (4.30), we obtain
\[I_{1} =\sum_{j\in\mathbb{Z}^{*}}\int_{-L}^{L}\int_{-L}^{L}V_{(0,\infty) }(\tau)^{\frac{8\sigma}{n-2\sigma}}V_{(0,L_{j})}(\tau)^{2}\widehat{\mathcal{R }}_{\sigma,L}(t-\tau)^{2}\mathrm{d}\tau\mathrm{d}t\] \[\lesssim e^{-2\gamma_{\sigma}L(2-\alpha)}+e^{-2\gamma_{\sigma}L \left(\frac{4\sigma\alpha}{n-2\sigma}\right)}e^{-2\gamma_{\sigma}L}e^{-2\gamma_ {\sigma}L\left(\frac{n+2\sigma}{n-2\sigma}\right)} \tag{4.32}\] \[\lesssim e^{-2\gamma_{\sigma}L(2-\alpha)}+e^{-2\gamma_{\sigma}L(1+ \xi)}\]
for some \(\xi>0\) (depending only on \(n\), \(\sigma\), and \(\alpha\)), where we used the asymptotic behavior of the Kernel (4.12) for \(\tau\to+\infty\) large and the fact that it is bounded for \(\tau\to 0\) small.
Furthermore, by substituting (4.31) into the second term in (4.30), we have
\[I_{2}=\sum_{j\in\mathbb{Z}^{*}}\int_{-L}^{L}\int_{-L}^{L}V_{(0,L_{j})}(\tau)^{ \frac{2n+4\sigma}{n-2\sigma}}\widehat{\mathcal{R}}_{\sigma,L}(t-\tau)^{2} \mathrm{d}\tau\mathrm{d}t\lesssim e^{-2\gamma_{\sigma}L(2-\alpha)}+e^{-2\gamma _{\sigma}L(\frac{n+2\sigma}{n-2\sigma})}. \tag{4.33}\]
In conclusion, by substituting (4.32) and (4.33) into (4.29), we have
\[\|\mathscr{E}_{\sigma}(0,L_{j})(\widehat{V}_{(0,L_{j})}^{+})\|_{L^{2}(\mathbb{ R})}\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\]
for some \(\xi>0\) uniformly on \(L\gg 1\) large, which combined with (4.28) proves the third claim.
Finally, by standard estimates in Lemma 4.8 combined with the regularity lifting theorem from [23, Theorem 3.3.1] applied to (4.21), it follows that \(\psi_{(0,L_{j})}\in H^{\sigma}_{L,0,*}(\mathbb{R})\) is smooth and satisfies
\[\|\psi_{(0,L_{j})}\|_{\mathcal{C}^{2\sigma+\alpha}(\mathbb{R})}\lesssim e^{- \gamma_{\sigma}L(1+\xi)}\]
for some \(\xi>0\) independent of \(L\gg 1\) large.
Therefore, the maximum principle in Lemma 4.9 concludes the proof of the lemma.
**Remark 4.11**.: _It is worth noticing that the last proof differs in spirit from the local inversion technique in [36, Proposition 2.3]. Instead of using this method, we give an alternative proof based on the dual formulation from Proposition 3.3. This technique is of independent interest, as it applies to a larger class of integral equations not necessarily arising as the dual of a differential equation._
**Remark 4.12**.: _We notice that it is straightforward to extend the local inversion method in [36, Proposition 2.3] at least to the higher order local cases \(\sigma=m\in\mathbb{N}\). To see this, we write the poly-harmonic operator in Emden-Fowler coordinates, which gives us_
\[(-\Delta)^{m}_{\mathrm{cyl}}:=(-\Delta)^{m}_{\mathrm{rad}}+(-\Delta)^{m}_{ \mathrm{ang}}\]
_with_
\[(-\Delta)^{m}_{\mathrm{rad}}:=\partial_{t}^{(m)}-K^{(0)}_{2m-2}\partial_{t}^{ (2m-2)}+\cdots+(-1)^{m}K^{(0)}_{1}\partial_{t}^{(1)}+(-1)^{m+1}K^{(0)}_{0},\]
_and_
\[(-\Delta)^{m}_{\mathrm{ang}}:=\sum_{\ell=1}^{2m}\sum_{j=0}^{2m}(-1)^{\frac{j+2 }{2}}K^{(\ell)}_{2m,j}\partial_{t}^{(j)}(-\Delta_{\theta})^{\ell},\]
_where \(K^{(\ell)}_{2m,j}=K^{(\ell)}_{2m,j}(n)>0\) for \(j\in\{0,\dots,2m\}\) and \(\ell\in\{1,\dots,2m\}\) are dimensional constants. For this computation, we refer the interested reader to [5]. After that, we need to build on the classification result for standard bubbles associated with the critical Sobolev embedding \(H^{m}(\mathbb{R}^{n})\hookrightarrow L^{2^{*}_{m}}(\mathbb{R}^{n})\), where \(2^{*}_{m}:=\frac{2n}{n-2m}\), from [54], together with a standard nondegeneracy argument as in Lemma 4.6._
As an immediate consequence of the last lemma, one has
**Corollary 4.13**.: _Let \(\sigma\in(1,+\infty)\) and \(n>2\sigma\). For any \(L\gg 1\) sufficiently large, there exist a sequence of periods \((L_{j})\in\ell^{\infty}(\mathbb{R}_{+})\), an error function \(\psi_{(0,L_{j})}\in H^{\sigma}_{L}(\mathbb{R})\) and a unique positive even periodic solution \(\bar{v}_{(0,L_{j})}\in H^{\sigma}_{L}(\mathbb{R})\) to \((\mathcal{O}^{\prime}_{2\sigma,L})\) satisfying_
\[\bar{v}_{(0,L_{j})}(t)=\widehat{V}_{(0,L_{j})}^{+}(t)+\psi_{(0,L_{j})}(t)\]
_and_
\[\|\psi_{(0,L_{j})}\|_{H^{\sigma}_{L}(\mathbb{R})}\to 0\quad\mathrm{as} \quad L\to+\infty,\]
_where \(\widehat{V}_{(0,L_{j})}^{+}\in\mathcal{C}^{2\sigma}(\mathbb{R})\) is the standard bubble tower solution given by (5.5). Moreover, we have the following Holder estimate_
\[\|\psi_{(0,L_{j})}\|_{\mathcal{C}^{2\sigma}_{L}(\mathbb{R})}\lesssim e^{- \gamma_{\sigma}L(1+\xi)} \tag{4.34}\]
_for some \(\alpha\in(0,1)\) and \(\xi>0\) independent of the period \(L\gg 1\) large._
Since \((\mathcal{O}_{2\sigma,\infty})\) is translation invariant, from now on we will use the periodic solution \(\bar{v}_{(0,L_{j})}\), which attains its minima at the points \(t=2jL\) with \(j\in\mathbb{Z}\). Indeed, using Lemma 4.10, this periodic solution can be expressed as a perturbation of a bubble tower with a controlled error.
**Definition 4.14**.: _Let \(\sigma\in(1,+\infty)\) and \(n>2\sigma\). For any \(L\gg 1\) sufficiently large, let us define the generalized bubble tower solution_
* (_Emden-Fowler coordinates_) \[\bar{v}_{(0,L_{j})}(t):=\widehat{V}^{+}_{(0,L_{j})}(t)+\psi_{(0,L_{j})}(t),\] (4.35) _where_ \(\widehat{V}^{+}_{(0,L_{j})}\in\mathcal{C}^{2\sigma}(\mathbb{R})\) _is the standard half bubble tower solution given by (_5.6_) and_ \(\psi_{(0,L_{j})}\in\mathcal{C}^{2\sigma}(\mathbb{R})\) _the perturbation function constructed in Corollary_ 4.13_. More precisely, one has_ \[\widehat{V}^{+}_{(0,L_{j})}(t)=\sum_{j\in\mathbb{N}}\cosh(t-L_{j}-L)^{\gamma _{\sigma}},\quad\text{where}\quad L_{j}=(1+2j)L\quad\text{for}\quad j\in \mathbb{N}.\]
* (_Spherical coordinates_) \[\bar{u}_{(0,L_{j})}(x):=\widehat{U}^{+}_{(0,L_{j})}(x)+\phi_{(0,L_{j})}(x),\] (4.36) _where_ \(\widehat{U}^{+}_{(0,L_{j})}\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\Sigma)\) _is the standard half bubble tower solution given by (_5.3_) and_ \(\phi_{(0,L_{j})}\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\Sigma)\) _is the perturbation function constructed in Corollary_ 5.1_. More precisely, we have_ \[\widehat{U}^{+}_{(0,L_{j})}(x)=\sum_{j\in\mathbb{N}}\left(\frac{\lambda_{j}}{ \lambda_{j}^{2}+|x|^{2}}\right)^{\gamma_{\sigma}},\quad\text{where}\quad \lambda_{j}=e^{-(1+2j)L}\quad\text{for}\quad j\in\mathbb{N}.\]
We next derive finer asymptotics near the isolated singularities for the deformed solution obtained in Lemma 4.10. These refined estimates, expressed in terms of the bubble tower solution, will be crucial when estimating the error of our approximate solution in the gluing procedure of Section 7.
**Lemma 4.15**.: _The following asymptotics holds_
\[\bar{v}_{(0,L_{j})}(t)=v_{\rm sph}(t)(1+{\rm o}(1))\quad\text{\rm as}\quad L \to+\infty, \tag{4.37}\]
_or undoing the Emden-Fowler change of variables, it holds_
\[\bar{u}_{(0,L_{j})}(x)=u_{\rm sph}(|x|)(1+{\rm o}(1))\quad\text{\rm as}\quad L\to+\infty.\]
_Moreover, one has_
\[\bar{u}_{(0,L_{j})}(x)=|x|^{-(n-2\sigma)}e^{-\gamma_{\sigma}L}(1+{\rm o}(1))\quad\text{\rm as}\quad L\to+\infty \tag{4.38}\]
_and_
\[\varepsilon_{L}:=\bar{v}_{(0,L_{j})}(0)=e^{-\gamma_{\sigma}L}(1+{\rm o}(1)) \quad\text{\rm as}\quad L\to+\infty.\]
_This parameter is called the neck size or Delaunay parameter._
Proof.: Notice that for \(t\leqslant 0\) (equivalently, \(|x|\geqslant 1\)), the lemma follows by combining Corollary 4.13 with the exponential decay of the standard spherical solution in Emden-Fowler coordinates, which proves (4.37).
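The neck-size asymptotics above can also be checked numerically. The sketch below is only a rough illustration, not part of the argument: it sums the half bubble tower of Definition 4.14 in spherical coordinates (ignoring the perturbation term \(\phi_{(0,L_{j})}\)) and compares its value on the unit sphere, corresponding to \(t=0\), with \(e^{-\gamma_{\sigma}L}\). Here we assume the standard convention \(\gamma_{\sigma}=(n-2\sigma)/2\), which is used throughout but not restated in this section.

```python
import numpy as np

def half_bubble_tower(r, L, n, sigma, jmax=200):
    """Half bubble tower sum_{j in N} (lambda_j/(lambda_j^2 + r^2))^gamma with
    lambda_j = exp(-(1+2j) L), evaluated at |x| = r (spherical coordinates)."""
    gamma = (n - 2.0 * sigma) / 2.0          # assumed convention for gamma_sigma
    j = np.arange(jmax)
    lam = np.exp(-(1.0 + 2.0 * j) * L)
    return np.sum((lam / (lam ** 2 + r ** 2)) ** gamma)

n, sigma = 4, 1.5                            # any admissible choice with n > 2*sigma
gamma = (n - 2 * sigma) / 2
for L in (2.0, 4.0, 6.0, 8.0):
    eps = half_bubble_tower(1.0, L, n, sigma)   # value at |x| = 1, i.e. t = 0
    print(f"L = {L}:  tower(1) = {eps:.3e},  exp(-gamma*L) = {np.exp(-gamma * L):.3e}")
```

As \(L\) grows, the ratio of the two printed quantities tends to \(1\), in agreement with the neck-size parameter \(\varepsilon_{L}\) above.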
## 5. Approximate solution
In this section we construct a suitable approximate solution to \((\mathcal{Q}^{\prime}_{2\sigma,\Sigma})\) and prove some estimates on its behavior near the singular set. As mentioned before, one of the main ideas is that, although the approximate solution should have a Delaunay-type singularity around each isolated singular point, it must decay fast away from the singular set in order to be glued to the flat background. To this end, we only take half a Delaunay solution (that is, only the values \(j\in\mathbb{N}\)).
### Local asymptotic behavior
In this subsection, we study the local behavior of solutions to \((\mathcal{Q}^{\prime}_{2\sigma,\infty})\) near the isolated singularity at the origin. Namely, we show that near the origin such a solution can be approximated by a bubble tower solution. This is in contrast with the cases \(\sigma\in\{1,2,3\}\), for which a complete classification of this local behavior is available in terms of the two-parameter family of Delaunay solutions studied in [5, 18, 30], inspired by the classical results of Korevaar _et al._[40] for \(\sigma=1\) and of Caffarelli _et al._[16] for \(\sigma\in(0,1)\). These bubble towers will be the building blocks in the construction of suitable approximate solutions to \((\mathcal{Q}^{\prime}_{2\sigma,\infty})\).
First, recall the local asymptotic classification result from [37].
**Proposition B**.: _Let \(\sigma\in(1,+\infty)\) and \(n>2\sigma\)._
1. _Assume that_ \(R=+\infty\)_. For any_ \(L\gg 1\) _sufficiently large, there exists a blow-up limit solution to_ \((\mathcal{Q}^{\prime}_{2\sigma,\infty})\) _denoted by_ \(u_{(0,L)}\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\{0\})\) _and given by_ \[u_{(0,L)}(x)=(\mathfrak{F}_{\sigma})^{-1}\left(v_{(0,L)}\right)=|x|^{-\gamma_{\sigma}}v_{(0,L)}(-\ln|x|),\] _where_ \(v_{(0,L)}\in\mathcal{C}^{2\sigma}(\mathbb{R})\) _is a bounded periodic even solution to_ \((\mathcal{O}^{\prime}_{2\sigma,L})\)_. In addition, one has_ \[v_{(0,L)}(t)=\mathcal{O}(e^{-\gamma_{\sigma}L})\quad\text{as}\quad t\to+\infty.\] _These will be called Delaunay solutions._
2. _Assume that_ \(0<R<+\infty\)_. If_ \(u\in\mathcal{C}^{2\sigma}(B^{*}_{R})\) _is a positive singular solution to_ \((\mathcal{Q}^{\prime}_{2\sigma,R})\)_, then there exists a Delaunay solution with a large period, denoted by_ \(u_{(0,L)}\)_, such that_ \[u(x)=u_{(0,L)}(x)(1+\mathrm{o}(1))\quad\text{as}\quad|x|\to 0,\] _or_ \[v(t)=v_{(0,L)}(t)(1+\mathrm{o}(1))\quad\text{as}\quad t\to+\infty,\] _where_ \(L\gg 1\) _is sufficiently large._
In addition, rewriting Lemma 4.15 by means of the Emden-Fowler change of variables, we can reformulate it as an improvement of the result above.
**Corollary 5.1**.: _Let \(\sigma\in(1,+\infty)\) and \(n>2\sigma\). For any \(L\gg 1\) sufficiently large, there exist a sequence of periods \((L_{j})\in\ell^{\infty}(\mathbb{R}_{+})\), an error function \(\phi_{(0,L_{j})}\in H^{\sigma}_{L}(\mathbb{R}^{n}\setminus\{0\})\), and a unique positive even solution \(\bar{u}_{(0,L_{j})}\in H^{\sigma}_{L}(\mathbb{R}^{n}\setminus\{0\})\) to \((\mathcal{Q}^{\prime}_{2\sigma,\infty})\) satisfying_
\[\bar{u}_{(0,L_{j})}(x)=\widehat{U}^{+}_{(0,L_{j})}(x)+\phi_{(0,L_{j})}(x),\]
_and_
\[\|\phi_{(0,L_{j})}\|_{H^{\sigma}_{L}(\mathbb{R}^{n}\setminus\{0\})}\to 0 \quad\text{as}\quad L\to+\infty,\]
_where \(\widehat{U}^{+}_{(0,L_{j})}\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\{0\})\) is the standard half bubble tower solution given by (5.3). Moreover, we have the following Hölder estimate_
\[\|\phi_{(0,L_{j})}\|_{\mathcal{C}^{2\sigma+\alpha}(\mathbb{R}^{n}\setminus\{0\})}\lesssim e^{-\gamma_{\sigma}L(1+\xi)} \tag{5.1}\]
_for some \(\alpha\in(0,1)\) and \(\xi>0\) independent of \(L\gg 1\) large._
Based on the definition of a spherical solution in (4.2) and (4.13), we introduce the concept of a standard bubble tower solution. In addition, in order to have fast decay far from the singularity (\(t\to-\infty\)), we will need only half a bubble tower. This fact motivates the following definition
**Definition 5.2**.: _Let \(\sigma\in(1,+\infty)\) and \(n>2\sigma\). For any \(L\gg 1\) sufficiently large, let us define the following standard bubble tower solution_
* (_Spherical coordinates_) \[\widehat{U}_{(0,L_{j})}(x):=\sum_{j\in\mathbb{Z}}U_{(0,L_{j})}(x),\] (5.2) _and_ \[\widehat{U}_{(0,L_{j})}^{+}(x):=\sum_{j\in\mathbb{N}}U_{(0,L_{j})}(x),\] (5.3) _where_ \[U_{(0,L_{j})}(x)=\left(\frac{\lambda_{j}}{\lambda_{j}^{2}+\left|x\right|^{2} }\right)^{\gamma_{\sigma}}\quad\text{with}\quad\lambda_{j}=e^{-2jL}\quad \text{for}\quad j\in\mathbb{Z}.\] (5.4)
* (_Emden-Fowler coordinates_) \[\widehat{V}_{(0,L_{j})}(t):=\sum_{j\in\mathbb{Z}}V_{(0,L_{j})}(t),\] (5.5) _and_ \[\widehat{V}_{(0,L_{j})}^{+}(t):=\sum_{j\in\mathbb{N}}V_{(0,L_{j})}(t),\] (5.6) _where_ \[V_{(0,L_{j})}(t)=\cosh(t-L_{j})^{\gamma_{\sigma}}\quad\text{with}\quad L_{j }=2jL\quad\text{for}\quad j\in\mathbb{Z}.\] (5.7)
_These will be called the standard (half) bubble tower solutions._
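The role of the half tower versus the full tower can be visualized with the toy computation below; it is only a numerical illustration under the assumed convention \(\gamma_{\sigma}=(n-2\sigma)/2\). Away from the singularity the full tower (5.2) decays only like \(|x|^{-\gamma_{\sigma}}\) (it is periodic in Emden-Fowler coordinates), whereas the half tower (5.3) decays like \(|x|^{2\sigma-n}\), which is the fast decay needed for the gluing.

```python
import numpy as np

n, sigma, L = 4, 1.5, 3.0
gamma = (n - 2 * sigma) / 2                  # assumed convention for gamma_sigma

def tower(r, js):
    """Sum of bubbles (lambda_j/(lambda_j^2 + r^2))^gamma with lambda_j = exp(-2 j L), cf. (5.4)."""
    lam = np.exp(-2.0 * js * L)
    return np.sum((lam[:, None] / (lam[:, None] ** 2 + r[None, :] ** 2)) ** gamma, axis=0)

r = np.array([1e1, 1e2, 1e3, 1e4])
full = tower(r, np.arange(-20, 21, dtype=float))   # truncated sum over j in Z, as in (5.2)
half = tower(r, np.arange(0, 41, dtype=float))     # sum over j in N, as in (5.3)

print("full tower:  |x|^gamma  * U(x) =", full * r ** gamma)            # stays of order one
print("half tower:  |x|^(n-2s) * U(x) =", half * r ** (n - 2 * sigma))  # approaches a constant
```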
As a consequence of Corollary 4.13, we will improve the last two results. Indeed, we show that near an isolated singularity, solutions are close to some bubble tower solution up to some controlled error.
**Proposition 5.3**.: _Let \(\sigma\in(1,+\infty)\) with \(n>2\sigma\). If \(u\in\mathcal{C}^{2\sigma}(B_{R}^{*})\) is a positive smooth singular solution to \((\mathcal{Q}^{\prime}_{2\sigma,R})\) with \(R>0\), then there exist a sequence of periods \((L_{j})\in\ell^{\infty}(\mathbb{R}_{+})\) and a blow-up limit solution \(\bar{u}_{(0,L_{j})}\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\{0\})\) to \((\mathcal{Q}^{\prime}_{2\sigma,\infty})\) such that_
\[u(x)=\bar{u}_{(0,L_{j})}(x)(1+\mathrm{o}(1))\quad\text{as}\quad|x|\to 0.\]
_More precisely, one has_
\[\bar{u}_{(0,L_{j})}(x)=\widehat{U}_{(0,L_{j})}(x)+\phi_{(0,L_{j})}(x), \tag{5.8}\]
_where \(\widehat{U}_{(0,L_{j})}\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\{0\})\) is the standard bubble tower solution in (5.2) and \(\phi_{(0,L_{j})}\in H^{\sigma}(\mathbb{R}^{n})\) satisfies_
\[\|\phi_{(0,L_{j})}\|_{\mathcal{C}^{2\sigma+\alpha}(\mathbb{R}^{n}\setminus\{0\})}\lesssim e^{-\gamma_{\sigma}L(1+\xi)} \tag{5.9}\]
_for some \(\alpha\in(0,1)\) and \(\xi>0\) independent of \(L\gg 1\) large._
### Balanced configurations
Here we introduce a necessary set of compatibility conditions for the configuration parameters.
**Definition 5.4**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\), and \(N\geqslant 2\). Given \(L\gg 1\) large enough, we fix the vector \(\mathbf{L}=(L_{1},\ldots,L_{N})\in\mathbb{R}_{+}^{N}\) of Delaunay parameters, which are also related to the neck sizes of the Delaunay solutions. They will be chosen (large enough) in the proof and will satisfy the condition \(|L_{i}-L|\lesssim 1\) for all \(i\in\{1,\ldots,N\}\). More precisely, they are related to the vector \(\mathbf{q}=(q_{1},\ldots,q_{N})\in\mathbb{R}_{+}^{N}\) through the following relations_
\[q_{i}e^{-\gamma_{\sigma}L}=e^{-\gamma_{\sigma}L_{i}}\quad\text{for}\quad i\in \{1,\ldots,N\}. \tag{5.10}\]
Next, we explain the choice of parameters. Given the \(N(n+2)\) balancing parameters \((\mathbf{q}^{b},\mathbf{R}^{b},\mathbf{\hat{a}}_{0}^{b})\) satisfying the balancing conditions \((\mathscr{B}_{1})\) and \((\mathscr{B}_{2})\), we first choose \(N(n+2)\) initial perturbation parameters \((\mathbf{q},\mathbf{R},\mathbf{\hat{a}}_{0})\) which are close to the balancing parameters in the sense of (5.11) and (5.12).
**Definition 5.5**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\), and \(N\geqslant 2\). For any fixed nonnegative vector \(\mathbf{q}^{b}=(q_{1}^{b},\ldots,q_{N}^{b})\in\mathbb{R}_{+}^{N}\), let us define the vector \((\mathbf{a}_{0}^{b},\mathbf{R}^{b})=(a_{0}^{1,b},\ldots,a_{0}^{N,b},R^{1,b},\ldots,R^{N,b})\in\mathbb{R}^{(n+1)N}\) to be determined by the following balancing conditions:_
\[q_{i}^{b}=A_{2}\sum_{i^{\prime}\neq i}q_{i^{\prime}}^{b}(R^{i,b}R^{i^{\prime}, b})^{\gamma_{\sigma}}|x_{i}-x_{i^{\prime}}|^{-2\gamma_{\sigma}}\quad\text{for} \quad i\in\{1,\ldots,N\}.\] ( \[\mathscr{B}_{1}\] )
_and_
\[\frac{a_{0}^{i,b}}{(\lambda_{0}^{i,b})^{2}}=-\frac{A_{3}}{A_{1}}\sum_{i^{ \prime}\neq i}\frac{x_{i^{\prime}}-x_{i}}{|x_{i^{\prime}}-x_{i}|^{2\gamma_{ \sigma}+2}}\frac{q_{i^{\prime}}^{b}}{q_{i}^{b}}(R^{i,b}R^{i^{\prime},b})^{ \gamma_{\sigma}}\quad\text{for}\quad i\in\{1,\ldots,N\}\] ( \[\mathscr{B}_{2}\] )
_where \(\lambda_{0}^{i,b}=R^{i,b}e^{-L_{i}^{b}}\), and the \(L_{i}^{b}\in\mathbb{R}_{+}\) are defined from the \(q_{i}^{b}\in\mathbb{R}_{+}\) by (5.10) for each \(i\in\{1,\ldots,N\}\) and the constants \(A_{1},A_{2}>0,A_{3}<0\) are defined in (A.1), (A.2), and (A.3), respectively. We denote by \((\mathbf{q}^{b},\mathbf{a}_{0}^{b},\mathbf{R}^{b})\in\operatorname{Bal}_{\sigma}(\Sigma)\) the set of balanced configurations._
**Remark 5.6**.: _We remark that it has been shown in [47, Remark 3] that for \(\mathbf{q}^{b}:=(q_{1}^{b},\ldots,q_{N}^{b})\in\mathbb{R}_{+}^{N}\) in the positive orthant, there exists a solution \(\mathbf{R}^{b}=(R^{1,b},\ldots,R^{N,b})\) to equation \((\mathscr{B}_{1})\). Once this is chosen, we can use equation \((\mathscr{B}_{2})\) to determine \(\mathbf{a}_{0}^{b}=(a_{0}^{1,b},\ldots,a_{0}^{N,b})\in(\mathbb{R}^{n})^{N}\). In other words, the set of balanced configurations is non-empty, \(\operatorname{Bal}_{\sigma}(\Sigma)\neq\varnothing\), for all \(\sigma\in\mathbb{R}_{+}\)._
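For a concrete feeling of how \((\mathscr{B}_{1})\) determines \(\mathbf{R}^{b}\) from \(\mathbf{q}^{b}\), the sketch below solves it numerically for a symmetric two-point configuration with \(q_{1}^{b}=q_{2}^{b}\). This is only an illustration: the constant \(A_{2}>0\) is defined in (A.2) and is not computed here, so a placeholder value is used, and \(\gamma_{\sigma}=(n-2\sigma)/2\) is the assumed convention.

```python
import numpy as np
from scipy.optimize import brentq

n, sigma = 4, 1.5
gamma = (n - 2 * sigma) / 2     # assumed convention for gamma_sigma
A2 = 1.0                        # placeholder for the constant defined in (A.2)
d = 2.0                         # distance |x_1 - x_2| between the two singular points
q = 1.0                         # q_1^b = q_2^b = q

# symmetric ansatz R^{1,b} = R^{2,b} = R in (B_1):  q = A2 * q * R^(2*gamma) * d^(-2*gamma)
def balance(R):
    return A2 * q * R ** (2 * gamma) * d ** (-2 * gamma) - q

R = brentq(balance, 1e-8, 1e8)
print("balanced dilation parameter R =", R)
print("residual of (B_1)             =", balance(R))
```

Once \(\mathbf{R}^{b}\) is fixed in this way, \((\mathscr{B}_{2})\) determines \(\mathbf{a}_{0}^{b}\) explicitly, so no further equation needs to be solved in this toy example.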
Although the meaning of these compatibility conditions will become apparent in the following sections, we have just seen that they are analogous to those of [46] for the local case. The idea is that perturbations at the base level should be very close to those for a single bubble. This also shows, in particular, that even though our problem is nonlocal, it exhibits a local behavior very near the singularity, due to the strong influence of the underlying geometry. However, for the rest of the perturbation parameters, we must solve an infinite-dimensional system of equations.
The last discussion motivates the definition below
**Definition 5.7**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). We define the so-called configuration map \(\Upsilon_{\mathrm{conf}}:\mathbb{R}^{(n+1)N}\to\mathbb{R}^{(n+2)N}\), which associates compatible moduli space parameters \((\mathbf{x},\mathbf{L})\) with configuration parameters \((\mathbf{q},\mathbf{a_{0}},\mathbf{R})\). We say that a set of moduli space parameters \((\mathbf{x},\mathbf{L})\in\mathbb{R}^{(n+1)N}\) is compatible if its associated set of configuration parameters \((\mathbf{q},\mathbf{a_{0}},\mathbf{R})\in\operatorname{Bal}_{\sigma}(\Sigma)\) is balanced._
### Admissible perturbation parameters
We would also like to introduce perturbation parameters \(R\in\mathbb{R}\) and \(a\in\mathbb{R}^{n}\), since each standard bubble has \(n+1\) free parameters corresponding to scaling and translation; this is done independently for each bubble in the bubble tower. Thus, we obtain an infinite-dimensional set of perturbations.
**Definition 5.8**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). For any \(L\gg 1\) sufficiently large and \(\mathbf{L}=(L^{1},\ldots,L^{N})\in\ell^{\infty}(\mathbb{R}^{N}_{+})\) and \(\mathbf{R}=(R^{1},\ldots,R^{N})\in\ell^{\infty}(\mathbb{R}^{N}_{+})\) such that (5.10) holds, let us define the full set of perturbation parameters \((\mathbf{a}_{j},\mathbf{\lambda}_{j})=(a_{j}^{1},\ldots,a_{j}^{N},\lambda_{j}^{1}, \ldots,\lambda_{j}^{N})\in\ell^{\infty}(\mathbb{R}^{(n+1)N})\), where_
\[\lambda_{j}^{i}=R_{j}^{i}e^{-(1+2j)L_{i}}\quad\mathrm{for}\quad i\in\{1, \ldots,N\}\quad\mathrm{and}\quad j\in\mathbb{Z}.\]
We introduce the perturbation parameters we will use in the gluing technique
**Definition 5.9**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Let \(\mathbf{R}=(R^{1},\ldots,R^{N})\in\mathbb{R}^{N}_{+}\) and \(\mathbf{q}\in\mathbb{R}^{N}_{+}=(q_{1},\ldots,q_{N})\) be such that_
\[|R^{i}-R^{i,b}|\lesssim 1\quad\mathrm{and}\quad|q_{i}-q_{i}^{b}|\lesssim 1 \quad\mathrm{for}\quad i\in\{1,\ldots,N\}. \tag{5.11}\]
_Also, we let \(\mathbf{\lambda}_{0}^{0}=(\lambda_{0}^{1,0},\ldots,\lambda_{0}^{N,0})\in\mathbb{R}^{N}_{+}\) and \(\hat{\mathbf{a}}_{0}=(\hat{a}_{0}^{1},\ldots,\hat{a}_{0}^{N})\in(\mathbb{R}^{n})^{N}\) be respectively given by_
\[\lambda_{0}^{i,0}=R^{i}e^{-L_{i}}\quad\mathrm{for}\quad i\in\{1,\ldots,N\}\]
_and_
\[\frac{a_{0}^{i,0}}{(\lambda_{0}^{i,0})^{2}}=\hat{a}_{0}^{i}\quad\mathrm{for} \quad i\in\{1,\ldots,N\}\]
_such that_
\[|\hat{a}_{0}^{i}-\hat{a}_{0}^{i,b}|\lesssim 1, \tag{5.12}\]
_where_
\[\hat{a}_{0}^{i,b}=\frac{a_{0}^{i,b}}{(\lambda_{0}^{i,b})^{2}}.\]
_For all \(i\in\{1,\ldots,N\}\) and \(j\in\mathbb{N}\), let us define the sequence of perturbation parameters \((\mathbf{a}_{j},\mathbf{\lambda}_{j})=(a_{j}^{1},\ldots,a_{j}^{N},\lambda_{j}^{1},\ldots,\lambda_{j}^{N})\in\ell^{\infty}(\mathbb{R}^{(n+1)N})\) by_
\[R_{j}^{i}=R^{i}(1+r_{j}^{i})\quad\mathrm{and}\quad\frac{a_{j}^{i}}{(\lambda_{ j}^{i})^{2}}=\bar{a}_{j}^{i}=\hat{a}_{0}^{i}+\tilde{a}_{j}^{i}, \tag{5.13}\]
_where \((\tilde{\mathbf{a}}_{j},\mathbf{r}_{j})=(\tilde{a}_{j}^{1},\ldots,\tilde{a}_{j}^{N},r_{j}^{1},\ldots,r_{j}^{N})\in\ell^{\infty}(\mathbb{R}^{(n+1)N})\) satisfy_
\[|r_{j}^{i}|\lesssim e^{-\tau t_{j}^{i}}\quad\mathrm{and}\quad|\tilde{a}_{j}^{ i}|\lesssim e^{-\tau t_{j}^{i}}\quad\mathrm{for}\quad i\in\{1,\ldots,N\} \tag{5.14}\]
_for some \(\tau>0\), where \(t_{j}^{i}=(1+2j)L_{i}\)._
**Definition 5.10**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). We define the so-called perturbation map \(\Upsilon_{\mathrm{per}}:\mathbb{R}^{(n+2)N}\to\ell_{\tau}^{\infty}(\mathbb{R}^ {(n+1)N})\) such that it associates balanced configurations with a sequence of admissible perturbations. A sequence of perturbation parameters \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\ell^{\infty}(\mathbb{R}^{(n+1)N})\) or \((\mathbf{a}_{j},\mathbf{r}_{j})\in\ell^{\infty}(\mathbb{R}^{(n+1)N})\) is said to be admissible if the parameters satisfy_
* _For_ \(j=0\)_, the configuration parameters_ \(\Upsilon_{\mathrm{per}}^{-1}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{q},\mathbf{a}_{0},\mathbf{R})\in\mathbb{R}^{(n+2)N}\) _are balanced, that is,_ \((\mathbf{q},\mathbf{a}_{0},\mathbf{R})\in\mathrm{Bal}_{\sigma}(\Sigma)\)_;_
* _For_ \(j\geqslant 1\)_, the parameters_ \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\ell^{\infty}(\mathbb{R}^{(n+1)N})\) _satisfy the set of relations (_5.10_), (_5.11_), (_5.12_), (_5.13_), and (_5.14_)._
_We denote by \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\) the set of admissible configurations. Notice that, under (5.13), one can work interchangeably with either set of parameters. In this fashion, we call \((\mathbf{0},\mathbf{1})\in\ell^{\infty}(\mathbb{R}^{(n+1)N})\) or \((\mathbf{0},\mathbf{0})\in\ell^{\infty}(\mathbb{R}^{(n+1)N})\) the trivial configurations._
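The admissibility conditions (5.13)-(5.14) are easy to encode. The following sketch is only an illustration: it generates one sequence of perturbation parameters attached to a single point (the decay rate \(\tau\) and the particular profiles of \(r_{j}\) and \(\tilde{a}_{j}\) are arbitrary choices made for the example) and verifies that the corresponding weighted sequences remain bounded.

```python
import numpy as np

L_i, R_i, tau = 3.0, 1.2, 0.1   # Delaunay parameter, dilation parameter, decay rate (illustrative)
jmax = 50

j = np.arange(jmax)
t_j = (1.0 + 2.0 * j) * L_i                 # t_j^i = (1 + 2j) L_i
r_j = 0.5 * np.exp(-tau * t_j)              # example choice satisfying (5.14)
a_j = 0.3 * np.exp(-tau * t_j)              # one component of a_j-tilde, also satisfying (5.14)

R_ij = R_i * (1.0 + r_j)                    # first relation in (5.13)
lam_ij = R_ij * np.exp(-t_j)                # lambda_j^i = R_j^i exp(-(1+2j) L_i)

print("sup_j e^{tau t_j} |r_j| =", np.max(np.exp(tau * t_j) * np.abs(r_j)))
print("sup_j e^{tau t_j} |a_j| =", np.max(np.exp(tau * t_j) * np.abs(a_j)))
print("first few lambda_j^i    =", lam_ij[:4])
```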
### Generalized Delaunay solutions
We now define a family of approximate solutions to the problem using the Delaunay solutions from the previous section. From now on, we denote by \(\chi:\mathbb{R}^{n}\to[0,1]\) a smooth radial cut-off function such that

\[\chi(x)=\begin{cases}1,&\text{if }0<|x|\leqslant\frac{1}{2},\\ 0,&\text{if }|x|\geqslant 1,\end{cases}\]

with \(0\leqslant\chi\leqslant 1\) on the transition region \(\frac{1}{2}\leqslant|x|\leqslant 1\).
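One explicit choice of such a cut-off, written for completeness, is the standard construction below; it is not taken from this paper, but it produces a smooth radial function equal to \(1\) on \(|x|\leqslant\frac{1}{2}\) and vanishing for \(|x|\geqslant 1\).

```python
import numpy as np

def _bump(s):
    # smooth auxiliary function: zero for s <= 0, positive for s > 0
    return np.where(s > 0, np.exp(-1.0 / np.where(s > 0, s, 1.0)), 0.0)

def chi(x):
    """Smooth radial cut-off: chi = 1 for |x| <= 1/2 and chi = 0 for |x| >= 1."""
    r = np.linalg.norm(np.atleast_2d(x), axis=-1)
    a = _bump(2.0 - 2.0 * r)      # positive exactly when r < 1
    b = _bump(2.0 * r - 1.0)      # positive exactly when r > 1/2
    return a / (a + b)

pts = np.array([[0.2, 0.0], [0.6, 0.0], [1.5, 0.0]])
print(chi(pts))                   # -> [1.0, value strictly between 0 and 1, 0.0]
```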
First, one can always assume that all the balls \(B_{2}(x_{i})\) are disjoint since we may dilate the problem by some factor \(\kappa>0\) that will change the set \(\Sigma\) into \(\kappa\Sigma\) and a function \(u\) defined in \(\mathbb{R}^{n}\setminus\Sigma\) into \(\kappa^{-\gamma_{\sigma}}u(x\kappa^{-1})\) defined in \(\mathbb{R}^{n}\setminus\kappa\Sigma\).
**Definition 5.11**.: _Let \(\sigma\in(1,+\infty)\) and \(n>2\sigma\). For any \(L\gg 1\) sufficiently large and \(\mathbf{L}=(L^{1},\ldots,L^{N})\in\ell^{\infty}(\mathbb{R}^{N}_{+})\) and \(\mathbf{R}=(R^{1},\ldots,R^{N})\in\ell^{\infty}(\mathbb{R}^{N}_{+})\) such that (5.10) holds let \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\ell^{\infty}(\mathbb{R}^{(n+1)N})\) be its associated perturbation parameters. Fix \(x_{i}\in\Sigma\) for \(i\in\{1,\ldots,N\}\), let us define the following generalized bubble tower solution_
1. (_Spherical coordinates_) \[\widehat{U}_{(x_{i},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}(x):=\sum_{j\in \mathbb{Z}}U_{(x_{i},L_{j}^{i},a_{j}^{i},\lambda_{j}^{i})}(x),\] (5.15) _and_ \[\widehat{U}_{(x_{i},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}^{+}(x):=\sum_{j\in \mathbb{N}}U_{(x_{i},L_{j}^{i},a_{j}^{i},\lambda_{j}^{i})}(x),\] (5.16) _where_ \[U_{(x_{i},L_{j}^{i},a_{j}^{i},\lambda_{j}^{i})}(x)=\left(\frac{\lambda_{j}^{ i}}{\lambda_{j}^{2}+|x-a_{j}^{i}-x_{i}|^{2}}\right)^{\gamma_{\sigma}}\] (5.17) _with_ \[\lambda_{j}^{i}=R_{j}^{i}e^{-2jL_{j}^{i}}\quad\text{for}\quad j\in\mathbb{Z}.\] _and_ \[L_{j}^{i}=L_{i}-jL_{i}+\ln R_{j}^{i}\quad\text{for}\quad j\in\mathbb{Z}.\]
2. (_Emden-Fowler coordinates_) \[\widehat{V}_{(x_{i},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}(t):=\sum_{j\in\mathbb{Z}}V_{(x_{i},L_{j}^{i},a_{j}^{i},\lambda_{j}^{i})}(t),\] (5.18) _and_ \[\widehat{V}^{+}_{(x_{i},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}(t):=\sum_{j\in\mathbb{N}}V_{(x_{i},L_{j}^{i},a_{j}^{i},\lambda_{j}^{i})}(t),\] (5.19) _where_ \[V_{(x_{i},L_{j}^{i},a_{j}^{i},\lambda_{j}^{i})}(t)=\cosh(-\ln|x-x_{i}-a_{j}^{i}|-L_{j}^{i})^{\gamma_{\sigma}}.\] (5.20)
_These will be called the general \((\text{half})\) bubble tower solutions._
We now come to the central definition of this section. Although the solution below is indexed by \((\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})\), one should recall that these parameters are related through the configuration map from Definition 5.7 and the perturbation map from Definition 5.10, namely \((\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{a}_{j}(\mathbf{q},\mathbf{a}_{0},\mathbf{R}),\mathbf{\lambda}_{j}(\mathbf{q},\mathbf{a}_{0},\mathbf{R}))\) and \((\mathbf{q},\mathbf{a}_{0},\mathbf{R})=(\mathbf{q}(\mathbf{x},\mathbf{L}),\mathbf{a}_{0}(\mathbf{x},\mathbf{L}),\mathbf{R}(\mathbf{x},\mathbf{L}))\).
**Definition 5.12**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\), and \(N\geqslant 2\). For any \(L\gg 1\) sufficiently large and \(\mathbf{L}=(L^{1},\ldots,L^{N})\in\ell^{\infty}(\mathbb{R}_{+}^{N})\) and \(\mathbf{R}=(R^{1},\ldots,R^{N})\in\ell^{\infty}(\mathbb{R}_{+}^{N})\) such that (5.10) holds, let \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\ell^{\infty}(\mathbb{R}^{(n+1)N})\) be its associated perturbation parameters. We define its associated solution \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus\Sigma)\) as_
\[\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}(x)=\sum_{i=1}^{N}\bar{u} _{(x_{i},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}(x). \tag{5.21}\]
_Here_
\[\bar{u}_{(x_{i},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}(x)=:\widehat{U}_{(x_{i}, \mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}^{+}(x)+\chi_{i}(x)\phi_{(x_{i},\mathbf{L},\bm {a}_{j},\mathbf{\lambda}_{j})}(x), \tag{5.22}\]
_where \(\widehat{U}_{(x_{i},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}^{+}\in\mathcal{C}^{2 \sigma}(\mathbb{R}^{n}\setminus\Sigma)\) is the generalized bubble tower solution given by (5.16) and_
\[\phi_{(x_{i},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}(x)=\phi_{(\mathbf{L},\mathbf{a}_{j}, \mathbf{\lambda}_{j})}(x-x_{i})\quad\text{and}\quad\chi_{i}(x)=\chi(x-x_{i})\quad \text{for all}\quad i\in\{1,\ldots,N\} \tag{5.23}\]
_with \(\phi_{(\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\) the error function from Lemma 4.10. We say that \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus\Sigma)\) is an approximate solution to \((\mathcal{Q}_{2\sigma,\Sigma})\), denoted by \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\mathrm{Apx}_{\sigma}(\Sigma)\), whenever \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\). We then define the so-called solution map \(\Upsilon_{\mathrm{sol}}:\ell_{\tau}^{\infty}(\mathbb{R}^{(n+1)N})\to\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus\Sigma)\), which associates with each sequence of admissible perturbations its corresponding approximate solution._
### Normalized approximate kernels
In this subsection, we will use the aforementioned parameters to define a family of projections on the (normalized) approximate kernels. At least for low Fourier eigenmodes, this family is entirely constructed by varying the parameters in the approximate solution.
**Definition 5.13**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Assume that \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\) is an admissible configuration as in Definition 5.5 with \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\mathrm{Apx}_{\sigma} (\Sigma)\) their associated approximate solution as in Definition 5.12._
* _Let us introduce some notation of normalized approximate kernels._
* _If_ \(\ell=0\)_, we set_ \[Z^{i}_{j,0}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_ {j},\lambda^{i}_{j},a^{i}_{j})};\] _for the zero-frequency Fourier eigenmodes._
* _If_ \(\ell\in\{1,\ldots,n\}\)_, we set_ \[Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=\lambda^{i}_{j}\partial_{a^{i}_{j, \ell}}U_{(x_{i},L^{i}_{j},\lambda^{i}_{j},a^{i}_{j})}=-\lambda^{i}_{j}\partial _{x_{\ell}}U_{(x_{i},L^{i}_{j},\lambda^{i}_{j},a^{i}_{j})}.\] _for the low-frequency Fourier eigenmodes._ _We denote by_ \(\{Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\}_{(i,j,\ell)\in\mathcal{I}_{ \infty}}\subset\mathcal{C}^{0}(\mathbb{R}^{n}\setminus\Sigma)\) _the family of normalized approximate kernels._
* _Let us introduce some notation of normalized approximate cokernels._
* _If_ \(\ell=0\)_, we set_ \[\overline{Z}^{i}_{j,0}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=f^{\prime}_{\sigma}(U_{(x_{i },L^{i}_{j},\lambda^{i}_{j},a^{i}_{j})})Z^{i}_{j,0}(\mathbf{a}_{j},\mathbf{\lambda}_{j});\] _for the zero-frequency Fourier eigenmodes._
* _If_ \(\ell\in\{1,\ldots,n\}\)_, we set_ \[\overline{Z}^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=f^{\prime}_{\sigma}(U_{(x_ {i},L^{i}_{j},\lambda^{i}_{j},a^{i}_{j})})Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda} _{j})\] _for the low-frequency Fourier eigenmodes._ _We denote by_ \(\{\overline{Z}^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\}_{(i,j,\ell)\in \mathcal{I}_{\infty}}\subset\mathcal{C}^{0}(\mathbb{R}^{n}\setminus\Sigma)\) _the family of normalized approximate cokernels._
These normalized kernels satisfy some orthogonality conditions, which will be important in applying a finite-dimensional reduction.
**Lemma 5.14**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Assume that \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\operatorname{Adm}_{\sigma}(\Sigma)\) is an admissible configuration as in Definition 5.5 with \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\operatorname{Apx}_{ \sigma}(\Sigma)\) their associated approximate solution as in Definition 5.12. Then, one has_
* _If_ \(\ell\in\{1,\ldots,n\}\)_, then_ \[\int_{\mathbb{R}^{n}}\overline{Z}_{j,\ell}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j}) \overline{Z}_{j^{\prime},\ell^{\prime}}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j}) \mathrm{d}x=\frac{4(n-2\sigma)^{2}}{n}\left(\delta_{\ell,\ell^{\prime}}+ \mathrm{o}(1)\right)e^{-(\gamma_{\sigma}+1)|t_{j}^{i}-t_{j^{\prime}}^{i}|},\] (5.24) _where_ \(\delta_{\ell,\ell^{\prime}}\) _is Kronecker's delta;_
* _If_ \(\ell=0\)_, then_ \[\int_{\mathbb{R}^{n}}\overline{Z}_{j,0}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})Z_{j^ {\prime},0}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\mathrm{d}x=C_{0}(1+\mathrm{o}(1) )e^{-\gamma_{\sigma}|t_{j}^{i}-t_{j^{\prime}}^{i}|}\] (5.25) _for some_ \(C_{0}>0\)_._
Proof.: Initially, let us observe that by Lemma 4.6, the set of bounded solutions to
\[\phi-(-\Delta)^{-\sigma}(f_{\sigma}^{\prime}(U_{(x_{i},L_{j}^{i},\lambda_{j}^ {i},a_{j}^{i})})\phi)=0\quad\text{in}\quad\mathbb{R}^{n}\]
is spanned by \(\{\overline{Z}_{j,0}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j}),\ldots,\overline{Z}_{j, n}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\}\) for any \(i\in\{1,\ldots,N\}\) and \(j\in\mathbb{N}\).
Without loss of generality, assume in the following that \(x_{i}=0\). For \(\ell=0\), we will repeatedly use the following estimates
\[\left|Z_{j,0}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(x)\right|\lesssim\begin{cases} |x|^{-\gamma_{\sigma}}V_{(x_{i},L_{i},a_{j}^{i},\lambda_{j}^{i})}(-\ln|x|),& \text{if }|x|\leqslant 1,\\ |x|^{-2\gamma_{\sigma}}(\lambda_{j}^{i})^{\gamma_{\sigma}},&\text{if }|x| \geqslant 1.\end{cases} \tag{5.26}\]
In addition, we also have
\[Z_{j,\ell}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(x)=2\gamma_{\sigma}V_{(x_{i},L_{i},a_{j}^{i},\lambda_{j}^{i})}(-\ln|x|)^{\frac{\gamma_{\sigma}+1}{\gamma_{\sigma}}}|x-a_{j}^{i}-x_{i}|^{-\gamma_{\sigma}-1}\left(x-a_{j}^{i}-x_{i}\right)_{\ell},\]
Then, after recentering at \(x_{i}=0\), it is easy to see that the orthogonality conditions in (5.24) hold. Similar estimates are also valid for \(\ell=0\), so that the orthogonality condition in (5.25) is satisfied as well.
The lemma is then proved.
### Weighted functional spaces
It is convenient to define the suitable function spaces on which we will run our perturbation technique.
**Definition 5.15**.: _Let \(\alpha\in(0,1)\) and \(\zeta_{1},\zeta_{2}\in\mathbb{R}\) such that \(\zeta_{1}<0\) and \(\zeta_{2}>0\). We set the weighted norm_
\[\|u\|_{\mathcal{C}_{\zeta_{1},\zeta_{2}}^{\alpha}(\mathbb{R}^{n}\setminus \Sigma)}=\|\operatorname{dist}(x,\Sigma)^{-\zeta_{1}}u\|_{\mathcal{C}^{\alpha} (B_{1}(\Sigma))}+\||x|^{-\zeta_{2}}u\|_{\mathcal{C}^{\alpha}(\mathbb{R}^{n} \setminus B_{1}(\Sigma))}.\]
_In other words, one has that \(u\in\mathcal{C}_{\zeta_{1},\zeta_{2}}^{\alpha}(\mathbb{R}^{n}\setminus\Sigma)\) if and only if_
* (_Near the singular set_) _it is bounded by a constant times_ \(|x-x_{i}|^{\zeta_{1}}\) _and has its_ \(\ell\)_-th order partial derivatives bounded by a constant times_ \(|x-x_{i}|^{\zeta_{1}-\ell}\) _for_ \(\ell\leqslant\alpha\) _near each singular point_ \(x_{i}\in\Sigma\)_._
* (_Away from the singular set_) _it is bounded by a constant times_ \(|x|^{\zeta_{2}}\) _and has its_ \(\ell\)_-th order partial derivatives bounded by a constant times_ \(|x|^{\zeta_{2}-\ell}\) _for_ \(\ell\leqslant\alpha\)_._
_Note that we are implicitly assuming that \(0\in\Sigma\), in order to simplify the notation._
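On a finite set of sample points, the sup part of this weighted norm can be evaluated directly; the sketch below does so for a model function that behaves like \(\mathrm{dist}(x,\Sigma)^{\zeta_{1}}\) near the singular set. It is only an illustration of how the two weighted regimes in the definition are combined; the Hölder seminorm part is omitted, and the numerical values of the weights are arbitrary.

```python
import numpy as np

def weighted_sup(u, pts, Sigma, zeta1, zeta2):
    """Sup part of the norm in Definition 5.15:
    sup over B_1(Sigma) of dist(x,Sigma)^(-zeta1)|u(x)|  +  sup outside of |x|^(-zeta2)|u(x)|."""
    pts = np.atleast_2d(pts)
    d = np.min(np.linalg.norm(pts[:, None, :] - Sigma[None, :, :], axis=-1), axis=1)
    vals = np.abs(u(pts))
    near, far = d < 1.0, d >= 1.0
    near_part = np.max(d[near] ** (-zeta1) * vals[near]) if near.any() else 0.0
    far_part = np.max(np.linalg.norm(pts[far], axis=-1) ** (-zeta2) * vals[far]) if far.any() else 0.0
    return near_part + far_part

Sigma = np.array([[0.0, 0.0], [3.0, 0.0]])          # two singular points
zeta1 = -0.4                                        # model interior weight (zeta1 < 0)
u = lambda x: np.min(np.linalg.norm(x[:, None, :] - Sigma[None, :, :], axis=-1), axis=1) ** zeta1
pts = np.random.default_rng(0).uniform(-5.0, 8.0, size=(4000, 2))
print(weighted_sup(u, pts, Sigma, zeta1=zeta1, zeta2=zeta1))
```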
**Definition 5.16**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). We define the following weighted norms_
\[\|u\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}=\|u\|_{\mathcal{C}_{\min\{\zeta_{1},-\gamma_{\sigma}+\tau\},-n-2\sigma}(\mathbb{R}^{n}\setminus\Sigma)} \tag{5.27}\]
_and_
\[\|h\|_{\mathcal{C}_{**,\tau}(\mathbb{R}^{n}\setminus\Sigma)}=\|h\|_{\mathcal{C}_{n+\tau,-n+2\sigma}(\mathbb{R}^{n}\setminus\Sigma)}, \tag{5.28}\]
_where_
\[-\gamma_{\sigma}<\zeta_{1}<\min\left\{-\gamma_{\sigma}+2\sigma,0\right\}. \tag{5.29}\]
_Here \(0<\tau\ll 1\) small enough is given in the definition of the perturbation parameters (5.13) and (5.14). In this fashion, we denote by \(\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)\) and \(\mathcal{C}_{**,\tau}(\mathbb{R}^{n}\setminus\Sigma)\) the corresponding weighted Hölder spaces._
Let us make some observations regarding the last definition.
**Remark 5.17**.: _We emphasize that, to simplify the notation, we will often ignore the small perturbation \(\tau\) and simply write the weights near the singular set as \(\mathrm{dist}(x,\Sigma)^{-\zeta_{1}}\) and \(\mathrm{dist}(x,\Sigma)^{2\sigma-\zeta_{1}}\), respectively. The weights in Definition 5.16 are suitably chosen to guarantee the invertibility and Fredholm property of the linearized operator around approximate solutions on weighted Hölder spaces; this will be clear in the reduction method we apply in the remaining subsections._
### Perturbation of the approximate solution
This subsection is devoted to a perturbation method based on the approximate solution, which requires linearizing \((\mathcal{Q}^{\prime}_{2\sigma,\Sigma})\) around the approximate solution (5.21) and estimating both the weighted norm of a right inverse for the linearized operator and the associated error term. We emphasize that the balancing formulas and the orthogonality conditions for the normalized kernels discussed above are the building blocks of this construction.
Let us explain our strategy in more detail. First, we consider the nonlinear operator \(\mathscr{N}_{\sigma}:\mathcal{C}^{0}(\mathbb{R}^{n}\setminus\Sigma)\to\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\Sigma)\) given by
\[\mathscr{N}_{\sigma}(u)=u-(-\Delta)^{-\sigma}(f_{\sigma}\circ u). \tag{5.30}\]
Notice that \((\mathcal{Q}^{\prime}_{2\sigma,\Sigma})\) can be reformulated as
\[\mathscr{N}_{\sigma}(u)=0\quad\text{in}\quad\mathbb{R}^{n}\setminus\Sigma.\]
Next, by linearizing this operator around the approximate solution, we find a linear operator \(\mathscr{L}_{\sigma}[\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a} _{j},\boldsymbol{\lambda}_{j})}]:\mathcal{C}^{0}(\mathbb{R}^{n}\setminus \Sigma)\to\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\Sigma)\) given by
\[\mathscr{L}_{\sigma}[\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})}](\phi)=\phi-(-\Delta)^{-\sigma}\left((f_{\sigma}^{\prime}\circ\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})})\phi\right). \tag{5.31}\]
For the sake of simplicity, let us denote
\[\mathscr{L}_{\sigma}[\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a} _{j},\boldsymbol{\lambda}_{j})}]:=\mathscr{L}_{\sigma}(\boldsymbol{x}, \boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j}).\]
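The structure of \(\mathscr{L}_{\sigma}\) in (5.31) can be prototyped in a one-dimensional periodic toy model, where \((-\Delta)^{-\sigma}\) acts as the Fourier multiplier \(|\xi|^{-2\sigma}\). The sketch below is only such a toy discretization (periodic box, zero Fourier mode discarded, the constant in \(f_{\sigma}\) normalized to one, and a placeholder background profile); it is not the operator used in the proofs, but it shows the composition \(\phi\mapsto\phi-(-\Delta)^{-\sigma}((f_{\sigma}^{\prime}\circ\bar{u})\phi)\).

```python
import numpy as np

n, sigma = 4, 1.5
p = (n + 2 * sigma) / (n - 2 * sigma)          # critical exponent, f_sigma(u) = u^p with constant set to 1

def inv_frac_laplacian(g, length=2 * np.pi):
    """Toy periodic version of (-Delta)^(-sigma): Fourier multiplier |xi|^(-2 sigma), zero mode dropped."""
    xi = 2 * np.pi * np.fft.fftfreq(g.size, d=length / g.size)
    mult = np.zeros_like(xi)
    mult[1:] = np.abs(xi[1:]) ** (-2 * sigma)
    return np.real(np.fft.ifft(mult * np.fft.fft(g)))

def linearized(phi, ubar):
    """Toy analogue of (5.31):  phi - (-Delta)^(-sigma) ( f_sigma'(ubar) * phi )."""
    return phi - inv_frac_laplacian(p * ubar ** (p - 1) * phi)

x = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
ubar = 1.0 + 0.3 * np.cos(x)                   # placeholder positive background profile
phi = np.sin(2 * x)
print("max |L[phi]| on the grid:", np.max(np.abs(linearized(phi, ubar))))
```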
#### 5.7.1. Quantitative estimates
Our first estimate concerns the nonlinear operator defined as (5.30) applied to the approximate solution \(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{ \lambda}_{j})}\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus\Sigma)\) given by (5.21), namely
\[\mathcal{N}_{\sigma}(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a} _{j},\boldsymbol{\lambda}_{j}) :=\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L}, \boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})})\] \[=\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j}, \boldsymbol{\lambda}_{j})}-(-\Delta)^{-\sigma}(f_{\sigma}\circ\bar{u}_{( \boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})}).\]
We emphasize that we must suitably choose the weighted norm in (5.28) so that our following estimates have the correct decay.
**Lemma 5.18**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Assume that \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\operatorname{Adm}_{\sigma}(\Sigma)\) is an admissible configuration as in Definition 5.5 with \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\operatorname{Apx}_{ \sigma}(\Sigma)\) their associated approximate solution as in Definition 5.12. Then, there exists a weight \(\zeta_{1}<0\) satisfying (5.29) such that_
\[\|\mathcal{N}_{\sigma}(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})\|_{\mathcal{ C}_{\mathbf{\ast},\tau}(\mathbb{R}^{n}\setminus\Sigma)}\lesssim e^{-\gamma_{\sigma}L(1 +\xi)} \tag{5.32}\]
_for some \(\xi>0\) uniformly on \(L\gg 1\) large._
Proof.: For the sake of simplicity, we shall prove the estimate in (5.32) for the \(L^{\infty}\)-norm. Namely, we need to quantitatively estimate the term \(|\mathscr{N}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1})})|\) and then apply a classical perturbation technique.
The rest of the proof will be divided into two cases.
**Case 1: \((\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{0},\mathbf{1})\)** for all \(j\in\mathbb{N}\).
In this case, the approximate solution \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1})}\in\mathcal{C}^{\infty}(\mathbb{R}^{n} \setminus\Sigma)\) is given by
\[\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1})}(x)=\sum_{i=1}^{N}\left[\sum_{j\in \mathbb{N}}U_{(x_{i},L^{i}_{j},\mathbf{0},\lambda^{i}_{j})}(x)+\chi_{i}(x)\phi_{i} (x-x_{i})\right],\]
where
\[U_{(x_{i},L^{i}_{j},\mathbf{0},\lambda^{i}_{j})}(x):=\left(\frac{\lambda^{i}_{j}}{ |\lambda^{i}_{j}|^{2}+|x-x_{i}|^{2}}\right)^{\gamma_{\sigma}}.\]
Without loss of generality, assume \(x_{1}=0\). Before we prove the estimate of \(|\mathscr{N}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1})})|\), we first prove the following claim:
**Claim 1:** The following estimate holds
\[|\mathcal{D}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1})})| \lesssim\begin{cases}|x-x_{i}|^{\zeta_{1}-2\sigma}e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }0<\operatorname{d}(x,\Sigma)<\frac{1}{2},\\ e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }\frac{1}{2}\leqslant\operatorname{d}(x, \Sigma)<1,\\ |x|^{-(n+2\sigma)}e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }\operatorname{d}(x, \Sigma)\geqslant 1,\end{cases}\]
where
\[\mathcal{D}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1})}):=\sum_{i=1}^{N} \sum_{j\in\mathbb{N}}f_{\sigma}(U_{(x_{i},L^{i}_{j},\mathbf{0},\lambda^{i}_{j})}) -f_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1})}). \tag{5.33}\]
As a matter of fact, we proceed by a direct estimate in terms of the asymptotic behavior of the bubble tower solution. The proof will be divided into three steps, corresponding to the exterior, transition, and interior regions, respectively.
**Step 1:** If \(\operatorname{d}(x,\Sigma)\geqslant 1\), then \(|\mathcal{D}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1})})|\lesssim|x|^{-( n+2\sigma)}e^{-\gamma_{\sigma}L(1+\xi)}\).
In this region, we notice that \(\chi_{i}(x)=0\) for all \(i\in\{1,\dots,N\}\) when \(\operatorname{d}(x,\Sigma)\geqslant 1\). Next, using that
\[|U_{(x_{i},L^{i}_{j},\mathbf{0},\lambda^{i}_{j})}(x)|\sim\left(\lambda^{i}_{j} \right)^{\gamma_{\sigma}}|x|^{-(n-2\sigma)}\quad\text{as}\quad|x|\to+\infty\]
and recalling the relation in (5.10), we have
\[\left|\mathcal{D}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1}) })\right| =c_{n,\sigma}\left|\left(\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}U_{ (x_{i},L^{i}_{j},\mathbf{0},\lambda^{i}_{j})}\right)^{\frac{n+2\sigma}{n-2\sigma}} -\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}\left(U_{(x_{i},L^{i}_{j},\mathbf{0},\lambda^{ i}_{j})}\right)^{\frac{n+2\sigma}{n-2\sigma}}\right|\] \[\lesssim\left(e^{-\frac{(n-2\sigma)L}{2}}|x|^{-(n-2\sigma)} \right)^{\frac{n+2\sigma}{n-2\sigma}}\] \[\lesssim e^{-\frac{(n+2\sigma)L}{2}}|x|^{-(n+2\sigma)},\]
which finishes the proof of the first step.
**Step 2:** If \(\frac{1}{2}\leqslant|x|\leqslant 1\), then \(|\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})})|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\) for some \(\xi>0\).
In this case, it is easy to verify the estimate
\[|\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})})|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\]
for some \(\xi>0\).
**Step 3:** If \(0<|x|\leqslant\frac{1}{2}\), then \(|\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})})|\lesssim|x-x_{i}|^{\zeta_{1}-2\sigma}e^{-\gamma_{\sigma}L(1+ \xi)}\) for some \(\xi>0\).
Notice that \(\chi_{1}(x)\equiv 1\) and \(\chi_{i}(x)\equiv 0\) for \(i\in\{2,\ldots,N\}\). By definition, it follows that
\[\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}= \widehat{U}_{(x_{1},L_{j}^{1},0,\lambda_{j}^{1})}-\left(1-\chi_{1}\right) \phi_{1}+\sum_{i=2}^{N}\left(\sum_{j\in\mathbb{N}}U_{(x_{i},L_{j}^{i}, \boldsymbol{0},\lambda_{j}^{i})}+\chi_{i}\phi_{i}\right)-\sum_{j\in\mathbb{Z} \setminus\mathbb{N}}U_{(x_{1},L_{j}^{1},0,\lambda_{j}^{1})}.\]
Hence, by an easy computation, we obtain
\[\left|\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})\right| \lesssim\left|\sum_{j\in\mathbb{N}}(U_{(x_{1},L_{j}^{1},0,\lambda_{j}^{1})})^{\frac{n+2\sigma}{n-2\sigma}}-\left(\sum_{j\in\mathbb{N}}U_{(x_{1},L_{j}^{1},0,\lambda_{j}^{1})}+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)})\right)^{\frac{n+2\sigma}{n-2\sigma}}\right|+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)})\] \[\lesssim\sum_{j\in\mathbb{N}}(U_{(x_{1},L_{j}^{1},0,\lambda_{j}^{1})})^{\frac{4\sigma}{n-2\sigma}}e^{-\gamma_{\sigma}L}+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)}) \tag{5.34}\] \[\lesssim|x|^{-2\sigma}\sum_{j\in\mathbb{N}}V_{(x_{1},L_{j}^{1},0,\lambda_{j}^{1})}^{\frac{4\sigma}{n-2\sigma}}e^{-\gamma_{\sigma}L}+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)}),\]
where
\[V_{(x_{1},L_{j}^{1},0,\lambda_{j}^{1})}(-\ln|x|):=v_{\text{sph}}(-\ln|x|-L_{1}-2jL_{1})\]
and we recall that the spherical solution \(v_{\text{sph}}\) is defined as (4.13).
Furthermore, it is straightforward to see that when \(0<|x|\leqslant\frac{1}{2}\) there exist \(\xi>0\) and \(\zeta_{1}<0\) such that
\[|x|^{-\zeta_{1}}\left(\sum_{j\in\mathbb{Z}}V_{(x_{1},L_{j}^{1},0,\lambda_{j}^ {1})}(-\ln|x|)\right)^{\frac{4\sigma}{n-2\sigma}}\lesssim e^{-\xi L_{1}}. \tag{5.35}\]
Indeed, if \(-\infty<t<L_{1}\) there exists \(C_{1}>0\) such that \(|x|\leqslant C_{1}\) and
\[\sum_{j\in\mathbb{Z}}V_{(x_{1},L_{j}^{1},0,\lambda_{j}^{1})}(-\ln|x|)\leqslant C _{1}e^{-\gamma_{\sigma}L_{1}}. \tag{5.36}\]
Also, if \(L_{1}\leqslant t<+\infty\), there exists \(C_{2}>0\) such that \(|x|\leqslant C_{2}e^{-L_{1}/2}\) and
\[\sum_{j\in\mathbb{Z}}V_{(x_{1},L_{j}^{1},0,\lambda_{j}^{1})}(-\ln|x|)\leqslant C _{2}. \tag{5.37}\]
Finally, combining (5.34) and (5.35) implies
\[\left|\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})\right|\lesssim|x|^{\zeta_{1}-2\sigma}e^{-\gamma_{\sigma}L(1+\xi)},\]
which finishes the proof of the first claim.
We now proceed to the proof of our preliminary estimate.
**Claim 2:** The following estimates holds
\[|\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a} _{j},\boldsymbol{\lambda}_{j})})|\lesssim\begin{cases}|x-x_{i}|^{\min\{\zeta_{1}- \tau,-\gamma_{\sigma}+\tau\}}e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }0<\text{d}(x,\Sigma)<\frac{1}{2},\\ e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }\frac{1}{2}\leqslant\text{d}(x,\Sigma)<1,\\ |x|^{2\sigma-n}e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }\text{d}(x,\Sigma)\geqslant 1. \end{cases}\]
As before, the proof will be divided into three steps as follows:
**Step 1:** If \(\mathrm{d}(x,\Sigma)\geqslant 1\), then \(|\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})})|\lesssim|x|^{2\sigma-n}e^{-\gamma_{\sigma}L(1+\xi)}\).
Notice that \(\chi_{i}(x)\equiv 0\) for all \(i\in\{1,\ldots,N\}\) and \(x\in\mathbb{R}^{n}\setminus\Sigma\) such that \(\mathrm{d}(x,\Sigma)\geqslant 1\). From this, we get
\[\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L}, \boldsymbol{0},\boldsymbol{1})}) =\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})}-(-\Delta)^{-\sigma}(f_{\sigma}(\bar{u}_{(\boldsymbol{x}, \boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}))\] \[=\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}U_{(x_{i},L_{j}^{i}, \boldsymbol{0},\lambda_{j}^{i})}(x)-(-\Delta)^{-\sigma}f_{\sigma}\left(\bar{u }_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}\right)\] \[=\int_{\mathbb{R}^{n}}\left(\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}f_ {\sigma}(U_{(x_{i},L_{j}^{i},\boldsymbol{0},\lambda_{j}^{i})}(y))-f_{\sigma}( \bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}(y)) \right)\mathcal{R}_{\sigma}(x-y)\mathrm{d}y\] \[=\int_{\mathbb{R}^{n}}\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{ x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}(y))\mathcal{R}_{\sigma}(x-y) \mathrm{d}y\] \[=\left(\int_{|y|\leqslant 1}+\int_{1\leqslant|y|\leqslant|x|}+ \int_{|y|\geqslant|x|}\right)\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{x}, \boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}(y))\mathcal{R}_{\sigma}(x-y) \mathrm{d}y\] \[=:I_{11}+I_{12}+I_{13}.\]
where we recall that \(\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})})\) is given by (5.33).
Applying Step 1 of Claim 1, we have
\[|I_{11}|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{|y|\leqslant 1}|x-y|^{2\sigma-n}|y|^{\zeta_{1}-2\sigma}\mathrm{d}y\lesssim|x|^{2\sigma-n}e^{-\gamma_{\sigma}L(1+\xi)},\]
\[|I_{12}|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{1\leqslant|y|\leqslant|x|}|x-y|^{2\sigma-n}|y|^{-(n+2\sigma)}\mathrm{d}y\lesssim|x|^{2\sigma-n}e^{-\gamma_{\sigma}L(1+\xi)},\]
and
\[|I_{13}|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{|y|\geqslant|x|}|x-y|^{2\sigma-n}|y|^{-(n+2\sigma)}\mathrm{d}y\lesssim|x|^{-n}e^{-\gamma_{\sigma}L(1+\xi)}.\]
Combining the above estimates, we finish the proof of Step 1.
**Step 2:** If \(\frac{1}{2}\leqslant|x|\leqslant 1\), then \(|\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\) for some \(\xi>0\).
In this case, it holds
\[\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}) =\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}-(-\Delta)^{-\sigma}(f_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}))\] \[=\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}U_{(x_{i},L_{j}^{i},\boldsymbol{0},\lambda_{j}^{i})}+\chi_{1}\phi_{1}-(-\Delta)^{-\sigma}f_{\sigma}\left(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}\right)\] \[=\int_{\mathbb{R}^{n}}\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})(y)\mathcal{R}_{\sigma}(x-y)\mathrm{d}y+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)})\] \[=\left(\int_{|y|\leqslant\frac{|x|}{2}}+\int_{\frac{|x|}{2}<|y|\leqslant 1}+\int_{1\leqslant|y|\leqslant 2|x|}+\int_{|y|\geqslant 2|x|}\right)\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})(y)\mathcal{R}_{\sigma}(x-y)\mathrm{d}y+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)})\] \[=:I_{21}+I_{22}+I_{23}+I_{24}+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)}).\]
Applying Step 2 of Claim 1, we get
\[|I_{21}|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{|y|\leqslant\frac{|x|}{2}}| x-y|^{2\sigma-n}|y|^{\zeta_{1}-2\sigma}\mathrm{d}y\lesssim|x|^{\zeta_{1}}e^{- \gamma_{\sigma}L(1+\xi)}\lesssim e^{-\gamma_{\sigma}L(1+\xi)},\]
\[|I_{22}|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{\frac{|x|}{2}<|y|\leqslant 1 }|x-y|^{2\sigma-n}|y|^{\zeta_{1}-2\sigma}\mathrm{d}y\lesssim|x|^{\zeta_{1}-2 \sigma}e^{-\gamma_{\sigma}L(1+\xi)}\int_{|y-x|\leqslant\frac{3}{2}}|x-y|^{2 \sigma-n}\mathrm{d}y\lesssim e^{-\gamma_{\sigma}L(1+\xi)},\]
\[|I_{23}|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{1\leqslant|y|\leqslant 2|x|}|x-y|^{2 \sigma-n}|y|^{-(n+2\sigma)}\mathrm{d}y\lesssim e^{-\gamma_{\sigma}L(1+\xi)} \int_{|y-x|\leqslant 3|x|}|x-y|^{2\sigma-n}\mathrm{d}y\lesssim e^{-\gamma_{\sigma}L(1+ \xi)},\]
and
\[|I_{24}|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{|y|\geqslant 2|x|}|x-y|^{2 \sigma-n}|y|^{-(n+2\sigma)}\mathrm{d}y\lesssim|x|^{2\sigma-n}e^{-\gamma_{ \sigma}L(1+\xi)}\int_{|y|\geqslant 1}|y|^{-(n+2\sigma)}\mathrm{d}y\lesssim e^{- \gamma_{\sigma}L(1+\xi)}.\]
Consequently, the proof of Step 2 follows.
**Step 3:** If \(0<|x|\leqslant\frac{1}{2}\), then \(|\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})})|\lesssim|x|^{\zeta_{1}-\tau}e^{-\gamma_{\sigma}L(1+\xi)}\).
Similarly to the previous steps, we obtain
\[\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L}, \boldsymbol{0},\boldsymbol{1})}) =\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})}-(-\Delta)^{-\sigma}(f_{\sigma}(\bar{u}_{(\boldsymbol{x}, \boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}))\] \[=\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}U_{(x_{i},L_{j}^{i}, \boldsymbol{0},\lambda_{j}^{i})}+\phi_{1}-(-\Delta)^{-\sigma}f_{\sigma}\left( \bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}\right)\] \[=\int_{\mathbb{R}^{n}}\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{ x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}(y))\mathcal{R}_{\sigma}(x-y) \mathrm{d}y+e^{-\gamma_{\sigma}L(1+\xi)}\] \[=\left(\int_{|y|\leqslant\frac{|x|}{2}}+\int_{\frac{|x|}{2}<|y| \leqslant 2|x|}+\int_{2|x|\leqslant|y|\leqslant 1}+\int_{|y|\geqslant 1} \right)\mathcal{D}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L}, \boldsymbol{0},\boldsymbol{1})}(y))\mathcal{R}_{\sigma}(x-y)\mathrm{d}y+ \mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)})\] \[=:I_{31}+I_{32}+I_{33}+I_{34}+\mathcal{O}(e^{-\gamma_{\sigma}L(1 +\xi)}).\]
Applying Step 3 of Claim 1, we get
\[|I_{31}|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{|y|\leqslant\frac{|x|}{2}}| x-y|^{2\sigma-n}|y|^{\zeta_{1}-2\sigma}\mathrm{d}y\lesssim|x|^{\zeta_{1}}e^{- \gamma_{\sigma}L(1+\xi)},\]
\[|I_{32}|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{\frac{|x|}{2}\leqslant|y| \leqslant 2|x|}|x-y|^{2\sigma-n}|y|^{\zeta_{1}-2\sigma}\mathrm{d}y\lesssim|x|^{ \zeta_{1}}e^{-\gamma_{\sigma}L(1+\xi)},\]
\[|I_{33}|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{2|x|\leqslant|y|\leqslant 1 }|x-y|^{2\sigma-n}|y|^{\zeta_{1}-2\sigma}\mathrm{d}y\lesssim e^{-\gamma_{\sigma }L(1+\xi)}|x|^{\zeta_{1}}\int_{2|x|\leqslant|y|\leqslant 1}|y|^{-n}\mathrm{d}y \lesssim|x|^{\zeta_{1}-\tau}e^{-\gamma_{\sigma}L(1+\xi)},\]
and
\[|I_{34}|\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{|y|\geqslant 1}|x-y|^{2\sigma-n} |y|^{-(n+2\sigma)}\mathrm{d}y\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\int_{|y| \geqslant 1}|y|^{-(n+2\sigma)}\mathrm{d}y\lesssim e^{-\gamma_{\sigma}L(1+\xi)}.\]
Therefore, for \(|x|\leqslant\frac{1}{2}\), we conclude
\[|\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})})|\lesssim|x|^{\zeta_{1}-\tau}e^{-\gamma_{\sigma}L(1+\xi)},\]
which gives us the desired estimate in Step 3.
By combining the last three steps, the proof of the first case is concluded.
We now consider the case of a general configuration. In this situation, we will use a perturbation technique based on the previous case.
**Case 2: \((\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\neq(\boldsymbol{0},\boldsymbol{ 1})\)** for some \(j\in\mathbb{N}\).
Initially, we will prove the following decomposition.
**Claim 3:** It holds that
\[|\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})})-\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})|\] \[\leqslant\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}(-\Delta)^{-\sigma}\left[|f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})})-f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},\boldsymbol{0},\lambda^{i}_{j})})|\left(|\partial_{r^{i}_{j}}U_{(x_{\ell},L^{i}_{\ell},a^{i}_{\ell},\lambda^{i}_{\ell})}||r^{i}_{j}|+\sum_{\ell=1}^{n}|\partial_{a^{i}_{j,\ell}}U_{(x_{\ell},L^{i}_{\ell},a^{i}_{\ell},\lambda^{i}_{\ell})}||a^{i}_{j,\ell}|\right)\right]\] \[=\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}(-\Delta)^{-\sigma}\left[|\partial_{r^{i}_{j}}\widetilde{\mathcal{D}}^{\prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})||r^{i}_{j}|+\sum_{\ell=1}^{n}|\partial_{a^{i}_{j,\ell}}\widetilde{\mathcal{D}}^{\prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})||a^{i}_{j,\ell}|\right],\]
where
\[\widetilde{\mathcal{D}}_{\sigma}(\bar{u}_{(\boldsymbol{x}, \boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}):=\sum_{i=1}^{N}\sum_{j\in \mathbb{N}}f_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})})-f_{ \sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1} )}). \tag{5.38}\]
To prove this fact, we will differentiate \(\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})\) with respect to the parameters \(r^{i}_{j},a^{i}_{j,\ell}\). Since the variation is linear in the displacements of the parameters, we vary the parameters of one point at a time. First, with respect to \(r^{i}_{j}\), we have
\[\partial_{r^{i}_{j}}\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}) =\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})}-(-\Delta)^{-\sigma}(f^{\prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})})\] \[=(-\Delta)^{-\sigma}[(f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})})-f^{\prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}))\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})}].\]
Second, with respect to \(a^{i}_{j,\ell}\), we obtain
\[\partial_{a^{i}_{j,\ell}}\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{x}, \boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})=(-\Delta)^{-\sigma}\left[(f^{ \prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})})-f^{\prime}_ {\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1} )}))\sum_{\ell=1}^{n}\partial_{a^{i}_{j,\ell}}U_{(x_{\ell},L^{i}_{\ell},a^{i}_ {\ell},\lambda^{i}_{\ell})}\right].\]
This fact concludes the proof of Claim 3.
Next, we shall obtain \(L^{\infty}\)-estimates in the sense below. We first consider the case of the parameters \(r^{i}_{j}\).
**Claim 4:** The following estimate holds
\[\left|\partial_{r^{i}_{j}}\widetilde{\mathcal{D}}^{\prime}_{\sigma}(\bar{u}_{( \boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})\right| \lesssim\begin{cases}\mathrm{d}(x,\Sigma)^{\min\{\zeta_{1},-\gamma_{\sigma}+ \tau\}-2\sigma}e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }0<\mathrm{d}(x,\Sigma)<1,\\ |x|^{-(n+2\sigma)}e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }\mathrm{d}(x,\Sigma) \geqslant 1.\end{cases}\]
As before, we consider two cases separately.
**Step 1:** If \(\mathrm{d}(x,\Sigma)\geqslant 1\), then
\[[f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})})-f^{ \prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})})]\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{ j})} \lesssim|x|^{-(n+2\sigma)}e^{-\gamma_{\sigma}L(1+\xi)}e^{-\nu t^{i}_{j}},\]
for a suitable choice of \(\nu>0\).
As a matter of fact, we have
\[[f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})})-f^{\prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})]\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})} \lesssim\left(e^{-\gamma_{\sigma}L}|x|^{-(n-2\sigma)}\right)^{\frac{4\sigma}{n-2\sigma}}e^{-\gamma_{\sigma}L(2j+1)}|x|^{-(n-2\sigma)}\] \[\lesssim|x|^{-(n+2\sigma)}e^{-\gamma_{\sigma}L(1+\xi)}e^{-\nu t^{i}_{j}},\]
which by (5.13) and (5.14) concludes the proof of this step.
**Step 2:** If \(0<\mathrm{d}(x,\Sigma)<1\), then
\[[f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})})-f^{\prime}_ {\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})] \partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})} \lesssim\mathrm{d}(x,\Sigma)^{\min\{-\gamma_{\sigma}+\tau,\zeta_{1}\}-2\sigma}e^{- \gamma_{\sigma}L(1+\xi)}\]
for some \(-\gamma_{\sigma}<\zeta_{1}<\min\{0,-\gamma_{\sigma}+2\sigma\}\) and \(0<\tau\ll 1\) small enough.
In this situation, we may assume without loss of generality that \(|x-x_{i}|\leqslant 1\), so that \(|x-x_{i^{\prime}}|\geqslant 1\) for \(i^{\prime}\neq i\). Hence, we proceed similarly to the proof of the estimates (5.36) and (5.37) to find
\[[f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j}) })-f^{\prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})})]\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_ {j})}\] \[\lesssim|x-x_{i}|^{-2\sigma}\left(\sum_{j\in\mathbb{N}}V_{(x_{i^{ \prime}},L^{i^{\prime}}_{j^{\prime}},\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{ \prime}}_{j^{\prime}})}(-\ln|x-x_{i}|)\right)^{\frac{n+2\sigma}{n-2\sigma}}e^{- \gamma_{\sigma}L(2j+1)}\] \[\lesssim|x-x_{i}|^{\zeta_{1}-2\sigma}e^{-\gamma_{\sigma}L(1+ \xi)}e^{-\nu t^{i}_{j}}.\]
for a suitable choice of \(\nu>0\).
Again, we have two more cases to consider. If \(|t-t^{i}_{j}|\geqslant L_{1}\), it follows
\[[f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_ {j})})-f^{\prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol {0},\boldsymbol{1})})]\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^ {i}_{j})}\] \[\lesssim f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j}, \lambda^{i}_{j})})\left(\sum_{\ell\neq j}f^{\prime}_{\sigma}(U_{(x_{\ell},L^{i }_{\ell},a^{i}_{\ell},\lambda^{i}_{\ell})})+e^{-\gamma_{\sigma}L}\right)\] \[\lesssim|x|^{-\frac{n+2\sigma}{2}}\sum_{\ell\neq j}V_{(x_{i},L^{i }_{j},a^{i}_{j},\lambda^{i}_{j})}^{\frac{4\sigma}{n-2\sigma}}V_{(x_{\ell},L^{ i}_{\ell},a^{i}_{\ell},\lambda^{i}_{\ell})}+|x|^{-2\sigma}V_{(x_{\ell},L^{i ^{\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{ \prime}})}^{\frac{4\sigma}{n-2\sigma}}e^{-\gamma_{\sigma}L}\] \[\lesssim|x|^{-\frac{n+2\sigma}{2}}e^{-\eta|t-t_{j}|}\sum_{\ell \neq j}e^{-(2\sigma-\eta)|t-t^{i}_{j}|}e^{-\gamma_{\sigma}|t-t^{i}_{\ell}|}+|x |^{\zeta_{1}-2\sigma}|x|^{\zeta_{1}}e^{-2\sigma|t-t^{i}_{j}|}e^{-\gamma_{ \sigma}L}\] \[\lesssim\left(|x|^{-\frac{n+2\sigma}{2}}e^{-\eta|t-t^{i}_{j}|}+|x |^{\zeta_{1}-2\sigma}e^{-\nu t^{i}_{j}}\right)e^{-\gamma_{\sigma}L(1+\xi)},\]
if \(0<\eta<2\sigma\) is chosen suitably. Whereas, if \(|t-t^{i}_{\ell}|\leqslant L_{1}\) for some \(\ell\neq j\), one has
\[[f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_ {j})})-f^{\prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol {0},\boldsymbol{1})})]\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j}, \lambda^{i}_{j})} \lesssim f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j}, \lambda^{i}_{j})})\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i} _{j})}\] \[\lesssim|x|^{-\frac{n+2\sigma}{2}}\sum_{\ell\neq j}V_{(x_{\ell},L ^{i}_{\ell},a^{i}_{\ell},\lambda^{i}_{\ell})}^{\frac{4\sigma}{n-2\sigma}}V_{(x _{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})}\] \[\lesssim|x|^{-\gamma_{\sigma}}e^{-\eta|t-t^{i}_{j}|}e^{\eta|t-t^{ i}_{j}|}e^{-\gamma_{\sigma}|t-t_{j}|}e^{-2\sigma|t-t^{i}_{\ell}|}\] \[\lesssim|x|^{-\frac{n+2\sigma}{2}}e^{-\eta|t-t^{i}_{j}|}e^{-\gamma _{\sigma}L(1+\xi)}\]
if \(0<\eta\ll\gamma_{\sigma}\) is chosen small enough.
In conclusion, by combining the above two estimates, we get
\[[f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})})-f^{\prime }_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol {1})})]\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j})} \lesssim|x|^{-\frac{n+2\sigma}{2}}e^{-\tau t}e^{-\gamma_{\sigma}L(1+\xi)}+ |x|^{\zeta_{1}-2\sigma}e^{-\gamma_{\sigma}L(1+\xi)}\]
for \(0<|x|\leqslant 1\), which implies
\[[f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_{j })})-f^{\prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0}, \boldsymbol{1})})]\partial_{r^{i}_{j}}U_{(x_{i},L^{i}_{j},a^{i}_{j},\lambda^{i}_ {j})} \lesssim(\mathrm{d}(x,\Sigma)^{-\frac{n+2\sigma}{2}}\mathrm{d}(x, \Sigma)^{\tau}+\mathrm{d}(x,\Sigma)^{\zeta_{1}-2\sigma})e^{-\gamma_{\sigma}(1+\xi)}\] \[\lesssim\mathrm{d}(x,\Sigma)^{\min\{-\gamma_{\sigma}+\tau,\zeta_{1 }\}-2\sigma}e^{-\gamma_{\sigma}(1+\xi)}.\]
The proof of this step is concluded, and so is that of the claim.
**Claim 5:** The following estimate holds
\[\left|\partial_{a^{i}_{j,\ell}}\widetilde{\mathcal{D}}^{\prime}_{\sigma}(\bar{u }_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})})\right| \lesssim\begin{cases}\mathrm{d}(x,\Sigma)^{\min\{\zeta_{1},-\gamma_{\sigma}+\tau \}-2\sigma}e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }0<\mathrm{d}(x,\Sigma)<1,\\ |x|^{-(n+2\sigma)}e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }\mathrm{d}(x,\Sigma)\geqslant 1. \end{cases}\]
The estimates are similar to the ones in the last claim, so we omit them here.
Combining these estimates, we arrive at our main conclusion.
**Claim 6:** The following estimate holds
\[\left|\mathscr{N}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}) -\mathscr{N}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1})})\right|\lesssim \begin{cases}\mathrm{d}(x,\Sigma)^{\min\{\zeta_{1}-\tau,-\gamma_{\sigma}+\tau\}} e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }0<\mathrm{d}(x,\Sigma)<1,\\ |x|^{-(n-2\sigma)}e^{-\gamma_{\sigma}L(1+\xi)},&\text{ if }\mathrm{d}(x,\Sigma) \geqslant 1.\end{cases}\]
To prove this claim, we plug Claims 4 and 5 into Claim 3 and proceed similarly to the proof of Claim 2.
Finally, using the definitions of the weighted norms in Definition 5.16, it is straightforward to see that (5.32) is a direct consequence of the last claim.
The lemma is finally proved.
#### 5.7.2. Finite-dimensional reduction
We apply a finite-dimensional Lyapunov-Schmidt reduction to solve an auxiliary linearized equation around an approximate solution. As usual in this method, we use the orthogonality properties of the normalized approximate kernels and cokernels from Lemma 5.14.
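Schematically, and only as orientation for what follows (the precise function spaces and operators are those of Lemma 5.19 below), the reduction proceeds in two steps: first, for every admissible set of parameters, one solves the linearized equation projected away from the approximate cokernels,
\[\mathscr{L}_{\sigma}(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})(\phi)=h+\sum_{(i,j,\ell)\in\mathcal{I}_{\infty}}c^{i}_{j,\ell}\,Z^{i}_{j,\ell},\qquad\int_{\mathbb{R}^{n}}\phi\,\overline{Z}^{i}_{j,\ell}\,\mathrm{d}x=0;\]
second, one adjusts the parameters so that all the coefficients \(c^{i}_{j,\ell}\) vanish, which is the content of the system of projection equations studied in Section 6.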
**Lemma 5.19**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Assume that \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\) is an admissible configuration as in Definition 5.5 with \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\mathrm{Apx}_{\sigma} (\Sigma)\) their associated approximate solution as in Definition 5.12. Then, there exists a weight \(\zeta_{1}<0\) satisfying (5.29) such that for any \(h\in\mathcal{C}_{\ast\ast,\tau}(\mathbb{R}^{n}\setminus\Sigma)\), there exists \(\{c^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\}_{(i,j,\ell)\in\mathcal{I}_{ \infty}}\subset\mathbb{R}\) and a unique solution \(\phi\in\mathcal{C}_{\ast,\tau}(\mathbb{R}^{n}\setminus\Sigma)\) to the following linearized equation_
\[\left\{\begin{array}{ll}\mathscr{L}_{\sigma}(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{ \lambda}_{j})(\phi)=h+\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}\sum_{\ell=0}^{n}c^{i }_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda} _{j})\quad\text{in}\quad\mathbb{R}^{n}\setminus\Sigma,\\ \int_{\mathbb{R}^{n}}\phi\overline{Z}^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j} )\mathrm{d}x=0\quad\text{for}\quad(i,j,\ell)\in\mathcal{I}_{\infty}.\end{array}\right.\] ( \[\mathcal{L}^{\prime}_{2\sigma,\mathbf{a},\mathbf{\lambda}}\] )
_Moreover, one has the estimate_
\[\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\lesssim\|h\|_{\mathcal{C}_{**,\tau}(\mathbb{R}^{n}\setminus\Sigma)},\]
_uniformly on \(L\gg 1\) large. In what follows, we shall denote this error function by \(\phi_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\)._
Proof.: First, by multiplying equation \((\mathcal{L}^{\prime}_{2\sigma,\mathbf{a},\mathbf{\lambda}})\) by the normalized approximate cokernels \(\overline{Z}^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a}_{j},\mathbf{\lambda}_ {j})\) given by Definition 5.13, and integrating over \(\mathbb{R}^{n}\), it follows
\[\int_{\mathbb{R}^{n}}\left[\phi-(-\Delta)^{-\sigma}(f^{\prime}_{ \sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})\phi)\right]f^ {\prime}_{\sigma}(U_{(x_{i^{\prime}},L^{i^{\prime}}_{j^{\prime}}\lambda^{i^{ \prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{\prime}})})Z^{i^{\prime}}_{j^{\prime}, \ell^{\prime}}(\mathbf{a}^{\prime}_{j},\mathbf{\lambda}^{\prime}_{j})\mathrm{d}x \tag{5.39}\] \[=\int_{\mathbb{R}^{n}}hf^{\prime}_{\sigma}(U_{(x_{i^{\prime}},L^{ i^{\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{ \prime}})})Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a}^{\prime}_{j},\mathbf{ \lambda}^{\prime}_{j})\mathrm{d}x\] \[+\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}\sum_{\ell=0}^{n}c^{i}_{j,\ell }(\mathbf{a}^{\prime}_{j},\mathbf{\lambda}^{\prime}_{j})\int_{\mathbb{R}^{n}}f^{ \prime}_{\sigma}(U_{(x_{i^{\prime}},L^{i^{\prime}}_{j^{\prime}}\lambda^{i^{ \prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{\prime}})})Z^{i}_{j,\ell}(\mathbf{a}^{ \prime}_{j},\mathbf{\lambda}^{\prime}_{j})Z^{i^{\prime}}_{j^{\prime},\ell^{\prime }}(\mathbf{a}^{\prime}_{j},\mathbf{\lambda}^{\prime}_{j})\mathrm{d}x.\]
To simplify our notation, let us set
\[I_{0}=\int_{\mathbb{R}^{n}}\left[\phi-(-\Delta)^{-\sigma}(f^{\prime}_{\sigma}( \bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})\phi)\right]f^{\prime}_ {\sigma}(U_{(x_{i^{\prime}},L^{i^{\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{ \prime}},a^{i^{\prime}}_{j^{\prime}})})Z^{i^{\prime}}_{j^{\prime},\ell^{\prime }}(\mathbf{a}^{\prime}_{j},\mathbf{\lambda}^{\prime}_{j})\mathrm{d}x\]
and
\[I_{1}=\int_{\mathbb{R}^{n}}hf^{\prime}_{\sigma}(U_{(x_{i^{\prime}},L^{i^{\prime}}_{ j^{\prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{\prime}})})Z^{i^{ \prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a}^{\prime}_{j},\mathbf{\lambda}^{\prime}_{j}) \mathrm{d}x.\]
In the next claims, we will estimate the two terms above based on the orthogonality conditions from Lemma 5.14.
**Claim 1:** The following estimate holds
\[|I_{0}|\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}e^{- \gamma_{\sigma}L(1+\xi)}e^{-(\zeta_{1}-\tau+\gamma_{\sigma})t^{i}_{j^{\prime}}}.\]
Indeed, it is not hard to check that the approximate kernel \(Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a}^{\prime}_{j},\mathbf{\lambda}^{ \prime}_{j})\) satisfies the linearized equation below
\[(-\Delta)^{\sigma}Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a}^{\prime}_{j},\mathbf{\lambda}^{\prime}_{j})-f^{\prime}_{\sigma}(U_{(x_{i^{\prime}},L^{i^{\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{\prime}})})Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a}^{\prime}_{j},\mathbf{\lambda}^{\prime}_{j})=0\quad\text{in}\quad\mathbb{R}^{n}\setminus\Sigma.\]
Hence, we have
\[I_{0} =\int_{\mathbb{R}^{n}}f^{\prime}_{\sigma}(U_{(x_{i^{\prime}},L^{ i^{\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{ \prime}})})\phi Z^{i}_{j^{\prime},\ell^{\prime}}(\mathbf{a}_{j},\mathbf{\lambda}_{j}) -(-\Delta)^{-\sigma}(f^{\prime}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j}, \mathbf{\lambda}_{j})})\phi)(-\Delta)^{\sigma}Z^{i^{\prime}}_{j^{\prime},\ell^{ \prime}}(\mathbf{a}^{\prime}_{j},\mathbf{\lambda}^{\prime}_{j})\mathrm{d}x\] \[=\int_{\mathbb{R}^{n}}\left[f^{\prime}_{\sigma}(U_{(x_{i^{\prime} },L^{i^{\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_ {j^{\prime}})})-f^{\prime}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{ \lambda}_{j})})\right]\phi Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a}^{ \prime}_{j},\mathbf{\lambda}^{\prime}_{j})\mathrm{d}x\] \[=\left[\int_{B_{1}(x_{i^{\prime}})}+\sum\limits_{i\neq i^{\prime} }\int_{B_{1}(x_{i})}+\int_{\mathbb{R}^{n}\setminus\underset{i=1}{\overset{N}{ \sqcup}}B_{1}(x_{i})}\right]\left[f^{\prime}_{\sigma}(U_{(x_{i^{\prime}},L^{i^ {\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{ \prime}})})-f^{\prime}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{ \lambda}_{j})})\right]\phi Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a}^{ \prime}_{j},\mathbf{\lambda}^{\prime}_{j})\mathrm{d}x\] \[=:I_{01}+I_{02}+I_{03}.\]
Without loss of generality, assume that \(i^{\prime}=1\) and \(x_{1}=0\). First, we consider the case when \(\ell^{\prime}=0\). Recalling the estimates for \(Z^{i^{\prime}}_{j^{\prime},0}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\) from (5.26), we get
\[|I_{01}| =\left|\int_{B_{1}}\left[f^{\prime}_{\sigma}(U_{(x_{i^{\prime}},L^ {i^{\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^ {\prime}})})-f^{\prime}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda }_{j})})\right]\phi Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a}^{\prime}_{j },\mathbf{\lambda}^{\prime}_{j})\mathrm{d}x\right|\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus \Sigma)}\int_{B_{1}}|f^{\prime}_{\sigma}(U_{(x_{i^{\prime}},L^{i^{\prime}}_{j^{ \prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{\prime}})})-f^{ \prime}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})\Big{|} \,|x|^{\zeta_{1}}|Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a}^{\prime}_{j },\mathbf{\lambda}^{\prime}_{j})|\mathrm{d}x\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus \Sigma)}\int_{B_{1}}|x|^{\zeta_{1}-\frac{n+2\sigma}{2}}V_{\frac{n-2\sigma}{2}}^ {\frac{4\sigma}{n-2\sigma}}\sum\limits_{j\neq j^{\prime}}V_{(x_{i^{\prime}},L^ {i}_{j},a^{i}_{j},\lambda^{i}_{j})}\mathrm{d}t\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus \Sigma)}e^{-\gamma_{\sigma}L(1+\xi)}e^{-(\zeta_{1}+\gamma_{\sigma})t^{i^{ \prime}}_{j^{\prime}}},\]
since \(\zeta_{1}>-\gamma_{\sigma}\). Next, it holds
\[|I_{02}| =\left|\sum\limits_{i\neq 1}\int_{B_{1}(x_{i})}\left[f^{\prime}_{ \sigma}(U_{(x_{i^{\prime}},L^{i^{\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{ \prime}},a^{i^{\prime}}_{j^{\prime}})})-f^{\prime}_{\sigma}(\bar{u}_{(\mathbf{x}, \mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})\right]\phi Z^{i^{\prime}}_{j^{\prime},\ell^{ \prime}}(\mathbf{a}^{\prime}_{j},\mathbf{\lambda}^{\prime}_{j})\mathrm{d}x\right|\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus \Sigma)}\sum\limits_{i\neq 1}\int_{B_{1}(x_{i})}\left|f^{\prime}_{\sigma}(U_{(x_{i^{\prime}},L^ {i^{\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{ \prime}})})-f^{\prime}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda }_{j})})\right||Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a}^{\prime}_{j}, \mathbf{\lambda}^{\prime}_{j})||x-x_{i}|^{\zeta_{1}}\mathrm{d}x\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus \Sigma)}\sum\limits_{i\neq 1}\int_{B_{1}(x_{1})}|x-x_{i}|^{\zeta_{1}-2\sigma}(\lambda^{i^{\prime}}_{j ^{\prime}})^{\gamma_{\sigma}}\left(\sum\limits_{j\in\mathbb{N}}V_{(x_{i},L_{i},a^{ i}_{j},\lambda^{i}_{j})}(-\ln|x|)\right)\mathrm{d}x\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus \Sigma)}(\lambda^{i^{\prime}}_{j^{\prime}})^{\gamma_{\sigma}}e^{-(n+\zeta_{1}-2 \sigma)L}\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus \Sigma)}e^{\zeta_{1}t^{i^{\prime}}_{j^{\prime}}-2\gamma_{\sigma}L-\zeta_{1}L}e^{-( \zeta_{1}+\gamma_{\sigma})t^{i^{\prime}}_{j^{\prime}}}\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus \Sigma)}e^{-\gamma_{\sigma}L(1+\xi)}e^{-(\zeta_{1}+\gamma_{\sigma})t^{i^{ \prime}}_{j^{\prime}}}.\]
In addition, one has
\[|I_{03}| =\left|\int_{\mathbb{R}^{n}\backslash\underset{i=1}{\overset{N}{|}}B _{1}(x_{i})}\left[f^{\prime}_{\sigma}(U_{(x_{i^{\prime}},L^{i^{\prime}}_{j^{ \prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{\prime}})})-f^{ \prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j}, \boldsymbol{\lambda}_{j})})\right]\phi Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}( \boldsymbol{a}^{\prime}_{j},\boldsymbol{\lambda}^{\prime}_{j})\mathrm{d}x\right|\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\backslash \Sigma)}\int_{\mathbb{R}^{n}\backslash\underset{i=1}{\overset{N}{|}}B_{1}(x_{ i})}|x|^{-(n-2\sigma)}|x|^{-(n+2\sigma)}(\lambda^{i^{\prime}}_{j^{\prime}})^{ \gamma_{\sigma}}e^{-2\sigma L}\mathrm{d}x\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\backslash \Sigma)}e^{\zeta_{1}t^{i^{\prime}}_{j^{\prime}}-2\sigma L}e^{-(\zeta_{1}+ \gamma_{\sigma})t^{i^{\prime}}_{j^{\prime}}}\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\backslash \Sigma)}e^{-\gamma_{\sigma}L(1+\xi)}e^{-(\zeta_{1}+\gamma_{\sigma})t^{i^{ \prime}}_{j^{\prime}}},\]
where we have used \(-\gamma_{\sigma}<\zeta_{1}<-\gamma_{\sigma}+2\sigma\).
On the other hand, from (5.26), we recall
\[Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\boldsymbol{a}^{\prime}_{j}, \boldsymbol{\lambda}^{\prime}_{j})=\mathcal{O}(|x-x_{i^{\prime}}|^{-\gamma_{ \sigma}})V_{(x_{i^{\prime}},L^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{ \prime}},\lambda_{i^{\prime}})}(-\ln|x|)^{1+\frac{2}{n-2\sigma}}\quad\text{for} \quad\ell^{\prime}\in\{1,\ldots,n\}.\]
Using the last identity, one can get similar estimates to the ones above. In conclusion, it is straightforward to check
\[|I_{0}| =\left|\int_{\mathbb{R}^{n}}\left[\phi-(-\Delta)^{-\sigma}(f^{ \prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j}, \boldsymbol{\lambda}_{j})})\phi)\right]f^{\prime}_{\sigma}(U_{(x_{i^{\prime}}, L^{i^{\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{ \prime}})})Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\boldsymbol{a}^{\prime}_{ j},\boldsymbol{\lambda}^{\prime}_{j})\mathrm{d}x\right|\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\backslash \Sigma)}e^{-\gamma_{\sigma}L(1+\xi)}e^{-(\zeta_{1}+\gamma_{\sigma})t^{i^{ \prime}}_{j^{\prime}}},\]
which proves the claim.
**Claim 2:** The following estimate holds
\[|I_{1}|\lesssim\|h\|_{\mathcal{C}_{**,\tau}(\mathbb{R}^{n}\backslash\Sigma)}e ^{-(\zeta_{1}+\gamma_{\sigma})t^{i^{\prime}}_{j^{\prime}}}.\]
In fact, we have
\[|I_{1}| =\left|\int_{\mathbb{R}^{n}}hf^{\prime}_{\sigma}(U_{(x_{i^{\prime }},L^{i^{\prime}}_{j^{\prime}}\lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{ j^{\prime}})})Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\boldsymbol{a}^{\prime}_{j}, \boldsymbol{\lambda}^{\prime}_{j})\mathrm{d}x\right|\] \[\lesssim\int_{B_{1}(x_{i^{\prime}}}\|h\|_{\mathcal{C}_{**,\tau}( \mathbb{R}^{n}\backslash\Sigma)}|x-x_{i^{\prime}}|^{\zeta_{1}-\tau}|x-x_{i^{ \prime}}|^{-\gamma_{\sigma}}\left(e^{-\gamma_{\sigma}t^{i^{\prime}}_{j^{ \prime}}}+e^{-(\gamma_{\sigma}+1)t^{i^{\prime}}_{j^{\prime}}}\right)\mathrm{d}x\] \[+\sum_{i\neq i^{\prime}}\int_{B_{1}(x_{i})}\|h\|_{\mathcal{C}_{**, \tau}(\mathbb{R}^{n}\backslash\Sigma)}|x-x_{i}|^{\zeta_{1}-\tau}e^{-\gamma_{ \sigma}t^{i^{\prime}}_{j^{\prime}}}\mathrm{d}x\] \[+\int_{\mathbb{R}^{n}\backslash\underset{i=1}{\overset{N}{|}}B_{1 }(x_{i})}\|h\|_{\mathcal{C}_{**,\tau}(\mathbb{R}^{n}\backslash\Sigma)}|x|^{-(n- 2\sigma)}|x|^{-4\sigma}|x|^{-(n-2\sigma)}e^{-\gamma_{\sigma}t^{i^{\prime}}_{j^{ \prime}}}\mathrm{d}x\] \[\lesssim\|h\|_{\mathcal{C}_{**,\tau}(\mathbb{R}^{n}\backslash \Sigma)}e^{-(\zeta_{1}-\tau+\gamma_{\sigma})t^{i^{\prime}}_{j^{\prime}}},\]
which proves the desired estimate.
**Claim 3:** The following estimate holds
\[\left\|\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}\sum_{\ell=0}^{n}c^{i}_{j,\ell}( \boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})Z^{i}_{j,\ell}(\boldsymbol{a}_{j}, \boldsymbol{\lambda}_{j})\right\|_{\mathcal{C}_{**,\tau}(\mathbb{R}^{n} \backslash\Sigma)}\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\|\phi\|_{\mathcal{C}_{ *,\tau}(\mathbb{R}^{n}\backslash\Sigma)}+\|h\|_{\mathcal{C}_{**,\tau}(\mathbb{R}^{n} \backslash\Sigma)}.\]
As a matter of fact, we first isolate the term \(c^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\) in (5.39) by inverting the matrix
\[\int_{\mathbb{R}^{n}}f^{\prime}_{\sigma}(U_{(x_{i^{\prime}},L^{i}_{j^{\prime}} \lambda^{i^{\prime}}_{j^{\prime}},a^{i^{\prime}}_{j^{\prime}})})Z^{i}_{j,\ell }(\mathbf{a}_{j},\mathbf{\lambda}_{j})Z^{i^{\prime}}_{j^{\prime},\ell^{\prime}}(\mathbf{a} ^{\prime}_{j},\mathbf{\lambda}^{\prime}_{j})\mathrm{d}x.\]
For this, recall the orthogonality estimates from (5.24) and (5.25), which yield
\[\int_{\mathbb{R}^{n}}f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j}, \lambda^{i}_{j},a^{i}_{j})})Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})Z^{i}_{ j,\ell^{\prime}}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\mathrm{d}x=C_{0}\delta_{\ell, \ell^{\prime}}\quad\text{for}\quad\ell^{\prime}\in\{1,\dots,n\},\]
and
\[\int_{\mathbb{R}^{n}}f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},\lambda^{i}_{j},a^{i}_{j})})Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})Z^{i}_{j^{\prime},\ell^{\prime}}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\mathrm{d}x=\mathcal{O}(e^{-\gamma_{\sigma}|t^{i}_{j}-t^{i}_{j^{\prime}}|})\quad\text{ if }\quad j\neq j^{\prime},\]
plus a tiny error. Then, using [44, Lemma A.6] for the inversion of a Toeplitz-type operator, one has from (5.39) that
\[|c^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})| \lesssim[e^{-\gamma_{\sigma}L(1+\xi)}\|\phi\|_{\mathcal{C}_{*, \tau}(\mathbb{R}^{n}\setminus\Sigma)}+\|h\|_{\mathcal{C}_{**,\tau}(\mathbb{R}^{ n}\setminus\Sigma)}]e^{-(\zeta_{1}-\tau+\gamma_{\sigma})t^{i}_{j}}\] \[+\sum_{j^{\prime}\neq j}[e^{-\gamma_{\sigma}L(1+\xi)}\|\phi\|_{ \mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}+\|h\|_{\mathcal{C}_{**, \tau}(\mathbb{R}^{n}\setminus\Sigma)}]e^{-\gamma_{\sigma}(1+o(1))|t_{j}-t_{j^{ \prime}}|}e^{-(\zeta_{1}-\tau+\gamma_{\sigma})t^{i}_{j^{\prime}}}\] \[\lesssim[e^{-\gamma_{\sigma}L(1+\xi)}\|\phi\|_{\mathcal{C}_{*, \tau}(\mathbb{R}^{n}\setminus\Sigma)}+\|h\|_{\mathcal{C}_{**,\tau}(\mathbb{R}^{ n}\setminus\Sigma)}]e^{-\gamma_{\sigma}(1+o(1))|t_{j}-t_{j^{\prime}}|}e^{-(\zeta_{1}- \tau+\gamma_{\sigma})t^{i}_{j}}.\]
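Heuristically, and only as a sketch of the mechanism behind [44, Lemma A.6] (the symbols \(M\) and \(E\) below are shorthand introduced just for this remark), the interaction matrix displayed above has the structure
\[M=C_{0}\,\mathrm{Id}+E,\qquad|E_{(j,\ell),(j^{\prime},\ell^{\prime})}|\lesssim e^{-\gamma_{\sigma}|t^{i}_{j}-t^{i}_{j^{\prime}}|},\]
and, since the points \(t^{i}_{j}\) are separated at scale \(L\), the off-diagonal part satisfies \(\|C_{0}^{-1}E\|_{\ell^{\infty}\to\ell^{\infty}}=\mathcal{O}(e^{-\gamma_{\sigma}L})\); hence, for \(L\gg 1\), the inverse exists and is given by the convergent Neumann series
\[M^{-1}=C_{0}^{-1}\sum_{m\geqslant 0}(-C_{0}^{-1}E)^{m}.\]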
Using the estimates (5.26) of \(Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\) and its equation, we split the integrals as in Step 3 in the proof of Claim 2 in Lemma 5.18 and get, in \(B_{1}(x_{i})\),
\[|Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})| =|(-\Delta)^{-\sigma}(f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j}, \lambda^{i}_{j},a^{i}_{j})})Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j}))|\] \[\lesssim|x-x_{i}|^{-\gamma_{\sigma}+2\sigma}e^{-(\gamma_{\sigma} +2\sigma)|t^{i}_{j^{\prime}}-t^{i}_{j}|}\lesssim|x-x_{i}|^{\zeta_{1}-\tau}e^{-( \gamma_{\sigma}+2\sigma)|t^{i}_{j^{\prime}}-t^{i}_{j}|}.\]
The above two estimates yield that
\[|c^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{ \lambda}_{j})| \lesssim|x-x_{i}|^{\zeta_{1}-\tau}[e^{-\gamma_{\sigma}L(1+\xi)}\| \phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}+\|h\|_{ \mathcal{C}_{**,\tau}(\mathbb{R}^{n}\setminus\Sigma)}]e^{-\zeta|t^{i}_{j^{ \prime}}-t^{i}_{j}|}\]
for some \(\zeta>0\).
For \(x\in\mathbb{R}^{n}\setminus\bigsqcup\limits_{i=1}^{N}B_{1}(x_{i})\), one has
\[|c^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})Z^{i}_{j,\ell}(\mathbf{ a}_{j},\mathbf{\lambda}_{j})| \lesssim(\lambda^{i}_{j})^{\gamma_{\sigma}}|x|^{-(n-2\sigma)}|c^{i }_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})|\] \[\lesssim|x|^{-(n-2\sigma)}[e^{-\gamma_{\sigma}L(1+\xi)}\|\phi\|_{ \mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}+\|h\|_{\mathcal{C}_{**, \tau}(\mathbb{R}^{n}\setminus\Sigma)}]e^{-\zeta|t^{i}_{j^{\prime}}-t^{i}_{j} |}.\]
Combining the above two estimates yields
\[\left\|\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}\sum_{\ell=0}^{n}c^{i}_{j,\ell}(\mathbf{ a}_{j},\mathbf{\lambda}_{j})Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\right\|_{\mathcal{C}_{ **,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\| \phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}+\|h\|_{\mathcal{C}_ {**,\tau}(\mathbb{R}^{n}\setminus\Sigma)}.\]
The proof of the claim is concluded.
**Claim 4:** It holds that
\[\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\lesssim\|\bar{h}\|_{\mathcal{C}_{**,\tau}(\mathbb{R}^{n}\setminus\Sigma)},\]
uniformly on \(L\gg 1\), where
\[\bar{h}=h+\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}\sum_{\ell=0}^{n}c^{i}_{j,\ell}( \mathbf{a}_{j},\mathbf{\lambda}_{j})Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j}).\]
We suppose by contradiction that there exist sequences of functions \(\{\bar{h}_{k}\}_{k\in\mathbb{N}}\subset\mathcal{C}_{**,\tau}(\mathbb{R}^{n}\setminus\Sigma)\) and \(\{\phi_{k}\}_{k\in\mathbb{N}}\subset\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)\), where \(\phi_{k}=(\mathscr{L}_{\sigma}(\boldsymbol{a},\boldsymbol{\lambda}))^{-1}( \bar{h}_{k})\) for all \(k\in\mathbb{N}\) such that \(\|\phi_{k}\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}=1\) and
\[\|\bar{h}_{k}\|_{\mathcal{C}_{**,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\to 0 \quad\text{as}\quad k\to+\infty. \tag{5.40}\]
Here we can write
\[\bar{h}_{k}=h_{k}+\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}\sum_{\ell=0}^{n}c_{j, \ell}^{i,k}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})Z_{j,\ell}^{i,k}( \boldsymbol{a}_{j},\boldsymbol{\lambda}_{j}),\]
where \(\{c_{j,\ell}^{i,k}\}_{k\in\mathbb{N}}\subset\mathcal{C}^{\infty}(\mathrm{Adm}_{\sigma}(\Sigma))\), \(\{h_{k}\}_{k\in\mathbb{N}}\subset\mathcal{C}_{**,\tau}(\mathbb{R}^{n}\setminus\Sigma)\), and \(\{\boldsymbol{L}_{k}\}_{k\in\mathbb{N}}\subset\mathbb{R}^{N}\) is a sequence of parameters such that
\[\max_{1\leqslant i\leqslant N}L_{k}^{i}=:|\boldsymbol{L}_{k}|\to+\infty\quad\text{as}\quad k\to+\infty.\]
Notice that
\[\phi_{k}=(-\Delta)^{-\sigma}(f_{\sigma}^{\prime}(\bar{u}_{(\boldsymbol{x}, \boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})})\phi_{k})+\bar{h }_{k}\quad\text{in}\quad\mathbb{R}^{n}\setminus\Sigma.\]
Thus, we need to estimate the first term on the right-hand side of the last equation.
**Step 1:** If \(\mathrm{d}(x,\Sigma)\geqslant 1\), then
\[|(-\Delta)^{-\sigma}(f_{\sigma}^{\prime}(\bar{u}_{(\boldsymbol{x},\boldsymbol{ L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})})\phi_{k})|\leqslant\mathrm{o}(1)\| \phi_{k}\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}|x|^{-(n-2 \sigma)}\quad\text{as}\quad L\to+\infty\]
Indeed, notice that
\[(-\Delta)^{-\sigma}(f_{\sigma}^{\prime}(\bar{u}_{(\boldsymbol{x}, \boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})})\phi_{k}) =\left[\int_{\mathrm{d}(y,\Sigma)\leqslant 1}+\int_{\mathrm{d}(y, \Sigma)\geqslant 1}\right]f_{\sigma}^{\prime}(\bar{u}_{(\boldsymbol{x}, \boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})})\phi_{k} \mathcal{R}_{\sigma}(x-y)\mathrm{d}y\] \[=:I_{1}+I_{2}.\]
Let us start with estimating the second term on the right-hand side above. First, by Lemma 4.10, we have
\[\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{ \lambda}_{j})}(y)=\mathcal{O}(e^{-\gamma_{\sigma}L})|y|^{-(n-2\sigma)}\quad \text{for}\quad\mathrm{d}(y,\Sigma)\geqslant 1,\]
from which we conclude
\[I_{2} \lesssim e^{-\sigma L}\|\phi_{k}\|_{\mathcal{C}_{*,\tau}(\mathbb{ R}^{n}\setminus\Sigma)}\int_{\mathrm{d}(y,\Sigma)\geqslant 1}|y|^{-(n+2\sigma)} \mathcal{R}_{\sigma}(x-y)\mathrm{d}y \tag{5.41}\] \[\lesssim\mathrm{o}(1)\|\phi_{k}\|_{\mathcal{C}_{*,\tau}(\mathbb{ R}^{n}\setminus\Sigma)}|x|^{-(n-2\sigma)}.\]
For the first term, we get
\[I_{1} \lesssim\sum_{i=1}^{N}\int_{|y-x_{i}|\leqslant 1}|y-x_{i}|^{-2 \sigma}\left(\sum_{j\in\mathbb{N}}V_{(x_{i},L_{i},a_{j}^{i},\lambda_{j}^{i})} (-\ln|x|)\right)^{\frac{2n}{n-2\sigma}}\|\phi_{k}\|_{\mathcal{C}_{*,\tau}( \mathbb{R}^{n}\setminus\Sigma)}|y-x_{i}|^{\zeta_{1}}|x-y|^{-(n-2\sigma)} \mathrm{d}y\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus \Sigma)}|x|^{-(n-2\sigma)}\int_{|y-x_{i}|<1}|y-x_{i}|^{\zeta_{1}-2\sigma}\left( \sum_{j\in\mathbb{N}}V_{(x_{i},L_{i},a_{j}^{i},\lambda_{j}^{i})}(-\ln|x|) \right)^{\frac{2n}{n-2\sigma}}\,\mathrm{d}y\] \[\lesssim\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus \Sigma)}|x|^{-(n-2\sigma)}\int_{0}^{+\infty}e^{-(n+\zeta_{1}-2\sigma)t}\left( \sum_{j\in\mathbb{N}}V_{(x_{i},L_{i},a_{j}^{i},\lambda_{j}^{i})}(-\ln|x|) \right)^{\frac{2n}{n-2\sigma}}\,\mathrm{d}t,\]
which implies
\[I_{1}\lesssim e^{-(n+\zeta_{1}-2\sigma)L}|x|^{-(n-2\sigma)}\|\phi_{k}\|_{ \mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\lesssim\mathrm{o}(1)\| \phi_{k}\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}|x|^{-(n-2 \sigma)}. \tag{5.42}\]
Since \(\zeta_{1}>-\gamma_{\sigma}\), by combining estimates (5.41) and (5.42) one concludes the proof of Step 1.
Subsequently, using Step 1, we also observe that by the estimates above, it holds
\[\sup_{\mathrm{d}(x,\Sigma)\geqslant 1}|x|^{n-2\sigma}|\phi_{k}(x)|\lesssim\| \bar{h}_{k}\|_{\mathcal{C}_{\ast,\tau}(\mathbb{R}^{n}\setminus\Sigma)}+\mathrm{ o}(1)\|\phi_{k}\|_{\mathcal{C}_{\ast,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\to 0 \quad\text{ as }\quad L\to+\infty, \tag{5.43}\]
where we also used our contradiction assumption (5.40). Hence, one can find \(x_{i}\in\Sigma\) for some \(i\in\{1,\ldots,N\}\) such that
\[\sup_{|x-x_{i}|\leqslant 1}|x-x_{i}|^{-\zeta_{1}}|\phi_{k}(x)|\geqslant\frac{1}{2}\quad\text{for all}\quad k\in\mathbb{N}. \tag{5.44}\]
In the next step, we prove an estimate contradicting the lower bound above. To simplify the notation, we assume that \(x_{i}=0\) and so \(|x|<1\).
**Step 2:** If \(|x|\leqslant 1\), then one can find \(R\gg 1\) large enough such that
\[|(-\Delta)^{-\sigma}(f^{\prime}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j}, \mathbf{\lambda}_{j})})\phi_{k})|\lesssim\mathrm{o}(1)\|\phi_{k}\|_{\mathcal{C}_{ \ast,\tau}(\mathbb{R}^{n}\setminus\Sigma)}|x|^{\zeta_{1}}+e^{-\gamma_{\sigma} R}+e^{-2R}|x|^{\zeta_{1}}\quad\text{as}\quad L\to+\infty.\]
As a matter of fact, similar to before, we have
\[|(-\Delta)^{-\sigma}(f^{\prime}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})\phi_{k})|=\left[\int_{\mathrm{d}(y,\Sigma)\geqslant 1}+\int_{\mathrm{d}(y,\Sigma)\leqslant 1}\right]f^{\prime}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})\phi_{k}\mathcal{R}_{\sigma}(x-y)\mathrm{d}y=:I_{1}+I_{2}.\]
In the same spirit as the estimates in Step 1 above, it holds
\[I_{1}\lesssim\int_{\mathrm{d}(y,\Sigma)\geqslant 1}e^{-\sigma L}|y|^{-4 \sigma}|x-y|^{2\sigma-n}\|\phi_{k}\|_{\mathcal{C}_{\ast,\tau}(\mathbb{R}^{n} \setminus\Sigma)}|y|^{-(n-2\sigma)}\mathrm{d}y\lesssim\mathrm{o}(1)\|\phi_{k} \|_{\mathcal{C}_{\ast,\tau}(\mathbb{R}^{n}\setminus\Sigma)}|x|^{\zeta_{1}}.\]
For the second term, the computation is slightly more involved. We proceed by performing a standard blow-up method. Namely, let us consider the family of rescaled functions \(\widehat{\phi}_{j}^{i,k}:\mathcal{A}_{j}^{i,k}\to\mathbb{R}\) defined on the annular region as
\[\widehat{\phi}_{j}^{i,k}(\hat{x})=\left(\lambda_{j}^{i}\right)^{-\zeta_{1}} \phi_{k}(\lambda_{j}^{i}\hat{x})\quad\text{for}\quad k\in\mathbb{N},\]
where
\[\mathcal{A}_{j}^{i,k}:=\{x\in\mathbb{R}^{n}:\tilde{A}_{j}^{i,k}<|x|<\tilde{A}_ {j-1}^{i,k}\}.\]
Here we set \(\tilde{A}_{j}^{i,k}=A_{j}^{i,k}/\lambda_{j}^{i,k}\), where \(A_{j}^{i,k}=\sqrt{\lambda_{j+1}^{i,k}\lambda_{j}^{i,k}}\) for \(k\in\mathbb{N}\) and observe
\[|\mathbb{R}_{+}^{n}\setminus\mathcal{A}_{j}^{i,k}|\to 0\quad\text{as}\quad k \to+\infty.\]
Furthermore, it is not hard to check that \(\widehat{\phi}_{j}^{i,k}\in\mathcal{C}^{0}(\mathcal{A}_{j}^{i,k})\) satisfies the following rescaled equation
\[\left\{\begin{array}{c}\widehat{\phi}_{j}^{i,k}-c_{n,\sigma}\frac{n+2\sigma}{n-2\sigma}\int_{\mathbb{R}^{n}}\left[\frac{\widehat{\phi}_{j}^{i,k}}{(1+|\hat{x}|^{2})^{2\sigma}}\right]\mathcal{R}_{\sigma}(x-y)\mathrm{d}y(1+\mathrm{o}(1))=(\lambda_{j}^{i,k})^{2\sigma-\zeta_{1}}\bar{h}(\lambda_{j}^{i,k}\hat{x})\quad\text{ in }\quad\mathcal{A}_{j}^{i,k},\\ \\ \int_{\mathbb{R}^{n}}\widehat{\phi}_{j}^{i,k}[f^{\prime}_{\sigma}(U_{(0,L_{j}^{i,k},\lambda_{j}^{i,k},a_{j}^{i,k})})Z_{j,\ell}^{i,k}(\lambda_{j}^{i,k},a_{j}^{i,k})](\lambda_{j}^{i}\hat{x})\mathrm{d}\hat{x}=0\text{ for }i\in\{1,\ldots,N\},\ j,k\in\mathbb{N},\text{ and }\ell\in\{0,\ldots,n\}.\end{array}\right.\]
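To indicate where the rescaled equation comes from, we record a minimal computation (a sketch only, with constants suppressed and assuming the standard normalization \(U_{(0,\lambda)}(x)=\lambda^{-\frac{n-2\sigma}{2}}U_{(0,1)}(x/\lambda)\) and the homogeneity \(\mathcal{R}_{\sigma}(\lambda z)=\lambda^{2\sigma-n}\mathcal{R}_{\sigma}(z)\) of the kernel). Changing variables \(y=\lambda\hat{y}\) with \(\lambda:=\lambda_{j}^{i,k}\) gives
\[\big[(-\Delta)^{-\sigma}g\big](\lambda\hat{x})=\int_{\mathbb{R}^{n}}g(y)\,\mathcal{R}_{\sigma}(\lambda\hat{x}-y)\,\mathrm{d}y=\lambda^{2\sigma}\int_{\mathbb{R}^{n}}g(\lambda\hat{y})\,\mathcal{R}_{\sigma}(\hat{x}-\hat{y})\,\mathrm{d}\hat{y}.\]
Applying this with \(g=f^{\prime}_{\sigma}(\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})})\phi_{k}\) and using \(f^{\prime}_{\sigma}(U_{(0,\lambda)})(\lambda\hat{y})=\lambda^{-2\sigma}f^{\prime}_{\sigma}(U_{(0,1)})(\hat{y})\), the factor \(\lambda^{2\sigma}\) cancels, which is how the leading term \(f^{\prime}_{\sigma}(U_{(0,1)})\widehat{\phi}_{j}^{i,k}\) appears, the \((1+\mathrm{o}(1))\) error collecting the contribution of the remaining bubbles.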
Now, we observe the estimate below holds
\[|h_{k}|\lesssim\|h_{k}\|_{\mathcal{C}_{\ast\ast,\tau}(\mathbb{R}^{n}\setminus \Sigma)}|\lambda_{j}^{i,k}\hat{x}|^{\zeta_{1}-2\sigma}\quad\text{as}\quad k \to+\infty.\]
Then, there exists a solution \(\widehat{\phi}_{j}^{i,\infty}\in\mathcal{C}^{0}(\mathcal{A}_{j}^{i,\infty})\) to the following blow-up limit equation
\[\left\{\begin{array}{c}\widehat{\phi}_{j}^{i,\infty}-c_{n,\sigma}\int_{ \mathbb{R}^{n}}\left[f^{\prime}_{\sigma}(U_{(0,1)}(y))\widehat{\phi}_{j}^{i, \infty}(y)\right]\mathcal{R}_{\sigma}(x-y)\mathrm{d}y=0\quad\text{ in }\quad\mathcal{A}_{j}^{i,\infty},\\ \\ \int_{\mathbb{R}^{n}}\widehat{\phi}_{j}^{i,\infty}f^{\prime}_{\sigma}(U_{(0,1)})Z_ {j,\ell}^{i,\infty}(0,1)\mathrm{d}x=0.\end{array}\right.\]
Here \(\mathcal{A}_{j}^{i,\infty}=\cup_{k\in\mathbb{N}}\mathcal{A}_{j}^{i,k}\) is such that \(\widehat{\phi}_{j}^{i,k}\to\widehat{\phi}_{j}^{i,\infty}\) as \(k\to+\infty\) in \(\mathcal{A}_{R}\), where the annular region \(\mathcal{A}_{R}:=\{x\in\mathbb{R}^{n}:R^{-1}\leqslant|\hat{x}|\leqslant R\}\) is such that \(\mathcal{A}_{R}\subset\mathcal{A}_{j}^{i,\infty}\) for \(R\gg 1\) large enough which will be chosen suitably later, where we recall that \(U_{(0,1)}=u_{\mathrm{sph}}\) is the standard bubble tower solution given by (5.2) and \(Z_{j,\ell}^{i,k}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\) for \(\ell\in\{0,\dots,n\}\), are the corresponding kernels in Definition 5.13. Therefore, by the non-degeneracy of the standard bubble in Lemma 4.6, we conclude that the blow-up limit is trivial, that is, \(\widehat{\phi}_{j}^{i,\infty}\equiv 0\) and \(\widehat{\phi}_{j}^{i,k}\to 0\) as \(k\to+\infty\) in \(\mathcal{A}_{R}\).
As a consequence, if we consider the original \(\phi_{k}\), this is equivalent to the uniform convergence
\[|x|^{-\zeta_{1}}\phi_{k}(x)\to 0\quad\text{ in }\quad\mathcal{A}_{\infty}^{i,k} \quad\text{ as }\quad k\to+\infty, \tag{5.45}\]
where \(\mathcal{A}_{\infty}^{i,k}:=\cup_{j\in\mathbb{N}}\mathcal{A}_{j}^{i,k}\) and \(\mathcal{A}_{j}^{i,k}:=\{R^{-1}\lambda_{j}^{i,k}<|x|<R\lambda_{j}^{i,k}\}\). Using the convergence above, we can now estimate the remaining term
\[I_{2} =\int_{\mathrm{d}(y,\Sigma)\leqslant 1}f_{\sigma}^{\prime}( \bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{ \lambda}_{j})})\phi_{k}\mathcal{R}_{\sigma}(x-y)\mathrm{d}y\] \[\lesssim\sum_{j\in\mathbb{N}}\left[\int_{\mathcal{A}_{j}^{i,k}}+ \int_{(\mathcal{A}_{j}^{i,k})^{c}}\right]f_{\sigma}^{\prime}(\bar{u}_{( \boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})}) \phi_{k}\mathcal{R}_{\sigma}(x-y)\mathrm{d}y=:I_{21}+I_{22},\]
where \((\mathcal{A}_{j}^{i,k})^{c}:=\{y\in\mathbb{R}^{n}:0<\mathrm{d}(y,\Sigma) \leqslant 1\}\setminus\mathcal{A}_{j}^{i,k}\).
First, again from Lemma 4.10, we know
\[\bar{u}_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{a}_{j},\boldsymbol{ \lambda}_{j})}(y)=|y|^{-\gamma_{\sigma}}\left(\sum_{j\in\mathbb{N}}V_{(x_{i},L _{i},a_{j}^{i},\lambda_{j}^{i})}(-\ln|x|)\right)(1+\mathrm{o}(1))\quad\text{ for}\quad 0<\mathrm{d}(y,\Sigma)<1,\]
from which we get
\[\sum_{j\in\mathbb{N}}V_{(x_{i},L_{i},a_{j}^{i},\lambda_{j}^{i})}(-\ln|x|) \lesssim e^{-\gamma_{\sigma}R}\quad\text{in}\quad(\mathcal{A}_{\infty}^{i,k} )^{c}.\]
Hence, the summation on the left-hand side of the last equation can be made small enough by choosing \(R\gg 1\) large enough, uniformly in \(k\gg 1\), which in turn implies
\[I_{22}\lesssim e^{-2R}\int_{(\mathcal{A}_{\infty}^{i,k})^{c}}|y|^{-2\sigma}\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}|y|^{\zeta_{1}}|x-y|^{2\sigma-n}\mathrm{d}y\lesssim e^{-2R}|x|^{\zeta_{1}}.\]
Additionally, from (5.45), it is straightforward to see
\[I_{21} \lesssim\sum_{j\in\mathbb{N}}\int_{\mathcal{A}_{j}^{i,k}}|\phi_{ k}||y|^{-\zeta_{1}}|y|^{\zeta_{1}-2\sigma}|x-y|^{2\sigma-n}\left(\sum_{j\in \mathbb{N}}V_{(x_{i},L_{i},a_{j}^{i},\lambda_{j}^{i})}(-\ln|x|)\right)^{\frac{2 n}{n-2\sigma}}\mathrm{d}y\] \[\lesssim\mathrm{o}(1)\int_{\mathcal{A}_{j}^{i,k}}|y|^{\zeta_{1}-2 \sigma}|x-y|^{2\sigma-n}\mathrm{d}y\] \[\lesssim\mathrm{o}(1)|x|^{\zeta_{1}}.\]
The proof of this step is then finished.
Finally, from Step 2, we must have \(|x|^{-\zeta_{1}}\phi_{k}(x)=\mathrm{o}(1)\) as \(k\to+\infty\), which is a contradiction with (5.44). This completes the proof of Claim 4.
**Claim 5:** For any \(\bar{h}\in\mathcal{C}_{**,\tau}(\mathbb{R}^{n}\setminus\Sigma)\), one can find a unique solution \(\phi\in\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)\) to \((\mathcal{L}_{2\sigma,\boldsymbol{a},\boldsymbol{\lambda}}^{\prime})\). First, we consider the space
\[\mathscr{H}^{\perp}(\mathbb{R}^{n})=\left\{\phi\in H^{\sigma}(\mathbb{R}^{n}): \int_{\mathbb{R}^{n}}\phi\overline{Z}_{j,\ell}^{i}(\boldsymbol{a}_{j}, \boldsymbol{\lambda}_{j})\mathrm{d}x=0\text{ for }(i,j,\ell)\in\mathcal{I}_{\infty} \right\}.\]
Notice that Eq. \((\mathcal{L}^{\prime}_{2\sigma,\mathbf{a},\mathbf{\lambda}})\) may be reformulated in terms of \(\phi\) to become
\[\phi+\mathscr{K}(\phi)=\bar{h}\quad\text{ in }\quad\mathscr{H}^{\perp}(\mathbb{R}^{n}), \tag{5.46}\]
where \(\bar{h}\) is defined by duality and \(\mathscr{K}:\mathscr{H}^{\perp}(\mathbb{R}^{n})\to\mathscr{H}^{\perp}(\mathbb{R}^{n})\) is a linear compact operator. Using the Fredholm alternative, showing that \((\mathcal{L}^{\prime}_{2\sigma,\mathbf{a},\mathbf{\lambda}})\) has a unique solution for each \(\bar{h}\) is equivalent to showing that the only solution to (5.46) with \(\bar{h}=0\) is the trivial one, which in turn follows from Claim 4.
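For clarity, we record the standard functional-analytic fact being used here; this is only a restatement of the argument above. Since \(\mathscr{K}\) is compact, \(\mathrm{Id}+\mathscr{K}\) is a Fredholm operator of index zero on \(\mathscr{H}^{\perp}(\mathbb{R}^{n})\), so
\[\mathrm{Id}+\mathscr{K}\ \text{injective}\iff\mathrm{Id}+\mathscr{K}\ \text{surjective}\iff\text{(5.46) is uniquely solvable for every }\bar{h},\]
and injectivity is precisely the statement that \(\bar{h}=0\) forces \(\phi=0\), which is guaranteed by the a priori estimate of Claim 4.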
The proof is now a consequence of Claims 4 and 5.
As a consequence of the last result, we can state the following lemma.
**Lemma 5.20**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Assume that \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\) is an admissible configuration as in Definition 5.5 with \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\mathrm{Apx}_{\sigma}(\Sigma)\) their associated approximate solution as in Definition 5.12. Then, there exists a bounded right-inverse for the linearized operator \((\mathscr{L}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j}))^{-1}:\mathcal{C}_{\ast \ast,\tau}(\mathbb{R}^{n}\setminus\Sigma)\to\mathcal{C}_{\ast,\tau}(\mathbb{R }^{n}\setminus\Sigma)\). Moreover, the following estimate holds_
\[\|(\mathscr{L}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j}))^{-1}(h)\|_{\mathcal{C}_{\ast,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\lesssim\|h\|_{\mathcal{C}_{\ast\ast,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\quad\text{for every}\quad h\in\mathcal{C}_{\ast\ast,\tau}(\mathbb{R}^{n}\setminus\Sigma),\]
_uniformly on \(L\gg 1\) large._
#### 5.7.3. Fixed-point argument
We prove our main result using a standard perturbation method. The main idea is to apply a contraction mapping argument to the operator \(\mathcal{N}_{\sigma}(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)=\mathcal{N}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}+\phi)\) in the weighted norms introduced in Definition 5.16.
**Proposition 5.21**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Assume that \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\) is an admissible configuration as in Definition 5.5 with \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\mathrm{Apx}_{\sigma}(\Sigma)\) their associated approximate solution as in Definition 5.12. Then, for \(L\gg 1\) large enough and \(\zeta_{1}<0\) satisfying (5.29), there exist \(\{c^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\}_{(i,j,\ell)\in\mathcal{I}_{\infty}}\subset\mathbb{R}\) and a solution \(\phi\in\mathcal{C}_{\ast,\tau}(\mathbb{R}^{n}\setminus\Sigma)\) to_
\[\left\{\begin{array}{l}\mathcal{N}_{\sigma}(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{ \lambda}_{j})(\phi)=\sum_{i=1}^{N}\sum_{j\in\mathbb{N}}\sum_{\ell=0}^{n}c^{i}_ {j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\overline{Z}^{i}_{j,\ell}(\mathbf{a}_{j},\bm {\lambda}_{j})\quad\text{in}\quad\mathbb{R}^{n}\setminus\Sigma,\\ \int_{\mathbb{R}^{n}}\phi\overline{Z}^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j}) \mathrm{d}x=0\quad\text{for}\quad(i,j,\ell)\in\mathcal{I}_{\infty},\end{array}\right.\] ( \[\mathcal{Q}^{\prime}_{2\sigma,\mathbf{a},\mathbf{\lambda}}\] )
_where \(\{\overline{Z}^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\}_{(i,j,\ell)\in\mathcal{I}_{\infty}}\subset\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\Sigma)\) is the family of approximate normalized cokernels given by Definition 5.13. Moreover, one has the estimate_
\[\|\phi_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\|_{\mathcal{C}_{\ast,\tau }(\mathbb{R}^{n}\setminus\Sigma)}\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\]
_for some \(\xi>0\) independent of \(L\gg 1\) large._
Proof.: According to Lemma 5.19, the solution operator \((\mathscr{L}_{\sigma}(\mathbf{a},\mathbf{\lambda}))^{-1}:\mathcal{C}^{0}(\mathbb{R}^{n}\setminus\Sigma)\to\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\Sigma)\) from Lemma 5.20 is well-defined. Notice that \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}+\phi\) with \(\phi\in\mathcal{C}_{\ast}(\mathbb{R}^{n}\setminus\Sigma)\) solves equation \((\mathcal{Q}^{\prime}_{2\sigma,\mathbf{a},\mathbf{\lambda}})\) if and only if it solves the fixed-point problem below
\[\phi=\mathscr{B}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)\quad\text{in} \quad\mathbb{R}^{n}\setminus\Sigma.\]
Here \(\mathscr{B}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j}):\mathcal{C}^{0}(\mathbb{R}^{n} \setminus\Sigma)\to\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\Sigma)\) is given by
\[\mathscr{B}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi):=-(\mathscr{L}_{\sigma}( \mathbf{a},\mathbf{\lambda}))^{-1}(\mathcal{N}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_ {j},\mathbf{\lambda}_{j})}))+(\mathscr{L}_{\sigma}(\mathbf{a},\mathbf{\lambda}))^{-1}( \mathscr{R}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)) \tag{5.47}\]
and \(\mathscr{R}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j}):\mathcal{C}^{0}(\mathbb{R}^{n} \setminus\Sigma)\to\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\Sigma)\) is given by
\[\mathscr{R}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi):=(-\Delta)^{-\sigma}[\mathscr{D}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)], \tag{5.48}\]
where
\[\mathscr{D}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi):=f_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}+\phi)-f_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})-f^{\prime}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})\phi. \tag{5.49}\]
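To see why the fixed-point formulation is equivalent to the original equation, one may expand the nonlinearity around the approximate solution; the following algebra is only a sketch, with the projection terms \(\sum c^{i}_{j,\ell}\overline{Z}^{i}_{j,\ell}\) omitted:
\[\mathscr{N}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}+\phi)=\underbrace{\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}-(-\Delta)^{-\sigma}(f_{\sigma}\circ\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})}_{=\mathscr{N}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})}+\underbrace{\phi-(-\Delta)^{-\sigma}\big((f^{\prime}_{\sigma}\circ\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})\phi\big)}_{=\mathscr{L}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)}-(-\Delta)^{-\sigma}\big[\mathscr{D}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)\big],\]
so that, formally, \(\mathscr{N}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}+\phi)=0\) amounts to \(\mathscr{L}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)=-\mathscr{N}_{\sigma}(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})+\mathscr{R}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)\), and applying the right-inverse from Lemma 5.20 yields \(\phi=\mathscr{B}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)\) as in (5.47).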
First, by definition, one has
\[\|\mathscr{B}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)\|_{\mathcal{C}_{*,\tau} (\mathbb{R}^{n}\setminus\Sigma)}\lesssim\|\mathscr{N}_{\sigma}(\bar{u}_{(\mathbf{x}, \mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n} \setminus\Sigma)}+\|\mathscr{R}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)\|_ {\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}.\]
Second, fixing a large \(C\gg 0\), we define the set
\[\mathcal{B}_{C}=\left\{\phi\in\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma):\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\lesssim e^{-\gamma_{\sigma}L(1+\xi)}\text{ and }\int_{\mathbb{R}^{n}}\phi f^{\prime}_{\sigma}(U_{(x_{i},L^{i}_{j},\lambda^{i}_{j},a^{i}_{j})})Z^{i}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\mathrm{d}x=0\text{ for all }(i,j,\ell)\in\mathcal{I}_{\infty}\right\}.\]
We observe that the first term on the right-hand side of the estimate above is controlled by Lemma 5.18. Hence, we are left to provide similar estimates for the remaining term.
**Claim 1:** The following estimate holds
\[\|\mathscr{R}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\lesssim e^{\frac{(n-6\sigma)\gamma_{\sigma}L}{n-2\sigma}}\|\phi\|^{2}_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}+\|\phi\|^{\frac{n+2\sigma}{n-2\sigma}}_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\leqslant\mathrm{o}(1)\|\phi\|_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}.\]
Now, for any \(\phi\in\mathcal{B}_{C}\), we must estimate the \(L^{\infty}\)-norm of the error term in (5.48). We start by estimating the term (5.49). Indeed, it is not hard to check
\[|\mathscr{D}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)|\lesssim\begin{cases}\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}^{\frac{6\sigma-n}{n-2\sigma}}\phi^{2},&\text{ if }\ |\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}|\geqslant\frac{1}{4}\phi,\\ \phi^{\frac{n+2\sigma}{n-2\sigma}},&\text{ if }\ |\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}|\leqslant\frac{1}{4}\phi.\end{cases}\]
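The two regimes above follow from a standard Taylor-type argument, which we sketch assuming the pure-power form \(f_{\sigma}(u)=u^{p}\) with \(p:=\frac{n+2\sigma}{n-2\sigma}\) and with constants suppressed:
\[f_{\sigma}(\bar{u}+\phi)-f_{\sigma}(\bar{u})-f^{\prime}_{\sigma}(\bar{u})\phi=p(p-1)\,\phi^{2}\int_{0}^{1}(1-s)\,|\bar{u}+s\phi|^{p-2}\,\mathrm{d}s.\]
If \(|\bar{u}|\geqslant\frac{1}{4}\phi\), the integrand is comparable to \(\bar{u}^{\,p-2}=\bar{u}^{\frac{6\sigma-n}{n-2\sigma}}\), which gives the first bound; if instead \(|\bar{u}|\leqslant\frac{1}{4}\phi\), each of the three terms on the left-hand side is bounded directly by a multiple of \(\phi^{\,p}\), which gives the second.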
Again, the proof will be divided into two steps. First, we estimate the integrand in (5.49).
**Step 1:** The estimate below holds
\[|\mathscr{D}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)|\lesssim\begin{cases}\sum_{i=1}^{N}\left(\|\phi\|^{2}_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}+\|\phi\|^{\frac{n+2\sigma}{n-2\sigma}}_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\right)|x-x_{i}|^{\zeta_{1}-2\sigma},&\text{ if }\ 0<\mathrm{d}(x,\Sigma)<1,\\ |x|^{-(n+2\sigma)}\left(e^{\frac{(n-6\sigma)\gamma_{\sigma}L}{n-2\sigma}}\|\phi\|^{2}_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}+\|\phi\|^{\frac{n+2\sigma}{n-2\sigma}}_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\right),&\text{ if }\ \mathrm{d}(x,\Sigma)\geqslant 1.\end{cases}\]
As a matter of fact, by our construction, it holds:
1. If \(\operatorname{dist}(x,\Sigma)<1\), then \[|\mathscr{D}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)|\] \[\lesssim\sum_{i=1}^{N}\left(\|\phi\|^{2}_{\mathcal{C}_{*,\tau}( \mathbb{R}^{n}\setminus\Sigma)}+\|\phi\|^{\frac{n+2\sigma}{n-2\sigma}}_{ \mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\right)|x-x_{i}|^{\zeta_{ 1}-2\sigma}.\]
2. If \(\operatorname{dist}(x,\Sigma)\geqslant 1\), then \[|\mathscr{D}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)| \lesssim\|\phi\|^{2}_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\bar{u}^{\frac{6\sigma-n}{n-2\sigma}}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}|x|^{-4\gamma_{\sigma}}+\|\phi\|^{\frac{n+2\sigma}{n-2\sigma}}_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}|x|^{-\frac{n+2\sigma}{n-2\sigma}(n-2\sigma)}\] \[\lesssim|x|^{-(n+2\sigma)}\left(e^{\frac{(n-6\sigma)\gamma_{\sigma}L}{n-2\sigma}}\|\phi\|^{2}_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}+\|\phi\|^{\frac{n+2\sigma}{n-2\sigma}}_{\mathcal{C}_{*,\tau}(\mathbb{R}^{n}\setminus\Sigma)}\right).\]
By combining the above two estimates, the proof of the first step is concluded.
Second, we can use the estimate above to handle the term (5.48).
**Step 2:** The estimate below holds
\[|\mathscr{R}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi)|\lesssim\begin{cases} \sum_{i=1}^{N}\left(\|\phi\|_{\mathcal{C}_{\star,\tau}(\mathbb{R}^{n}\setminus \Sigma)}^{2}+\|\phi\|_{\mathcal{C}_{\star,\tau}(\mathbb{R}^{n}\setminus\Sigma)} ^{\frac{n+2\sigma}{n-2\sigma}}\right)|x-x_{i}|^{\zeta_{1}-\tau},&\text{ if }0< \mathrm{d}(x,\Sigma)<1,\\ |x|^{-(n-2\sigma)}\left(e^{\frac{(n-6\sigma)\gamma_{0}L}{n-2\sigma}}\|\phi\|_{ \mathcal{C}_{\star,\tau}(\mathbb{R}^{n}\setminus\Sigma)}^{2}+\|\phi\|_{ \mathcal{C}_{\star,\tau}(\mathbb{R}^{n}\setminus\Sigma)}^{\frac{n+2\sigma}{n- 2\sigma}}\right),&\text{ if }\mathrm{d}(x,\Sigma)\geqslant 1.\end{cases}\]
Indeed, we need to plug Step 1 into (5.48) and proceed as in Claim 2 of Lemma 5.18.
The proof follows by recalling the definition of weighted norms in (5.27) and (5.28).
**Claim 2:** The map \(\mathscr{B}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j}):\mathcal{B}_{C}\to\mathcal{B}_{C}\) is a contraction.
Now we consider two functions \(\phi_{1},\phi_{2}\in\mathcal{B}_{C}\). From the estimates in Step 1 combined with the ones in Lemma 5.18, it is easy to see for \(L\gg 1\) large, one has
\[\|\mathscr{R}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi_{1})-\mathscr{R}_{ \sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})(\phi_{2})\|_{\mathcal{C}_{\star,\tau}( \mathbb{R}^{n}\setminus\Sigma)}\leqslant\mathrm{o}(1)\|\phi_{1}-\phi_{2}\|_{ \mathcal{C}_{\star,\tau}(\mathbb{R}^{n}\setminus\Sigma)}.\]
Therefore, combining this with the boundedness of the right-inverse from Lemma 5.20, the proof of the claim is concluded.
Based on the last claim, we can apply the standard Banach contraction argument to obtain the desired fixed point; this completes the proof of the proposition.
## 6. Estimates for the projections on the approximate null space
In this section, we provide some estimates related to the coefficient functions, seen as functions of the perturbation parameters, namely
\[\{c_{j,\ell}^{i}\}_{(i,j,\ell)\in\mathcal{I}_{\infty}}\subset\mathcal{C}^{ \infty}(\ell_{\tau}^{\infty}(\mathbb{R}^{(n+1)N})), \tag{6.1}\]
which were obtained in Section 5, where we recall that \(\mathcal{I}_{\infty}:=\{1,\ldots,N\}\times\mathbb{N}\times\{0,\ldots,n\}\) is the total index set. More precisely, we notice that from Proposition 5.21, whenever \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\) is a set of admissible parameters as in Definition 5.5, one can find a solution \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\mathrm{Apx}_{\sigma}(\Sigma)\) (or simply \(\bar{u}_{(\mathbf{x},\mathbf{L})}\in\mathrm{Apx}_{\sigma}(\Sigma)\)) to the perturbed equation (\(\mathcal{Q}^{\prime}_{2\sigma,\mathbf{a},\mathbf{\lambda}}\)) as in Definition 5.12. Here we recall
\[(\mathbf{x},\mathbf{L})\in\mathrm{Comp}_{\sigma}(\Sigma)\mapsto(\mathbf{q},\mathbf{a}_{0},\bm {R})\in\mathrm{Bal}_{\sigma}(\Sigma)\mapsto(\mathbf{a}_{j},\mathbf{\lambda}_{j})\in \mathrm{Adm}_{\sigma}(\Sigma)\mapsto\bar{u}_{(\mathbf{x},\mathbf{L})}\in\mathrm{Apx}_{ \sigma}(\Sigma);\]
or, equivalently,
\[\bar{u}_{(\mathbf{x},\mathbf{L})}=(\Upsilon_{\mathrm{sol}}\circ\Upsilon_{\mathrm{per}} \circ\Upsilon_{\mathrm{conf}})(\mathbf{x},\mathbf{L}).\]
is the explicit construction of approximate solutions. Thus, applying the Lyapunov-Schmidt reduction, one can see that finding solutions to our original problem (\(\mathcal{Q}^{\prime}_{2\sigma,\Sigma}\)) is equivalent to solving the following infinite-dimensional system
\[\beta_{j,\ell}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=0\quad\text{for}\quad(i,j,\ell )\in\mathcal{I}_{\infty}.\] ( \[\mathcal{S}_{2\sigma,\Sigma}\] )
Here the projection functions \(\{\beta_{j,\ell}^{i}\}_{(i,j,\ell)\in\mathcal{I}_{\infty}}\subset\mathcal{C}^ {\infty}(\ell_{\tau}^{\infty}(\mathbb{R}^{(n+1)N}))\) are given by
\[\beta_{j,\ell}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=\int_{\mathbb{R}^{n}}\mathcal{ N}_{\sigma}(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})\overline{Z}_{j,\ell}^{i}( \mathbf{a}_{j},\mathbf{\lambda}_{j})\mathrm{d}x\quad\text{for}\quad(i,j,\ell)\in \mathcal{I}_{\infty}, \tag{6.2}\]
where we recall that
\[\mathcal{N}_{\sigma}(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})=\bar{u}_{(\bm {x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}-(-\Delta)^{-\sigma}(f_{\sigma}\circ \bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})})\]
and \(\{\overline{Z}_{j,\ell}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\}_{(i,j,\ell)\in\mathcal{I}_{\infty}}\subset\mathcal{C}^{0}(\mathbb{R}^{n}\setminus\Sigma)\) is the family of cokernels given by Definition 5.13. The idea is to find a set of configuration parameters such that its associated sequence of perturbation parameters satisfies Syst. (\(\mathcal{S}_{2\sigma,\Sigma}\)). Then, the balancing conditions (\(\mathscr{R}_{1}\)) and (\(\mathscr{R}_{2}\)) will allow us to perturb this special configuration to find a true solution to our problem. In some sense, this is a discrete version of the perturbation technique we applied to approximate Delaunay solutions by half-bubble tower solutions.
### Projection on the normalized approximate kernels
Initially, we prove the decay of the functions defined in (6.2). For this, we shall consider two cases, beginning with the one in which the perturbation sequence of parameters is trivial, that is, \((\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{0},\mathbf{1})\). Notice that indeed \((\mathbf{0},\mathbf{1})\in\operatorname{Adm}_{\sigma}(\Sigma)\) is an admissible perturbation sequence.
With this definition, we have the following estimate:
**Lemma 6.1**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Assume that \((\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{0},\mathbf{1})\in\operatorname{Adm}_{\sigma}(\Sigma)\) is a set of trivial perturbations. Then, there exist two constants \(A_{2}>0,A_{3}<0\) independent of \(L\gg 1\) given by (A.2) and (A.3) such that the following estimates hold_
* _If_ \(j=0\) _and_
* \(\ell=0\)_, then one has_ \[\beta_{0,0}^{i}(\mathbf{0},\mathbf{1})=-c_{n,\sigma}q_{i}\left[A_{2}\sum_{i^{\prime} \neq i}|x_{i^{\prime}}-x_{i}|^{-(n-2\sigma)}(R^{i}R^{i^{\prime}})^{\gamma_{ \sigma}}q_{i^{\prime}}-q_{i}\right]e^{-\gamma_{\sigma}L}(1+\operatorname{o}(1 ))+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)});\]
* \(\ell\in\{1,\dots,n\}\)_, then one has_ \[\beta_{0,\ell}^{i}(\mathbf{0},\mathbf{1})=c_{n,\sigma}\lambda_{0}^{i}\left[A_{3}\sum_{ i^{\prime}\neq i}\frac{(x_{i^{\prime}}-x_{i})_{\ell}}{|x_{i^{\prime}}-x_{i}|^{n-2 \sigma+2}}(R^{i}R^{i^{\prime}})^{\gamma_{\sigma}}q_{i^{\prime}}q_{i}e^{- \gamma_{\sigma}L}+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)})\right].\]
* _If_ \(j\geqslant 1\) _and_
* \(\ell=0\)_, then one has_ \[\beta_{j,0}^{i}(\mathbf{0},\mathbf{1})=\mathcal{O}\left(e^{-\gamma_{\sigma}(1+\xi)}e^{ -\nu t_{j}^{i}}\right);\]
* \(\ell\in\{1,\dots,n\}\)_, then one has_ \[\beta_{j,\ell}^{i}(\mathbf{0},\mathbf{1})=\mathcal{O}\left(e^{-\gamma_{\sigma}L(1+\xi )}e^{-(1+\nu)t_{j}^{i}}\right),\] _where_ \(\nu=\min\left\{\zeta_{1}+\gamma_{\sigma},\frac{\gamma_{\sigma}}{2}\right\}\) _independent of_ \(L\gg 1\) _large and_ \(\xi>0\)_._
Proof.: Recalling the definition of the cokernel \(\overline{Z}^{i}_{j,\ell}(\mathbf{0},\mathbf{1})\), we have
\[\beta_{j,\ell}^{i}(\mathbf{0},\mathbf{1})=\int_{\mathbb{R}^{n}}[(-\Delta)^{\sigma} \bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1})}-(f_{\sigma}\circ\bar{u}_{(\mathbf{x},\mathbf{L },\mathbf{0},\mathbf{1})})]Z^{i}_{j,\ell}(\mathbf{0},\mathbf{1})\mathrm{d}x.\]
The rest of the proof is the same as in [8, Lemma 4.1], and we omit the details.
It is not hard to check that only the perturbations of \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\) will affect the numbers \(\beta_{j,\ell}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\), that is, we can get the same estimates for \(\beta_{j,\ell}^{i}(\mathbf{0},\mathbf{1})\) for an admissible sequence of parameters. Consequently, one can see that for any fixed \(i_{*}\in\{1,\dots,N\}\), the corresponding \(x_{i_{*}}\in\Sigma\) and \(L_{i_{*}}\in\mathbb{R}_{+}\) are also fixed. Hence, if we consider the approximate solution defined as
\[\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{0},\mathbf{1})}^{*}:=\bar{u}_{(x_{i_{*}},L_{i_{*}},\bm {0},\mathbf{1})}, \tag{6.3}\]
then the same estimates for \(\beta_{j,\ell}^{i_{*}}(\mathbf{0},\mathbf{1})\) in the above lemma are still in force.
Next, we estimate the coefficients in (6.1) for a general admissible perturbation sequence. So, fixing \(i_{*}\in\{1,\dots,N\}\), we would like to study the estimates for the variations of \(\beta_{j,\ell}^{i_{*}}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\). Before doing so, let us introduce some terminology. For any fixed \(i_{*}\in\{1,\dots,N\}\) and \(j_{*}\in\mathbb{N}\), we let \(\mathrm{e}^{i_{*}}_{j_{*}}\in\mathbb{R}^{n}\) and \(r_{j_{*}}^{i_{*}}\in\mathbb{R}\) be such that
\[|\mathrm{e}^{i_{*}}_{j_{*}}|\lesssim(\lambda_{j_{*}}^{i_{*}})^{2}\quad\text{ and }\quad|r_{j_{*}}^{i_{*}}|\lesssim e^{-\tau t_{j_{*}}^{i_{*}}}.\]
In this fashion, we define the variation of the perturbation sequence as
\[(\boldsymbol{a}_{j}(t),\boldsymbol{\lambda}_{j}(t))=\begin{cases}(\mathbf{0},\mathbf{1}),&\text{if }\,j\neq j_{*},\\ \left(te_{j_{*}}^{i_{*}},\,R^{i_{*}}(1+tr_{j_{*}}^{i_{*}})\right),&\text{if }\,j=j_{*}.\end{cases}\]
Finally, we set
\[\bar{u}_{(\boldsymbol{a}_{j}(t),\boldsymbol{\lambda}_{j}(t))}^{*}(x)=\sum_{i=1 }^{N}\left(\widehat{U}_{(\boldsymbol{a}_{j}(t),\boldsymbol{\lambda}_{j}(t))}^ {+,*}+\chi_{i}(x-te_{j_{*}}^{i_{*}})\phi^{*}(\boldsymbol{a}_{j}(t),\boldsymbol {\lambda}_{j}(t))\right)(x), \tag{6.4}\]
where
\[\widehat{U}_{(\boldsymbol{a}_{j}(t),\boldsymbol{\lambda}_{j}(t))}^{+,*}(x)= \sum_{j\in\mathbb{N}}\widehat{U}_{(x_{i_{*}},L_{i_{*}},a_{j}^{i_{*}}(t),\lambda _{j}^{i_{*}}(t))}^{*}(x)\]
with
\[\widehat{U}_{(x_{i_{*}},L_{i_{*}},a_{j}^{i_{*}}(t),\lambda_{j}^{i_{*}}(t))}^{*}(x)=U_{R^{i_{*}}(1+tr_{j_{*}}^{i_{*}})}(x-te_{j_{*}}^{i_{*}}),\]
where \(0<\tau\ll 1\) is sufficiently small.
With this definition in hand, we have the following estimates:
**Lemma 6.2**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Assume that \((\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\) is an admissible configuration as in Definition 5.5 and let \(\Psi:\mathbb{R}\to\mathbb{R}\) be defined by (A.4). Then, there exist constants \(A_{1}<0,A_{2}>0,A_{3}<0\) independent of \(L\gg 1\) given by (A.1), (A.2), and (A.3) such that the following estimates hold_
* _If_ \(i=i_{*}\), \(j_{*}\neq j\), _and_
* \(\ell=0\)_, then one has_
* \(\ell\in\{1,\dots,n\}\)_, then one has_ \[\partial_{t}\big{|}_{t=0}\int_{\mathbb{R}^{n}}\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{a}_{j}(t),\boldsymbol{\lambda}_{j}(t))}^{*})Z_{j,\ell}^{i}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\mathrm{d}x=c_{n,\sigma}\lambda_{0}^{i}\partial_{t}\left[A_{3}\sum_{i^{\prime}\neq i}\frac{x_{i^{\prime}}-x_{i}}{|x_{i^{\prime}}-x_{i}|^{n-2\sigma+2}}(\lambda_{0}^{i}\lambda_{0}^{i^{\prime}})^{\gamma_{\sigma}}+A_{1}\frac{\min\{\lambda_{0}^{i}/\lambda_{1}^{i},\lambda_{1}^{i}/\lambda_{0}^{i}\}^{\gamma_{\sigma}}}{|\max\{\lambda_{j^{\prime}}^{i},\lambda_{j_{*}}^{i}\}|^{2}}te_{\ell}\right]+\mathcal{O}(\lambda_{0}^{i}e^{-\gamma_{\sigma}L(1+\xi)});\]
* _If_ \(i=i_{*}\), \(j_{*}=j\geqslant 1\), _and_
* \(\ell=0\)_, then one has_
* \(\ell\in\{1,\dots,n\}\)_, then one has_ \[\partial_{t}\big{|}_{t=0}\int_{\mathbb{R}^{n}}\mathscr{N}_{\sigma}(\bar{u}_{(\boldsymbol{a}_{j}(t),\boldsymbol{\lambda}_{j}(t))}^{*})Z_{j,\ell}^{i}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\mathrm{d}x=c_{n,\sigma}\lambda_{j}^{i}\partial_{t}\left[A_{1}\frac{\min\{\lambda_{0}^{i}/\lambda_{1}^{i},\lambda_{1}^{i}/\lambda_{0}^{i}\}^{\gamma_{\sigma}}}{|\max\{\lambda_{j^{\prime}}^{i},\lambda_{j_{*}}^{i}\}|^{2}}te_{\ell}\right]+\mathcal{O}(\lambda_{j}^{i}e^{-\gamma_{\sigma}L(1+\xi)}e^{-\tau t_{j_{*}}});\]
_for some_ \(\nu>0\) _independent of_ \(0<\tau\ll 1\) _small,_ \(L\gg 1\) _large, and_ \(\xi>0\)_._
Proof.: The proof is the same as in [8, Lemma 4.3]; thus, we omit the details.
Next, we study the case of a general sequence of perturbations. In this proof, it will be fundamental to use the fact that our sequence of parameters is admissible.
**Lemma 6.3**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\) and \((\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})\in\operatorname{Adm}_{\sigma}(\Sigma)\) be an admissible configuration as in Definition 5.5. There exist constants \(A_{1},A_{2}>0,A_{3}<0\) independent of \(L\gg 1\) given by (A.1), (A.2) and (A.3) such that the following estimates hold_
* _If_ \(j=0\) _and_
* \(\ell=0\)_, then one has_ \[\beta_{0,0}^{i}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j}) =-c_{n,\sigma}q_{i}\left[A_{2}\sum_{i^{\prime}\neq i}|x_{i^{\prime }}-x_{i}|^{-(n-2\sigma)}(R_{0}^{i}R_{0}^{i^{\prime}})^{\gamma_{\sigma}}q_{i^{ \prime}}-\left(\frac{R_{1}^{i}}{R_{0}^{i}}\right)^{\gamma_{\sigma}}q_{i} \right]e^{-\gamma_{\sigma}L}(1+\mathrm{o}(1))\] \[+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)}).\]
* \(\ell\in\{1,\ldots,n\}\)_, then one has_ \[\beta_{0,\ell}^{i}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j}) =c_{n,\sigma}\lambda_{0}^{i}\left[A_{3}\sum_{i^{\prime}\neq i} \frac{(x_{i^{\prime}}-x_{i})_{\ell}}{|x_{i^{\prime}}-x_{i}|^{n-2\sigma+2}}(R_{ 0}^{i}R_{0}^{i^{\prime}})^{\gamma_{\sigma}}q_{i^{\prime}}+A_{0}\left(\frac{R_ {1}^{i}}{R_{0}^{i}}\right)^{\gamma_{\sigma}}\frac{a_{0}^{i}-a_{1}^{i}}{\left( \lambda_{0}^{i}\right)^{2}}q_{i}\right]q_{i}e^{-\gamma_{\sigma}L}\] \[+\mathcal{O}(\lambda_{0}^{i}e^{-\gamma_{\sigma}L(1+\xi)})\text{ for }\ell\in\{1,\ldots,n\}.\]
* _If_ \(j\geqslant 1\) _and_
* \(\ell=0\)_, then one has_
* \[\beta_{j,0}^{i}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j}) =\mathcal{O}\left(e^{-\gamma_{\sigma}L(1+\xi)}e^{-\nu t_{j}^{i}}+e^{- \gamma_{\sigma}L(1+\xi)}e^{-\tau t_{j-1}^{i}}\right).\]
* \(\ell\in\{1,\ldots,n\}\)_, then one has_ \[\beta_{j,\ell}^{i}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j}) =\mathcal{O}\left(\lambda_{j}^{i}e^{-\gamma_{\sigma}L(1+\xi)}e^{-\nu t_{j}^{ i}}+\lambda_{j}^{i}e^{-\gamma_{\sigma}L}e^{-\tau t_{j-1}^{i}}\right)\text{ for }\ell\in\{1,\ldots,n\},\] _where_ \(\nu=\min\left\{\zeta_{1}+\gamma_{\sigma},\frac{\gamma_{\sigma}}{2}\right\}\) _independent of_ \(L\gg 1\) _large and_ \(\xi>0\)_._
Proof.: For the same reason as in Lemma 6.1, the proof is the same as in [8, Lemma 4.4], and we omit the details.
### Derivative of the projection on the normalized approximate kernels
Here we estimate the variations of the projection functions in (6.2) with respect to the perturbation parameters. As before, for any fixed \(i_{*}\in\{1,\ldots,N\}\) with \(x_{i_{*}}\in\Sigma\) and \(L_{i_{*}}\in\mathbb{R}_{+}\), we denote by \(u_{(\boldsymbol{x},\boldsymbol{L},\boldsymbol{0},\boldsymbol{1})}^{*}\in \mathcal{C}^{2\sigma+\alpha}(\mathbb{R}^{n}\setminus\Sigma)\) an approximate solution to \((\mathcal{Q}_{2\sigma,\Sigma})\). Using Proposition 5.21, we know that, by performing the Lyapunov-Schmidt reduction method, there exists an error function
\[\phi_{(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})}^{*}:=\phi_{(x_{i_{*}},L_ {i_{*}}\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})}\in\mathcal{C}_{*}( \mathbb{R}^{n}\setminus\Sigma). \tag{6.5}\]
In this direction, it also makes sense to define
\[\mathscr{N}_{\sigma}^{*}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})(\phi): =\mathscr{N}_{\sigma}(u_{(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})}^{*}+ \phi)=(u_{(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})}^{*}+\phi)-(-\Delta)^{- \sigma}[f_{\sigma}\circ(u_{(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})}^{*} +\phi)]. \tag{6.6}\]
Furthermore, let us introduce the linearized operator applied to this approximate solution \(\mathscr{L}_{\sigma}^{*}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j}): \mathcal{C}^{\alpha}(\mathbb{R}^{n}\setminus\Sigma)\to\mathcal{C}^{2\sigma+ \alpha}(\mathbb{R}^{n}\setminus\Sigma)\) given by
\[\mathscr{L}_{\sigma}^{*}(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})(\phi)= \phi-(-\Delta)^{-\sigma}(f_{\sigma}^{\prime}\circ\bar{u}_{(\boldsymbol{a}_{j},\boldsymbol{\lambda}_{j})}^{*})\phi.\]
From now on, it will be convenient to denote the new coordinate system as
\[\xi_{j,0}^{i}=r_{j}^{i}\quad\text{and}\quad\xi_{j,\ell}^{i}=a_{ j,\ell}^{i}\quad\text{for}\quad\ell\in\{1,\ldots,n\}. \tag{6.7}\]
We study the variations with respect to (6.7).
We now need to study the derivative of \(c^{i_{*}}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\) with respect to the variations of the parameters in (6.7). Initially, we consider the most straightforward model case when there is only one point singularity at \(\Sigma=\{0\}\) and \(\bar{u}_{(\mathbf{x},\mathbf{L})}=\bar{u}_{(0,L_{j})}\in\mathcal{C}^{2\sigma}(\mathbb{R }^{n}\setminus\{0\})\) is the associated approximate solution, _i.e._, the Delaunay solution from (5.8). We recall that this solution satisfies \((\mathcal{Q}^{\prime}_{2\sigma,\mathbf{a},\mathbf{\lambda}})\) with \(\phi_{(0,L_{j})}\equiv 0\) and vanishing right-hand side. We define
\[\beta^{i_{*}}_{j,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j}):=\int_{\mathbb{R}^{n}} \mathscr{N}^{*}_{\sigma}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\overline{Z}^{*}_{j,\ell }(\mathbf{a}_{j},\mathbf{\lambda}_{j})\mathrm{d}x. \tag{6.8}\]
In this setting, one can still perform the reduction in Proposition 5.21 to find a perturbed solution in the form \(u=\bar{u}_{(0,L_{j})}+\phi\) of the following equation
\[\left\{\begin{array}{l}\mathcal{N}^{*}_{\sigma}(\mathbf{a}_{j},\mathbf{ \lambda}_{j})(\phi)=\sum_{j\in\mathbb{N}}\sum_{\ell=0}^{n}c^{i_{*}}_{j,\ell}( \mathbf{a}_{j},\mathbf{\lambda}_{j})\overline{Z}^{i_{*}}_{j,\ell}(\mathbf{a}_{j},\mathbf{ \lambda}_{j})\quad\text{in}\quad\mathbb{R}^{n}\setminus\Sigma,\\ \int_{\mathbb{R}^{n}}\phi\overline{Z}^{i_{*}}_{j,\ell}(\mathbf{a}_{j},\mathbf{ \lambda}_{j})\mathrm{d}x=0\quad\text{for}\quad(\ell,j)\in\{0,\dots,n\}\times \mathbb{N}.\end{array}\right. \tag{6.9}\]
In conclusion, for any fixed \(i_{*}\in\{1,\dots,N\}\), let us denote by \(\bar{u}^{*}_{(\mathbf{a}_{j},\mathbf{\lambda}_{j})},\phi^{*}_{(\mathbf{a}_{j},\mathbf{\lambda} _{j})}\) the pair satisfying the infinite-dimensional reduced equation (6.9). Notice that the reason to start with the trivial configuration \(u_{(0,L_{j})}\in\mathcal{C}^{2\sigma}(\mathbb{R}^{n}\setminus\{0\})\) is that we will have the identification \(\partial_{\xi^{i}_{j,\ell}}\beta^{i}_{j,\ell}=\lim_{j\to+\infty}\partial_{\xi _{j,\ell}}\beta^{i_{*}}_{j,\ell}\), where we set \(\xi_{j,\ell}:=\xi^{i_{*}}_{j,\ell}\).
Let us begin with the lemma below.
**Lemma 6.4**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Assume that \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\) is an admissible configuration as in Definition 5.5 with \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\mathrm{Apx}_{\sigma}(\Sigma)\) their associated approximate solution as in Definition 5.12. Then, for \(L\gg 1\) is sufficiently large, one has_
\[|\partial_{\xi^{i}_{j,\ell}}\phi_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j}) }(x)|\lesssim\begin{cases}e^{-\gamma_{\sigma}L(1+\xi)}|x-x_{i}|^{-\gamma_{ \sigma}}e^{-\nu|t^{i}-t^{i}_{j}|},&\text{if}\ \ell=0,\\ (\lambda^{i}_{j})^{-1}e^{-\gamma_{\sigma}L(1+\xi)}|x-x_{i}|^{-\gamma_{\sigma}} e^{-\sigma|t^{i}-t^{i}_{j}|},&\text{if}\ \ell\in\{1\dots,n\},\end{cases}\quad\text{in}\quad B_{1}(x_{i}),\]
_where \(\nu=\min\big{\{}\zeta_{1}+\gamma_{\sigma},\frac{\gamma_{\sigma}}{2}\big{\}}\) independent of \(L\gg 1\) large and \(\xi>0\)._
Proof.: The proof is the same as in [8, Lemma 5.1]; thus, we omit the details.
We remark that a similar estimate also holds for the pair \(\bar{u}^{*}_{(\mathbf{a}_{j},\mathbf{\lambda}_{j})},\phi^{*}_{(\mathbf{a}_{j},\mathbf{\lambda} _{j})}\). The next definition introduces the suitable weighted Hölder spaces for this setting.
**Definition 6.5**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). For any \(\alpha\in(0,1)\), let us introduce two new weighted norms_
\[\|\phi\|_{\mathcal{C}_{*,\nu}(\mathbb{R}^{n}\setminus\Sigma)} =\||x-x_{i}|^{\gamma_{\sigma}}e^{\nu|t^{i}-t^{i}_{j}|}\phi\|_{ \mathcal{C}^{2\sigma+\alpha}(B_{1}(x_{i}))}+\sum_{i^{\prime}\neq i}\||x-x_{i^{ \prime}}|^{\gamma_{\sigma}}\phi\|_{\mathcal{C}^{2\sigma+\alpha}(B_{1}(x_{i^{ \prime}}))}\] \[+\||x|^{n-2\sigma}\phi\|_{\mathcal{C}^{2\sigma+\alpha}(\mathbb{R }^{n}\setminus\cup_{i^{\prime}}B_{1}(x_{i^{\prime}}))}\]
_and_
\[\|\phi\|_{\mathcal{C}_{**,\nu}(\mathbb{R}^{n}\setminus\Sigma)} =\||x-x_{i}|^{\gamma^{\prime}_{\sigma}}e^{\nu|t^{i}-t^{i}_{j}|} \phi\|_{\mathcal{C}^{2\sigma+\alpha}(B_{1}(x_{i}))}+\sum_{i^{\prime}\neq i}\||x- x_{i^{\prime}}|^{\gamma^{\prime}_{\sigma}}\phi\|_{\mathcal{C}^{2\sigma+\alpha}(B_{1}(x_{i^{ \prime}}))}\] \[+\||x|^{n+2\sigma}\phi\|_{\mathcal{C}^{2\sigma+\alpha}(\mathbb{R }^{n}\setminus\cup_{i^{\prime}}B_{1}(x_{i^{\prime}}))}\]
_where we recall that \(t^{i}=-\ln|x-x_{i}|\) and \(0<\nu\ll 1\) is a small positive constant to be determined later. We also denote by \(\mathcal{C}_{*,\nu}(\mathbb{R}^{n}\setminus\Sigma)\) and \(\mathcal{C}_{**,\nu}(\mathbb{R}^{n}\setminus\Sigma)\) the corresponding weighted Hölder spaces._
In the light of Lemma 6.2, one can prove the estimate below
**Lemma 6.6**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Assume that \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\operatorname{Adm}_{\sigma}(\Sigma)\) is an admissible configuration as in Definition 5.5 and let \(\Psi:\mathbb{R}\to\mathbb{R}\) be defined as in (A.4). Then, there exist constants \(A_{1}<0,A_{2}>0,A_{3}<0\) independent of \(L\gg 1\) given by (A.1), (A.2), and (A.3) and \(\xi>0\) such that the following estimates hold:_
1. _If_ \(\ell=0\)_, then one has_ \[\partial_{r_{j}}\beta_{j,0}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\big{|}_{ (\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{0},\mathbf{1})} =-2c_{n,\sigma}\Psi^{\prime}(L)+\mathcal{O}(e^{-\gamma_{\sigma}L( 1+\xi)}),\] \[\partial_{r_{j}}\beta_{j-1,0}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\big{|} _{(\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{0},\mathbf{1})} =c_{n,\sigma}\Psi^{\prime}(L)+\mathcal{O}(e^{-\gamma_{\sigma}L( 1+\xi)}),\] \[\partial_{r_{j}}\beta_{j+1,0}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\big{|} _{(\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{0},\mathbf{1})} =c_{n,\sigma}\Psi^{\prime}(L)+\mathcal{O}(e^{-\gamma_{\sigma}L( 1+\xi)}),\] \[\partial_{r_{j}}\beta_{j_{*},0}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\big{|} _{(\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{0},\mathbf{1})} =\mathcal{O}(e^{-\gamma_{\sigma}(1+\xi)}e^{-\sigma|t_{j_{*}}-t_{j }|})\quad\text{for}\quad|j_{*}-j|\geqslant 2.\]
2. _If_ \(\ell\in\{1,\ldots,n\}\)_, then one has_ \[\partial_{a_{j,\ell}}\beta_{j,\ell}^{i_{*}}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\Big{|} _{(\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{0},\mathbf{1})}=c_{n,\sigma}\lambda_ {j}\sum_{j^{\prime}\neq j}\frac{\min\{\lambda_{j^{\prime}}/\lambda_{j},\lambda _{j}/\lambda_{j^{\prime}}\}^{\gamma_{\sigma}}}{\max\{\lambda_{j^{\prime}}^{2}, \lambda_{j}^{2}\}}+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)})\quad\text{if} \quad j\neq j_{*}\] _and_ \[\partial_{a_{j,\ell}}\beta_{j,\ell}^{i_{*}}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\Big{|} _{(\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{0},\mathbf{1})}=c_{n,\sigma}\lambda_ {j}\frac{\min\{\lambda_{j_{*}}/\lambda_{j},\lambda_{j}/\lambda_{j_{*}}\}^{ \gamma_{\sigma}}}{\max\{\lambda_{j_{*}}^{2},\lambda_{j}^{2}\}}+\mathcal{O}(e^{ -\gamma_{\sigma}L(1+\xi)}).\] _In addition, it follows_ \[\partial_{\xi_{j,\ell^{\prime}}}\beta_{j_{*},\ell}^{i_{*}}(\mathbf{a}_{j},\mathbf{ \lambda}_{j})\Big{|}_{(\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{0},\mathbf{1})} =0\quad\text{if}\quad\ell\neq\ell^{\prime}.\]
Proof.: The proof is the same as in [8, Lemma 5.2]; thus, we omit the details.
The strategy to prove the desired estimates for the case of a general admissible perturbation sequence \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\operatorname{Adm}_{\sigma}(\Sigma)\) is first to study the trivial case \((\mathbf{a}_{j},\mathbf{\lambda}_{j})=(\mathbf{0},\mathbf{1})\) and then to perform a by-now standard perturbation-of-parameters argument.
For simplicity, we only state the latter case:
**Lemma 6.7**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\) and \(N\geqslant 2\). Assume that \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\operatorname{Adm}_{\sigma}(\Sigma)\) is an admissible configuration as in Definition 5.5 with \(\bar{u}_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{\lambda}_{j})}\in\operatorname{Apx}_{ \sigma}(\Sigma)\) their associated approximate solution as in Definition 5.12. Then, for any \(i_{*}\in\{1,\ldots,N\}\) fixed and \(j\in\mathbb{N}\), we have the following estimate_
\[\Big{\|}\partial_{\xi_{j,\ell}^{i}}\left(\phi_{(\mathbf{x},\mathbf{L},\mathbf{a}_{j},\mathbf{ \lambda}_{j})}-\phi_{(\mathbf{a}_{j},\mathbf{\lambda}_{j})}^{*}\right)\Big{\|}_{ \mathcal{C}_{*,\nu}(\mathbb{R}^{n}\setminus\Sigma)}\lesssim\begin{cases}e^{- \gamma_{\sigma}L(1+\xi)}e^{-\nu t_{j}^{i}}\quad\text{for}\quad\ell=0\\ (\lambda_{j}^{i})^{-1}e^{-\gamma_{\sigma}L(1+\xi)}e^{-\nu t_{j}^{i}}\quad \text{for}\quad\ell\in\{1,\ldots,n\}.\end{cases}\]
_In particular, it follows_
\[\Big{|}\partial_{\xi_{j,\ell}^{i}}\left(\beta_{j^{\prime},\ell^{\prime}}^{i}( \mathbf{a}_{j},\mathbf{\lambda}_{j})-\beta_{j^{\prime},\ell^{\prime}}(\mathbf{a}_{j},\mathbf{ \lambda}_{j})\right)\Big{|}\lesssim\begin{cases}e^{-\gamma_{\sigma}L(1+\xi)}e^{ -\nu t_{j}^{i}}e^{-\nu|t_{j}^{i}-t_{j^{\prime}}^{i}|}\quad\text{for}\quad\ell=0 \\ (\lambda_{j}^{i})^{-1}e^{-\gamma_{\sigma}(1+\xi)}e^{-\nu t_{j}^{i}}e^{-\nu|t_{j} ^{i}-t_{j^{\prime}}^{i}|}\quad\text{for}\quad\ell\in\{1,\ldots,n\},\end{cases}\]
_where \(0<\tau\ll\nu\) small enough with \(\nu=\min\left\{\zeta_{1}+\gamma_{\sigma},\frac{\gamma_{\sigma}}{2}\right\}\) independent of \(L\gg 1\) large and \(\xi>0\)._
Proof.: The proof is the same as in [8, Lemmas 5.5 and 5.6]; thus, we omit the details.
## 7. Gluing technique
In this section, we prove our main results. We keep the notation and assumptions in the previous sections. The proof here is similar in spirit to the one in [8, Theorem 1]. Nevertheless, we include it here for the sake of completeness.
### Infinite-dimensional Toda-system
We apply a fixed-point strategy in a weighted space of sequences. Before we start, we define some notation.
**Definition 7.1**.: _For any \(\tau>0\), let us introduce the following weighted norm_
\[|(\mathbf{a}_{j},\mathbf{\lambda}_{j})|_{\infty,\tau}=\sup_{j\in\mathbb{N}}e^{(2j+1) \tau}|(\mathbf{a}_{j},\mathbf{\lambda}_{j})|_{\infty}.\]
_We also consider the associated Banach space given by_
\[\ell_{\tau}^{\infty}(\mathbb{R}^{(n+1)N})=\left\{(\mathbf{a}_{j},\mathbf{\lambda}_{j} )\in\ell^{\infty}(\mathbb{R}^{(n+1)N}):|(\mathbf{a}_{j},\mathbf{\lambda}_{j})|_{ \infty,\tau}<+\infty\right\}.\]
_For any \((\mathbf{\bar{a}}_{j},\mathbf{\bar{r}}_{j})\in\ell_{\tau}^{\infty}(\mathbb{R}^{(n+2)N})\), we define the interaction operator_
\[\mathscr{T}_{(\mathbf{\bar{a}}_{j},\mathbf{\bar{r}}_{j})}:\ell_{\tau}^{\infty}( \mathbb{R}^{(n+1)N})\to\ell_{\tau}^{\infty}(\mathbb{R}^{(n+1)N})\]
_given by \(\mathscr{T}_{(\mathbf{\bar{a}}_{j},\mathbf{\bar{r}}_{j})}=(\mathscr{T}_{(\mathbf{\bar{a}}_ {j})},\mathscr{T}_{(\mathbf{\bar{r}}_{j})})\). Here_
\[\mathscr{T}_{(\mathbf{\bar{a}}_{j})}(\mathbf{\bar{a}}_{j})=\mathscr{T}_{(\mathbf{\bar{a}} _{j})}\mathbf{\bar{a}}_{j}^{\rm t}\quad\text{and}\quad\mathscr{T}_{(\mathbf{\bar{r}}_ {j})}(\mathbf{\bar{r}}_{j})=\mathscr{T}_{(\mathbf{\bar{r}})_{j}}\mathbf{\bar{r}}_{j}^{\rm t},\]
_where \(\mathscr{T}_{(\mathbf{\bar{a}}_{j})}=(\mathscr{T}_{(\mathbf{\bar{a}}_{j})}^{1},\dots, \mathscr{T}_{(\mathbf{\bar{a}}_{j})}^{N})\) and \(\mathscr{T}_{(\mathbf{\bar{r}})_{j}}=(\mathscr{T}_{(\mathbf{\bar{r}})_{j}}^{1},\dots, \mathscr{T}_{(\mathbf{\bar{r}})_{j}}^{N})\) with_
\[\mathscr{T}_{(\mathbf{\bar{a}}_{j})}^{i}=\left(\begin{array}{cccccc}-1&1+e^{-2L _{i}}&-e^{-2L_{i}}&0&\cdots&\cdots&0\\ 0&-1&1+e^{-2L_{i}}&-e^{-2L_{i}}&0&\ddots&0\\ 0&0&-1&1+e^{-2L_{i}}&-e^{-2L_{i}}&0&\vdots\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots\end{array}\right) \tag{7.1}\]
_and_
\[\mathscr{T}_{(\mathbf{\bar{r}}_{j})}^{i}=\left(\begin{array}{cccccc}-1&2&-1&0& \cdots&\cdots&0\\ 0&-1&2&-1&0&\ddots&0\\ 0&0&-1&2&-1&0&\vdots\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots\end{array}\right) \tag{7.2}\]
_being infinite-dimensional matrix for all \(i\in\{1,\dots,N\}\)._
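Concretely, reading off the rows of (7.1) and (7.2), these operators act on a sequence componentwise as
\[\big(\mathscr{T}_{(\mathbf{\bar{a}}_{j})}^{i}\mathbf{a}\big)_{j}=-a_{j}+(1+e^{-2L_{i}})a_{j+1}-e^{-2L_{i}}a_{j+2}\quad\text{and}\quad\big(\mathscr{T}_{(\mathbf{\bar{r}}_{j})}^{i}\mathbf{r}\big)_{j}=-r_{j}+2r_{j+1}-r_{j+2},\]
so that, up to a sign and an index shift, \(\mathscr{T}_{(\mathbf{\bar{r}}_{j})}^{i}\) is a discrete second-difference (Toda-type) operator.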
It is straightforward to see that these infinite-dimensional matrices are not invertible on \(\ell^{\infty}(\mathbb{R}^{(n+1)N})\) since they have a non-trivial kernel. However, they are indeed invertible with respect to the suitably weighted norms defined above. In this direction, we have the following surjectivity result for the interaction operator.
**Lemma 7.2**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\), and \(N\geqslant 2\). For any \(\tau>0\), the interaction operator \(\mathscr{T}_{(\mathbf{\bar{a}}_{j},\mathbf{\bar{r}}_{j})}:\ell_{\tau}^{\infty}( \mathbb{R}^{(n+1)N})\to\ell_{\tau}^{\infty}(\mathbb{R}^{(n+1)N})\) has an inverse, denoted by \(\mathscr{T}_{(\mathbf{\bar{a}}_{j},\mathbf{\bar{r}}_{j})}^{-1}:\ell_{\tau}^{\infty}( \mathbb{R}^{(n+1)N})\to\ell_{\tau}^{\infty}(\mathbb{R}^{(n+1)N})\). Moreover, one has_
\[\sup_{|(\mathbf{a}_{j},\mathbf{r}_{j})|_{\infty,\tau}=1}\|\mathscr{T}_{(\mathbf{\bar{a}}_{j },\mathbf{\bar{r}}_{j})}^{-1}(\mathbf{a}_{j},\mathbf{r}_{j})\|\lesssim e^{-2\tau}. \tag{7.3}\]
Proof.: The proof is given by directly constructing the inverse operator. First, we observe that \(\mathscr{T}_{(\mathbf{\bar{r}})_{j}}^{-1}:\ell_{\tau}^{\infty}(\mathbb{R}^{N})\to \ell_{\tau}^{\infty}(\mathbb{R}^{N})\) can be found in [44, Lemma 7.3].
We are left to provide the inverse for \(\mathscr{T}_{(\mathbf{\tilde{a}}_{j})}:\ell_{\tau}^{\infty}(\mathbb{R}^{nN})\to\ell_{ \tau}^{\infty}(\mathbb{R}^{nN})\). Indeed, for any \(\mathbf{b}\in\ell_{\tau}^{\infty}(\mathbb{R}^{nN})\), we have to solve \(\mathscr{T}_{(\mathbf{\tilde{a}}_{j})}(\mathbf{\tilde{a}}_{j})=\mathbf{b}_{j}\). This is accomplished by defining \(\mathscr{T}_{(\mathbf{\tilde{a}})_{j}}^{-1}:\ell_{\tau}^{\infty}(\mathbb{R}^{nN}) \to\ell_{\tau}^{\infty}(\mathbb{R}^{nN})\) as
\[\mathbf{\tilde{a}}_{j}=\sum_{k=j+1}^{\infty}\left(\sum_{s=0}^{k-j-1}e^{-2L_{i}s} \right)\mathbf{b}_{k}=:\mathscr{T}_{(\mathbf{\tilde{a}})_{j}}^{-1}(\mathbf{b}_{j}).\]
Whence, by performing the same routine computations, one can quickly check that \(\mathbf{\tilde{a}}_{j}\in\ell_{\tau}^{\infty}(\mathbb{R}^{nN})\) satisfies the required conditions and that the operator \(\mathscr{T}_{(\mathbf{\tilde{a}})_{j}}^{-1}\) is a complete inverse of \(\mathscr{T}_{(\mathbf{\tilde{a}})_{j}}\).
In addition, one has
\[|\mathbf{\tilde{a}}_{j}|_{\infty}\lesssim|\mathbf{\tilde{b}}_{j}|_{\infty,\tau}\sum_{ k=j+1}^{\infty}\left(\sum_{s=0}^{k-j-1}e^{-2L_{i}s}\right)e^{-(2k+1)\tau}\lesssim e ^{-(2j+3)\tau}|\mathbf{\tilde{b}}_{j}|_{\infty,\tau},\]
which implies the estimate (7.3). The lemma is proved.
**Lemma 7.3**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\), and \(N\geqslant 2\). Assume that \((\mathbf{R},\mathbf{\tilde{a}}_{0},\mathbf{q})\in\mathrm{Bal}_{\sigma}(\Sigma)\) is a balanced configuration. Then, for \(L\gg 1\) sufficiently large, there exists \(0<\tau<\min\{\xi,\nu\}\) and an admissible perturbation sequence \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\subset\ell_{ \tau}^{\infty}(\mathbb{R}^{(n+1)N})\) such that \(\beta_{j,\ell}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=0\) for \((i,j,\ell)\in\mathcal{I}_{\infty}\), that is, \((\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\mathrm{Adm}_{\sigma}(\Sigma)\) solves the infinite-dimensional system \((\mathcal{S}_{2\sigma,\Sigma})\)._
Proof.: Indeed, for any \(i\in\{1,\dots,N\}\), let us define the operator \(\mathscr{G}^{i}:\ell_{\tau}^{\infty}(\mathbb{R}^{(n+1)N})\to\ell_{\tau}^{ \infty}(\mathbb{R}^{(n+1)N})\) given by \(\mathscr{G}^{i}=(\mathscr{G}^{i}_{0},\dots,\mathscr{G}^{i}_{n})\), where \(\mathscr{G}^{i}_{\ell}:\ell_{\tau}^{\infty}(\mathbb{R}^{N})\to\ell_{\tau}^{ \infty}(\mathbb{R}^{N})\) for each \(\ell\in\{0,\dots,n\}\). More precisely, we have
\[\mathscr{G}^{i}_{0}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=\frac{1}{F(L_{i})}[\beta_{j, 0}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})-\beta_{j,0}^{i}(\mathbf{0},\mathbf{1})]\mathrm{e} _{i}^{\mathrm{t}}-\mathscr{T}_{(\mathbf{\tilde{r}}_{j})}(\mathbf{\tilde{r}}_{j})\]
and
\[\mathscr{G}^{i}_{\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=\frac{e^{\gamma_{\sigma}L _{i}}}{\lambda_{j}^{i}}[\beta_{j,\ell}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})-\beta_{j,\ell}^{i}(\mathbf{0},\mathbf{1})]\mathrm{e}_{i}^{\mathrm{t}}-\mathscr{T}_{(\mathbf{\tilde{a }}_{j})}(\mathbf{\tilde{a}}_{j})\quad\text{for}\quad\ell\in\{1,\dots,n\},\]
where \(\mathrm{e}_{i}\in\ell^{\infty}(\mathbb{R}^{(n+1)N})\) is the \(i\)-th vector in its standard Schauder basis, which we denote by \(\{\mathrm{e}_{i}\}_{i\in\mathbb{N}}\subset\ell^{\infty}(\mathbb{R}^{(n+1)N})\).
One can easily see that \(\beta_{j,\ell}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=0\) for \(j\geqslant 1\) if
\[(\tilde{a}_{j}^{i})^{\mathrm{t}}=-(\mathscr{T}_{(\mathbf{\tilde{a}}_{j})}^{i})^{- 1}\left(-\frac{e^{\gamma_{\sigma}L_{i}}}{\lambda_{j}^{i}}\beta_{j,\ell}^{i}(\mathbf{0},\mathbf{1})\mathrm{e}_{i}^{\mathrm{t}}+\mathscr{G}^{i}_{\ell}(\mathbf{a}_{j},\mathbf{ \lambda}_{j})\right) \tag{7.4}\]
and
\[(r_{j}^{i})^{\mathrm{t}}=-(\mathscr{T}_{(\mathbf{r}_{j})}^{i})^{-1}\left(-\frac{1 }{F(L_{i})}\beta_{j,0}^{i}(\mathbf{0},\mathbf{1})\mathrm{e}_{i}^{\mathrm{t}}+\mathscr{G }^{i}_{0}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\right). \tag{7.5}\]
Next, we show that the terms on the right-hand sides of (7.4) and (7.5) are contractions in an appropriate sense. First, by Lemma 6.1, one has
\[\left|\beta_{j,\ell}^{i}(\mathbf{0},\mathbf{1})\right|\lesssim\left\{\begin{array}{ ll}e^{-\gamma_{\sigma}L_{1}(1+\xi)}e^{-\nu t_{j}^{i}},&\text{if }\ell=0,\\ \lambda_{j}^{i}e^{-\gamma_{\sigma}L_{1}(1+\xi)}e^{-\nu t_{j}^{i}},&\text{if }\ell \geqslant 1,\end{array}\right.\quad\text{for }j\geqslant 1.\]
Also, denoting by \(\Pi_{j}\) the projection onto the \(j\)-th component, it follows that
\[(\widehat{\mathscr{G}}_{\ell,j}+\widetilde{\mathscr{G}}_{\ell,j})(\mathbf{a}_{j}, \mathbf{\lambda}_{j}):=\Pi_{j}\left(\frac{e^{\gamma_{\sigma}L_{i}}}{\lambda_{j}^{i} }[\beta_{j,0}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})-\beta_{j,0}^{i}(\mathbf{0},\mathbf{1}) ]\mathrm{e}_{i}^{\mathrm{t}}-\mathscr{T}_{(\mathbf{\tilde{a}}_{j})}(\mathbf{\tilde{a}}_{ j})\right), \tag{7.6}\]
where
\[\widehat{\mathscr{G}}_{\ell,j}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=\int_{0}^{1}\left[ \frac{e^{\gamma_{\sigma}L_{i}}}{\lambda_{j}^{i}}\partial_{t}\beta_{j,\ell}^{i}(t (\tilde{a}_{j}^{i},r_{j}^{i})^{\mathrm{t}})-\overline{\mathscr{A}}^{i}\right] \left((\tilde{a}_{j}^{i},r_{j}^{i})^{\mathrm{t}}\right)\mathrm{d}t\]
and
\[\widetilde{\mathscr{G}}_{\ell,j}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=\overline{ \mathscr{A}}^{i}-\mathscr{T}_{(\bar{\mathbf{a}}_{j})}^{i}(\tilde{a}_{j}^{i})\]
with
\[\overline{\mathscr{A}}_{j}^{i}\left((\tilde{a}_{j}^{i},r_{j}^{i})^{\mathrm{t} }\right)=\sum_{j^{\prime}\in\mathbb{N}}\frac{e^{\gamma_{\sigma}L}}{\lambda_{j }^{i}}\partial_{\tilde{a}_{j^{\prime}}^{i}}\beta_{j,\ell}^{i_{*}}(\mathbf{a}_{j}, \mathbf{\lambda}_{j})\cdot\left[\tilde{a}_{j^{\prime}}^{i}\right],\]
where \(\beta_{j,\ell}^{i_{*}}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\in\mathbb{R}\) is defined as (6.8) and \(\tilde{a}_{j}^{i}\in\mathbb{R}^{nN}\) corresponds to the translation perturbation of the \(j\)-th bubble in the Delaunay solution from Lemma 6.6. Furthermore, by definition, one has
\[\mathscr{T}_{(\bar{\mathbf{a}}_{j})}((\tilde{a}_{j}^{i})^{\mathrm{t}})=\mathscr{ T}_{(\bar{\mathbf{a}}_{j})}((\tilde{a}_{j}^{i})^{\mathrm{t}}).\]
Now we have to estimate the terms on the left-hand side of (7.6). Indeed, we begin by estimating the first term. As a consequence of Lemma 6.6 for \(\ell\in\{1,\dots,n\}\), one finds
\[\left|\widehat{\mathscr{G}}_{\ell,j}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\right| \lesssim\sum_{j^{\prime}\in\mathbb{N}}\frac{e^{\gamma_{\sigma}L}} {\lambda_{j}^{i}}|\partial_{\tilde{a}_{j^{\prime}}^{i}}(\beta_{j,\ell}^{i}- \beta_{j,\ell}^{i_{*}})(\mathbf{a}_{j},\mathbf{\lambda}_{j})|\left|\bar{a}_{j^{\prime }}^{i}\right|+\mathcal{O}\left(e^{-\gamma_{\sigma}L_{i}\xi}e^{-\min\{\nu,\tau \}t_{j}^{i}}\right)\] \[\lesssim e^{-\gamma_{\sigma}L\xi}\sum_{j^{\prime}\in\mathbb{N}}e^{- \nu t_{j^{\prime}}^{i}}e^{-\nu|t_{j}^{i}-t_{j^{\prime}}^{i}|}|\bar{a}_{j^{\prime}}^{i}|+\mathcal{O}\left(e^{-\frac{(n-2\sigma)L_{i }}{2}\xi}e^{-\min\{\nu,\tau\}t_{j}^{i}}\right)\] \[\lesssim e^{-\gamma_{\sigma}L\xi}e^{-\min\{\nu,\tau\}t_{j}^ {i}}.\]
In addition, we apply Lemma 6.6 to estimate the second term; this gives us
\[\left|\widetilde{\mathscr{G}}_{\ell,j}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\right| \lesssim e^{-\gamma_{\sigma}L_{i}\xi}\left[e^{-\nu L_{i}}\left(| \tilde{a}_{j-1}^{i}|+|\tilde{a}_{j+1}^{i}|\right)+\sum_{j^{\prime}\neq j\pm 1}e^{- \nu|t_{j^{\prime}}^{i}-t_{j}^{i}|}|\tilde{a}_{j^{\prime}}^{i}|\right]. \tag{7.7}\]
Therefore, by combining these two estimates, it follows that for \(0<\tau<\nu\ll 1\), one has
\[\left\|\mathscr{G}_{\ell}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\right\|_{\ell_{ \tau_{i}}^{\infty}(\mathbb{R}^{(n+1)N})}\lesssim e^{\tau L}e^{-\gamma_{\sigma} L_{i}\xi}\|(\tilde{a}_{j}^{i})^{\mathrm{t}}\|_{\ell_{\tau_{i}}^{\infty}( \mathbb{R}^{nN})}+\mathcal{O}(e^{-\gamma_{\sigma}L_{i}\xi})\quad\text{for}\quad \ell\in\{1,\dots,n\}\]
and
\[\left\|\mathscr{G}_{0}^{i}(\mathbf{a}_{j},\mathbf{\lambda}_{j})\right\|_{\ell_{\tau_{ i}}(\mathbb{R}^{(n+1)N})}\lesssim e^{\tau L}e^{-\gamma_{\sigma}L_{i}\xi}\|(r_{j}^{i})^{ \mathrm{t}}\|_{\ell_{\tau_{i}}(\mathbb{R}^{N})}+\mathcal{O}(e^{-\gamma_{\sigma} L_{i}\xi}),\]
where \(\tau_{i}=\frac{\tau L_{i}}{2}\). Next, up to some error, Eqs. (7.4) and (7.5) can be reformulated as
\[(\tilde{a}_{j}^{i})^{\mathrm{t}}=(\mathscr{T}_{(\bar{\mathbf{a}}_{j})}^{i})^{-1} \left[e^{\tau L}e^{-\gamma_{\sigma}L_{i}\xi}\|(\tilde{a}_{j}^{i})^{\mathrm{t}} \|_{\ell_{\tau_{i}}^{\infty}(\mathbb{R}^{nN})}+\mathcal{O}(e^{-\gamma_{\sigma} L_{i}\xi})\right]=:\mathscr{G}_{(\mathbf{a}_{j})}((\tilde{a}_{j}^{i})^{\mathrm{t}})\]
and
\[(r_{j}^{i})^{\mathrm{t}}=(\mathscr{T}_{(\mathbf{r})_{j}}^{i})^{-1}\left[e^{\tau L}e^ {-\gamma_{\sigma}L_{i}\xi}\|(r_{j}^{i})^{\mathrm{t}}\|_{\ell_{\tau_{i}}^{\infty}( \mathbb{R}^{N})}+\mathcal{O}(e^{-\gamma_{\sigma}L_{i}\xi})\right]=:\mathscr{G}_{ (\mathbf{r}_{j})}((r_{j}^{i})^{\mathrm{t}}),\]
where the right-hand sides of the above equations are estimated in \(\ell_{\tau_{i}}^{\infty}(\mathbb{R}^{(n+1)N})\) norm.
At last, for \(0<\tau<\xi\ll 1\), let us consider the set
\[\mathscr{B}_{L}^{*}:=\left\{(\tilde{a}_{j}^{i},r_{j}^{i})^{\mathrm{t}}\in\ell_{ \tau_{i}}^{\infty}(\mathbb{R}^{(n+1)N}):\left\|(\tilde{a}_{j}^{i},r_{j}^{i}) \right\|_{\infty,\frac{\tau L_{i}}{2}}\lesssim e^{-\tau L}\right\}.\]
Notice that \(\mathscr{G}_{(\bar{\mathbf{a}}_{j},\mathbf{r}_{j})}:\mathscr{B}_{L}^{*}\to\ell_{\tau_{i}}^{ \infty}(\mathbb{R}^{(n+1)N})\) given by \(\mathscr{G}_{(\bar{\mathbf{a}}_{j},\mathbf{r}_{j})}=\left(\mathscr{G}_{(\bar{\mathbf{a}}_{j} )},\mathscr{G}_{(\mathbf{r}_{j})}\right)\) maps \(\mathscr{B}_{L}^{*}\) into itself, and it is a contraction. Therefore, one can invoke Banach's contraction principle to find a fixed point in the set \(\mathscr{B}_{L}^{*}\), which solves \((\mathscr{S}_{2\sigma,\Sigma})\). The proof is then finished.
Next, we have an invertibility lemma based on the balancing conditions from Definition 5.5.
**Lemma 7.4**.: _Let \(\sigma\in(1,+\infty)\), \(n>2\sigma\), and \(N\geqslant 2\). Assume that \((\mathbf{q}^{b},\mathbf{a}^{b}_{0},\mathbf{R}^{b})\in\operatorname{Bal}_{\sigma}(\Sigma)\) is a balanced configuration. Let us consider the operator \(\mathcal{F}:\mathbb{R}^{2N}\to\mathbb{R}^{N}\) given by_
\[\mathcal{F}(\mathbf{q},\mathbf{R})=A_{2}\sum_{i^{\prime}\neq i}|x_{i^{\prime}}-x_{i}|^{ -(n-2\sigma)}(R^{i}R^{i^{\prime}})^{\gamma_{\sigma}}q_{i^{\prime}}-q_{i}.\]
_Then, the linearized operator around \((\mathbf{q}^{b},\mathbf{R}^{b})\), denoted by \(\mathrm{d}\mathcal{F}_{(\mathbf{q}^{b},\mathbf{R}^{b})}:\mathbb{R}^{2N}\to\mathbb{R}^ {N}\), is invertible._
Proof.: Notice that the linearized operator \(\mathrm{d}\mathcal{F}(\mathbf{q},\mathbf{R})|_{(\mathbf{q}^{b},\mathbf{R}^{b})}:\mathbb{R}^{2N} \to\mathbb{R}^{N}\) has the following expression
\[\mathrm{d}\mathcal{F}_{(\mathbf{q},\mathbf{R})}=(\mathbf{q}_{i^{\prime}},\mathbf{R}_{i^{ \prime}})=:(\mathrm{d}\hat{\mathcal{F}}_{\mathbf{q}},\mathrm{d}\hat{\mathcal{F}} _{\mathbf{R}}),\]
where \(\mathbf{q}_{i^{\prime}}\in\mathbb{R}^{N}_{+}\) and \(\mathbf{R}_{i^{\prime}}\in\mathbb{R}^{N}_{+}\) are defined, respectively, as
\[\mathbf{q}_{i^{\prime}}=(q_{ii^{\prime}})\quad\text{and}\quad\mathbf{R}_{i^{\prime}}= (R_{ii^{\prime}})\]
with
\[q_{ii^{\prime}}=\begin{cases}-1,&\text{if}\quad i=i^{\prime}\\ A_{2}|x_{i}-x^{i^{\prime}}|^{-(n-2\sigma)}(R^{i,b}R^{i^{\prime},b})^{\gamma_{ \sigma}},&\text{if}\quad i\neq i^{\prime}\end{cases}\]
and
\[R_{ii^{\prime}}=\begin{cases}\gamma_{\sigma}(R^{i,b})^{-1}\sum_{i^{\prime}\neq i }A_{2}|x_{i^{\prime}}-x_{i}|^{-(n-2\sigma)}(R^{i,b}R^{i^{\prime},b})^{\gamma_ {\sigma}}q^{b}_{i},&\text{if}\quad i=i^{\prime}\\ \gamma_{\sigma}(R^{i,b})^{-1}A_{2}|x_{i^{\prime}}-x_{i}|^{-(n-2\sigma)}(R^{i,b }R^{i^{\prime},b})^{\gamma_{\sigma}}q^{b}_{i^{\prime}}&\text{if}\quad i\neq i ^{\prime}.\end{cases}\]
Next, from the balancing condition \((\mathscr{B}_{1})\), it follows \(\mathcal{F}(\mathbf{q}^{b},\mathbf{R}^{b})=0\). Also, one can see that \(\mathrm{d}\hat{\mathcal{F}}_{\mathbf{q}}\) is symmetric and has only a one-dimensional kernel. More precisely, we have \(\operatorname{Ker}(\mathrm{d}\hat{\mathcal{F}}_{\mathbf{q}})=\operatorname{span} \{\mathbf{q}^{b}\}\).
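For instance, the inclusion \(\mathbf{q}^{b}\in\operatorname{Ker}(\mathrm{d}\hat{\mathcal{F}}_{\mathbf{q}})\) can be verified directly from the entries \(q_{ii^{\prime}}\) above:
\[\big(\mathrm{d}\hat{\mathcal{F}}_{\mathbf{q}}\,\mathbf{q}^{b}\big)_{i}=-q^{b}_{i}+A_{2}\sum_{i^{\prime}\neq i}|x_{i^{\prime}}-x_{i}|^{-(n-2\sigma)}(R^{i,b}R^{i^{\prime},b})^{\gamma_{\sigma}}q^{b}_{i^{\prime}}=0,\]
which is precisely the \(i\)-th component of the identity \(\mathcal{F}(\mathbf{q}^{b},\mathbf{R}^{b})=0\) obtained from \((\mathscr{B}_{1})\).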
Finally, the balancing condition \((\mathscr{B}_{1})\) also implies \(\mathrm{d}\hat{\mathcal{F}}_{\mathbf{q}}(\mathbf{R})=\gamma_{\sigma}\mathbf{q}\). From this, it is easy to conclude that the operator \(\mathrm{d}\mathcal{F}_{(\mathbf{q}^{b},\mathbf{R}^{b})}\) is surjective.
### Proof of the main result
Now we can provide proof for our main result in this paper.
Proof of Theorem 2.: By Lemma 7.3, we are reduced to finding \((\mathbf{R},\hat{\mathbf{a}}_{0},\mathbf{q})\in\mathbb{R}^{(n+2)N}\) such that \(\beta^{i}_{0,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=0\) for all \(j\in\mathbb{N}\), where \((\mathbf{a}_{j},\mathbf{\lambda}_{j})=\Upsilon_{\mathrm{per}}(\mathbf{R},\hat{\mathbf{a}}_{0},\mathbf{q})\).
The rest of the proof will be divided into two main parts: the zero-mode and the linear-mode case. First, if \(j=0\), using Lemma 6.3 (i), one has that equation \(\beta^{i}_{0,0}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=0\) is reduced to
\[-c_{n,\sigma}q_{i}\left[A_{2}\sum_{i^{\prime}\neq i}|x_{i^{\prime}}-x_{i}|^{-( n-2\sigma)}(R^{i}_{0}R^{i^{\prime}}_{0})^{\gamma_{\sigma}}q_{i^{\prime}}-\left( \frac{R^{i}_{1}}{R^{i}_{0}}\right)^{\gamma_{\sigma}}q_{i}\right]e^{-\gamma_{ \sigma}L}(1+\mathrm{o}(1))+\mathcal{O}(e^{-\gamma_{\sigma}L(1+\xi)})=0.\]
Furthermore, recall that since \(R^{i}_{0}=R^{i}(1+r^{i}_{0})\), one can use that \(r^{i}_{0}\in\mathbb{R}_{+}\) satisfies \(|r^{i}_{0}|\lesssim e^{-2\tau}\), to reformulate the equation above as
\[\mathscr{F}_{1}(\mathbf{R},\hat{\mathbf{a}}_{0},\mathbf{q})=\mathrm{o}(1). \tag{7.8}\]
Here \(\mathscr{F}_{1}:\mathbb{R}^{2N}\to\mathbb{R}^{(n+2)N}\) is given by
\[\mathscr{F}_{1}(\mathbf{R},\hat{\mathbf{a}}_{0},\mathbf{q}):=A_{2}\sum_{i^{\prime}\neq i}|x _{i^{\prime}}-x_{i}|^{-(n-2\sigma)}(R^{i}R^{i^{\prime}})^{\gamma_{\sigma}}q_{i ^{\prime}}-q_{i}. \tag{7.9}\]
Second, if \(\ell\in\{1,\dots,n\}\), using Lemma 6.3 (ii), it is not hard to check that the equations \(\beta^{i}_{0,\ell}(\mathbf{a}_{j},\mathbf{\lambda}_{j})=0\) reduce to
\[c_{n,\sigma}\lambda^{i}_{0}\left[A_{3}\sum_{i^{\prime}\neq i}\frac{(x_{i^{ \prime}}-x_{i})_{\ell}}{|x_{i^{\prime}}-p_{i}|^{n-2\sigma+2}}(R^{i}_{0}R^{i^{ \prime}}_{0})^{\gamma_{\sigma}}q_{i^{\prime}}+A_{0}\left(\frac{R^{i}_{1}}{R^{i }_{0}}\right)^{\gamma_{\sigma}}\frac{a^{i}_{0}-a^{i}_{1}}{\left(\lambda^{i}_{0} \right)^{2}}q_{i}\right]q_{i}e^{-\gamma_{\sigma}L}+\mathcal{O}(\lambda^{i}_{0} e^{-\gamma_{\sigma}L(1+\xi)})=0.\]
In addition, since
\[a^{i}_{j}=(\lambda^{i}_{j})^{2}\bar{a}^{i}_{j}\quad\text{and}\quad\bar{a}^{i}_{j} =\hat{a}^{i}_{0}+\tilde{a}^{i}_{j},\]
one can use that \(\tilde{a}^{i}_{j}\in\mathbb{R}^{nN}\) also has the decay \(|\tilde{a}^{i}_{j}|\lesssim e^{-2\tau}\), so that the above equation can be rewritten as
\[\mathscr{F}_{2}(\mathbf{R},\hat{\mathbf{a}}_{0},\mathbf{q})=o(1), \tag{7.10}\]
where \(\mathscr{F}_{2}:\mathbb{R}^{nN}\to\mathbb{R}^{(n+2)N}\) is given by
\[\mathscr{F}_{2}(\mathbf{R},\hat{\mathbf{a}}_{0},\mathbf{q}):=A_{3}\sum_{i^{\prime}\neq i} \frac{(x_{i^{\prime}}-x_{i})_{\ell}}{|x_{i^{\prime}}-x_{i}|^{n-2\sigma+2}}(R^{ i}R^{i^{\prime}})^{\gamma_{\sigma}}q_{i^{\prime}}+A_{0}\hat{a}^{i}_{0}q_{i}. \tag{7.11}\]
To conclude, we need to choose suitable \((\mathbf{R},\hat{\mathbf{a}}_{0},\mathbf{q})\in\mathbb{R}^{(n+2)N}\) such that equations (7.8) and (7.10) are solvable. Notice that the solvability of (7.8) and (7.10) depends on the invertibility of the linearized operator of \(\mathscr{F}:\mathbb{R}^{(n+2)N}\to\mathbb{R}^{(n+2)N}\), given by \(\mathscr{F}=(\mathscr{F}_{1},\mathscr{F}_{2})\), around \((\mathbf{R},\hat{\mathbf{a}}_{0},\mathbf{q})\). Moreover, from Lemma 7.4, this is accomplished since \((\mathbf{R},\hat{\mathbf{a}}_{0},\mathbf{q})\in\mathrm{Bal}_{\sigma}(\Sigma)\), that is, it satisfies \((\mathscr{B}_{1})\) and \((\mathscr{B}_{2})\). More precisely, by the balancing condition \((\mathscr{B}_{1})\), one can easily perturb \((\mathbf{R}^{b},\mathbf{q}^{b})\) to find \((\mathbf{R},\mathbf{q})\) solving (7.8). Next, using the second balancing condition, one can find \(\hat{\mathbf{a}}_{0}\in\mathbb{R}^{nN}\) around \(\hat{\mathbf{a}}^{b}_{0}\in\mathbb{R}^{nN}\) which solves (7.10).
Finally, we use the maximum principle in Lemma 4.9 to show that \(u>0\), which concludes the proof of the main theorem.
## Appendix A Estimates on the bubble-towers interactions
In this appendix, we quote some important integrals used in our proof. The following expressions may be found in [8, Appendix 7] for \(\sigma\in\mathbb{R}_{+}\). Let \(\lambda_{1},\lambda_{2},\lambda_{3}>0\) and \(x_{1},x_{2}\in\mathbb{R}^{n}\) with \(x\neq 0\). We define
\[U_{1}:=U_{0,\lambda_{1}},\quad U_{2}:=U_{0,\lambda_{2}},\quad\text{and}\quad U _{3}:=U_{x,\lambda_{3}},\]
where \(w_{x_{0},\lambda}\) is given by (4.4). We also recall
\[\gamma_{\sigma}:=\frac{n-2\sigma}{2}\quad\text{and}\quad\gamma^{\prime}_{ \sigma}:=\frac{n+2\sigma}{2}\]
to be the Fowler rescaling exponent and its Lebesgue conjugate, respectively.
In what follows, we use the constants below
\[A_{1}=\frac{(n+2\sigma)(n-2\sigma)}{n}\int_{\mathbb{R}^{n}}\left(|x|^{2\gamma _{\sigma}}\left(1+|x|^{2}\right)^{\gamma^{\prime}_{\sigma}}+1\right)^{-1} \mathrm{d}x>0,\] (A.1)
\[A_{2}=\frac{n+2\sigma}{2}\int_{\mathbb{R}^{n}}\left(|x|^{2}-1\right)\left(1+| x|^{2}\right)^{-\gamma_{\sigma}-1}\mathrm{d}x>0,\] (A.2)
and
\[A_{3}=-\frac{(n-2\sigma)^{2}}{n}\int_{\mathbb{R}^{n}}|x|^{2}\left(1+|x|^{2} \right)^{-\gamma_{\sigma}-1}\mathrm{d}x<0.\] (A.3)
**Lemma A.1**.: _For any \(\lambda_{1},\lambda_{2}>0\). It holds_
\[\int_{\mathbb{R}^{n}}f^{\prime}_{\sigma}(U_{1})U_{2}\partial_{\lambda_{1}}U_{ 1}\mathrm{d}x=\frac{1}{\lambda_{1}}\Psi\left(\left|\log\frac{\lambda_{2}}{ \lambda_{1}}\right|\right)\frac{\log\frac{\lambda_{2}}{\lambda_{1}}}{\left| \log\frac{\lambda_{2}}{\lambda_{1}}\right|},\]
_where_
\[\Psi(\ell)=e^{-\gamma_{\sigma}\ell}(1+\mathrm{o}(1))\quad\text{as}\quad\ell \to+\infty\]
_with_
\[\Psi(\ell):=\int_{\mathbb{R}}f^{\prime}_{\sigma}(v_{\mathrm{sph}}(t))v_{ \mathrm{sph}}(t+\ell)v^{\prime}(t)\mathrm{d}t.\] (A.4)
Proof.: See [8, Lemma 7.1].
**Lemma A.2**.: _If \(\lambda_{3}=\mathcal{O}(\lambda_{1})\), then the following estimates hold_
\[\int_{\mathbb{R}^{n}}f^{\prime}_{\sigma}(U_{1})U_{3}\partial_{\lambda_{1}}U_{1} \mathrm{d}x=A_{2}|x_{2}|^{2\sigma-n}\frac{\left(\lambda_{1}\lambda_{3}\right)^{ \gamma_{\sigma}}}{\lambda_{1}}\left(1+\mathcal{O}\left(\lambda_{1}\right)^{2}\right)\]
_and_
\[\int_{\mathbb{R}^{n}}f^{\prime}_{\sigma}(U_{1})U_{3}\partial_{x_{\ell}}U_{1} \mathrm{d}x=A_{3}x_{\ell}|x|^{2\sigma-n-2}\left(\lambda_{1}\lambda_{3}\right)^ {\gamma_{\sigma}}\left(1+\mathcal{O}\left(\lambda_{1}^{2}\right)\right)\quad \text{for}\quad\ell\in\{0,\ldots,n\}.\]
Proof.: See [8, Lemma 7.2].
**Lemma A.3**.: _Let \(\lambda_{1},\lambda_{2}>0\) and \(a\in\mathbb{R}^{n}\). If \(|a|\leqslant\max\left\{\lambda_{1}^{2},\lambda_{2}^{2}\right\}\ll 1\) and \(\min\left\{\frac{\lambda_{1}}{\lambda_{2}},\frac{\lambda_{2}}{\lambda_{1}} \right\}\ll 1\), then the following estimate holds_
\[\int_{\mathbb{R}^{n}}(\partial_{a}U_{a,\lambda_{1}}^{\gamma_{\sigma^{\prime}}} )U_{0,\lambda_{2}}^{\gamma_{\sigma}}\mathrm{d}x=-A_{0}c_{\lambda_{1},\lambda_{ 2}}C_{\lambda_{1},\lambda_{2}}+c_{\lambda_{1},\lambda_{2}}\mathcal{O}\left(C_ {\lambda_{1},\lambda_{2}}^{2}+c_{\lambda_{1},\lambda_{2}}^{2}C_{\lambda_{1}, \lambda_{2}}^{2}\right),\] (A.5)
_where_
\[c_{\lambda_{1},\lambda_{2}}=\min\left\{\left(\frac{\lambda_{1}}{\lambda_{2}} \right)^{\gamma_{\sigma}},\left(\frac{\lambda_{2}}{\lambda_{1}}\right)^{ \gamma_{\sigma}}\right\}\quad\text{and}\quad C_{\lambda_{1},\lambda_{2}}= \frac{a}{\max\left\{\lambda_{1}^{2},\lambda_{2}^{2}\right\}}.\]
Proof.: See [8, Lemma 7.3].
## Appendix B Nondegeneracy of the bubble solution
In this section, we add the proof of the nondegeneracy of the spherical solution.
Proof of Lemma 4.6.: Let us start with \(\phi\in H^{\sigma}(\mathbb{R}^{n})\). Using the statement in [42, Lemma 5.1], it suffices to show that \(\phi\in L^{\infty}\left(\mathbb{R}^{n}\right)\). We divide the proof of this fact into three cases, described as follows.
**Case 1: \(n>6\sigma\)**.
Indeed, notice that, from (4.14) since \(f^{\prime}_{\sigma}(u_{\mathrm{sph}})\in L^{\infty}(\mathbb{R}^{n})\), one can find a large constant \(C\gg 1\) depending only on \(n,\sigma\) such that
\[|\phi(x)|\leqslant\int_{\mathbb{R}^{n}}C|x-y|^{2\sigma-n}\left(\frac{|\phi(y)| }{1+|y|^{4\sigma}}+\frac{1}{1+|y|^{n-2\sigma}}\right)\mathrm{d}y\quad\text{ for }\quad x\in\mathbb{R}^{n}.\] (B.1)
Also, by partitioning the Euclidean space as \(\mathbb{R}^{n}=B_{d}(0)\cup B_{d}(x)\cup(B_{d}(0)\cup B_{d}(x))^{c}\) with \(d:=|x|/2\geqslant 1\), and integrating on each subpart, we obtain
\[\int_{\mathbb{R}^{n}}\frac{|x-y|^{2\sigma-n}}{1+|y|^{n-2\sigma}}\mathrm{d}y \lesssim\frac{1}{|x|^{n-4\sigma}}\quad\text{for all}\quad x\in\mathbb{R}^{n}.\] (B.2)
Furthermore, by substituting the last inequality into (B.1), one has
\[|\phi(x)|\leqslant C\left[\int_{\mathbb{R}^{n}}\frac{|x-y|^{2\sigma-n}}{1+|y|^ {4\sigma}}|\phi(x)|\mathrm{d}y+\frac{1}{1+|x|^{n-4\sigma}}\right]\quad\text{ for }\quad x\in\mathbb{R}^{n}.\] (B.3)
Next, since \(n>6\sigma\), one has that \([p_{0},p_{*})\neq\varnothing\), where \(p_{0}=\frac{2n}{n-2\sigma}\) and \(p_{*}=\frac{n}{2\sigma}\), which allows us to use the Hardy-Littlewood-Sobolev inequality to get
\[\|\phi\|_{L^{p_{1}}(\mathbb{R}^{n})} \lesssim\left\|\frac{|\phi(x)|}{1+|x|^{4\sigma}}*|x|^{2\sigma-n} \right\|_{L^{p_{1}}(\mathbb{R}^{n})}\] \[\lesssim\left\|\frac{|\phi(x)|}{1+|x|^{4\sigma}}\right\|_{L^{q_{ 1}}(\mathbb{R}^{n})}\] \[\lesssim\|\phi\|_{L^{p_{0}}(\mathbb{R}^{n})}\left\|\frac{1}{1+|x |^{4\sigma}}\right\|_{L^{q_{0}}(\mathbb{R}^{n})},\] (B.4)
for any \(p\in[p_{0},p_{*})\), where \(p_{1}=\frac{np_{0}}{n-2\sigma p_{0}}\).
In what follows, we rely on the estimate (B.4) to run the bootstrap argument below and obtain the desired \(L^{\infty}\)-estimate. First, notice that from (B.4), we have \(\phi\in L^{p_{1}}(\mathbb{R}^{n})\), and so \(\phi\in L^{p}(\mathbb{R}^{n})\) for all \(p\in[p_{0},p_{1}]\). Second, we check whether \(p_{1}\geqslant p_{*}\) or not. In the affirmative case, we apply (B.4) with \(p=p_{*}-\varepsilon\) for \(0<\varepsilon\ll 1\) small enough to obtain that \(\phi\in L^{p}(\mathbb{R}^{n})\) for all \(p\in[p_{0},+\infty)\). In the negative case, we use (B.4) with \(p=p_{1}\), which gives us that \(\phi\in L^{p}(\mathbb{R}^{n})\) for all \(p\in[p_{0},p_{2}]\), where \(p_{2}=\frac{np_{1}}{n-2\sigma p_{1}}\). Third, we repeat the same process with this new exponent.
More precisely, it is not hard to check that the bootstrap sequence \(\{p_{\ell}\}_{\ell\in\mathbb{N}}\subset[p_{0},+\infty)\) satisfies
\[p_{\ell+1}=\left(1+\frac{4\sigma}{n-6\sigma}\right)p_{\ell}\quad\text{for all}\quad\ell\in\mathbb{N}.\]
Hence, \(\lim_{\ell\to+\infty}p_{\ell}=+\infty\), which shows that the bootstrap technique terminates in a finite step.
Now, let us fix some \(p\gg 1\) large enough. Using the same strategy as in (B.2), we find
\[\int_{\mathbb{R}^{n}}|x-y|^{2\sigma-n}\frac{|\phi(y)|}{1+|y|^{4 \sigma}}\mathrm{d}y \lesssim\left(\int_{\mathbb{R}^{n}}\frac{|x-y|^{(2\sigma-n)p^{ \prime}}}{1+|y|^{4\sigma p_{0}^{\prime}}}\mathrm{d}y\right)^{\frac{1}{p^{ \prime}}}\|\phi\|_{L^{p}(\mathbb{R}^{n})}\] (B.5) \[\lesssim\frac{1}{1+|x|^{\frac{n(p^{\prime}-1)}{p^{\prime}}+2 \sigma}}\lesssim 1\quad\text{for all}\quad x\in\mathbb{R}^{n},\]
where \(p^{\prime}=\frac{p}{p-1}\) is the conjugate Lebesgue exponent of \(p\). Finally, from the last estimate combined with (B.3), we deduce that \(\phi\in L^{\infty}(\mathbb{R}^{n})\); this finishes the first case.
**Case 2:**\(n=6\sigma\).
Here we observe that, since for \(n=6\sigma\) it holds that \(p_{0}=p_{*}=3\), one has \([3,3)=\varnothing\); thus (B.4) does not make sense in this case. However, we still have (B.3). In addition, by the Sobolev embedding \(H^{\sigma}(\mathbb{R}^{n})\hookrightarrow L^{3}(\mathbb{R}^{n})\), we know \(\phi\in L^{3}(\mathbb{R}^{n})\), which, as before, yields
\[\|\phi\|_{L^{p_{1}}(\mathbb{R}^{n})} \lesssim\left\|\frac{|\phi(x)|}{1+|x|^{4\sigma}}*\frac{1}{|x|^{4 \sigma}}\right\|_{L^{p_{1}}(\mathbb{R}^{n})}+\left\|\frac{1}{1+|x|^{2\sigma}} \right\|_{L^{p_{1}}(\mathbb{R}^{n})}\] \[\lesssim\left\|\frac{|\phi(x)|}{1+|x|^{4\sigma}}\right\|_{L^{q_{ 1}}(\mathbb{R}^{n})}+1\] \[\lesssim\|\phi\|_{L^{3}(\mathbb{R}^{n})}\left\|\frac{1}{1+|x|^{4 \sigma}}\right\|_{L^{q_{0}}(\mathbb{R}^{n})}+1,\]
where \(q_{0}\in(3,+\infty)\), \(q_{1}=\frac{3q_{0}}{q_{0}+3}\in(\frac{3}{2},3)\), and \(p_{1}=\frac{3q_{1}}{3-q_{1}}\in(3,+\infty)\).
This means that \(\phi\in L^{p}(\mathbb{R}^{n})\) for all \(p\geqslant 3\). More precisely, by taking \(q_{0}\gg 1\), one can make \(p\gg 1\) large enough. Finally, by the same argument in the last case, we have \(\phi\in L^{\infty}(\mathbb{R}^{n})\), which concludes the argument for the second case.
**Case 3: \(2\sigma<n<6\sigma\)**.
In this case, using the Hardy-Littlewood-Sobolev inequality, it follows that
\[\|\phi\|_{L^{p_{1}}(\mathbb{R}^{n})} \lesssim\left\|\frac{|\phi(x)|}{1+|x|^{4\sigma}}*|x|^{n-2\sigma} \right\|_{L^{p_{1}}(\mathbb{R}^{n})}\] \[\lesssim\left\|\frac{|\phi(x)|}{1+|x|^{4\sigma}}\right\|_{L^{q_{ 1}}(\mathbb{R}^{n})}\] \[\lesssim\|\phi\|_{L^{p_{0}}(\mathbb{R}^{n})}\left\|\frac{1}{1+|x |^{4\sigma}}\right\|_{L^{q_{0}}(\mathbb{R}^{n})},\]
where \(p_{0}=\frac{2n}{n-2\sigma}=2_{\sigma}^{*}\), \(q_{0}\in(\frac{n}{2\sigma},\frac{2n}{6\sigma-n})\), \(q_{1}=\frac{p_{0}q_{0}}{q_{0}+p_{0}}\), and \(p_{1}=\frac{nq_{1}}{n-2\sigma q_{1}}\in(p_{0},+\infty)\). This means that \(\phi\in L^{p}(\mathbb{R}^{n})\) for all \(p\geqslant p_{0}\). From (B.5) we conclude that \(\phi\in L^{\infty}(\mathbb{R}^{n})\), which finishes the proof of this case.
The lemma is proved.
**Acknowledgments.** This paper was finished when the first-named author held a Post-doctoral position at the University of British Columbia, whose hospitality he would like to acknowledge.
|
2303.00926
|
Friedel oscillation in non-Fermi liquid: Lesson from exactly solvable
Hatsugai-Kohmoto model
|
When non-magnetic impurity immerses in Fermi sea, a regular modulation of
charge density around impurity will appear and such phenomena is called Friedel
oscillation (FO). Although both Luttinger liquid and Landau Fermi liquid show
such characteristic oscillation, FO in generic non-Fermi liquid (NFL) phase is
still largely unknown. Here, we show that FO indeed exists in NFL state of an
exactly solvable model, i.e. the Hatsugai-Kohmoto model which has been
intensively explored in recent years. Combining T-matrix approximation and
linear-response-theory, an interesting picture emerges, if two
interaction-induced quasi-particles bands in NFL are partially occupied, FO in
this situation is determined by a novel structure in momentum space, i.e. the
'average Fermi surface' (average over two quasi-particle Fermi surface), which
highlights the inter-band particle-hole excitation. We hope our study here
provides a counterintuitive example in which FO with Fermi surface coexists
with NFL quasi-particle, and it may be useful to detect hidden 'average Fermi
surface' structure in other correlated electron systems.
|
Miaomiao Zhao, Wei-Wei Yang, Hong-Gang Luo, Yin Zhong
|
2023-02-21T14:15:34Z
|
http://arxiv.org/abs/2303.00926v2
|
# Friedel oscillation in non-Fermi liquid: Lesson from exactly solvable Hatsugai-Kohmoto model
###### Abstract
When a non-magnetic impurity is immersed in a Fermi sea, a regular modulation of the charge density around the impurity appears; this phenomenon is called the Friedel oscillation (FO). Although both the Luttinger liquid and the Landau Fermi liquid show such characteristic oscillations, FO in a generic non-Fermi liquid (NFL) phase is still largely unknown. Here, we show that FO indeed exists in the NFL state of an exactly solvable model, i.e. the Hatsugai-Kohmoto model, which has been intensively explored in recent years. Combining the T-matrix approximation and linear-response theory, an interesting picture emerges: if the two interaction-induced quasi-particle bands in the NFL are partially occupied, FO is determined by a novel structure in momentum space, i.e. the 'average Fermi surface' (the average over the two quasi-particle Fermi surfaces), which highlights the inter-band particle-hole excitation. We hope our study here provides a counterintuitive example in which FO with a Fermi surface coexists with NFL quasi-particles, and that it may be useful for detecting a hidden 'average Fermi surface' structure in other correlated electron systems.
## I Introduction
Recently, an exactly solvable many-body fermionic model with an infinite-range interaction, i.e. the Hatsugai-Kohmoto (HK) model,[1; 2; 3] has been hotly studied.[4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21] In contrast with celebrated and more familiar Sachdev-Ye-Kitaev, Kitaev's toric code and honeycomb model with either quenched disorder or local \(Z_{2}\) gauge symmetry,[22; 23; 24; 25; 26; 27; 28; 29] the original HK model has translation invariance with topologically trivial nature, but surprisingly, it provides a strictly exact playground for non-Fermi liquid (NFL) and featureless Mott insulator in any spatial dimension, which is rare in statistical mechanics and condensed matter physics.
The solvability of the HK model results from its locality in momentum space: one can diagonalize the HK Hamiltonian (just a diagonal \(4\times 4\) matrix) for each momentum. Current studies have mainly focused on an interesting extension of the HK model, i.e. the superconducting instability of the intrinsic NFL state in the HK model,[6] which is inspired by ubiquitous NFL behaviors and their link to unconventional superconductivity in cuprate, iron-based superconductors (SC) and many heavy fermion compounds. Unexpected properties such as topological \(s\)-wave pairing and two-stage superconductivity have been discovered.[8; 9; 13] However, before comparing these novel theoretical predictions with real-world unconventional SC in cuprate or heavy fermion systems, one should note that non-magnetic impurities are essential to explain realistic thermodynamic and transport data in SC, e.g. the impurity effect changes the linear-\(T\) behavior of the superfluid density in the nodal \(d\)-wave pairing state into a \(T^{2}\) form.[30] However, to our knowledge, the mentioned non-magnetic impurity effect has not been investigated in the HK model, let alone in the superconducting HK system.
For metals, it is well-known that non-magnetic impurity immersed in Fermi sea induces a regular modulation of charge density around impurity, e.g. the Friedel oscillation (FO).[31] When involving electron-electron interaction, both the Luttinger liquid in one spatial dimension (\(d=1\)) and the Landau Fermi liquid (FL) show such characteristic oscillation.[32; 33; 34; 35; 36; 37] The origin of FO is generally believed to tie to the \(2k_{F}\) singularity of density-density correlation in the system with sharp Fermi surface, thus even quantum spin liquid with ghost (spinon) Fermi surface may show signature of FO.[38; 39] Consequently, FO can act as diagnosis for fermionic system with well-defined Fermi surface whatever its FL or NFL nature and may shed light on how to detect putative NFL state in realistic quantum materials proposed by existing effective field theory or slave-particle theory.[40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50]
Therefore, considering the need to explore FO in generic NFL phases and the demands from superconducting HK models, in this work we take a first step toward this timely issue. Specifically, we focus on the simplest but essential case of the non-magnetic impurity effect, i.e. the possible FO in the single-impurity problem.
To our surprise, we find that the conventional wisdom on FO must be extended, since the NFL phase of the HK model, with two Fermi surfaces but no FL quasi-particles, indeed shows a clear signature of FO. As a matter of fact, FO in this situation is determined by a novel structure in momentum space, i.e. the 'average Fermi surface', meaning the average over the two Fermi surfaces mentioned above, and it results from the inter-band particle-hole excitation.
After all, our study here provides a counterintuitive example in which Fermi surface coexisting with NFL quasi-particle can support the existence of FO, and we expect that it may be interesting to detect hidden 'average Fermi surface' structure in other correlated electron systems.
The remaining part of this article is organized as follows. In Sec. II, we give a quick review of HK model, which will be useful in next sections. Sec. III is devoted
to the discussion of FO in terms of \(T\)-matrix approximation and the linear-response theory. Discussion will be given in Sec. IV. Finally, a brief summary is given in Sec. V.
## II The Hatsugai-Kohmoto model
The HK model we study has the following form, (see also Fig. 1(a) and (b) for illustration of HK model in one spatial dimension and on a square lattice)
\[\hat{H} = -\sum_{i,j,\sigma}t_{ij}\hat{c}_{i\sigma}^{\dagger}\hat{c}_{j \sigma}-\mu\sum_{j\sigma}\hat{c}_{j\sigma}^{\dagger}\hat{c}_{j\sigma} \tag{1}\] \[+ \frac{U}{N_{s}}\sum_{j_{1},j_{2},j_{3},j_{4}}\delta_{j_{1}+j_{3}= j_{2}+j_{4}}\hat{c}_{j_{1}\uparrow}^{\dagger}\hat{c}_{j_{2}\uparrow}\hat{c}_{j_{3} \downarrow}^{\dagger}\hat{c}_{j_{4}\downarrow}.\]
Here, \(\hat{c}_{j\sigma}^{\dagger}\) is the creation operator of a conduction electron (called \(c\)-electron for simplicity) at site \(j\) with spin \(\sigma=\uparrow,\downarrow\), and it satisfies the anticommutation relation \([\hat{c}_{i\sigma},\hat{c}_{j\sigma^{\prime}}^{\dagger}]_{+}=\delta_{i,j} \delta_{\sigma,\sigma^{\prime}}\). \(t_{ij}\) is the hopping integral between sites \(i\) and \(j\). Furthermore, the chemical potential \(\mu\) has been added to fix the electron density. \(N_{s}\) is the number of sites. The last term of \(\hat{H}\) is the HK interaction;[1] unlike the usual on-site Hubbard interaction \(U\sum_{j}\hat{c}_{j\uparrow}^{\dagger}\hat{c}_{j\uparrow}\hat{c}_{j\downarrow} ^{\dagger}\hat{c}_{j\downarrow}\), the HK interaction is an infinite-range interaction between four electrons but preserves the center of motion due to the constraint of the Dirac \(\delta\) function. This interaction plays a fundamental role in solving this model, as we will see later. Amusingly, one may note that the HK interaction indeed includes the Hubbard interaction if we consider a two-site version of the HK model. However, the true effect of the latter for HK-like models beyond a perturbative treatment is still unknown, and such an issue seems to be important for our further understanding of HK-like systems.
Importantly, Eq. 1 is local in momentum space after Fourier transformation (i.e. \(\hat{c}_{j\sigma}=\frac{1}{\sqrt{N_{s}}}\sum_{k}e^{ikR_{j}}\hat{c}_{k\sigma}\)) and the resultant Hamiltonian reads as \(\hat{H}=\sum_{k}\hat{H}_{k}\),
\[\hat{H}_{k}=\sum_{\sigma}(\varepsilon_{k}-\mu)\hat{c}_{k\sigma}^{ \dagger}\hat{c}_{k\sigma}+U\hat{c}_{k\uparrow}^{\dagger}\hat{c}_{k\uparrow} \hat{c}_{k\downarrow}^{\dagger}\hat{c}_{k\downarrow}, \tag{2}\]
where \(\varepsilon_{k}\) is the electron dispersion. We emphasize that the locality of the above Hamiltonian stems from the infinite-range HK interaction preserving the center of motion. In contrast, the Hubbard interaction in momentum space is rather nonlocal, \(U\sum_{k,k^{\prime},q}\hat{c}_{k+q\uparrow}^{\dagger}\hat{c}_{k\uparrow}\hat{ c}_{k^{\prime}-q\downarrow}^{\dagger}\hat{c}_{k^{\prime}\downarrow}\); thus it cannot lead to solvability for \(d>1\).
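For completeness, here is the short computation behind this locality (writing the constraint as \(R_{j_{1}}+R_{j_{3}}=R_{j_{2}}+R_{j_{4}}\), eliminating \(j_{4}\) and summing freely over \(j_{1},j_{2},j_{3}\)):
\[\frac{U}{N_{s}}\sum_{j_{1}+j_{3}=j_{2}+j_{4}}\hat{c}_{j_{1}\uparrow}^{\dagger}\hat{c}_{j_{2}\uparrow}\hat{c}_{j_{3}\downarrow}^{\dagger}\hat{c}_{j_{4}\downarrow}=\frac{U}{N_{s}^{3}}\sum_{k_{1},\dots,k_{4}}\hat{c}_{k_{1}\uparrow}^{\dagger}\hat{c}_{k_{2}\uparrow}\hat{c}_{k_{3}\downarrow}^{\dagger}\hat{c}_{k_{4}\downarrow}\sum_{j_{1},j_{2},j_{3}}e^{i(k_{4}-k_{1})R_{j_{1}}}e^{i(k_{2}-k_{4})R_{j_{2}}}e^{i(k_{4}-k_{3})R_{j_{3}}}=U\sum_{k}\hat{n}_{k\uparrow}\hat{n}_{k\downarrow},\]
since the three free site sums give \(N_{s}^{3}\,\delta_{k_{1},k_{4}}\delta_{k_{2},k_{4}}\delta_{k_{3},k_{4}}\), i.e. all four momenta collapse onto a single \(k\).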
Now, if we choose Fock state
\[|n_{1},n_{2}\rangle\equiv(\hat{c}_{k\uparrow}^{\dagger})^{n_{1}}|0\rangle(\hat {c}_{k\downarrow}^{\dagger})^{n_{2}}|0\rangle \tag{3}\]
with \(n_{i}=0,1\) as basis, \(\hat{H}_{k}\) can be written as a diagonal \(4\times 4\) matrix, whose eigen-energies are \(0,\varepsilon_{k}-\mu,\varepsilon_{k}-\mu,2(\varepsilon_{k}-\mu)+U\) and whose corresponding eigenstates are \(|0\rangle_{k}\equiv|00\rangle,|\sigma=\uparrow\rangle_{k}\equiv|10\rangle,| \sigma=\downarrow\rangle_{k}\equiv|01\rangle,|\uparrow\downarrow\rangle_{k} \equiv|11\rangle\), i.e. the empty state, the singly occupied states with spin up or spin down, and the doubly occupied state.
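Explicitly, in the ordered basis \(\{|00\rangle,|10\rangle,|01\rangle,|11\rangle\}\) one simply has
\[\hat{H}_{k}\doteq\begin{pmatrix}0&0&0&0\\ 0&\varepsilon_{k}-\mu&0&0\\ 0&0&\varepsilon_{k}-\mu&0\\ 0&0&0&2(\varepsilon_{k}-\mu)+U\end{pmatrix}.\]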
Therefore, the many-body ground state of \(\hat{H}\) is just the direct-product state of each \(\hat{H}_{k}\)'s ground state, i.e. \(|\Psi_{g}\rangle=\prod_{k\in\Omega_{0}}|0\rangle_{k}\prod_{k\in\Omega_{1}}| \sigma\rangle_{k}\prod_{k\in\Omega_{2}}|\uparrow\downarrow\rangle_{k}\). (\(\Omega_{0},\Omega_{1},\Omega_{2}\) are the momentum ranges for the different occupations.) Because states with a spin-up or spin-down electron in \(\Omega_{1}\) are degenerate without an external magnetic field, the ground state of the HK model has a huge degeneracy. This point must be kept in mind if one performs numerical calculations like exact diagonalization (ED), which only selects one of the ground states.[16]
If \(\Omega_{0}=\Omega_{2}=\varnothing\), the system is a Mott insulator, which happens when \(U>U_{c}=W\) with \(W\) being the bandwidth of the \(c\)-electrons. Otherwise, we obtain a metallic state with NFL properties, e.g. violation of the Luttinger theorem, Haldane's exclusion statistics and a Curie-like spin susceptibility.[6; 51; 3] It is interesting to note that, as a result of the HK interaction preserving the center of motion, NFL states do not have collective modes in the charge degree of freedom, such as the plasmon in the Coulomb electron gas or zero sound in a FL.[52; 53] Moreover, the transition from the metallic state to the gapped Mott insulating phase belongs to the universality class of the continuous Lifshitz transition, in which the chemical-potential-tuned and the interaction-tuned Mott transitions have identical critical exponents.[51; 54] (see also Fig. 1(c)) Similarly, excited states and their energies are easy to construct, so \(\hat{H}\) (Eq. 1) has been solved since all eigenstates and eigen-energies are found.
For our purpose, it is useful to present the single-particle Green's function and some ground-state or thermodynamic quantities of the HK model. For example, the single-particle Green's function can be obtained in terms of the equation of motion[55] (see Appendix A), and it reads
Figure 1: (a) The Hatsugai-Kohmoto (HK) model in one spatial dimension and (b) on a square lattice with hopping \(t\) and interaction \(U\). (c) The exact ground-state phase diagram for HK model exhibits a Mott insulator and a non-Fermi-liquid-like metal. (\(\mu\) denotes chemical potential, \(U_{c}=W\) and \(W\) is band-width) The transition from metallic state to gapped Mott insulating phase belongs to the universality of the continuous Lifshitz transition.
as
\[G_{\sigma}(k,\omega) =\frac{1+\frac{U\langle\hat{n}_{k\sigma}\rangle}{\omega-(\varepsilon_ {k}-\mu+U)}}{\omega-(\varepsilon_{k}-\mu)}\] \[=\frac{1-\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-( \varepsilon_{k}-\mu)}+\frac{\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-( \varepsilon_{k}-\mu+U)} \tag{4}\]
where \(\langle\hat{n}_{k\bar{\sigma}}\rangle\) is the expectation value of electron number operator \(\hat{n}_{k\bar{\sigma}}=\hat{c}_{k\bar{\sigma}}^{\dagger}\hat{c}_{k\bar{\sigma}}\) with spin \(\bar{\sigma}=-\sigma\). The pole structure implies that there exist two quasi-particle bands as
\[E_{k-}=\varepsilon_{k}-\mu,\ \ \ \ E_{k+}=\varepsilon_{k}-\mu+U, \tag{5}\]
which correspond to the holon \(\hat{h}_{k\sigma}=\hat{c}_{k\sigma}(1-\hat{n}_{k\bar{\sigma}})\) and the doublon \(\hat{d}_{k\sigma}=\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}}\). In fact, the related Green's functions are found to be
\[\langle\langle\hat{h}_{k\sigma}|\hat{h}_{k\sigma}^{\dagger}\rangle\rangle_{ \omega}=\frac{1-\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-\varepsilon_{k} +\mu}=\frac{1-\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-E_{k-}},\]
\[\langle\langle\hat{d}_{k\sigma}|\hat{d}_{k\sigma}^{\dagger}\rangle\rangle_{ \omega}=\frac{\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-\varepsilon_{k}+ \mu-U}=\frac{\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-E_{k+}}.\]
Thus, we see that the elementary excitations of the HK model are holons and doublons. We should emphasize, however, that holons and doublons do not satisfy the standard fermionic anticommutation relations and cannot adiabatically evolve into the \(U=0\) limit; thus they are not FL-like quasi-particles.
Since we are interested in paramagnetic states, we have \(n_{k}=\langle\hat{n}_{k\sigma}\rangle=\langle\hat{n}_{k\bar{\sigma}}\rangle\) and it is straightforward to find
\[n_{k}=\frac{f_{F}(E_{k-})}{f_{F}(E_{k-})+1-f_{F}(E_{k+})} \tag{6}\]
with the help of spectral theorem of \(G_{\sigma}(k,\omega)\). (\(f_{F}(x)=1/(e^{x/T}+1)\) is the Fermi distribution function)
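In detail, the spectral weight of Eq. (4) is \((1-n_{k})\delta(\omega-E_{k-})+n_{k}\delta(\omega-E_{k+})\), so the spectral theorem yields the self-consistency condition
\[n_{k}=(1-n_{k})f_{F}(E_{k-})+n_{k}f_{F}(E_{k+})\quad\Longrightarrow\quad n_{k}=\frac{f_{F}(E_{k-})}{f_{F}(E_{k-})+1-f_{F}(E_{k+})},\]
which is just Eq. (6).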
Next, at finite-\(T\), the thermodynamics of HK model is determined by its free energy density \(f\), which is related to partition function \(\mathcal{Z}\) as
\[f=-\frac{T}{N_{s}}\ln\mathcal{Z},\ \ \mathcal{Z}=\mathrm{Tr}e^{-\beta\hat{H}}= \prod_{k}\mathrm{Tr}e^{-\beta\hat{H}_{k}}=\prod_{k}f_{k}.\]
Here, one notes that the partition function is easy to calculate since each \(k\)-state contributes independently. We have defined \(f_{k}=1+2z_{k}+z_{k}^{2}e^{-\beta U}\) and \(z_{k}=e^{-\beta(\varepsilon_{k}-\mu)}\). Then, a typical thermodynamic quantity, the heat capacity, is calculated from the standard thermodynamic relation \(C_{V}=-T\frac{\partial^{2}f}{\partial T^{2}}\). In addition to \(C_{V}\), one can also calculate the spin susceptibility \(\chi_{s}\) by inserting the Zeeman energy term \(\hat{H}_{h}=-B(\hat{c}_{k\uparrow}^{\dagger}\hat{c}_{k\uparrow}-\hat{c}_{k\downarrow}^{\dagger}\hat{c}_{k\downarrow})\) into the Hamiltonian \(\hat{H}_{k}\). The magnetization then follows as \(M=-\frac{\partial f}{\partial B}\) and \(\chi_{s}=\frac{\partial M}{\partial B}=-\frac{\partial^{2}f}{\partial B^{2}}\).
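As an illustration of this recipe, the sketch below (again with an assumed 1D band and assumed parameter values; it is not taken from the paper) evaluates the free energy density from the \(k\)-product form of \(\mathcal{Z}\) and obtains \(C_{V}\) by a finite-difference second derivative:

```python
import numpy as np

def free_energy(T, t=1.0, U=2.0, mu=1.0, Nk=2000):
    """Free energy density of the HK model from Z = prod_k f_k (1D band assumed)."""
    k = np.linspace(-np.pi, np.pi, Nk, endpoint=False)
    z = np.exp(-(-2.0 * t * np.cos(k) - mu) / T)       # z_k = exp(-beta (eps_k - mu))
    fk = 1.0 + 2.0 * z + z**2 * np.exp(-U / T)         # f_k = 1 + 2 z_k + z_k^2 exp(-beta U)
    return -T * np.mean(np.log(fk))                    # f = -(T/N_s) sum_k ln f_k

def heat_capacity(T, dT=1e-3, **kw):
    """C_V = -T d^2 f / dT^2 via a symmetric finite difference."""
    f0, fp, fm = free_energy(T, **kw), free_energy(T + dT, **kw), free_energy(T - dT, **kw)
    return -T * (fp - 2.0 * f0 + fm) / dT**2

print(heat_capacity(0.5))
```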
At zero temperature, the free energy density reduces to the ground-state energy density, which has the very simple expression
\[e_{g}=\frac{1}{N_{s}}\sum_{k}[E_{k-}\theta(-E_{k-})+E_{k+}\theta(-E_{k+})],\]
where \(\theta(x)\) is the standard unit-step function (\(\theta(x)=1\) for \(x>0\) and \(\theta(x)=0\) if \(x<0\)). Therefore, the electron density at \(T=0\) is found to be
\[n=-\frac{\partial e_{g}}{\partial\mu}=\frac{1}{N_{s}}\sum_{k}[\theta(-E_{k-})+ \theta(-E_{k+})]. \tag{7}\]
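A corresponding zero-temperature evaluation of Eq. 7 (an illustrative sketch; the band and parameter values are assumptions) is:

```python
import numpy as np

def density_T0(mu, U=2.0, t=1.0, Nk=4000):
    """Electron density at T=0 from Eq. (7): count occupied E_{k-} and E_{k+} states."""
    k = np.linspace(-np.pi, np.pi, Nk, endpoint=False)
    eps = -2.0 * t * np.cos(k)
    return np.mean((eps - mu < 0).astype(float) + (eps - mu + U < 0).astype(float))

print(density_T0(mu=1.0))   # half filling, n = 1, expected for mu = U/2
```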
## III Single impurity in HK model
In this section, we study the impurity effect in the HK model. It is well-known that for the non-interacting Fermi gas and for interacting FLs or Luttinger liquids, the electron density around an impurity shows a characteristic oscillation called FO.
### \(T\)-matrix approximation
Now, we consider the effect of a single impurity, which is assumed to be located on the zeroth site, such that only the electron on this site feels its scattering; thus, we have the following impurity Hamiltonian:
\[\hat{H}_{imp}=V\sum_{\sigma}\hat{c}_{0\sigma}^{\dagger}\hat{c}_{0\sigma}=\frac{ V}{N_{s}}\sum_{k,k^{\prime},\sigma}\hat{c}_{k\sigma}^{\dagger}\hat{c}_{k^{ \prime}\sigma}\]
Here, \(V\) is the strength of impurity potential and the second term of the right-hand side is the Hamiltonian in momentum space.
If HK interaction is turned off, one can solve the non-interacting electron problem \(\hat{H}+\hat{H}_{imp}\) in terms of \(T\)-matrix formalism, which means the Green's function satisfies the following equations,[56]
\[G_{\sigma}^{(0)}(k,k^{\prime},\omega)=\delta_{k,k^{\prime}}G_{ \sigma}^{(0)}(k,\omega)+G_{\sigma}^{(0)}(k,\omega)T_{\sigma}(\omega)G_{\sigma} ^{(0)}(k^{\prime},\omega)\] \[T_{\sigma}(\omega)=\frac{V/N_{s}}{1-VF_{\sigma}(\omega)},\ \ F_{\sigma}( \omega)=\frac{1}{N_{s}}\sum_{k}G_{\sigma}^{(0)}(k,\omega)\]
Here, we have defined the so-called \(T\)-matrix \(T_{\sigma}(\omega)\), which encodes the effect of impurity scattering. Meanwhile, \(G^{(0)}\) denotes the Green's function for \(U=0\) and
\[G_{\sigma}^{(0)}(k,\omega)=\frac{1}{\omega-(\varepsilon_{k}-\mu)}.\]
However, one can see that when \(U\neq 0\), \(\hat{H}_{imp}\) mixes different momentum sectors of the original HK model (Eq. 1); therefore, the solvability of the HK model is lost and accurate results can only be obtained from numerical computations such as ED.
To proceed, let us use the above \(T\)-matrix formalism as an approximation; we expect that such an approximate treatment is appropriate if the impurity strength \(V\) is not large. Then, we replace the non-interacting Green's function \(G^{(0)}\) with the interacting \(G\) without impurity (Eq. 4), as is usually done in dynamical mean-field theory studies or in work on cuprate superconductors.[57; 58; 59] So, we find
\[G_{\sigma}(k,k^{\prime},\omega)\simeq\delta_{k,k^{\prime}}G_{ \sigma}(k,\omega)+G_{\sigma}(k,\omega)T_{\sigma}(\omega)G_{\sigma}(k^{\prime},\omega)\] \[T_{\sigma}(\omega)=\frac{V/N_{s}}{1-VF_{\sigma}(\omega)},\ \ F_{ \sigma}(\omega)=\frac{1}{N_{s}}\sum_{k}G_{\sigma}(k,\omega). \tag{8}\]
In reality, the effect of a single impurity can be observed via the well-known FO,[31] namely that the electron density around an impurity shows a characteristic oscillatory behavior. For systems with a well-defined Fermi surface, such as the Landau FL in \(d=2,3\) and the Luttinger liquid in \(d=1\),[32; 33; 34; 35; 36; 37] the FO behaves as
\[\delta n_{i}\equiv n_{i}-n\sim\frac{\cos(2k_{F}|R_{i}|)}{|R_{i}|^{g}},\ \ \ \ |R_{i}|\gg 1,\]
where \(n_{i}\) is the electron density at site \(i\), \(n\) denotes the average electron density without impurity, \(R_{i}\) is the distance from the impurity (assumed to be at site 0 in our model), \(k_{F}\) is the Fermi wavevector and \(g\) equals the spatial dimension for a FL,[34] or is determined by the interaction strength in a Luttinger liquid.[32; 36]
Our aim in this section is to examine whether the above FO survives in the metallic NFL state of the HK model. Mathematically, we can write \(\delta n_{i}\) as
\[\delta n_{i} =\frac{1}{N_{s}}\sum_{k,k^{\prime},\sigma}e^{i(k-k^{\prime})R_{i} }\langle c^{\dagger}_{k^{\prime}\sigma}c_{k\sigma}\rangle-n\] \[=\frac{1}{N_{s}}\sum_{k,k^{\prime},\sigma}e^{i(k-k^{\prime})R_{i} }\int d\omega f_{F}(\omega)\frac{-1}{\pi}\text{Im}G(k,k^{\prime},\omega)-n\] \[=\frac{1}{N_{s}}\sum_{k,k^{\prime},\sigma}e^{i(k-k^{\prime})R_{i} }\int d\omega f_{F}(\omega)\frac{-1}{\pi}\text{Im}\delta G_{\sigma}(k,k^{ \prime},\omega).\]
Here, we have defined the scattering-shifted Green's function \(\delta G_{\sigma}(k,k^{\prime},\omega)\equiv G_{\sigma}(k,\omega)T_{\sigma}(\omega)G_{\sigma}(k^{\prime},\omega)\), and \(f_{F}(x)=1/(e^{x/T}+1)\) is the Fermi distribution function. Next, summing over momentum, one finds
\[\delta n_{i}=\sum_{\sigma}\int d\omega f_{F}(\omega)\frac{-1}{\pi}\text{Im} \left[\frac{G_{\sigma}(R_{i},\omega)VG_{\sigma}(-R_{i},\omega)}{1-VF_{\sigma} (\omega)}\right] \tag{9}\]
Here, \(G_{\sigma}(R_{i},\omega)=\frac{1}{N_{s}}\sum_{k}e^{ikR_{i}}G_{\sigma}(k,\omega)\) is the local Green's function at site \(i\). We can then use Eq. 9 to calculate \(\delta n_{i}\), so that the check of FO is straightforward.
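For completeness, a minimal numerical sketch of this procedure is given below. It evaluates Eq. 9 on a discrete \((k,\omega)\) grid with a small broadening \(\eta\); the grid sizes, \(\eta\) and all parameter values are assumptions made only for illustration and are not the settings used for Figs. 2 and 3.

```python
import numpy as np

# Sketch of Eq. (9) in 1D; parameters, frequency grid and eta are illustrative assumptions.
t, U, mu, V, T, eta = 1.0, 2.0, 1.0, 0.1, 0.02, 0.05
k = np.linspace(-np.pi, np.pi, 400, endpoint=False)
w = np.linspace(-6.0, 6.0, 1200)
eps = -2.0 * t * np.cos(k)

fF = lambda x: 1.0 / (np.exp(x / T) + 1.0)
nk = fF(eps - mu) / (fF(eps - mu) + 1.0 - fF(eps - mu + U))        # Eq. (6)

wc = w[:, None] + 1j * eta                                          # omega + i*eta
G = (1.0 - nk) / (wc - (eps - mu)) + nk / (wc - (eps - mu + U))     # Eq. (4) on the grid
F = G.mean(axis=1)                                                  # F_sigma(omega)

def delta_n(R):
    """Spin-summed delta n_i at distance R from the impurity, Eq. (9)."""
    GR = (G * np.exp(1j * k * R)).mean(axis=1)                      # G(R, omega)
    GmR = (G * np.exp(-1j * k * R)).mean(axis=1)                    # G(-R, omega)
    integrand = fF(w) * (-1.0 / np.pi) * np.imag(GR * V * GmR / (1.0 - V * F))
    return 2.0 * np.sum(integrand) * (w[1] - w[0])                  # factor 2 for spin

print([round(delta_n(R), 5) for R in range(1, 9)])
```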
In Figs. 2 and 3, we have plotted \(\delta n_{i}\) for \(V/t=0.1,0.2\) with different \(U/t=0,1,2,3,4,5,6,8\) in the \(1D\) case (\(\varepsilon_{k}=-2t\cos(k)\)). To explore both the metallic and the insulating phase, we have fixed \(\mu=U/2\). Here, we see that if the system is located in the metallic phase (\(U/t\leq 4\)), FO is clearly visible, since all data are similar to the non-interacting case (\(U=0\)), which shows FO-like behavior \(\delta n_{i}\sim\frac{\cos(2k_{F}|R_{i}|)}{|R_{i}|}\) (\(k_{F}=\pi/2\)). In contrast, when the ground state is in the Mott insulating phase (\(U/t>4\)), the signal of FO vanishes.
However, we should emphasize that although the metallic NFL phase of the HK model shows FO and seems to fit the non-interacting formula \(\delta n_{i}\sim\frac{\cos(2k_{F}|R_{i}|)}{|R_{i}|}\), there does not exist a well-defined Fermi wavevector at \(k_{F}\). This fact can be seen in Fig. 4(a), where the particle distribution \(n_{k}\) (calculated with Eq. 6 at \(T=0\)) shows FL-like jumps at \(k_{1}\) and \(k_{2}\) but not at the putative Fermi wavevector \(k_{F}=\pi/2\). The jumps at \(k_{1},k_{2}\) suggest a two-Fermi-surface structure and their locations are determined by
\[k_{1}=\left|\arccos\frac{U-\mu}{2t}\right|,\ \ \ \ k_{2}=\left|\arccos\frac{-\mu}{2t}\right|,\]
which results from inspecting Eq. 6 at \(T=0\) (\(n_{k}^{T=0}=\frac{1}{2}\left[\theta(2t\cos k+U/2)+\theta(2t\cos k-U/2)\right]\)). If we focus on the regime with \(k>0\), the jump at \(k_{1}\) (\(k_{2}\)) corresponds to the occupation changing from \(n_{k\sigma}=1\) to \(1/2\) (from \(1/2\) to \(0\)). Furthermore, the real part of the Green's function at zero frequency (\(\text{Re}G(k,0)\)) diverges at \(k_{1},k_{2}\) (Fig. 4(b)); thus, we may call \(k_{1},k_{2}\) quasi-Fermi wavevectors.
At the same time, one is able to calculate density of electron in terms of \(k_{2},k_{1}\),
\[n=\overbrace{2k_{1}}^{\Omega_{2}}\frac{2}{2\pi}+\overbrace{2(k_{2}-k_{1})}^{ \Omega_{1}}\frac{2}{2\pi}\frac{1}{2}=(k_{1}+k_{2})\frac{2}{2\pi}\equiv\frac{ 1}{\pi}2k_{a}.\]
where the prefactor \(\frac{2}{2\pi}\) denotes the density of states in momentum space including spin degeneracy, while the factor \(\frac{1}{2}\) reflects the fact that only single occupation exists in \(\Omega_{1}\), where \(n_{k}=1/2\). We have also defined the average Fermi wavevector \(k_{a}=(k_{1}+k_{2})/2\), which plays the same role as the Fermi wavevector \(k_{F}\) in the \(1D\) Fermi gas.
Note that one observes a zero of the real part of the Green's function (\(\text{Re}G(k,0)\)) (Fig. 4(b)); such a zero defines the Luttinger surface instead of the more familiar Fermi surface.[60] The corresponding characteristic wavevector is therefore named the Luttinger wavevector \(k_{L}\). In our case, we find \(k_{L}=\pi/2\) from \(\text{Re}G(k=k_{L},0)=0\) (\(\mu=U/2\)), which is identical to the non-interacting Fermi wavevector \(k_{F}\) and to the average Fermi wavevector \(k_{a}\).
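A short numerical illustration of these wavevectors, using the parameters \(U/t=3\), \(\mu=U/2\) quoted in the caption of Fig. 4 (the code itself is only a sketch and is not part of the original analysis), is:

```python
import numpy as np

t, U, mu = 1.0, 3.0, 1.5                       # parameters of Fig. 4 (mu = U/2)
k1 = abs(np.arccos((U - mu) / (2.0 * t)))      # inner jump of n_k (1 -> 1/2)
k2 = abs(np.arccos(-mu / (2.0 * t)))           # outer jump of n_k (1/2 -> 0)
ka = 0.5 * (k1 + k2)                           # average Fermi wavevector
print(k1, k2, ka, np.pi / 2)                   # here ka coincides with k_L = pi/2
```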
Considering \(k_{L}\) and \(k_{a}\), we should identify which one determines the FO in the NFL phase of the HK model,
\[\delta n_{i}^{HK}\sim\frac{\cos(2k_{a}|R_{i}|)}{|R_{i}|},\hskip 28.452756pt|R_{i}|\gg 1,\]
or
\[\delta n_{i}^{HK}\sim\frac{\cos(2k_{L}|R_{i}|)}{|R_{i}|},\hskip 28.452756pt|R_{i}|\gg 1.\]
### Linear response theory
In the last subsection, we have seen that, within the \(T\)-matrix approximation, the metallic NFL state in the HK model indeed exhibits FO, but, unexpectedly, it seems to be determined by the Luttinger wavevector \(k_{L}\) or the average Fermi wavevector \(k_{a}\). To pin down which one is responsible for the FO, we here use linear response theory to estimate the electron density; this may also be considered as a cross-check of the previous computation.
According to linear response theory,[56] we have
\[\delta n_{i}(t) =\frac{1}{i}\int_{-\infty}^{t}dt^{\prime}\langle[\hat{n}_{i}(t), \hat{H}_{imp}(t^{\prime})]\rangle\] \[=\frac{1}{i}\int_{-\infty}^{\infty}dt^{\prime}\theta(t-t^{\prime} )\langle[\hat{n}_{i}(t),\hat{n}_{0}(t^{\prime})]\rangle V(t^{\prime})\] \[=-\int_{-\infty}^{\infty}dt^{\prime}\chi_{c}(R_{i},R_{0},t-t^{ \prime})V(t^{\prime})\]
Here, the charge susceptibility is defined as
\[\chi_{c}(R_{i},R_{0},t-t^{\prime})=-\frac{1}{i}\theta(t-t^{\prime})\langle[ \hat{n}_{i}(t),\hat{n}_{0}(t^{\prime})]\rangle.\]
Thus, if we are able to calculate \(\chi_{c}(R_{i},R_{0},t-t^{\prime})\), the electron density is easy to obtain after integrating over time. Equivalently, \(\delta n_{i}(t)=-\int\frac{dt\omega}{2\pi}e^{-i\omega t}\chi_{c}(R_{i},R_{0}, \omega)V(\omega)\) and \(\chi_{c}(R_{i},R_{0},\omega)=\int dte^{i\omega t}\chi_{c}(R_{i},R_{0},t)\).
In many-body physics, one usually performs a Wick rotation and calculates the imaginary-time charge susceptibility
\[\chi_{c}(R_{i},R_{0},\tau)=\langle\hat{T}_{\tau}\hat{n}_{i}(\tau)\hat{n}_{0}\rangle,\]
or its Fourier transformation \(\chi_{c}(R_{i},R_{0},i\Omega_{n})\). Finally, we obtain \(\chi_{c}(R_{i},R_{0},\omega)=\chi_{c}(R_{i},R_{0},i\Omega_{n}\to\omega+i0^{+})\).
In the framework of perturbation theory, with the help of Feynman diagrams and Wick's theorem, one can easily calculate \(\chi_{c}(R_{i},R_{0},i\Omega_{n})\) or its Fourier transform \(\chi_{c}(q,i\Omega_{n})\). When the interaction is turned off (\(U=0\)), we simply obtain (see Appendix B)
\[\chi_{c}^{(0)}(q,i\Omega_{n})=\frac{-1}{N_{s}\beta}\sum_{k,\omega_{n}}\sum_{ \sigma}G_{\sigma}^{(0)}(k+q,\omega_{n}+\Omega_{n})G_{\sigma}^{(0)}(k,\omega_{ n}). \tag{10}\]
After frequency summation, one finds the standard result \(\chi_{c}^{(0)}(q,i\Omega_{n})=\frac{2}{N_{s}}\sum_{k}\frac{f_{F}(\varepsilon_ {k+q}-\mu)-f_{F}(\varepsilon_{k}-\mu)}{i\Omega_{n}-\varepsilon_{k+q}+ \varepsilon_{k}}\).
However, for the HK model we are not aware of any perturbation theory that reproduces exact results such as the Green's function or the free energy. (Note, however, that Ref. [17] has proposed a Hartree-Fock-based perturbation theory for the HK model at \(T=0\).) Therefore, one should be careful when calculating multi-particle correlations like \(\chi_{c}\).
Fortunately, Refs. [4] and [61] tell us that the charge and spin susceptibilities of the HK model have a form identical to that of the familiar non-interacting electron gas (e.g. Eq. 10); the only difference is that the non-interacting Green's function is replaced by the interacting one (Eq. 4). For the HK model, we just replace \(G_{\sigma}^{(0)}\) with \(G_{\sigma}\) and the explicit result is
Figure 4: (a) Electron’s distribution function \(n_{k}\) and (b) the real (imaginary) part of single-particle Green’s function at zero-frequency (\(\text{Re}G(k,0),\text{Im}G(k,0)\)) for \(U/t=3,\mu=U/2\).
given by Ref. [4] as
\[\chi_{c}(q,i\Omega_{n}) = \frac{-1}{N_{s}}\sum_{k,\sigma}(1-n_{k})(1-n_{k+q}) \tag{11}\] \[\times \frac{f_{F}(\varepsilon_{k}-\mu)-f_{F}(\varepsilon_{k+q}-\mu)}{i \Omega_{n}-\varepsilon_{k+q}+\varepsilon_{k}}\] \[+ \frac{-1}{N_{s}}\sum_{k,\sigma}(1-n_{k})n_{k+q}\] \[\times \frac{f_{F}(\varepsilon_{k}-\mu)-f_{F}(\varepsilon_{k+q}-\mu+U)} {i\Omega_{n}-\varepsilon_{k+q}-U+\varepsilon_{k}}\] \[+ \frac{-1}{N_{s}}\sum_{k,\sigma}n_{k}(1-n_{k+q})\] \[\times \frac{f_{F}(\varepsilon_{k}-\mu+U)-f_{F}(\varepsilon_{k+q}-\mu)} {i\Omega_{n}-\varepsilon_{k+q}+\varepsilon_{k}+U}\] \[+ \frac{-1}{N_{s}}\sum_{k,\sigma}n_{k}n_{k+q}\] \[\times \frac{f_{F}(\varepsilon_{k}-\mu+U)-f_{F}(\varepsilon_{k+q}-\mu+U )}{i\Omega_{n}-\varepsilon_{k+q}+\varepsilon_{k}}.\]
It is noted that the above density-density correlation function is not an approximation but is exact for HK model due to the solvability. It is easy to check that when \(U=0\), the above result reduces into the non-interacting one \(\chi_{c}^{(0)}\).
Since the impurity potential is static, we have \(V(\omega)=2\pi V\delta(\omega)\), so \(\delta n_{i}=-V\chi_{c}(R_{i},R_{0},\omega=0)\). This can be calculated by replacing \(i\Omega_{n}\) with \(\omega+i0^{+}\) in Eq. 11 and performing a Fourier transformation,
\[\delta n_{i}=-V\frac{1}{N_{s}}\text{Re}\left[\sum_{q}e^{iq(R_{i}-R_{0})}\chi_ {c}(q,i\Omega_{n}\rightarrow\omega+i0^{+})|_{\omega=0}\right]. \tag{12}\]
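A compact numerical sketch of Eqs. 11 and 12 is given below; the momentum grid, the broadening \(\eta\) and the parameter values are illustrative assumptions rather than the settings used for Figs. 5-8.

```python
import numpy as np

# Sketch of Eqs. (11)-(12) in 1D; grid, eta and parameters are assumptions.
t, U, mu, V, T, eta = 1.0, 1.0, 0.5, 0.1, 0.02, 1e-3
k = np.linspace(-np.pi, np.pi, 600, endpoint=False)
eps = lambda q: -2.0 * t * np.cos(q)
fF = lambda x: 1.0 / (np.exp(x / T) + 1.0)
n_of = lambda q: fF(eps(q) - mu) / (fF(eps(q) - mu) + 1.0 - fF(eps(q) - mu + U))

def chi_c(q, w=0.0):
    """Static charge susceptibility chi_c(q, w + i*eta) of Eq. (11), spin-summed."""
    e1, e2 = eps(k), eps(k + q)
    n1, n2 = n_of(k), n_of(k + q)
    z = w + 1j * eta
    terms = ((1 - n1) * (1 - n2) * (fF(e1 - mu) - fF(e2 - mu)) / (z - e2 + e1)
             + (1 - n1) * n2 * (fF(e1 - mu) - fF(e2 - mu + U)) / (z - e2 - U + e1)
             + n1 * (1 - n2) * (fF(e1 - mu + U) - fF(e2 - mu)) / (z - e2 + e1 + U)
             + n1 * n2 * (fF(e1 - mu + U) - fF(e2 - mu + U)) / (z - e2 + e1))
    return -2.0 * np.mean(terms)                       # factor 2 from the spin sum

q = np.linspace(-np.pi, np.pi, 200, endpoint=False)
chi = np.array([chi_c(qi) for qi in q])
dn = lambda R: -V * np.real(np.mean(chi * np.exp(1j * q * R)))   # Eq. (12), R_0 = 0
print([round(dn(R), 5) for R in range(1, 7)])
```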
In Fig. 5, we have plotted the results from the linear response theory and good agreement with our previous \(T\)-matrix calculation (Fig. 2) has been found.
Because \(k_{L}\) and \(k_{a}\) are identical in the symmetric case \(\mu=U/2\), we instead investigate an asymmetric case, e.g. the situation with electron density \(n=0.5\), in Fig. 6. (Results for electron densities \(n=0.4\) and \(n=0.6\) are shown in Appendix C; the physics is unchanged.) For this electron density, a certain regime of the quasi-particle bands \(E_{k-}=\varepsilon_{k}-\mu,E_{k+}=\varepsilon_{k}-\mu+U\) is doubly occupied when \(U/t\) is small; in contrast, only the \(E_{k-}\) band is occupied when \(U/t\) is large. (See the corresponding band structure \(E_{k\pm}\) in Fig. 7.) Here, although FO is still visible, it is not easy to distinguish \(k_{L}\) and \(k_{a}\); thus we plot \(\chi_{c}(q,0)\) (\(\chi_{c}(q,0)\propto\delta n(q)\), the Fourier transform of \(\delta n_{i}\)) in Fig. 8, which shows the dominating wavevector of the charge response.
As one can see from Fig. 8, when \(U/t=0\), the correct \(2k_{F}\) singularity of non-interacting electron gas is reproduced as expected. This means we can approximate \(\chi_{c}(q)\) as \(\chi_{c}(2k_{F})\) and it leads to \(\delta n_{i}\sim\cos(2k_{F}(R_{i}-R_{0}))\chi_{c}(2k_{F})\). Thus, the correct \(2k_{F}\) oscillation of \(\delta n_{i}\) has been reproduced.
Next, if one enhances the interaction to \(U/t=1\), three dominating peaks located at \(2k_{1}\), \(2k_{2}\) and \(2k_{a}\) are found. In this case,
\[\delta n_{i} \sim \cos(2k_{1}(R_{i}-R_{0}))\chi_{c}(2k_{1})+\cos(2k_{2}(R_{i}-R_{0 }))\chi_{c}(2k_{2})\] \[+ \cos(2k_{a}(R_{i}-R_{0}))\chi_{c}(2k_{a}),\]
and the last one with \(2k_{a}\) singularity appears to be the final winner. (We will provide an intuitive explanation later.)
In addition, if \(U\) is larger, \(\chi_{c}(q,0)\) has peak near \(\pi\), which is just \(2k_{2}\) since in this case, only one quasi-particle band \(E_{k-}\) has been occupied.
Actually, as can be seen in Fig. 7(b), one may consider \(2k_{a}=k_{1}+k_{2}\) as the inter-band particle-hole excitation (scattering) process induced by the HK interaction. In other words, this can be explained as an inter-band '\(2k_{F}\)' singularity, in contrast with an intra-band '\(2k_{F}\)' singularity like \(2k_{2}\) in Fig. 7(d). At the same time, we see that in Eq. 11 the first and the fourth term are mainly dominated by intra-band particle-hole excitations, while the second and the third term describe inter-band particle-hole excitations. If one considers the numerators in Eq. 11, e.g. the prefactor of the first term \((1-n_{k})(1-n_{k+q})(f_{F}(\varepsilon_{k}-\mu)-f_{F}(\varepsilon_{k+q}-\mu))\) and the prefactor of the third term \(n_{k}(1-n_{k+q})(f_{F}(\varepsilon_{k}-\mu+U)-f_{F}(\varepsilon_{k+q}-\mu))\), one finds that the former is more mismatched than the latter; thus
Figure 5: \(\delta n_{i}\) calculated from the linear response theory for \(U/t=0,1,2,3,4,5,6,8\) with \(V/t=0.1\) and \(\mu=U/2\).
the inter-band excitations (described by the second and third terms in Eq. 11) win over the intra-band excitations (the first and fourth terms in Eq. 11). So, we expect the charge susceptibility to be determined by the average Fermi surface, which highlights the inter-band particle-hole excitation.
Therefore, we conclude that the FO in the NFL state with a two-Fermi-surface structure is determined by the average Fermi surface, while the NFL state with only one Fermi surface shows the usual \(2k_{F}\) singularity.
### FO at finite temperature
Before ending this section, we try to explore the finite temperature effect of FO. It is expected that the thermal effect will wash out the sharp jump around Fermi surface, such that the charge response will be weakened if elevating temperature. Thus, the amplitude of FO decreases when temperature is increased, which can be seen in Fig. 9. Here, \(\delta n_{i}\) for \(n=0.5\) and \(n=1\) are shown in Fig. 9(a) and (b). We see that when \(T/t\lesssim 0.1\), the amplitude of FO is visible while higher temperature does not lead to noticeable FO. The amplitude of FO has also been fitted with exponential function \(\sim e^{-T}\) and good agreement with the calculated ones has been found Fig. 9(c) and (d).
## IV Discussion
### FO on \(2d\) square lattice
Now, we turn our attention to the \(2D\) square lattice, where the electron dispersion is chosen to be \(\varepsilon_{k}=-2t(\cos k_{x}+\cos k_{y})\). Generally, the findings of the \(1D\) situation remain valid on the present \(2D\) square lattice, as can be seen in Figs. 10 and 11. This is not unexpected, since the NFL states in the HK model belong to the same universality class and the space dimension does not change the nature of the NFL. This feature is quite different from the Hubbard model, where the NFL in \(1D\) is faithfully described by the Luttinger liquid paradigm, whereas its extension to the most important \(2D\) case is still lacking. In our opinion, such a difference may result from competing symmetry-breaking states involving charge-density, spin-density and pairing order in the Hubbard model, and including these orders in HK-like models may clarify this issue.
Fig. 10 shows \(\delta n_{i}\) for \(U/t=0\) and \(U/t=2\); FO exists in both situations despite the latter being an NFL. Furthermore, in Fig. 11(b), it is seen that \(\chi_{c}(q,0)\) in the NFL is dominated by the average Fermi surface (\(2k_{a}\) in the figure), as expected. For comparison, the non-interacting Fermi gas in Fig. 11(a) has the usual \(2k_{F}\) charge response. In addition, the two-Fermi-surface structure of the quasi-particles in the NFL is visible in Fig. 11(d).
Figure 9: \(\delta n_{i}\) versus temperature for (a) \(n=0.5\) and (b) \(n=1.0\) with \(V/t=0.1,U/t=2\). (c) \(\delta n_{4}\) denotes amplitude of FO on the fourth nearest-neighbor site and (d) \(\delta n_{1}\) is for the nearest-neighbor site around impurity.
### More impurities?
In the previous sections, we have studied the details of the single impurity problem and a clear signature of FO was found in the metallic NFL states. But what happens if more impurities are involved? It is expected that if the density of impurities is small, the interference effect between impurities can be safely ignored and one can just use the picture of the single impurity problem. However, if more impurities exist, interference effects must be included, which invalidates the \(T\)-matrix formalism we have developed for the single impurity case. Although we cannot make any definite prediction due to the lack of an appropriate theoretical formalism, it seems that localization of electrons (Anderson localization) is inevitable for strong impurity scattering. Future study of the interplay between localization and electron correlation in HK-like models is desirable, and it may relate to the timely issues of quantum thermalization and many-body localization.[62]
### Magnetic impurity
In the main text, only the effect of a nonmagnetic impurity is analyzed. However, we all know that the understanding of magnetic impurities is an essential issue in condensed matter physics, involving, for instance, the Kondo effect, the Ruderman-Kittel-Kasuya-Yosida exchange interaction and the spin glass state.[63; 64] It is noted that the first of these has already been explored with the poor man's scaling approach, and a noticeable deviation from the usual Kondo impurity behavior in a FL was found.[10]
Due to the perturbative nature of poor man's scaling, the ground state of the Kondo impurity in the HK model has not been established, and it seems that the state-of-the-art numerical renormalization group, which is very successful for Kondo impurities in a non-interacting environment,[65] cannot be utilized without nontrivial modification. To explore the ground state, it will be helpful to follow the classic variational wave-function calculation of Yosida as a first step.[66]
## V Conclusion and future direction
In conclusion, we have found that Friedel oscillation exists in the non-Fermi liquid phase of the HK model, as follows from the \(T\)-matrix approximation and from linear response theory. When a two-Fermi-surface structure exists, inter-band particle-hole excitations dominate and one observes the average Fermi surface. We should emphasize that the two-Fermi-surface structure in the HK model is an intrinsic effect induced by the interaction and no symmetry breaking is involved. This is in contrast with the usual multi-band systems, in which the bands appear without electron correlation.
In fact, besides the HK model, the average Fermi surface structure may naturally arise in many correlated electron systems, e.g. the phenomenological description of underdoped cuprates in terms of the Yang-Rice-Zhang ansatz,[67] the Hubbard-I approximation of the Hubbard model, the Falicov-Kimball model and the Ising-Kondo lattice.[68; 69; 70; 55] Thus, it is interesting to examine whether the findings of this work remain valid in those more realistic systems, which would contribute to our understanding of high-temperature cuprate superconductivity.
For future study, considering the recent progress on superconductivity in HK-like models,[6; 8; 9; 13] it will be interesting to explore the impurity effect in those superconducting phases within the framework developed in this work. Since the superconducting phases in HK models are rather different from the usual Bardeen-Cooper-Schrieffer pairing state, we expect that the impurity effect will be a good guide to identify them.
Therefore, we hope our work provides a useful framework to understand Friedel oscillation and related impurity effects in exotic correlated electron models like the HK model. Certain extensions of our work will contribute to the exploration of impurity effects in generic strongly
Figure 10: \(\delta n_{i}\) on square lattice for (a) \(U/t=0\), (b) \(U/t=2\) with \(V/t=0.1\) and electron density is fixed to \(0.5\).
correlated electron systems.
###### Acknowledgements.
We acknowledge funding from the National Key Research and Development Program of China (Grant No.2022YFA1402704) and the National Natural Science Foundation of China (Grant No.12047501 and No.11834005).
## Appendix A Derivation of single-particle Green's function
Following the treatment of the Hubbard model,[55] let us define the single-particle Green's function as \(G_{\sigma}(k,\omega)=\langle\langle\hat{c}_{k\sigma}|\hat{c}_{k\sigma}^{\dagger}\rangle\rangle_{\omega}\), which is just the Fourier transform of the retarded Green's function
\[G_{\sigma}(k,t)=-i\theta(t)\langle[\hat{c}_{k\sigma}(t),\hat{c}_{k\sigma}^{\dagger}]_{+}\rangle.\]
Then, in terms of
\[[\hat{c}_{k\sigma},\hat{H}]=(\varepsilon_{k}-\mu)\hat{c}_{k\sigma}+U\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}},\] \[[\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}},\hat{H}]=(\varepsilon_{k}-\mu+U)\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}},\]
we find
\[\omega\langle\langle\hat{c}_{k\sigma}|\hat{c}_{k\sigma}^{\dagger}\rangle \rangle_{\omega}=1+(\varepsilon_{k}-\mu)\langle\langle\hat{c}_{k\sigma}|\hat{ c}_{k\sigma}^{\dagger}\rangle\rangle_{\omega}+U\langle\langle\hat{c}_{k \sigma}\hat{n}_{k\bar{\sigma}}|\hat{c}_{k\sigma}^{\dagger}\rangle\rangle_{\omega}\]
and
\[\omega\langle\langle\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}}|\hat{c}_{k\sigma }^{\dagger}\rangle\rangle_{\omega}=\langle\hat{n}_{k\bar{\sigma}}\rangle+( \varepsilon_{k}-\mu+U)\langle\langle\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}}| \hat{c}_{k\sigma}^{\dagger}\rangle\rangle_{\omega}\]
Since above equations are closed, we obtain
\[\langle\langle\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}}|\hat{c}_{k\sigma}^{ \dagger}\rangle\rangle_{\omega}=\frac{\langle\hat{n}_{k\bar{\sigma}}\rangle}{ \omega-\varepsilon_{k}+\mu-U}\]
and
\[G_{\sigma}(k,\omega) =\frac{1+\frac{U\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-(\varepsilon_{k}-\mu+U)}}{\omega-(\varepsilon_{k}-\mu)}\] \[=\frac{1-\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-(\varepsilon_{k}-\mu)}+\frac{\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-(\varepsilon_{k}-\mu+U)}\]
which is just the wanted Eq. 4 in the main text.
## Appendix B Charge susceptibility of non-interacting electron
To keep the paper self-contained, we derive the non-interacting formula for the charge susceptibility as follows. First, we transform \(\chi_{c}(R_{i},R_{0},\tau)\) into momentum-energy space via
\[\chi_{c}^{(0)}(R_{i},R_{0},\tau)=\frac{1}{\beta N_{s}}\sum_{\Omega_{n}}\sum_{ q}e^{iq(R_{i}-R_{0})-i\Omega_{n}\tau}\chi_{c}^{(0)}(q,\Omega_{n}). \tag{21}\]
Then, it is straightforward to derive
\[\chi_{c}^{(0)}(q,\Omega_{n}) =\int_{0}^{\beta}d\tau e^{i\Omega_{n}\tau}\sum_{j}e^{-iqR_{j}}\chi_{c}^{(0)}(R_{j},0,\tau)\] \[=\sum_{\sigma,\sigma^{\prime}}\int_{0}^{\beta}d\tau e^{i\Omega_{n}\tau}\sum_{j}e^{-iqR_{j}}\] \[\times\langle\hat{T}_{\tau}\hat{c}_{j\sigma}^{\dagger}(\tau)\hat{c}_{j\sigma}(\tau)\hat{c}_{0\sigma^{\prime}}^{\dagger}\hat{c}_{0\sigma^{\prime}}\rangle\] \[=\frac{1}{N_{s}^{2}}\sum_{k_{1},k_{2},k_{3},k_{4}}\sum_{\sigma,\sigma^{\prime}}\int_{0}^{\beta}d\tau e^{i\Omega_{n}\tau}\sum_{j}e^{-iqR_{j}}\] \[\times e^{-ik_{1}R_{j}}e^{ik_{2}R_{j}}\langle\hat{T}_{\tau}\hat{c}_{k_{1}\sigma}^{\dagger}(\tau)\hat{c}_{k_{2}\sigma}(\tau)\hat{c}_{k_{3}\sigma^{\prime}}^{\dagger}\hat{c}_{k_{4}\sigma^{\prime}}\rangle\]
Then, using the standard Wick theorem, we find
\[\chi_{c}^{(0)}(q,\Omega_{n}) =\frac{-1}{N_{s}}\sum_{k_{1},\sigma}\int_{0}^{\beta}d\tau e^{i \Omega_{n}\tau}G_{\sigma}^{(0)}(k_{1}+q,\tau)G_{\sigma}^{(0)}(k_{1},-\tau)\] \[=\frac{-1}{N_{s}\beta}\sum_{k_{1},\omega_{n},\sigma}G_{\sigma}^{(0 )}(k_{1}+q,\omega_{n}+\Omega_{n})G_{\sigma}^{(0)}(k_{1},\omega_{n})\]
Now, if we use the free electron Green's function \(G_{\sigma}^{(0)}(k,\omega_{n})=\frac{1}{i\omega_{n}-(\varepsilon_{k}-\mu)}\), it is found that
\[\chi_{c}^{(0)}(q,\Omega_{n}) =\frac{-2}{N_{s}\beta}\sum_{k,\omega_{n}}\frac{1}{i\omega_{n}-(\varepsilon_{k}-\mu)}\] \[\times\frac{1}{i\omega_{n}+i\Omega_{n}-(\varepsilon_{k+q}-\mu)}\] \[=\frac{2}{N_{s}}\sum_{k}\frac{f_{F}(\varepsilon_{k+q}-\mu)-f_{F}(\varepsilon_{k}-\mu)}{i\Omega_{n}-\varepsilon_{k+q}+\varepsilon_{k}}.\]
which is the textbook result for non-interacting electron gas.
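The frequency summation used above can also be checked numerically for a single pair of energies; the following sketch (the temperature, the energies and the cutoff are arbitrary illustrative choices) compares a truncated Matsubara sum with the closed form:

```python
import numpy as np

# Check of the Matsubara sum for one pair of energies a, b (all numbers are assumptions).
T, a, b, m = 0.3, -0.7, 0.4, 1
beta = 1.0 / T
Omega = 2.0 * np.pi * m * T                        # bosonic frequency
n = np.arange(-100000, 100000)
wn = (2 * n + 1) * np.pi * T                       # fermionic frequencies

lhs = (1.0 / beta) * np.sum(1.0 / ((1j * wn - a) * (1j * (wn + Omega) - b)))
fF = lambda x: 1.0 / (np.exp(x / T) + 1.0)
rhs = (fF(a) - fF(b)) / (1j * Omega + a - b)       # closed form after the sum
print(lhs, rhs)                                    # agree up to the truncation error
```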
Appendix C \(\delta n_{i}\) for electron density \(n=0.4\) and \(n=0.6\) from linear response theory
In the main text, we have given the FO results for electron density \(n=0.5\) in terms of linear response theory.
To see whether the physics depends on the choice of electron density, the results for \(n=0.4\) and \(n=0.6\) are shown here in Figs. 12 and 13. It is clear that, just as in the case of \(n=0.5\), FO also exists for \(n=0.4\) and \(n=0.6\). Therefore, we may say that the NFL state indeed exhibits FO for generic electron fillings.
|
2306.03429
|
New class of Gibbs measures for two state Hard-Core model on a Cayley
tree
|
In this paper, we consider a Hard-Core $(HC)$ model with two spin values on
Cayley trees. The conception of alternative Gibbs measure is introduced and
translational invariance conditions for alternative Gibbs measures are found.
Also, we show that the existence of alternative Gibbs measures which are not
translation-invariant. In addition, we study free energy of the model.
|
R. M. Khakimov, M T. Makhammadaliev, F. H. Haydarov
|
2023-06-06T06:06:49Z
|
http://arxiv.org/abs/2306.03429v1
|
# New class of Gibbs measures for two state hard-core model on a Cayley tree
###### Abstract.
In this paper, we consider a Hard-Core (\(HC\)) model with two spin values on Cayley trees. The concept of an alternative Gibbs measure is introduced and translational invariance conditions for alternative Gibbs measures are found. Also, we show the existence of alternative Gibbs measures which are not translation-invariant. In addition, we study the free energy of the model.
**Key words.** Cayley tree, configuration, hard-core model, Gibbs measure, translation-invariant measure, Alternating Gibbs Measure, free energy.
AMS Subject Classification: 20B07, 20E06.
## 1. Introduction
The problems arising in the study of the thermodynamic properties of physical and biological systems are typically solved within the framework of the theory of Gibbs measures. The Gibbs measure is a fundamental concept that determines the probability of a microscopic state of a given physical system (defined by a specific Hamiltonian). It is known that each Gibbs measure is associated with one phase of a physical system, and if the Gibbs measure is not unique, then there exists a phase transition. For a wide class of Hamiltonians, it is known that the set of all Gibbs measures (corresponding to a given Hamiltonian) is a nonempty, convex, compact subset of the set of all probability measures (see, e.g., [1], [3]) and each point of this convex set can be uniquely decomposed in terms of its extreme points. In this regard, it is of particular interest to describe all the extreme points of this convex set, i.e., extreme Gibbs measures.
For convenience, we first describe the basic concepts used in this paper and then give the statement of the problem and the history of its study.
**The Cayley tree.** Let \(\Im^{k}=(V,L,i)\), \(k\geq 1\), be the Cayley tree of order \(k\), i.e., an infinite tree with exactly \(k+1\) edges coming out of each vertex, and let \(V\) be the set of vertices, \(L\) the set of edges of \(\Im^{k}\) and \(i\) is the incidence function setting each edge \(l\in L\) into correspondence with its endpoints \(x,y\in V\). If \(i(l)=\{x,y\}\), then the vertices \(x\) and \(y\) are called the _nearest neighbors_, denoted by \(l=\langle x,y\rangle\).
For an arbitrary point \(x^{0}\in V\) we set
\[W_{n}=\ \{x\in V\ \mid\ d(x,x^{0})=n\},\ \ V_{n}=\bigcup_{m=0}^{n}W_{m},\ \ L_{n}=\ \{l=\langle x,y\rangle\in L\ \mid\ x,y\in V_{n}\},\]
**The HC-model.** A configuration on \(V_{n}\) (on \(W_{n}\)) is a function \(\sigma_{n}:V_{n}\to\{0,1\}\) (respectively \(\omega_{n}:W_{n}\to\{0,1\}\)); the value \(1\) means that a site is occupied and \(0\) that it is vacant. A configuration is called admissible if \(\sigma(x)\sigma(y)=0\) for any pair of nearest neighbors \(\langle x,y\rangle\). Denote by \(\Omega_{V_{n}}\) (resp. \(\Omega_{W_{n}}\)) the set of admissible configurations on \(V_{n}\) (resp. \(W_{n}\)) and by \(\#\sigma_{n}=\sum_{x\in V_{n}}\sigma_{n}(x)\) the number of occupied sites of \(\sigma_{n}\). For \(\sigma_{n-1}\in\Omega_{V_{n-1}}\) and \(\omega_{n}\in\Omega_{W_{n}}\), we write \(\sigma_{n-1}\vee\omega_{n}\) for the configuration on \(V_{n}\) which coincides with \(\sigma_{n-1}\) on \(V_{n-1}\) and with \(\omega_{n}\) on \(W_{n}\). For \(x\in W_{n}\), let \(S(x)=\{y\in W_{n+1}:\langle x,y\rangle\}\) denote the set of direct successors of \(x\).
Let \(z:x\mapsto z_{x}=(z_{0,x},z_{1,x})\in R_{+}^{2}\) vector-valued function on \(V\). For \(n=1,2,\ldots\) and \(\lambda>0\) consider the probability measure \(\mu^{(n)}\) on \(\Omega_{V_{n}}\), defined as
\[\mu^{(n)}(\sigma_{n})=\frac{1}{Z_{n}}\lambda^{\#\sigma_{n}}\prod_{x\in W_{n}}z_ {\sigma(x),x}. \tag{1}\]
Here \(Z_{n}\) is the normalizing divisor:
\[Z_{n}=\sum_{\widetilde{\sigma}_{n}\in\Omega_{V_{n}}}\lambda^{\#\widetilde{ \sigma}_{n}}\prod_{x\in W_{n}}z_{\widetilde{\sigma}(x),x}.\]
The sequence of probability measures \(\mu^{(n)}\) is said to be consistent if for any \(n\geq 1\) and \(\sigma_{n-1}\in\Omega_{V_{n-1}}\):
\[\sum_{\omega_{n}\in\Omega_{W_{n}}}\mu^{(n)}(\sigma_{n-1}\vee\omega_{n}) \mathbf{1}(\sigma_{n-1}\vee\omega_{n}\in\Omega_{V_{n}})=\mu^{(n-1)}(\sigma_{n -1}). \tag{2}\]
In this case, there is a unique measure \(\mu\) on \((\Omega,\mathbf{B})\) such that for all \(n\) and \(\sigma_{n}\in\Omega_{V_{n}}\)
\[\mu(\{\sigma|_{V_{n}}=\sigma_{n}\})=\mu^{(n)}(\sigma_{n}).\]
**Definition 2.** The measure \(\mu\) that is the limit of a sequence \(\mu^{(n)}\) defined by (1) with consistency condition (2) is called the splitting _HC_-Gibbs measure (SGM) with \(\lambda>0\) corresponding to the function \(z:\,x\in V\setminus\{x^{0}\}\mapsto z_{x}\). Moreover, an _HC_-Gibbs measure corresponding to a constant function \(z_{x}\equiv z\) is said to be translation-invariant (TI).
**Problem statement.** The main task is to study the structure of the set \(\mathcal{G}(H)\) of all Gibbs measures corresponding to a given Hamiltonian \(H\).
A measure \(\mu\in\mathcal{G}(H)\) is called extreme if it cannot be expressed as \(\mu=\lambda\mu_{1}+(1-\lambda)\mu_{2}\) for some \(\mu_{1},\mu_{2}\in\mathcal{G}(H)\) with \(\mu_{1}\neq\mu_{2}\).
As noted above, the set \(\mathcal{G}(H)\) of all Gibbs measures (for a given Hamiltonian \(H\)) is a nonempty convex compact set \(\mathcal{G}(H)\) in the space of all probability measures on \(\Omega\).
Using theorem (12.6) in [1] and section 1.2.4 in [8], we can note the following.
* _Any SGM corresponds to the solution of Eq. (3) (see below). Thus, our main task reduces to solving functional equation (3)._
It is known [13] that each Gibbs measure for _HC_-model on the Cayley tree can be associated with the collection of values \(z=\{z_{x},x\in V\}\) satisfying
\[z_{x}=\prod_{y\in S(x)}(1+\lambda z_{y})^{-1}, \tag{3}\]
where \(\lambda=e^{-J\beta}>0\) is a parameter, \(\beta=\frac{1}{T}\), \(T>0\) is a temperature.
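For orientation, the translation-invariant solution of Eq. (3), \(z=(1+\lambda z)^{-k}\), is easily obtained numerically; the following short Python sketch (the values of \(k\) and \(\lambda\) are arbitrary illustrative choices, not results of the paper) does this by bracketing:

```python
from scipy.optimize import brentq

# Translation-invariant solution of Eq. (3): z = (1 + lam*z)**(-k); k, lam are assumptions.
def ti_solution(k, lam):
    return brentq(lambda z: z - (1.0 + lam * z) ** (-k), 0.0, 1.0)

for lam in (0.5, 1.0, 2.0):
    print(lam, ti_solution(k=3, lam=lam))
```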
Let \(G_{k}\) be a free product of \(k+1\) cyclic groups \(\{e,a_{i}\}\) of order two with the respective generators \(a_{1},a_{2},...,a_{k+1},a_{i}^{2}=e\). There is a one-to-one correspondence between the set of vertices \(V\) of the Cayley tree of order \(k\) and the group \(G_{k}\) (see [5, 6, 29]).
Let \(\widehat{G}_{k}\) be a normal divisor of a finite index \(r\geq 1\) and \(G_{k}/\widehat{G}_{k}=\{H_{1},...,H_{r}\}\) be the quotient group.
**Definition 3.** A collection of quantities \(z=\{z_{x},x\in G_{k}\}\) is said to be \(\widehat{G}_{k}\)-periodic if \(z_{yx}=z_{x}\) for \(\forall x\in G_{k},y\in\widehat{G}_{k}.\) The \(G_{k}\)-periodic collections are called translation invariant.
For any \(x\in G_{k},\) the set \(\{y\in G_{k}:\langle x,y\rangle\}\setminus S(x)\) contains a unique element denoted by \(x_{\downarrow}\) (see [9, 10]).
**Definition 4.** A collection of quantities \(z=\{z_{x},x\in G_{k}\}\) is called \(\widehat{G}_{k}\)-weakly periodic if \(z_{x}=z_{ij}\) for any \(x\in H_{i},\)\(x_{\downarrow}\in H_{j}\) for any \(x\in G_{k}.\)
**Definition 5**. A measure \(\mu\) is called \(\widehat{G}_{k}\)-(weakly) periodic if it corresponds to a \(\widehat{G}_{k}\)-(weakly) periodic collection of quantities \(z.\)
**History of the study of SGMs for the \(\mathit{HC}\)-model.** We present a brief overview of work related to the \(\mathit{HC}\)-model on the Cayley tree.
In [12] A. Mazel and Yu. Suhov introduced and studied the \(\mathit{HC}\)-model on the \(d\)-dimensional lattice \(\mathbb{Z}^{d}\). Gibbs measures for the two-state \(\mathit{HC}\)-model on the Cayley tree were studied in [13]-[23]. In [13], the uniqueness of the translation-invariant Gibbs measure and the nonuniqueness of periodic Gibbs measures for the \(\mathit{HC}\)-model were proved. For the parameters of the \(\mathit{HC}\)-model, a sufficient condition was also found in [13] under which the translation-invariant Gibbs measure is nonextreme. In the case where the translation-invariant Gibbs measure is extreme, a sufficient condition was found in [14]. The region of extremality of this measure was extended in [15]. Weakly periodic Gibbs measures for the \(\mathit{HC}\)-model in the case of a normal divisor of index \(2\) were studied in [16], and a complete description of the weakly periodic Gibbs measures was given.
Weakly periodic Gibbs measures for the \(\mathit{HC}\)-model in the case of a normal divisor of index \(4\) were studied in [17]-[22]. In this case conditions for the existence of weakly periodic (nonperiodic) Gibbs measures are found. We also found conditions for the translation-invariance of the weakly periodic Gibbs measures (see Chap. 7 in [4] for other HC model properties and their generalizations on a Cayley tree).
In this paper, we study a two-state \(\mathit{HC}\)-model on a Cayley tree. The concept of an alternative Gibbs measure is introduced. Translational invariance conditions for alternative Gibbs measures are found. In addition, the existence of alternative Gibbs measures that are not translation invariant is proved.
## 2. A new class of Gibbs measures
We consider the half-tree. Namely the root \(x^{0}\) has \(k\) nearest neighbors. We construct below new solutions of the functional equation (3). Consider the following matrix
\[M=\begin{pmatrix}m&k-m\\ r&k-r\end{pmatrix}\]
where \(0\leq m\leq k\) and \(0\leq r\leq k\) are non-negative integers. This matrix defines the number of times the values \(h\) and \(l\) occur in the set \(S(x)\) for each \(z_{x}\in\{h,l\}\). More precisely, the boundary condition \(z=\{z_{x},x\in G_{k}\}\) with fields taking the values \(h\), \(l\) is defined by the following rules:
\(\bullet\) if at the vertex \(x\) we have \(z_{x}=h\), then on the vertices \(y\in S(x)\) the function \(z_{y}\) takes the values
\[\begin{cases}h\text{ on }m\text{ vertices of }S(x),\\ l\text{ on }k-m\text{ remaining vertices},\end{cases}\]
\(\bullet\) if at the vertex \(x\) we have \(z_{x}=l\), then on the vertices \(y\in S(x)\) the function \(z_{y}\) takes the values
\[\begin{cases}l\text{ on }r\text{ vertices of }S(x),\\ h\text{ on }k-r\text{ remaining vertices}.\end{cases}\]
For an example of such a function see Fig.1.
Then the system (3) has the form
\[\left\{\begin{array}{l}h=\frac{1}{(1+\lambda h)^{m}}\cdot\frac{1}{(1+ \lambda l)^{k-m}},\\ l=\frac{1}{(1+\lambda l)^{r}}\cdot\frac{1}{(1+\lambda h)^{k-r}},\end{array}\right. \tag{4}\]
where \(l>0,h>0,\lambda>0\).
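The system (4) can also be solved numerically for any given \(k,m,r\); the sketch below (illustrative only; the value \(\lambda=7\) and the initial guess, which is chosen so as to land on an asymmetric branch, are assumptions) uses a standard root finder:

```python
from scipy.optimize import fsolve

# Numerical sketch for system (4); k, m, r, lam and the initial guess are assumptions.
def solve_hl(k, m, r, lam, guess):
    def eqs(v):
        h, l = v
        return [h - (1 + lam * h) ** (-m) * (1 + lam * l) ** (-(k - m)),
                l - (1 + lam * l) ** (-r) * (1 + lam * h) ** (-(k - r))]
    return fsolve(eqs, guess)

# For k=3, m=1, r=0 and lam=7 an asymmetric (h != l) solution exists.
print(solve_hl(k=3, m=1, r=0, lam=7.0, guess=(0.05, 0.4)))
```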
As was mentioned above, for any boundary condition satisfying the functional equation (3) there exists a unique Gibbs measure. A measure constructed in this way which is not translation-invariant is called an alternative Gibbs measure (AGM) and is denoted by \(\mu_{m,r}\).
**Remark 1.** Note that the solution \(l=h\) in (4) corresponds to the only TIGM for the \(HC\)-model (see [13]). Therefore, we are interested in solutions of the form \(l\neq h\).
**Remark 2.** From (4) for \(m=r=0\) we obtain a system of equations whose solutions correspond to the \(G_{k}^{(2)}\)-periodic Gibbs measures for the \(HC\)-model.
The following theorem holds.
Figure 1. In this figure the values of function \(z_{x}\) on the vertices of the Cayley tree of order 5 are shown. This is the case when \(m=3\) and \(r=2\).
**Theorem 1.** Let \(k\geq 2\). If \(m+r\geq k-1\) then for the \(HC\)-model there is a unique AGM, which coincides with the unique TIGM.
**Proof.** For convenience, we denote \(h=x\) and \(l=y\). Then (4) can be rewritten as follows:
\[\left\{\begin{array}{l}x=\frac{1}{(1+\lambda x)^{m}}\cdot\frac{1}{(1+\lambda y )^{k-m}},\\ y=\frac{1}{(1+\lambda y)^{r}}\cdot\frac{1}{(1+\lambda x)^{k-r}}.\end{array}\right. \tag{5}\]
If the first equation (5) is divided by the second, then
\[\frac{x}{y}=\left(\frac{1+\lambda x}{1+\lambda y}\right)^{k-m-r} \tag{6}\]
We denote \(m+r-k=t\), \(t\geq-1\). Then by (6) we have
\[x\left(1+\lambda x\right)^{t}=y\left(1+\lambda y\right)^{t}\]
It is easy to check that the function \(f(x)=x\left(1+\lambda x\right)^{t}\) is increasing for \(t\geq-1\). Therefore, if \(m+r\geq k-1\), then the system of equations (5) has only solutions of the form \(x=y\); this solution corresponds to the TIGM, which is known to be unique. The theorem is proved.
From Theorem 1 we obtain the following
**Consequence.** Let \(k\geq 2\). If there are AGMs (non TI) for the \(HC\)-model, then \(m+r\leq k-2\).
Let \(k-m-r=n\) (\(n\in N,n\geq 2\)). Then from the (6) we get
\[x\left(1+\lambda y\right)^{n}=y\left(1+\lambda x\right)^{n}.\]
From this equation after simple algebra, we obtain the equation
\[(y-x)\Bigl{(}-1+C_{n}^{2}\lambda^{2}xy+C_{n}^{3}\lambda^{3}xy(x+y)+\ldots+C_{n }^{n}\lambda^{n}xy(x^{n-2}+x^{n-3}y+x^{n-4}y^{2}+\ldots+y^{n-2})\Bigr{)}=0.\]
Hence \(x=y\) or \(g(x,y)=0\), where
\[g(x,y)=C_{n}^{2}\lambda^{2}xy+C_{n}^{3}\lambda^{3}xy(x+y)+\ldots+C_{n}^{n} \lambda^{n}xy(x^{n-2}+x^{n-3}y+x^{n-4}y^{2}+\ldots+y^{n-2})-1.\]
In the case \(x=y\) the corresponding measure is TIGM.
The case \(x\neq y\). We consider the equation \(g(x,y)=0\) with respect to the variable \(x\) (or \(y\)). It is clear that \(g(0,y)=-1\) and \(g(x,y)\rightarrow+\infty\) as \(x\rightarrow+\infty\). Hence the equation \(g(x,y)=0\) in the variable \(x\) has at least one positive root. On the other hand, by Descartes' rule of signs, the equation \(g(x,y)=0\) in the variable \(x\) has at most one positive root. Hence the equation \(g(x,y)=0\) in the variable \(x\) has exactly one positive root, i.e., there exists a solution \((x,y)\) of the system of equations (5) different from \((x,x)\). Thus, the following statement is true.
**Statement 1.** Let \(k\geq 2\). If \(m+r\leq k-2\) then for the \(HC\)-model there exists AGM (not TI).
In particular, if \(\lambda^{2}xy=1\) for \(m+r=k-2\) (\(n=2\)) or if \(\lambda^{2}xy(3+\lambda(x+y))=1\) for \(m+r=k-3\) (\(n=3\)), then in both cases there exists AGM (not TI).
The case \(x=y\). We check the multiplicity of the root \(x=y\). In this case from \(g(x,x)=0\), we have
\[C_{n}^{2}\lambda^{2}x^{2}+2C_{n}^{3}\lambda^{3}x^{3}+\ldots+(n-1)C_{n}^{n} \lambda^{n}x^{n}-1=0 \tag{7}\]
and equation (7) also has exactly one positive root, i.e., \((x,x)\) is a multiple root of the system of equations (5). This means that the AGM coincides with the TIGM.
In particular, if \(\lambda x=1\) for \(m+r=k-2\) (\(n=2\)) or if \(3\lambda^{2}x^{2}+2\lambda^{3}x^{3}=1\) for \(m+r=k-3\) (\(n=3\)), then in both cases there is no AGM (not TI).
Let
\[\left\{\begin{array}{l}x=f(y),\\ y=f(x),\end{array}\right. \tag{8}\]
where \(f(x)=\frac{1}{(1+\lambda x)^{x}}\).
The next lemma is obvious.
**Lemma 1.** If \((x_{0},y_{0})\) is a solution to the system of equations (8), then \((y_{0},x_{0})\) is also a solution to the system of equations (8).
**Remark 3.** If the solution \((x,y)\) of the system of equations (8) corresponds to alternative Gibbs measure denoted by \(\mu\), then the solution \((y,x)\) corresponds to alternative Gibbs measure denoted by \(\mu^{{}^{\prime}}\).
### Alternative Gibbs measures in the case \(m+r\leq k-2\)
In this section, we consider the cases \(k=2\), \(k=3\) and \(k=4\). In the case \(k=2\) we have only the case \(m=0\) and \(r=0\). In the case \(k=3\) we have \(m=0\) and \(r=0\); \(m=0\) and \(r=1\); \(m=1\) and \(r=0\). In the case \(k=4\) we have \(m=0\) and \(r=0\); \(m=0\) and \(r=1\); \(m=1\) and \(r=0\); \(m=1\) and \(r=1\); \(m=0\) and \(r=2\); \(m=2\) and \(r=0\). In all cases, by Remark 2, we will not consider the case \(m=0\) and \(r=0\): there the solution \(x=y\) corresponds to the translation-invariant Gibbs measure, and the solutions \((x_{1},y_{1}),(x_{2},y_{2})\) (respectively \((x^{*},y^{*}),(y^{*},x^{*})\)) correspond to two-periodic Gibbs measures (see [23]).
**The case \(k=3\), \(m=1\) and \(r=0\).** For \(m=1\) and \(r=0\) (resp. \(m=0\) and \(r=1\)) the system of equations (5) can be rewritten
\[\left\{\begin{array}{l}x=\frac{1}{1+\lambda x}\cdot\frac{1}{(1+\lambda y)^{ 2}},\\ y=\frac{1}{(1+\lambda x)^{3}}.\end{array}\right. \tag{9}\]
From the system of equations (9) due to (6) we obtain \((x-y)\Big{(}\lambda^{2}xy-1\Big{)}=0\). Hence, \(x=y\) or \(\lambda^{2}xy=1\). The case \(x=y\) has already been considered.
Let \(\lambda^{2}xy=1\). Then \(\lambda x=\frac{1}{\lambda y}\) for \(x\neq y\). From here and from (9), after some algebra, we get:
\[\left\{\begin{array}{l}(1+\lambda x)^{3}-\lambda^{2}x=0,\\ (1+\lambda y)^{3}-\lambda^{3}y^{2}=0.\end{array}\right. \tag{10}\]
From \(\lambda^{2}xy=1\) we find \(y\) and substitute into the second equation of the system (10). Then
\[\left\{\begin{array}{l}(1+\lambda x)^{3}-\lambda^{2}x=0,\\ \frac{(1+\lambda x)^{3}-\lambda^{2}x}{\lambda^{3}x^{3}}=0.\end{array}\right.\]
We introduce the notation \(f(x)=(1+\lambda x)^{3}-\lambda^{2}x\). Then the roots of the equation \(f(x)=0\) are also roots of the system (9). Using the Cardano formulas, we find the positive solution of the last equation
\[\lambda^{3}x^{3}+3\lambda^{2}x^{2}+\lambda\Big{(}3-\lambda\Big{)}x+1=0.\]
Let \(x=q-\frac{1}{\lambda}\), then
\[f(q)=\lambda^{3}q^{3}-\lambda^{2}q+\lambda,\ \ D=\frac{1}{\lambda^{4}}\left(\frac{1}{4}-\frac{\lambda}{27}\right).\]
If \(D>0\), i.e., \(\lambda<\frac{27}{4}\) then by Cardano's formula the equation \(f(q)=0\) has one negative root.
If \(D=0\), i.e., \(\lambda=\frac{27}{4}\) then the equation \(f(q)=0\) has one multiple positive root of the form \(q^{\prime}=\frac{2}{9}\), i.e. \(x^{\prime}=\frac{2}{27}\), \(y^{\prime}=\frac{8}{27}\).
By Cardano's formula, the equation \(f(q)=0\) has three real roots if \(D<0\). Hence, \(f(x)=0\) has three real roots if \(\lambda>\frac{27}{4}\). Let these solutions be \(x_{1},x_{2},x_{3}\). By the Vieta's formulas we have
\[x_{1}+x_{2}+x_{3}=-\frac{3}{\lambda},\ \ x_{1}x_{2}+x_{1}x_{3}+x_{2}x_{3}=\frac{3-\lambda}{\lambda^{2}}<0,\ \ x_{1}x_{2}x_{3}=-\frac{1}{\lambda^{3}}.\]
From the equality \(x_{1}x_{2}+x_{1}x_{3}+x_{2}x_{3}=\frac{3-\lambda}{\lambda^{2}}<0\) we obtain that at least one and at most two roots of the equation are positive.
From the equality \(x_{1}x_{2}x_{3}=-\frac{1}{\lambda^{3}}<0\) it follows that exactly two roots are positive. Hence, \(f(x)=0\) has two positive roots if \(\lambda>\frac{27}{4}\). These roots have the following form
\[x_{1}=\frac{\sqrt[3]{t^{2}}-6\sqrt[3]{t}+12\lambda}{6\lambda\sqrt[3]{t}},\ \ x_{2}=\frac{6\sqrt[3]{p}}{\lambda(\sqrt[3]{p^{2}}+(2\lambda-6)\sqrt[3]{p}+4 \lambda^{2}-24\lambda)}.\]
Here
\[t=-108\lambda+12\lambda\sqrt{-12\lambda+81},\ \ p=108\lambda+8\lambda^{3}-72 \lambda^{2}+12\lambda\sqrt{-12\lambda+81}.\]
From the equality \(\lambda^{2}xy=1\) we find \(y_{1}\) and \(y_{2}\) corresponding to \(x_{1}\) and \(x_{2}\):
\[y_{1}=\frac{6\sqrt[3]{t}}{\lambda(\sqrt[3]{t^{2}}-6\sqrt[3]{t}+12\lambda)},\ \ y_{2}=\frac{\sqrt[3]{p^{2}}+(2\lambda-6)\sqrt[3]{p}+4\lambda^{2}-24\lambda}{6\sqrt[3]{p}\lambda}.\]
Thus, the following statement is true.
**Statement 2.** Let \(k=3\) and \(\lambda_{cr}=\frac{27}{4}\). Then the system of equations (9):
1. for \(0<\lambda<\lambda_{cr}\) has a unique solution \((x,x)\);
2. for \(\lambda=\lambda_{cr}\) has two solutions \((x,x),(\frac{2}{27},\frac{8}{27})\);
3. for \(\lambda>\lambda_{cr}\) has three solutions \((x,x),(x_{1},y_{1}),(x_{2},y_{2})\).
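As a quick numerical cross-check of Statement 2 (an illustrative sketch; the sample values of \(\lambda\) are arbitrary), one can count the positive roots of the auxiliary cubic \(f(x)=(1+\lambda x)^{3}-\lambda^{2}x\), whose positive roots give the non-TI solutions, together with their partners \(y=1/(\lambda^{2}x)\):

```python
import numpy as np

# Count positive roots of f(x) = (1 + lam*x)**3 - lam**2 * x below and above 27/4.
for lam in (6.0, 7.0):
    coeffs = [lam**3, 3 * lam**2, 3 * lam - lam**2, 1.0]    # monomial coefficients of f
    roots = np.roots(coeffs)
    pos = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    print(lam, pos, [1.0 / (lam**2 * x) for x in pos])      # partners y = 1/(lam^2 x)
```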
**Remark 4.** The measure corresponding to the solution \((x,x)\) is translation invariant and measures corresponding to solutions \((\frac{2}{27},\frac{8}{27}),\)\((x_{1},y_{1}),(x_{2},y_{2})\) are AGMs (not TI).
**Theorem 3.** Let \(k=3\) and \(r+m\leq 1,\) i.e., \(m=1\) and \(r=0\) or \(m=0\) and \(r=1.\) Then for the HC-model there exists \(\lambda_{cr}=\frac{27}{4}\) such that for \(0<\lambda<\lambda_{cr}\) there is a unique AGM which coincides with the only TIGM \(\mu_{0},\) for \(\lambda=\lambda_{cr}\) there are exactly two AGMs \(\mu_{0}\) and \(\mu^{\prime},\) where \(\mu^{\prime}\) is AGM (not TI) and for \(\lambda>\lambda_{cr}\) there are exactly three AGMs \(\mu_{0},\)\(\mu_{1}\) and \(\mu_{2},\) where \(\mu_{1}\) and \(\mu_{2}\) are AGMs (not TI).
**The case \(k=4\), \(m=1\) and \(r=0\) (\(m=0\) and \(r=1\)).** In this case from (5) we get
\[\left\{\begin{array}{l}x=\frac{1}{1+\lambda x}\cdot\frac{1}{(1+\lambda y)^{ 3}},\\ y=\frac{1}{(1+\lambda x)^{4}}.\end{array}\right. \tag{11}\]
From the system of equations (11) due to (6) we can get
\[(y-x)\Big{(}\lambda^{2}xy(3+\lambda(x+y))-1\Big{)}=0.\]
Hence \(x=y\) or \(\lambda^{2}xy(3+\lambda(x+y))=1\). It is clear that in the case \(x=y\) we obtain a solution corresponding to the TIGM.
Suppose \(x\neq y\) and \(\lambda^{2}xy(3+\lambda(x+y))=1\). Then, substituting the expression for \(y\) from the second equation of the system (11) into the last equality, we obtain the equation
\[f(x,\lambda)=\lambda^{8}x^{8}+8\lambda^{7}x^{7}-\lambda^{7}x^{6}+28\lambda^{6 }x^{6}-7\lambda^{6}x^{5}+56\lambda^{5}x^{5}-18\lambda^{5}x^{4}+70\lambda^{4}x^ {4}-22\lambda^{4}x^{3}+\]
\[+56\lambda^{3}x^{3}-13\lambda^{3}x^{2}-\lambda^{3}x+28\lambda^{2}x^{2}-3 \lambda^{2}x+8\lambda x+1=0.\]
Denoting \(\lambda x=u,\)\(u>0\) we then have the equation
\[f(u)=u^{8}+8u^{7}+(28-\lambda)u^{6}+(56-7\lambda)u^{5}+(70-18\lambda)u^{4}+(56 -22\lambda)u^{3}+(28-13\lambda)u^{2}-(\lambda^{2}+3\lambda-8)u+1=0,\]
which has a solution \(u=u(\lambda)\). But we regard this as an equation for \(\lambda\) and obtain solutions \(\lambda=\lambda(u)\):
\[\lambda_{1}(u)=\frac{(u+1)^{4}}{2u}\cdot\left(\sqrt{u^{4}+6u^{3}+9u^{2}+4u}-u ^{2}-3u\right),\]
\[\lambda_{2}(u)=-\frac{(u+1)^{4}}{2u}\cdot\left(\sqrt{u^{4}+6u^{3}+9u^{2}+4u}+ u^{2}+3u\right).\]
Therefore, because \(\lambda_{2}<0\) for \(u>0,\) we have
\[\lambda-\lambda_{1}=0\ \Rightarrow\ \lambda=\frac{(u+1)^{4}}{2u}\cdot\left( \sqrt{u^{4}+6u^{3}+9u^{2}+4u}-u^{2}-3u\right)=\psi(u).\]
Analysis of the function \(\psi(u)\) shows that \(\psi(u)>0\). In addition, \(\psi(u)\rightarrow+\infty\) as \(u\to 0\) and as \(u\rightarrow+\infty\); therefore, each value of \(\lambda\) corresponds to at least two values of \(u\) for \(\lambda>\psi(u^{*})\), to one value at \(\lambda=\psi(u^{*})\), and the equation \(\lambda=\psi(u)\) has no solutions for \(\lambda<\psi(u^{*})\), where \(u^{*}\) is a solution of the equation \(\psi^{\prime}(u)=0\) (see Fig. 2). We calculate the
derivative
\[\psi^{\prime}(u)=\frac{(u+1)^{3}\big{[}-(5u^{2}+13u)\sqrt{u^{2}+4u}+5u^{3}+23u^{2} +16u-2\big{]}}{2\sqrt{u^{2}+4u}}.\]
It is clear that if \(5u^{3}+23u^{2}+16u-2<0\) then \(\psi^{{}^{\prime}}(u)<0\) and the equation \(\psi^{{}^{\prime}}(u)=0\) has no solutions. So it must be \(5u^{3}+23u^{2}+16u-2>0\).
\[5u^{3}+23u^{2}+16u-2=5(u+1)\left(u+\frac{9-\sqrt{91}}{5}\right)\left(u+\frac{9 +\sqrt{91}}{5}\right)\ \ \Rightarrow\ \ u>\frac{\sqrt{91}-9}{5}.\]
We solve the equation \(\psi^{{}^{\prime}}(u)=0\) for \(u>0\):
\[-(5u^{2}+13u)\sqrt{u^{2}+4u}+5u^{3}+23u^{2}+16u-2=0\ \ \Rightarrow\ 10u^{3}+41u^{2}-16u+1=0.\]
We solve the last equation by the Cardano method:
\[u_{1}=\frac{\sqrt{2161}}{15}\cdot\cos\left(\frac{\arccos\left(\frac{-99791}{ \sqrt{10091699281}}\right)}{3}\right)-\frac{41}{30}\approx 0.284824838,\]
\[u_{2}=\frac{\sqrt{2161}}{15}\cdot\cos\left(\frac{\arccos\left(\frac{-99791}{ \sqrt{10091699281}}\right)+2\pi}{3}\right)-\frac{41}{30}\approx-4.463483795,\]
\[u_{3}=\frac{\sqrt{2161}}{15}\cdot\cos\left(\frac{\arccos\left(\frac{-99791}{ \sqrt{10091699281}}\right)+4\pi}{3}\right)-\frac{41}{30}\approx 0.078658955.\]
Hence, since \(u>\frac{\sqrt{91}-9}{5}\) we get the solution \(u^{*}=u_{1}\). We set
\[\lambda_{cr}=\psi(u^{*})\approx 2.31.\]
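This critical value is easy to reproduce numerically (an illustrative sketch; the bracketing interval is an assumption):

```python
import numpy as np
from scipy.optimize import brentq

# Numerical evaluation of u* and lambda_cr for the case k=4, m=1, r=0.
psi = lambda u: (u + 1) ** 4 / (2 * u) * (
    np.sqrt(u**4 + 6 * u**3 + 9 * u**2 + 4 * u) - u**2 - 3 * u)
u_star = brentq(lambda u: 10 * u**3 + 41 * u**2 - 16 * u + 1, 0.2, 0.4)
print(u_star, psi(u_star))        # approximately 0.2848 and 2.31
```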
We note that if \(\psi^{\prime\prime}(u)>0\), then each value of \(\lambda\) corresponds to only two values of \(u\) for \(\lambda>\lambda_{cr}\). We therefore prove that \(\psi^{\prime\prime}(u)>0\). Indeed,
\[\psi^{\prime\prime}(u)=\frac{2h(u)}{u(u+1)\sqrt{(u^{2}+4u)^{3}}}.\]
Here
\[h(u)=5u^{8}+56u^{7}+234u^{6}+463u^{5}+460u^{4}+210u^{3}+26u^{2}-u+3-(5u^{2}+11u) \sqrt{(u^{4}+6u^{3}+9u^{2}+4u)^{3}}.\]
The inequality \(h(u)>0\) for \(u>0\) follows from
\[\big{(}5u^{8}+56u^{7}+234u^{6}+463u^{5}+460u^{4}+210u^{3}+26u^{2}-u+3\big{)}^{2 }-\big{(}5u^{2}+11u\big{)}^{2}\big{(}u^{4}+6u^{3}+9u^{2}+4u\big{)}^{3}=\]
\[=10u^{13}+182u^{12}+1372u^{11}+5505u^{10}+12786u^{9}+17913u^{8}+15564u^{7}+\]
\[+9186u^{6}+5034u^{5}+3016u^{4}+1208u^{3}+156u^{2}+(u-3)^{2}>0.\]
Thus, each value of \(\lambda\) corresponds to only two values of \(u\) for \(\lambda>\lambda_{cr}\).
This can also be seen by computer analysis, i.e., computer analysis shows that the equation \(f(x,\lambda)=0\) has no positive solution for \(\lambda<\lambda_{cr}\), one positive solution at \(\lambda=\lambda_{cr}\), and exactly two positive solutions for \(\lambda>\lambda_{cr}\) (see Fig. 3).
Thus, the following statement is true.
**Statement 3.** Let \(k=4\) and \(\lambda_{cr}\approx 2.31\). Then the system of equations (11):
1. for \(0<\lambda<\lambda_{cr}\) has a unique solution \((x,x)\);
2. for \(\lambda=\lambda_{cr}\) has two solutions \((x,x),(x^{\prime},y^{\prime})\);
3. for \(\lambda>\lambda_{cr}\) has three solutions \((x,x),(x_{1},y_{1}),(x_{2},y_{2})\).
**Remark 5.** The measures corresponding to the solutions of Statement 3 with \(x\neq y\) are AGMs (not periodic) and they are different from the previous AGMs.
**The case \(k=4\), \(m=1\) and \(r=1\).** In this case from the system of equations (5) we obtain
\[\left\{\begin{array}{l}x=\frac{1}{1+\lambda x}\cdot\frac{1}{(1+\lambda y)^{3 }},\\ y=\frac{1}{1+\lambda y}\cdot\frac{1}{(1+\lambda x)^{3}}.\end{array}\right. \tag{12}\]
From (12) due to (6) we can get
\[(x-y)\Big{(}\lambda^{2}xy-1\Big{)}=0.\]
Figure 3. Graph of the function \(f(x,2)\) (dotted line), \(f(x,2.3143)\) (continuous line) and \(f(x,2.5)\) (dashed line).
Hence \(x=y\) or \(\lambda^{2}xy=1\). The case \(x=y\) corresponds to the only TIGM.
Let \(x\neq y\) and \(\lambda^{2}xy=1\), i.e., \(\lambda x=\frac{1}{\lambda y}\). After some algebra, the system of equations (12) takes the form
\[\left\{\begin{array}{l}(1+\lambda x)^{4}-\lambda^{3}x^{2}=0,\\ (1+\lambda y)^{4}-\lambda^{3}y^{2}=0.\end{array}\right. \tag{13}\]
Obviously, that the roots of the equation \(f(x)=(1+\lambda x)^{4}-\lambda^{3}x^{2}=0\) are also roots of (12). The solutions of the equations \(f(x)=0\) and \(f(y)=0\) have the form
\[x_{1,2}=\frac{\sqrt{\lambda}-2\pm\sqrt{\lambda-4\sqrt{\lambda}}}{2\lambda}, \ \ y_{1,2}=\frac{\sqrt{\lambda}-2\pm\sqrt{\lambda-4\sqrt{\lambda}}}{2\lambda}.\]
It is easy to see that \(x_{1,2}>0\) (\(y_{1,2}>0\)) for \(\lambda\geq 16\), and they take complex values for \(\lambda<16\). Moreover, \(x_{1}=x_{2}\) (\(y_{1}=y_{2}\)) for \(\lambda=16\) and it coincides with the only translation-invariant solution of (12).
By virtue of the equation \(\lambda^{2}xy=1\) and Lemma 1, we obtain that in the case \(x\neq y\) the system of equations (12) has solutions of the form \((x,y)\) and \((y,x)\) for \(\lambda>\lambda_{cr}=16\), where
\[x=x_{1}=\frac{\sqrt{\lambda}-2+\sqrt{\lambda-4\sqrt{\lambda}}}{2\lambda},\ \ y=y_{2}=\frac{\sqrt{\lambda}-2-\sqrt{\lambda-4\sqrt{\lambda}}}{2\lambda},\]
\[y=x_{2}=\frac{\sqrt{\lambda}-2-\sqrt{\lambda-4\sqrt{\lambda}}}{2\lambda},\ \ x=y_{1}=\frac{\sqrt{\lambda}-2+\sqrt{\lambda-4\sqrt{\lambda}}}{2\lambda}.\]
Thus, the following statement holds
**Statement 4.** Let \(k=4\) and \(\lambda_{cr}=16\). Then the system of equations (12):
1. for \(0<\lambda\leq\lambda_{cr}\) has a unique solution \((x,x)\);
2. for \(\lambda>\lambda_{cr}\) has three solutions \((x,x),(x,y),(y,x)\).
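These explicit solutions can be verified directly (a sketch; the value \(\lambda=25>16\) is an arbitrary choice):

```python
import numpy as np

# Check the explicit solutions for k=4, m=r=1 at a sample lam > 16.
lam = 25.0
s = np.sqrt(lam - 4 * np.sqrt(lam))
x = (np.sqrt(lam) - 2 + s) / (2 * lam)
y = (np.sqrt(lam) - 2 - s) / (2 * lam)
res1 = x - 1 / ((1 + lam * x) * (1 + lam * y) ** 3)    # first equation of (12)
res2 = y - 1 / ((1 + lam * y) * (1 + lam * x) ** 3)    # second equation of (12)
print(x, y, lam**2 * x * y, res1, res2)                # lam^2 * x * y = 1, residuals ~ 0
```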
**Remark 6.** The measures corresponding to the solutions \((x,y),(y,x)\) in Statement 4 are AGMs (not periodic) and they are different from the previous AGMs.
**The case \(k=4\), \(m=2\) and \(r=0\) (\(m=0\) and \(r=2\)).** In this case from (5) we obtain
\[\left\{\begin{array}{l}x=\frac{1}{(1+\lambda x)^{2}}\cdot\frac{1}{(1+\lambda y )^{2}},\\ y=\frac{1}{(1+\lambda x)^{4}}.\end{array}\right. \tag{14}\]
Using (6) from (14) we can get
\[(x-y)\Big{(}\lambda^{2}xy-1\Big{)}=0.\]
Hence, \(x=y\) or \(\lambda^{2}xy=1\). The case \(x=y\) corresponds to the only TIGM.
We consider the case \(x\neq y\) and \(\lambda^{2}xy=1\)\(\Big{(}\lambda x=\frac{1}{\lambda y}\Big{)}\). After some algebra, (14) takes the form
\[\left\{\begin{array}{l}(1+\lambda x)^{4}-\lambda^{2}x=0,\\ (1+\lambda y)^{4}-\lambda^{4}y^{3}=0.\end{array}\right. \tag{15}\]
From the equation \(\lambda^{2}xy=1\) we find \(y\) and substitute it for the second equation (15). Then
\[\left\{\begin{array}{l}(1+\lambda x)^{4}-\lambda^{2}x=0,\\ \frac{(1+\lambda x)^{4}-\lambda^{2}x}{\lambda^{4}x^{4}}=0.\end{array}\right.\]
Let's rewrite the equation \(f(x)=(1+\lambda x)^{4}-\lambda^{2}x=0\) as
\[\lambda^{4}x^{4}+4\lambda^{3}x^{3}+6\lambda^{2}x^{2}+\lambda(4-\lambda)x+1=0.\]
We solve the last equation by Ferrari's method. We introduce the notation \(x=t-\frac{1}{\lambda}\). Then
\[f\left(t-\frac{1}{\lambda}\right)=\lambda^{4}t^{4}-\lambda^{2}t+\lambda=( \lambda^{2}t^{2}+p)^{2}-2\lambda^{2}p\Big{(}t+\frac{1}{4p}\Big{)}^{2}=\]
\[=\left(\lambda^{2}t^{2}+p-\lambda\sqrt{2p}\Big{(}t+\frac{1}{4p}\Big{)}\right) \left(\lambda^{2}t^{2}+p+\lambda\sqrt{2p}\Big{(}t+\frac{1}{4p}\Big{)}\right)=0,\]
where
\[p=\frac{\sqrt[3]{108\lambda^{2}+12\sqrt{81\lambda^{4}-768\lambda^{3}}}}{12}+ \frac{4\lambda}{\sqrt[3]{108\lambda^{2}+12\sqrt{81\lambda^{4}-768\lambda^{3}}}}.\]
Solutions have the following form
\[t_{1,2}=\frac{\sqrt{2p^{3}}\pm\sqrt{\sqrt{2p^{3}}\lambda-2p^{3}}}{2\lambda p}, \;\;t_{3,4}=\frac{-\sqrt{2p^{3}}\pm\sqrt{-\sqrt{2p^{3}}\lambda-2p^{3}}}{2 \lambda p}.\]
By virtue \(x=t-\frac{1}{\lambda}\), for solutions we obtain
\[x_{1,2}=\frac{\sqrt{2p^{3}}\pm\sqrt{\sqrt{2p^{3}}\lambda-2p^{3}}-2p}{2\lambda p},\;\;x_{3,4}=\frac{-\sqrt{2p^{3}}\pm\sqrt{-\sqrt{2p^{3}}\lambda-2p^{3}}-2p}{2\lambda p}.\]
Computer analysis shows that \(x_{1,2}>0\) for \(\lambda>\lambda_{cr}\approx 9.48\), while the values \(x_{3,4}\) are negative or complex for \(\lambda>0\) (see Fig. 4). The values \(y_{1}\) and \(y_{2}\) corresponding to \(x_{1}\) and \(x_{2}\) have the form:
\[y_{1,2}=\frac{2p}{\lambda\Big{(}\sqrt{2p^{3}}\pm\sqrt{\sqrt{2p^{3}}\lambda-2p^{3} }-2p\Big{)}}.\]
For \(\lambda=\lambda_{cr}\approx 9.4815\) the system of equations (15) has the additional solution
\[x^{\prime}=\frac{\sqrt{2p^{3}}-2p}{2\lambda p},\ \ y^{\prime}=\frac{2p}{\lambda\big{(}\sqrt{2p^{3}}-2p\big{)}}.\]
Thus, the following statement is true.
**Statement 5.** Let \(k=4\) and \(\lambda_{cr}\approx 9.48\). Then the system of equations (15):
1. for \(0<\lambda<\lambda_{cr}\) has a unique solution \((x,x)\);
2. for \(\lambda=\lambda_{cr}\) has two solutions \((x,x),(x^{\prime},y^{\prime})\);
3. for \(\lambda>\lambda_{cr}\) has three solutions \((x,x),(x_{1},y_{1}),(x_{2},y_{2})\).
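The Ferrari-method expressions above can also be checked numerically; the following sketch (with the arbitrary value \(\lambda=12>\lambda_{cr}\approx 9.48\)) evaluates \(p\) and the roots \(x_{1,2}\), and confirms that the corresponding pairs solve the system (15):

```python
import numpy as np

lam = 12.0                                    # arbitrary value above lambda_cr ~ 9.48

# resolvent quantity p and the positive roots x_{1,2}
c = np.cbrt(108 * lam**2 + 12 * np.sqrt(81 * lam**4 - 768 * lam**3))
p = c / 12 + 4 * lam / c
q = np.sqrt(2 * p**3)
x1 = (q + np.sqrt(q * lam - 2 * p**3) - 2 * p) / (2 * lam * p)
x2 = (q - np.sqrt(q * lam - 2 * p**3) - 2 * p) / (2 * lam * p)

for x in (x1, x2):
    y = 1 / (lam**2 * x)                       # from lambda^2 * x * y = 1
    res1 = (1 + lam * x) ** 4 - lam**2 * x     # first equation of (15)
    res2 = (1 + lam * y) ** 4 - lam**4 * y**3  # second equation of (15)
    print(f"x = {x:.6f}, y = {y:.6f}, residuals: {res1:.1e}, {res2:.1e}")
```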
By combining all the statements above, we get the following theorem.
**Theorem 4.** Let \(k=4\) and \(r+m\leq 2\). For the HC-model the following statements are true:
1. If \(m=1\) and \(r=0\), or \(m=0\) and \(r=1\), then there exists \(\lambda_{cr}\approx 2.31\) such that for \(0<\lambda<\lambda_{cr}\) there is a unique AGM which coincides with the only TIGM \(\mu_{0}\); for \(\lambda=\lambda_{cr}\) there are exactly two AGMs \(\mu_{0}\) and \(\mu^{\prime}\), where \(\mu^{\prime}\) is an AGM (not TI); and for \(\lambda>\lambda_{cr}\) there are exactly three AGMs \(\mu_{0}\), \(\mu_{1}\) and \(\mu_{2}\), where \(\mu_{1}\) and \(\mu_{2}\) are AGMs (not TI).
2. If \(m=1\) and \(r=1\), then there exists \(\lambda_{cr}=16\) such that for \(0<\lambda\leq\lambda_{cr}\) there is a unique AGM which coincides with the only TIGM \(\mu_{0}\), and for \(\lambda>\lambda_{cr}\) there are exactly three AGMs \(\mu_{0}\), \(\mu_{1}\) and \(\mu_{2}\), where \(\mu_{1}\) and \(\mu_{2}\) are AGMs (not TI).
3. If \(m=2\) and \(r=0\), or \(m=0\) and \(r=2\), then there exists \(\lambda_{cr}\approx 9.48\) such that for \(0<\lambda<\lambda_{cr}\) there is a unique AGM which coincides with the only TIGM \(\mu_{0}\); for \(\lambda=\lambda_{cr}\) there are exactly two AGMs \(\mu_{0}\) and \(\mu^{\prime}\), where \(\mu^{\prime}\) is an AGM (not TI); and for \(\lambda>\lambda_{cr}\) there are exactly three AGMs \(\mu_{0}\), \(\mu_{1}\) and \(\mu_{2}\), where \(\mu_{1}\) and \(\mu_{2}\) are AGMs (not TI).
## 3. The case \(m+r\leq k-2\)\((m=r)\)
The following lemma is known.
**Lemma 2.**[20] _Let \(f:[0,1]\rightarrow[0,1]\) be a continuous function with a fixed point \(\xi\in(0,1)\). We assume that \(f\) is differentiable at \(\xi\) and \(f^{{}^{\prime}}(\xi)<-1.\) Then there exist points \(x_{0}\) and \(x_{1}\), \(0\leq x_{0}<\xi<x_{1}\leq 1,\) such that \(f(x_{0})=x_{1}\) and \(f(x_{1})=x_{0}.\)_
For \(m=r\) by (5) we obtain
\[\begin{cases}x=\frac{1}{(1+\lambda x)^{m}}\cdot\frac{1}{(1+\lambda y)^{k-m}}; \\ y=\frac{1}{(1+\lambda y)^{m}}\cdot\frac{1}{(1+\lambda x)^{k-m}}.\end{cases} \tag{16}\]
Here \(x,y\in(0;1)\). After some transformations from (16) we obtain the following system of equations:
\[\begin{cases}y=f(x);\\ x=f(y),\end{cases} \tag{17}\]
where
\[f(x)=\frac{1}{\lambda}\cdot\bigg{(}\frac{1}{x(1+\lambda x)^{m}}\bigg{)}^{\frac {1}{k-m}}-\frac{1}{\lambda}.\]
From (17) we get the equation \(f(f(x))=x\).
First, we consider the equation \(f(x)=x\). The function \(f(x)\) is differentiable and decreasing for \(0<x<1\):
\[f^{\prime}(x)=-\frac{1+\lambda(m+1)x}{\lambda(k-m)x^{\frac{k-m+1}{k-m}}(1+ \lambda x)^{\frac{k}{k-m}}}<0.\]
We rewrite the equation \(f(x)=x\):
\[x=\frac{1}{\lambda}\cdot\bigg{(}\frac{1}{x(1+\lambda x)^{m}}\bigg{)}^{\frac{ 1}{k-m}}-\frac{1}{\lambda}\ \ \Rightarrow\ \ (1+\lambda x)^{k}=\frac{1}{x}.\]
It is known from [13] that the last equation has a unique solution \(\tilde{x}\), i.e., the equation \(f(x)=x\) has a unique solution \(\tilde{x}\).
We solve the inequality \(f^{\prime}(\tilde{x})<-1\):
\[-\frac{1+\lambda(m+1)\tilde{x}}{\lambda(k-m)\tilde{x}^{\frac{k-m+1}{k-m}}(1+ \lambda\tilde{x})^{\frac{k}{k-m}}}<-1\ \ \Rightarrow\ \ \frac{1+(m+1)\lambda\tilde{x}}{\lambda(k-m)\tilde{x}}>1\ \ \Rightarrow\ \tilde{x}<\frac{1}{\lambda(k-2m-1)}.\]
Then from \(f(\tilde{x})=\tilde{x}\) we get
\[\bigg{(}1+\frac{1}{k-2m-1}\bigg{)}^{k}<\lambda(k-2m-1)\ \ \Rightarrow\ \ \lambda>\lambda_{cr}=\bigg{(}\frac{k-2m}{k-2m-1}\bigg{)}^{k}\cdot\frac{1}{k-2 m-1}.\]
Hence, by Lemma 1 and Lemma 2 the system of equations (16) for \(\lambda>\lambda_{cr}\) has at least three positive solutions \((x,y),\ (\tilde{x},\tilde{x}),\ (y,x)\), where \(x\neq y\).
Thus, the following theorem is true.
**Theorem 5.** Let \(k\geq 2\), \(m+r\leq k-2\ (m=r)\) and \(\lambda_{cr}=\big{(}\frac{k-2m}{k-2m-1}\big{)}^{k}\cdot\frac{1}{k-2m-1}\). Then for the HC-model, for \(\lambda>\lambda_{cr}\), there are at least three Gibbs measures, one of which is TI and the others are AGMs (not TI).
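To illustrate Theorem 5 numerically (an illustrative sketch, not part of the proof), the snippet below evaluates \(\lambda_{cr}\) for a few admissible pairs \((k,m)\), and, for the simplest case \(m=r=0\), where system (16) reduces to a two-cycle of the map \(z\mapsto 1/(1+\lambda z)^{k}\), it locates the non-TI pair by direct iteration. The values \(k=2\), \(\lambda=9\) are arbitrary choices above the corresponding threshold \(\lambda_{cr}=4\).

```python
def lambda_cr(k, m):
    # Critical value from Theorem 5 (valid when k - 2m - 1 > 0).
    return ((k - 2 * m) / (k - 2 * m - 1)) ** k / (k - 2 * m - 1)

for k, m in [(2, 0), (3, 0), (4, 1), (5, 1)]:
    print(f"k={k}, m={m}: lambda_cr = {lambda_cr(k, m):.4f}")

# For m = r = 0 the system (16) reads x = 1/(1+lam*y)^k, y = 1/(1+lam*x)^k,
# i.e. a two-cycle of z -> 1/(1+lam*z)^k; iterate to locate it.
k, lam = 2, 9.0                      # lam > lambda_cr(2, 0) = 4
z = 0.9
for _ in range(200):
    z = 1 / (1 + lam * z) ** k
x, y = z, 1 / (1 + lam * z) ** k     # two distinct values, approx 0.762 and 0.016
print(x, y)
print(1 / (1 + lam * y) ** k - x)    # residual of the first equation, ~0
```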
### The case \(m+r=k-2\), \(k\geq 2\)
In the case \(m+r=k-2\) (\(n=2\)), the system of equations (5) has the form:
\[\left\{\begin{array}{l}x=\frac{1}{(1+\lambda x)^{m}}\cdot\frac{1}{(1+ \lambda y)^{k-m}};\\ y=\frac{1}{(1+\lambda y)^{k-m-2}}\cdot\frac{1}{(1+\lambda x)^{m+2}}.\end{array}\right. \tag{18}\]
From the system of equations (18) due to (6) we can get
\[(x-y)\Big{(}\lambda^{2}xy-1\Big{)}=0.\]
Hence, \(x=y\) or \(\lambda^{2}xy=1\). The case \(x=y\) has already been considered.
Let \(\lambda^{2}xy=1\). Then \(\lambda x=\frac{1}{\lambda y}\) for \(x\neq y\). By virtue of (18), after some algebra, we can obtain the system of equations
\[\left\{\begin{array}{l}x=\frac{(\lambda x)^{k-m}}{(1+\lambda x)^{k}},\\ y=\frac{(\lambda y)^{m+2}}{(1+\lambda y)^{k}},\end{array}\right.\]
which is equivalent to the system of equations:
\[\left\{\begin{array}{l}(1+\lambda x)^{k}-\lambda^{k-m}x^{k-m-1}=0,\\ (1+\lambda y)^{k}-\lambda^{m+2}y^{m+1}=0.\end{array}\right. \tag{19}\]
From the equation \(\lambda^{2}xy=1\) we find \(y\) and substitute it into the second equation of the system (19). Then
\[\left\{\begin{array}{l}(1+\lambda x)^{k}-\lambda^{k-m}x^{k-m-1}=0,\\ \frac{(1+\lambda x)^{k}-\lambda^{k-m}x^{k-m-1}}{\lambda^{k}x^{k}}=0.\end{array}\right.\]
We consider the function
\[f(x)=(1+\lambda x)^{k}-\lambda^{k-m}x^{k-m-1}.\]
Obviously, the roots of the equation \(f(x)=0\) are also roots of (19). Let's rewrite \(f(x)\) as a polynomial:
\[f(x)=\lambda^{k}x^{k}+C_{k}^{1}\lambda^{k-1}x^{k-1}+\cdots+C_{k}^{m+1}\lambda ^{k-m-1}x^{k-m-1}+\cdots+C_{k}^{k-1}\lambda x+1-\lambda^{k-m}x^{k-m-1}\]
or
\[f(x)=\lambda^{k}x^{k}+C_{k}^{1}\lambda^{k-1}x^{k-1}+\cdots+(C_{k}^{m+1}- \lambda)\lambda^{k-m-1}x^{k-m-1}+\cdots+C_{k}^{k-1}\lambda x+1.\]
If \(\lambda<C_{k}^{m+1}\), then \(f(x)=0\) has no positive solutions; if \(\lambda>C_{k}^{m+1}\), then the number of sign changes in the sequence of coefficients of the last polynomial is two. By Descartes' rule of signs, the equation \(f(x)=0\) has at most two positive solutions.
On the other hand, it is easy to see that \(0<x<1\), \(f(0)=1\) and \(f(1)=(1+\lambda)^{k}-\lambda^{k-m}>0.\) Moreover, \(f\Big{(}\frac{1}{\lambda}\Big{)}=2^{k}-\lambda<0\), if \(\lambda>2^{k}\).
It follows from the above that there exists \(\lambda_{cr}\) with \(C_{k}^{m+1}<\lambda_{cr}\leq 2^{k}\) such that for \(\lambda>\lambda_{cr}\) the equation \(f(x)=0\) has two positive solutions, for \(\lambda=\lambda_{cr}\) it has a positive solution of multiplicity two, and for \(\lambda<\lambda_{cr}\) it has no positive solutions.
When \(m=r\), the system of equations (18) can be written as
\[\left\{\begin{array}{l}x=\frac{1}{(1+\lambda x)^{m}}\cdot\frac{1}{(1+\lambda y )^{k-m}};\\ y=\frac{1}{(1+\lambda y)^{m}}\cdot\frac{1}{(1+\lambda x)^{k-m}}.\end{array}\right. \tag{20}\]
It follows from Lemma 1 that if the number of solutions of the equation \(x=f(x)\) is odd (respectively, even), then the number of solutions of \(x=f(f(x))\) is also odd (respectively, even).
As a result, when \(m=r\) and \(\lambda=\lambda_{cr}\), the number of solutions of the system of equations (20) cannot be even, because the TI solution is unique. It follows that the solution of the system of equations (20) corresponding to \(\lambda=\lambda_{cr}\) coincides with the translation-invariant solution.
Thus, we have proved the following theorem.
**Theorem 6.** Let \(k\geq 2\) and \(r+m=k-2\). Then for the HC-model there exists \(\lambda_{cr}\) such that the following statements are true:
1. for \(0<\lambda<\lambda_{cr}\) there is a unique AGM and it coincides with the only TIGM \(\mu_{0}\);
2. if \(m=r\) and \(\lambda=\lambda_{cr}\) then there is a unique AGM and it coincides with the only TIGM \(\mu_{0}\);
3. if \(m\neq r\) and \(\lambda=\lambda_{cr}\) there is at least one AGM;
4. for \(\lambda>\lambda_{cr}\) there are exactly three Gibbs measures \(\mu_{0}\), \(\mu_{1}\) and \(\mu_{2}\), where \(\mu_{1}\) and \(\mu_{2}\) are AGMs (not TI).
## 4. Relation of the Alternative Gibbs measures to known ones
_Translation invariant measures._ (see [13]) Such measures correspond to \(z_{x}\equiv z\), i.e., constant functions. These measures are particular cases of the measures constructed above, obtained for \(m=k\), i.e., \(k-m=0\). In this case the condition (3) reads
\[z=\frac{1}{(1+\lambda z)^{k}}. \tag{21}\]
The equation (21) has a unique solution for all \(\lambda>0\).
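For completeness, the unique solution of (21) is easily computed numerically; a minimal bisection sketch (the values \(k=4\), \(\lambda=2\) are arbitrary):

```python
def ti_solution(k, lam, tol=1e-12):
    """Unique root of z*(1+lam*z)^k = 1 on (0, 1), found by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * (1 + lam * mid) ** k < 1:
            lo = mid          # left-hand side too small: the root lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

z = ti_solution(k=4, lam=2.0)
print(z, 1 / (1 + 2.0 * z) ** 4)      # the two printed numbers agree
```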
_Bleher-Ganikhodjaev construction_. Consider an infinite path \(\pi=\{x^{0}=x_{0}<x_{1}<...\}\) on the half Cayley tree (the notation \(x<y\) means that the path from the root to \(y\) goes through \(x\)). Associate to this path a collection \(z^{\pi}\) of numbers given by the condition
\[z_{x}^{\pi}=\begin{cases}l\text{ if }x\prec x_{n},\ x\in W_{n},\\ h,\text{ if }x_{n}\prec x,\ x\in W_{n},\\ h,\text{ if }x=x_{n}.\end{cases}\]
\(n=1,2,...\), where \(x\prec x_{n}\) (resp. \(x_{n}\prec x\)) means that \(x\) is on the left (resp. right) of the path \(\pi\), and \(z_{x_{n}}\in\{h,l\}\) are arbitrary values. For any infinite path \(\pi\), the collection of numbers \(z^{\pi}\) satisfying the relations (3) exists and is unique (see Fig. 5).
_Periodic Gibbs measures._ (see [13]) Let \(G_{k}\) be a free product of \(k+1\) cyclic groups of the second order with generators \(a_{1},a_{2},...,a_{k+1}\), respectively.
It is known that there exists an one-to-one correspondence between the set of vertices \(V\) of the Cayley tree \(\Im^{k}\) and the group \(G_{k}\).
**Definition 6.** Let \(\widehat{G}\) be a normal subgroup of the group \(G_{k}\). The set \(z=\{z_{x},x\in G_{k}\}\) is said to be \(\widehat{G}\)-periodic if \(z_{yx}=z_{x}\) for all \(x\in G_{k}\) and \(y\in\widehat{G}\).
Let \(G_{k}^{(2)}=\{x\in G_{k}:\) the length of word \(x\) is even\(\}.\) Note that \(G_{k}^{(2)}\) is the set of even vertices (i.e. with even distance to the root). Consider the boundary conditions \(h\) and \(l\):
\[z_{x}=\begin{cases}h\text{ if }x\in G_{k}^{(2)},\\ l\text{ if }x\in G_{k}\setminus G_{k}^{(2)}.\end{cases}\]
and denote by \(\mu_{1}\), \(\mu_{2}\) the corresponding Gibbs measures. The \(\widehat{G}\)-periodic solutions of equation (3) are either translation-invariant (\(G_{k}\)-periodic) or \(G_{k}^{(2)}\)-periodic; the latter are solutions to
\[\begin{cases}h=\frac{1}{(1+\lambda l)^{k}},\\ l=\frac{1}{(1+\lambda h)^{k}}.\end{cases}\]
We note that these measures are particular cases of the measures \(\mu_{h,l}\), which can be obtained for \(m=r=0\) (see Figure 6 for \(k=4\)).
_Weakly periodic Gibbs measures._ Following [17], [18], [22], we recall the notion of weakly periodic Gibbs measures. Let \(G_{k}/\widehat{G}_{k}=\{H_{1},...,H_{r}\}\) be a factor group, where \(\widehat{G}_{k}\) is a normal subgroup of index \(r>1\).
Figure 5. In this figure the values of function \(z_{x}\) on the vertices of the Cayley tree of order \(5\) are shown. This is the case when \(m=4\) and \(r=4\).
Figure 6. In this figure the values of function \(z_{x}\) on the vertices of the Cayley tree of order \(4\) are shown.
**Definition 7.** A set \(z=\{z_{x},x\in G_{k}\}\) is called \(\widehat{G}_{k}\) - weakly periodic, if \(z_{x}=z_{ij}\), for any \(x\in H_{i}\), \(x_{\downarrow}\in H_{j}\), where \(x_{\downarrow}\) denotes the ancestor of \(x\).
We recall results known for the cases of index two. Note that any such subgroup has the form
\[H_{A}=\Big{\{}x\in G_{k}:\sum_{i\in A}w_{x}(a_{i})\text{ is even}\Big{\}}\]
where \(\emptyset\neq A\subseteq N_{k}=\{1,2,...,k+1\}\), and \(w_{x}(a_{i})\) is the number of \(a_{i}\) in a word \(x\in G_{k}\). We consider \(A\neq N_{k}\): when \(A=N_{k}\) weak periodicity coincides with standard periodicity. Let \(G_{k}/H_{A}=\{H_{0},H_{1}\}\) be the factor group, where \(H_{0}=H_{A}\), \(H_{1}=G_{k}\setminus H_{A}\). Then, in view of (3), the \(H_{A}\) - weakly periodic b.c. has the form
\[z_{x}=\begin{cases}z_{1},\ x\in H_{A},\ x_{\downarrow}\in H_{A},\\ z_{2},\ x\in H_{A},\ x_{\downarrow}\in G_{k}\setminus H_{A},\\ z_{3},\ x\in G_{k}\setminus H_{A},\ x_{\downarrow}\in H_{A},\\ z_{4},\ x\in G_{k}\setminus H_{A},\ x_{\downarrow}\in G_{k}\setminus H_{A}. \end{cases}\]
where the \(z_{i}\) satisfy the following equations:
\[z_{1}=\frac{1}{\Big{(}1+\lambda z_{3}\Big{)}^{i}}\frac{1}{ \Big{(}1+\lambda z_{1}\Big{)}^{k-i}},\ \ \ z_{2}=\frac{1}{\Big{(}1+\lambda z_{3}\Big{)}^{i-1}}\frac{1}{ \Big{(}1+\lambda z_{1}\Big{)}^{k-i+1}},\] \[z_{3}=\frac{1}{\Big{(}1+\lambda z_{2}\Big{)}^{i-1}}\frac{1}{ \Big{(}1+\lambda z_{4}\Big{)}^{k-i+1}},\ \ z_{4}=\frac{1}{\Big{(}1+\lambda z_{2}\Big{)}^{i}} \frac{1}{\Big{(}1+\lambda z_{4}\Big{)}^{k-i}}. \tag{22}\]
It is obvious that the following sets are invariant with respect to the operator \(W:R^{4}\to R^{4}\) defined by RHS of (22):
\[I_{1}=\Big{\{}z\in R^{4}:z_{1}=z_{2}=z_{3}=z_{4}\Big{\}},\ \ \ \ I_{2}=\Big{\{}z\in R^{4}:z_{1}=z_{4};z_{2}=z_{3}\Big{\}}\]
It is easy to see that
\(\bullet\) the measures corresponding to solutions on \(I_{1}\) are translation invariant;
\(\bullet\) the measures corresponding to solutions on \(I_{2}\) are weakly periodic, and they coincide with the measures given for \(m=k-i\), \(k-m=i\), \(r=i-1\), \(k-r=k-i+1\).
## 5. Free energy
In this section, we consider the free energy of the \(HC\)-model Gibbs measures. In fact, a Gibbs measure gives the probability of the system \(X\) being in state \(x\in X\) (equivalently, of the random variable \(X\) having value \(x\)) as
\[\mu(X=x)=\frac{1}{Z(\beta)}\exp(-\beta H(x)),\]
where \(H(x)\) is a function from the space of states to the real numbers. The parameter \(\beta\) is (a free parameter) the inverse temperature. The normalizing constant \(Z(\beta)\) is the partition function.
Consider an infinite graph \(G\), and let \(\Lambda\subset G\) be a finite subset. It is convenient to work with the reduced free energy \(f=-\beta F\), which per unit volume is
\[f(\beta,\Lambda)=\frac{1}{|\Lambda|}\ln Z(\beta,\Lambda),\]
where \(Z(\beta,\Lambda)\) is the restriction of the partition function \(Z(\beta)\) to the set \(\Lambda\), obtained by fixing the state of the system outside of \(\Lambda\).
Note that by Theorem 6 we can construct alternative Gibbs measures, and we compute the free energy for such measures. From [30, 31] it is known that the free energy of a compatible boundary condition (b.c.) is defined as the limit:
\[F(h)=-\lim_{n\to\infty}\frac{1}{\beta|V_{n}|}\ln Z_{n} \tag{23}\]
if it exists. Here \(|\cdot|\) denotes the cardinality of a set and \(Z_{n}\) is the partition function. We recall that in our case:
\[Z_{n}=\sum_{\widetilde{\sigma}_{n}\in\Omega_{V_{n}}}\lambda^{\#\widetilde{ \sigma}_{n}}\prod_{x\in W_{n}}z_{\widetilde{\sigma}(x),x}. \tag{24}\]
We consider ALT Gibbs measures on the half tree. As shown above, the family of probability measures is compatible iff \(z=\{z_{x},x\in G_{k}\}\) satisfies the equality (3). Also, we considered a special class of \(z=\{z_{x},x\in G_{k}\}\) such that:
\(\bullet\) if at vertex \(x\) we have \(z_{x}=h\), then the function \(z_{y}\) assigns values to the vertices \(y\in S(x)\) by the following rule:
\[\begin{cases}h\text{ on }m\text{ vertices of }S(x),\\ l\text{ on }k-m\text{ remaining vertices},\end{cases}\]
\(\bullet\) if at vertex \(x\) we have \(z_{x}=l\), then the function \(z_{y}\) assigns values to the vertices \(y\in S(x)\) by the following rule:
\[\begin{cases}l\text{ on }r\text{ vertices of }S(x),\\ h\text{ on }k-r\text{ remaining vertices}.\end{cases}\]
Denote
\[\alpha_{n}=|\{x\in W_{n}:z_{x}=h\}|;\ \ \beta_{n}=|\{x\in W_{n}:z_{x}=l\}|. \tag{25}\]
Recall that \(W_{n}\) is the sphere with the center \(x^{0}\) and radius \(n\) on the half tree.
Consequently, the following recurrence system holds
\[\left\{\begin{array}{l}\alpha_{n+1}=m\alpha_{n}+(k-r)\beta_{n}\\ \beta_{n+1}=(k-m)\alpha_{n}+r\beta_{n}.\end{array}\right. \tag{26}\]
Denoting \(\varphi_{n}=\alpha_{n}+\beta_{n}\), from (26) one gets
\[\varphi_{n+1}=k\varphi_{n}\ \Rightarrow\ \varphi_{n}=k^{n},\ n\in\mathbb{N}. \tag{27}\]
Since \(\alpha_{n}=k^{n}-\beta_{n}\), we get
\[k^{n+1}-mk^{n}=(k-m-r)\beta_{n}+\beta_{n+1}. \tag{28}\]
Put
\[\beta_{n}=\frac{(k-m)k^{n}}{2k-m-r}+k^{n}\phi_{n}.\]
Then the last equation can be written as
\[(m+r-k)\phi_{n}=k\phi_{n+1}.\]
After short calculations, we obtain
\[\phi_{n}=\frac{\beta_{1}(2k-m-r)-k(k-m)}{k}\left(\frac{m+r-k}{k}\right)^{n-1}.\]
Hence,
\[\beta_{n}=\frac{(k-m)k^{n}}{2k-m-r}+\left(\beta_{1}(2k-m-r)-k(k-m)\right)(m+r- k)^{n-1}.\]
Thus
\[\beta_{1}=\frac{(k-m)k}{2k-m-r}+\left(\beta_{1}(2k-m-r)-k(k-m)\right)\ \ \Rightarrow\ \ \beta_{1}=\frac{(k-m)k}{2k-m-r}.\]
Then
\[\beta_{n}=\frac{(k-m)k^{n}}{2k-m-r}.\]
Note that \(\alpha_{n}+\beta_{n}=k^{n}\), then
\[\alpha_{n}=\frac{k^{n}(k-r)}{2k-r-m}+\left(k(k-m)-\beta_{1}(2k-m-r)\right)(m+ r-k)^{n-1}.\]
Since
\[\beta_{1}=\frac{(k-m)k}{2k-m-r},\]
one gets
\[\alpha_{n}=\frac{k^{n}(k-r)}{2k-r-m}.\]
Consequently, it is easy to check that
\[\lim_{n\rightarrow\infty}\frac{(k-1)\alpha_{n}}{k^{n+1}-1}=\frac{(k-1)(k-r)}{ k(2k-m-r)}\]
and
\[\lim_{n\rightarrow\infty}\frac{(k-1)\beta_{n}}{k^{n+1}-1}=\frac{(k-1)(k-m)}{ k(2k-m-r)}. \tag{29}\]
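Before evaluating the free energy itself, the counting argument above is easy to verify numerically; the sketch below (with the arbitrary choice \(k=4\), \(m=r=1\)) iterates the recurrence (26) and compares \(\alpha_{n}\), \(\beta_{n}\) and the limits in (29) with the closed-form expressions derived above:

```python
k, m, r = 4, 1, 1
alpha = (k - r) * k / (2 * k - m - r)          # alpha_1
beta = (k - m) * k / (2 * k - m - r)           # beta_1

for n in range(1, 10):
    # closed forms derived in the text
    assert abs(alpha - (k - r) * k**n / (2 * k - m - r)) < 1e-9
    assert abs(beta - (k - m) * k**n / (2 * k - m - r)) < 1e-9
    # recurrence (26)
    alpha, beta = m * alpha + (k - r) * beta, (k - m) * alpha + r * beta

# limits in (29): each pair of printed numbers agrees
n = 30
a_n = (k - r) * k**n / (2 * k - m - r)
b_n = (k - m) * k**n / (2 * k - m - r)
print((k - 1) * a_n / (k**(n + 1) - 1), (k - 1) * (k - r) / (k * (2 * k - m - r)))
print((k - 1) * b_n / (k**(n + 1) - 1), (k - 1) * (k - m) / (k * (2 * k - m - r)))
```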
Then
\[F_{ALT}(h)=-\frac{1}{\beta}\cdot\left[\frac{(k-1)(k-r)\ln h+(k^{2}-(m+1)k+m)\ln l}{k(2k-m-r)}+\lim_{n\rightarrow\infty}\frac{(k-1)\ln\left(\sum_{i=0}^{|V_{n}|}\lambda^{C_{|V_{n}|}^{i}}\right)}{k^{n+1}-1}\right]. \tag{30}\]
By AM-GM inequality
\[\sum_{i=0}^{|V_{n}|}\lambda^{C_{|V_{n}|}^{i}}\geq|V_{n}|\cdot\sqrt[|V_{n}|]{\lambda^{2^{|V_{n}|}}}=|V_{n}|\cdot\lambda^{2^{|V_{n}|}\cdot|V_{n}|^{-1}}.\]
Since \(\ln x\) is an increasing function
\[\ln\left(\sum_{i=0}^{|V_{n}|}\lambda^{C_{|V_{n}|}^{i}}\right)\geq\ln|V_{n}|+2^ {|V_{n}|}\cdot|V_{n}|^{-1}\ln\lambda.\]
Then
\[\lim_{n\to\infty}\frac{(k-1)\ln\left(\sum_{i=0}^{|V_{n}|}\lambda^{C_{|V_{n}|}^ {i}}\right)}{k^{n+1}-1}\geq\lim_{n\to\infty}\frac{\ln|V_{n}|}{|V_{n}|}+\lim_{n \to\infty}2^{|V_{n}|}\cdot|V_{n}|^{-2}\ln\lambda.\]
If \(\lambda>1\) then
\[\lim_{n\to\infty}\frac{(k-1)\ln\left(\sum_{i=0}^{|V_{n}|}\lambda^{C_{|V_{n}|}^ {i}}\right)}{k^{n+1}-1}=\infty. \tag{31}\]
By (29), (30) and (31) one gets
\[F_{ALT}(h)=-\lim_{n\to\infty}\frac{1}{\beta|V_{n}|}\ln\left[h^{\alpha_{n}}l^{ \beta_{n}}\left(\sum_{i=0}^{|V_{n}|}\lambda^{C_{|V_{n}|}^{i}}\right)\right]=-\infty. \tag{32}\]
Also, if \(\lambda\in(0,1]\) then
\[0\leq\lim_{n\to\infty}\frac{(k-1)\ln\left(\sum_{i=0}^{|V_{n}|}\lambda^{C_{|V_ {n}|}^{i}}\right)}{k^{n+1}-1}\leq\lim_{n\to\infty}\frac{\ln|V_{n}|}{|V_{n}|}= \lim_{n\to\infty}\ln\sqrt[|V_{n}|]{|V_{n}|}=0.\]
Namely, from (30)
\[F_{ALT}(h)=-\frac{1}{\beta}\cdot\left[\frac{(k-1)(k-r)\ln h+(k^{2}-(m+1)k+m)\ln l}{k(2k-m-r)}\right].\]
Hence, from the above results and by Theorems 5 and 6, we can conclude the following theorem.
**Theorem 7.** a) Let \(k\geq 2\), \(m+r\leq k-2\) (\(m=r\)) and \(\lambda_{cr}^{(1)}=\left(\frac{k-2m}{k-2m-1}\right)^{k}\cdot\frac{1}{k-2m-1}\). Then the following statements are true
* if \(\lambda_{cr}^{(1)}\in(0,1]\) and \(\lambda\in[\lambda_{cr}^{(1)},1]\) (resp. \(\lambda\in(1,+\infty)\)), then the free energy \(F_{ALT}\) of the b.c. (3) is equal to \[-\frac{1}{\beta}\cdot\left[\frac{(k-1)(k-r)\ln h+(k^{2}-(m+1)k+m)\ln l}{k(2k-m-r)}\right]\ (\text{resp. }-\infty).\]
* if \(\lambda_{cr}^{(1)}\in(1,\infty)\), then the free energy \(F_{ALT}\) equals \(-\infty\).
b) Let \(k\geq 2\), \(r+m=k-2\) and \(C_{k}^{m+1}<\lambda_{cr}\leq 2^{k}\). Then the following statements hold:
* if \(m\neq r\) and \(\lambda=\lambda_{cr}\), then the free energy \(F_{ALT}\) equals \(-\infty\). Also, if \(\lambda>\lambda_{cr}\), then the free energy \(F_{ALT}\) equals \(-\infty\).
|
2301.10394
|
Integrating Local Real Data with Global Gradient Prototypes for
Classifier Re-Balancing in Federated Long-Tailed Learning
|
Federated Learning (FL) has become a popular distributed learning paradigm
that involves multiple clients training a global model collaboratively in a
data privacy-preserving manner. However, the data samples usually follow a
long-tailed distribution in the real world, and FL on the decentralized and
long-tailed data yields a poorly-behaved global model severely biased to the
head classes with the majority of the training samples. To alleviate this
issue, decoupled training has recently been introduced to FL, considering it
has achieved promising results in centralized long-tailed learning by
re-balancing the biased classifier after the instance-balanced training.
However, the current study restricts the capacity of decoupled training in
federated long-tailed learning with a sub-optimal classifier re-trained on a
set of pseudo features, due to the unavailability of a global balanced dataset
in FL. In this work, in order to re-balance the classifier more effectively, we
integrate the local real data with the global gradient prototypes to form the
local balanced datasets, and thus re-balance the classifier during the local
training. Furthermore, we introduce an extra classifier in the training phase
to help model the global data distribution, which addresses the problem of
contradictory optimization goals caused by performing classifier re-balancing
locally. Extensive experiments show that our method consistently outperforms
the existing state-of-the-art methods in various settings.
|
Wenkai Yang, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun
|
2023-01-25T03:18:10Z
|
http://arxiv.org/abs/2301.10394v2
|
Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning
###### Abstract
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively in a data privacy-preserving manner. However, the data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model severely biased to the head classes with the majority of the training samples. To alleviate this issue, decoupled training has recently been introduced to FL, considering it has achieved promising results in centralized long-tailed learning by re-balancing the biased classifier after the instance-balanced training. However, the current study restricts the capacity of decoupled training in federated long-tailed learning with a sub-optimal classifier re-trained on a set of pseudo features, due to the unavailability of a global balanced dataset in FL. In this work, in order to re-balance the classifier more effectively, we integrate the local real data with the global gradient prototypes to form the local balanced datasets, and thus re-balance the classifier during the local training. Furthermore, we introduce an extra classifier in the training phase to help model the global data distribution, which addresses the problem of contradictory optimization goals caused by performing classifier re-balancing locally. Extensive experiments show that our method consistently outperforms the existing state-of-the-art methods in various settings.
## 1 Introduction
Federated Learning (FL) [13] is proposed as an effective distributed learning framework to enable local clients to collaboratively train a global model without exposing their local private data to each other. In the real world, there are two data distribution phenomena that introduce great challenges to the good convergence of FL algorithms. One is that the data samples are not identically and independently distributed (non-i.i.d.) across different clients. Furthermore, the other important phenomenon is that the global data distribution (i.e., the data distribution of the training samples merged from all clients' local data) usually shows a long-tailed/class-imbalanced pattern [20], where head classes occupy a much larger proportion of the training samples than tail classes. Directly applying FL on such long-tailed data will produce a global model with poor generalization ability that is severely biased to the head classes [17]. However, it is challenging to deal with FL on the non-i.i.d. and long-tailed data due to two aspects. **First**, affected by the non-i.i.d. data partitions, the local data distributions (i.e., local imbalance) show inconsistent long-tailed patterns with that of the global data distribution (i.e., global imbalance) [17]. Thus, tackling the local imbalance problem only (e.g., Fed-Focal Loss [2]) will not help to address the global imbalance problem in FL. **Second**, considering the data privacy concern, it is infeasible to obtain the imbalance pattern of the global data distribution from the local data information. This further limits the application of the global class re-weighting strategy [14].
To deal with the above problems, some existing studies manage to estimate the global imbalance pattern by utilizing either the uploaded gradients w.r.t. the classifier [17] or the values of local training losses [20]. They then apply the class-level [17] or client-level [21] re-weighting practice to focus more on the gradients contributed by tail classes or poorly-learned clients. However, previous studies [18, 23] have shown that the re-weighting practice will do harm to the representation learning phase. Therefore, the improvement brought by this kind of method is limited.
Recently, some centralized long-tailed learning studies [18, 22] manage to decouple the model learning on long-tailed data into the representation learning phase and the classifier learning phase, and find that the instance-balanced training (i.e., uniform sampling on the entire training set to make the contribution of each sample the same) leads to well-learned representations but a biased classifier. Therefore, centralized decoupled training aims to re-train the classifier on a small balanced dataset after the instance-balanced training, and has achieved very promising results. However, decoupled training is difficult to implement in FL due to the lack of a public balanced dataset. Then, CReFF [20] proposes to re-train the classifier on a set of pseudo features created on the server.
Nevertheless, the improvement brought by CReFF is restricted by the high similarity of the pseudo features per class, and the fundamental problem, the lack of real balanced data, still exists.
To better solve the lack of the real balanced data issue in the application of decoupled training in FL, we propose a different yet more effective classifier re-balancing algorithm, and achieve state-of-the-art results in federated long-tailed learning. That is, we choose to take full advantage of the abundant real data stored in the local clients, and allow the clients to re-balance the classifier during local training. Specifically, we make each client re-balance the classifier on a local balanced dataset that is mixed with the local real data and the global gradient prototypes of the classifier sent by the server, while the latter is supposed to address the issue of missing classes in the local datasets. Additionally, we add an extra classifier in the local training phase to jointly model the global data distribution. This practice helps to overcome the optimization difficulty on the global representation learning brought by the practice of local classifier re-balancing. Compared with CReFF, we allow the clients to collaboratively train a balanced classifier with their sufficient real data during local training, which needs no extra requirements on the server and produces an optimal classifier with better generalization ability. We conduct extensive experiments on the three popular long-tailed image classification tasks, and the results show that our method can significantly outperform all existing federated long-tailed learning methods in various settings.
## 2 Related Work
### Federated Learning
Federated Averaging (FedAvg) McMahan _et al._ (2017) is the most widely-used FL algorithm, but it has been shown that the performance of FedAvg drops greatly when the data is non-i.i.d. Karimireddy _et al._ (2020). Therefore, plenty of existing FL studies focus on dealing with the non-i.i.d. data partitions in FL. For example, FedProx Li _et al._ (2018) and FedDyn Lacar _et al._ (2020) manage to make the local models converge to the same global optimum by adding regularization terms to the local training objectives, while SCAFFOLD Karimireddy _et al._ (2020) corrects the local gradient in each step with the gradients from other clients to reduce the gradient variance across different clients. FedAvgM Hsu _et al._ (2019) and FedOPT Reddi _et al._ (2020) adopt the server momentum and the adaptive server optimizer in the server aggregation phase.
### Long-Tailed/Imbalanced Learning
In the real world, the data points usually show a long-tailed distribution pattern. Therefore, learning good models on the long-tailed/class-imbalanced data has been widely studied Zhang _et al._ (2021) in the traditional centralized learning, and attracts more and more attention in the FL setting.
#### 2.2.1 Centralized Long-Tailed Learning
The methods to tackle the class imbalance problem in the centralized long-tailed learning can be mainly divided into three categories: (1) **Class-level re-balancing methods** that includes over-sampling training samples from tail classes Chawla _et al._ (2002), undersampling data points from head classes Liu _et al._ (2008), or re-weighting the loss values or the gradients of different training samples based on the label frequencies Cui _et al._ (2019); Cao _et al._ (2019) or the predicted probabilities of the model Lin _et al._ (2017). (2) **Augmentation-based methods** aim to create more data samples for tail classes either from the perspective of the feature space Chu _et al._ (2020); Zang _et al._ (2021) or the sample space Chou _et al._ (2020). (3) **Classifier re-balancing mechanisms** are based on the finding that the uniform sampling on the whole dataset during training benefits the representation learning but leads to a biased classifier, so they design specific algorithms to adjust the classifier during or after the representation learning phase Zhou _et al._ (2020); Kang _et al._ (2019).
#### 2.2.2 Federated Long-Tailed Learning
Recently, a few studies begin to focus on the class imbalance problem in FL, as FL becomes a more practical and popular learning paradigm and the long-tailed data distribution is unavoidable in the real world. Fed-Focal Loss Sarkar _et al._ (2020) directly applies Focal Loss Lin _et al._ (2017) in the clients' local training, but it neglects the fact that the local imbalance pattern is inconsistent with the global imbalance pattern. Ratio Loss Wang _et al._ (2021) utilizes an auxiliary dataset on the server (which is usually impractical in real cases) to estimate the global data distribution, and send the estimated information to clients to perform class-level re-weighting during local training. CLIMB Shen _et al._ (2021) is proposed as a client-level re-weighting method to give more aggregation weights to the clients with larger local training losses. However, both Ratio Loss and CLIMB bring negative effects to the representation learning caused by the re-weighting practice. FEDIC Shang _et al._ (2022) also needs the impractical assumption to own an auxiliary balanced dataset for fine-tuning the global model on the server, and uses the fine-tuned model along with the local models as teachers to perform knowledge distillation on the original global model. Most recently, CReFF Shang _et al._ (2022) adopts the decoupled training idea to re-train the classifier on the server by creating a number of federated features for each class, and achieves previously state-of-the-art performance. However, the low quality and the limited number of federated features restrict its potential.
## 3 Methodology
### Problem Definition
In the FL framework, each client \(k\) (\(k=1,\cdots,N\)) has its own local dataset \(\mathcal{D}_{k}\), and all clients form a federation to jointly train a good global model under the constraint that the local data is always kept in the local devices. Then, the optimization goal of FL can be formulated as
\[\boldsymbol{\theta}^{*}=\operatorname*{arg\,min}_{\boldsymbol{\theta}}L( \boldsymbol{\theta})=\operatorname*{arg\,min}_{\boldsymbol{\theta}}\sum_{k=1} ^{N}\frac{|\mathcal{D}_{k}|}{\sum_{i=1}^{N}|\mathcal{D}_{i}|}L(\boldsymbol{ \theta};\mathcal{D}_{k}), \tag{1}\]
where \(|\mathcal{D}_{k}|\) represents the total number of training samples in \(\mathcal{D}_{k}\), and \(L(\cdot;\mathcal{D}_{k})\) is the local training objective in client \(k\).
Federated Averaging (FedAvg) McMahan _et al._ (2017) is the most popular FL framework to solve the above optimization problem. Specifically, at the beginning of each communication round \(t\), the server sends the updated global model \(\boldsymbol{\theta}^{t-1}\)
from the last round to all available/sampled clients \(k\in\mathcal{C}^{t}\) in the current round, and each client \(k\) takes \(\mathbf{\theta}^{t-1}\) as the initial model to perform multiple updates on its local dataset \(\mathcal{D}_{k}\) and gets the new model \(\mathbf{\theta}_{k}^{t}\). Then the clients will only send the accumulated gradients \(\mathbf{g}_{k}^{t}=\mathbf{\theta}^{t-1}-\mathbf{\theta}_{k}^{t}\) back to the server, the server aggregates the collected local gradients and updates the global model as the following:
\[\mathbf{\theta}^{t}=\mathbf{\theta}^{t-1}-\eta_{s}\frac{1}{|\mathcal{C}^{t}|}\sum_{k \in\mathcal{C}^{t}}\frac{|\mathcal{D}_{k}|}{\sum_{i\in\mathcal{C}^{t}}| \mathcal{D}_{i}|}\mathbf{g}_{k}^{t}, \tag{2}\]
where \(\eta_{s}\) is the server learning rate, \(|\mathcal{C}^{t}|\) is the number of clients participating in the current round.
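For concreteness, a minimal numpy sketch of this server-side update (toy shapes; not the authors' implementation, with the \(1/|\mathcal{C}^{t}|\) prefactor of Eq. (2) kept explicitly):

```python
import numpy as np

def server_update(theta, client_grads, client_sizes, eta_s=1.0):
    """One FedAvg server step following Eq. (2)."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()                              # |D_k| / sum_i |D_i|
    agg = sum(w * g for w, g in zip(weights, client_grads))
    return theta - eta_s * agg / len(client_grads)             # 1/|C^t| prefactor

theta = np.zeros(5)                                            # toy "model" with 5 parameters
grads = [np.full(5, 0.1), np.full(5, 0.3), np.full(5, 0.2)]    # accumulated local gradients g_k
print(server_update(theta, grads, client_sizes=[100, 50, 50]))
```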
In this paper, we study the optimization problem of FL in the setting where the global data distribution \(\mathcal{D}=\bigcup_{k}\mathcal{D}_{k}\) is long-tailed. Previous studies in centralized long-tailed learning [14] propose to decouple the training on the long-tailed classification tasks into the representation learning and classifier learning phases, and point out that performing class-level re-weighting rather than instance-balanced training brings negative impact on the representation learning, and the imbalanced data distribution mainly affects the classifier learning. Thus, **our main motivation is to effectively re-balance the classifier when dealing with the long-tailed global data.** Specifically, in order to tackle the problem of the lack of a global balanced dataset in FL, we manage to make each client re-balance the classifier locally during training, by taking great advantage of the abundant real data stored in the local datasets.
### Our Optimization Target
We split the original model architecture \(\mathbf{\theta}\) into two parts: the representation encoder \(\mathbf{P}\) and the classifier \(\mathbf{W}\), and aim to re-balance \(\mathbf{W}\) during the local training to make it behave well on the class-balanced data distribution \(\mathcal{D}^{bal}\). However, re-balancing classifier during (instead of after) the representation learning phase leads to a contradictory optimization target:
\[(\mathbf{P}^{*},\mathbf{W}^{*}) =\operatorname*{arg\,min}_{\mathbf{P},\mathbf{W}}L(\mathbf{P},\mathbf{W};\bigcup _{k}\mathcal{D}_{k}) \tag{3}\] \[=\operatorname*{arg\,min}_{\mathbf{P},\mathbf{W}}\sum_{k=1}^{N}\frac{| \mathcal{D}_{k}|}{\sum_{i=1}^{N}|\mathcal{D}_{i}|}L(\mathbf{P},\mathbf{W};\mathcal{D}_ {k}),\] \[\text{s.t.}\qquad\mathbf{W}^{*} =\operatorname*{arg\,min}_{\mathbf{W}}L(\mathbf{P}^{*},\mathbf{W};\mathcal{D }^{bal}),\]
As we can see, when the global data distribution \(\mathcal{D}=\bigcup_{k}\mathcal{D}_{k}\) is long-tailed, the above optimization problem has no solution, caused by the contradictory goals when updating \(\mathbf{W}\). To address to negative impact of performing local classifier re-balancing, we design an architecture of the two-stream classifiers by adding a new classifier \(\widehat{\mathbf{W}}\) in the training phase, in order to help model the global data distribution \(\mathcal{D}\) and make re-balancing \(\mathbf{W}\) possible. The full illustrations of our model architecture and training process are in Figure 1, and we reformulate our global optimization target as:
\[(\mathbf{P}^{*},\mathbf{W}^{*},\widehat{\mathbf{W}}^{*}) =(\mathbf{\theta}^{*},\widehat{\mathbf{W}}^{*}) \tag{4}\] \[=\operatorname*{arg\,min}_{\mathbf{\theta},\widehat{\mathbf{W}}}\sum_{k=1 }^{N}\frac{|\mathcal{D}_{k}|}{\sum_{i=1}^{N}|\mathcal{D}_{i}|}L(\mathbf{\theta}, \widehat{\mathbf{W}};\mathcal{D}_{k}),\] \[\text{s.t.}\qquad\mathbf{W}^{*} =\operatorname*{arg\,min}_{\mathbf{W}}L(\mathbf{P}^{*},\mathbf{W};\mathcal{D }^{bal}).\]
By making the combination of two classifiers model the global data distribution in the first part of Eq. (4), we make sure that the representation encoder is trained under the instance-balanced training paradigm, which benefits the representation learning most. In the following, we introduce our algorithm to solve Eq. (4) from three aspects, including the local training stage, the server aggregation stage, and the inference stage.
### Classifier Re-Balancing by Integrating Local Real Data with Global Gradient Prototypes
#### Local Training Stage
In the local training, each client aims to solve the sub-problem of Eq. (4) as
\[(\mathbf{P}^{*},\mathbf{W}^{*},\widehat{\mathbf{W}}^{*}) =\operatorname*{arg\,min}_{\mathbf{P},\mathbf{W},\widehat{\mathbf{W}}}L(\mathbf{P },\mathbf{W},\widehat{\mathbf{W}};\mathcal{D}_{k}), \tag{5}\] \[\text{s.t.}\qquad\mathbf{W}^{*} =\operatorname*{arg\,min}_{\mathbf{W}}L(\mathbf{P}^{*},\mathbf{W};\mathcal{D }^{bal}).\]
It is a constrained optimization problem that is non-trivial to solve; we choose to address it by treating it as a multi-target learning task and optimizing all parameters concurrently. We briefly summarize the whole process of local training in our method in Algorithm 1. To be specific, the encoder parameters \(\mathbf{P}\) and the additional classifier \(\widehat{\mathbf{W}}\) will be trained in an instance-balanced manner (Line 3). When updating \(\mathbf{W}\), besides the gradients of the batch samples from \(\mathcal{D}_{k}\) (Line 4), our method creates a local balanced dataset \(\mathcal{D}_{k}^{bal}\) to help re-balance \(\mathbf{W}\) (Lines 5-6) following the second target of Eq. (5). The detailed steps include the following parts:
Updating \(\mathbf{P}\) and \(\widehat{\mathbf{W}}\). In the \(t\)-th round, we perform the normal stochastic gradient descent mechanism1 in which an instance-balanced dataloader is applied to update \(\mathbf{P}\) and \(\widehat{\mathbf{W}}\).2 That is, for the local step \(i=1,2,\cdots,I\), a random batch of examples \(\mathcal{B}^{i}_{k}\) is sampled from \(\mathcal{D}_{k}\) to perform:
Footnote 1: We do not have the assumption about the local optimizer, and any local optimizer (e.g., SGDM or Adam [10]) is acceptable. Here, we take SGD as an example.
Footnote 2: For simplicity, we omit the bias term here, while our method is still applicable when the bias term exists.
\[\mathbf{P}^{i}_{k} =\mathbf{P}^{i-1}_{k}-\eta_{l}\nabla_{\mathbf{P}^{i-1}_{k}}L(\mathbf{P}^{i-1} _{k},\mathbf{W}^{i-1}_{k},\widehat{\mathbf{W}}^{i-1}_{k};\mathcal{B}^{i}_{k}), \tag{6}\] \[\widehat{\mathbf{W}}^{i}_{k} =\widehat{\mathbf{W}}^{i-1}_{k}-\eta_{l}\nabla_{\widehat{\mathbf{W}}^{i-1 }_{k}}L(\mathbf{P}^{i-1}_{k},\mathbf{W}^{i-1}_{k},\widehat{\mathbf{W}}^{i-1}_{k};\mathcal{ B}^{i}_{k}),\]
in which the initial model \((\mathbf{P}^{0}_{k},\mathbf{W}^{0}_{k},\widehat{\mathbf{W}}^{0}_{k})\) is chosen as the global model \((\mathbf{P}^{t-1},\mathbf{W}^{t-1},\widehat{\mathbf{W}}^{t-1})\) received from the server in the current round, and \(\eta_{l}\) is the local learning rate. One important point is that, when calculating the above loss \(l\) on each sample \((\mathbf{x},y)\), the representation vector \(\mathbf{h}:=f(\mathbf{x};\mathbf{P})\) is first fed into both classifiers to get two logit vectors \(\mathbf{W}^{T}\mathbf{h}\) and \(\widehat{\mathbf{W}}^{T}\mathbf{h}\). Then we perform the element-wise addition to get the final logits \(\mathbf{z}=\mathbf{W}^{T}\mathbf{h}+\widehat{\mathbf{W}}^{T}\mathbf{h}\), and use \(\mathbf{z}\) for the loss calculation.
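A small numpy sketch of this two-stream forward pass and the resulting loss (the encoder is abstracted away as a given feature matrix; all shapes and names are illustrative):

```python
import numpy as np

def two_stream_logits(h, W, W_hat):
    """Final logits z = W^T h + W_hat^T h used in the local loss."""
    return h @ W + h @ W_hat            # element-wise addition of the two logit vectors

def cross_entropy(z, y):
    """Mean cross-entropy of logits z (batch x classes) against integer labels y."""
    z = z - z.max(axis=1, keepdims=True)                        # numerical stability
    log_prob = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(y)), y].mean()

batch, dim, n_classes = 8, 16, 10
h = np.random.randn(batch, dim)                   # features f(x; P) from the encoder
W = np.random.randn(dim, n_classes) * 0.01        # classifier to be re-balanced
W_hat = np.random.randn(dim, n_classes) * 0.01    # auxiliary classifier
y = np.random.randint(0, n_classes, size=batch)
print(cross_entropy(two_stream_logits(h, W, W_hat), y))
```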
Updating \(\mathbf{W}\). When updating \(\mathbf{W}\), one part of the gradients comes from the above back-propagation process on \(\mathcal{B}^{i}_{k}\) as
\[\mathbf{g}^{local}_{\mathbf{W}^{i-1}_{k}}=\nabla_{\mathbf{W}^{i-1}_{k}}L(\mathbf{P}^{i-1}_{k},\mathbf{W}^{i-1}_{k},\widehat{\mathbf{W}}^{i-1}_{k};\mathcal{B}^{i}_{k}), \tag{7}\]
which corresponds to the first part in Eq. (5). For the second part of Eq. (5), we need to calculate the gradients of \(\mathbf{W}^{i-1}_{k}\) on a small balanced set \(\mathcal{D}^{bal}_{k}\) that is supposed to be created locally. However, there is a difficulty in constructing \(\mathcal{D}^{bal}_{k}\) from \(\mathcal{D}_{k}\), since it is very likely that some classes are missing in the local label set \(\mathcal{L}_{k}\) of \(\mathcal{D}_{k}\) due to the non-i.i.d. data partitions. Then, we propose a _mixed gradient re-balancing mechanism_ to overcome this challenge by integ**R**ating local r**E**al **D**ata with **G**lobal g**RA**dient **P**rototyp**E**s (**RedGrape**) as our method. Specifically, for each class \(c\), **(1)** if the sample quantity of class \(c\) in \(\mathcal{D}_{k}\) reaches a threshold \(T\), we consider client \(k\) to have sufficient samples of class \(c\) in its local dataset, and randomly sample \(T\) training samples of class \(c\) to form \(\mathcal{D}^{bal}_{k,c}\) for \(\mathcal{D}^{bal}_{k}\).3 Then, the gradient contributed by class \(c\) in \(\mathcal{D}^{bal}_{k}\) is
Footnote 3: In different rounds, client \(k\) can choose different \(T\) samples of class \(c\) for \(\mathcal{D}^{bal}_{k,c}\), in order to make fully use of the local real data.
\[\mathbf{g}_{\mathbf{W}^{i-1}_{k},c}=\nabla_{\mathbf{W}^{i-1}_{k}}L(\mathbf{P}^{i-1}_{k},\mathbf{W} ^{i-1}_{k};\mathcal{D}^{bal}_{k,c}). \tag{8}\]
**(2)** If client \(k\) does not have enough data of class \(c\) in its local dataset, we choose to estimate the gradient contribution of
Figure 1: The full illustration of our method. We add a new global classifier \(\widehat{\mathbf{W}}\), and perform the instance-balanced training on the whole network (including the encoder and the two classifiers). Furthermore, we propose a novel algorithm to re-balance the original classifier during the local training by integrating the local real data and the global gradient prototypes (\(\mathbf{g}^{pro}_{W,c}\) in the figure) to form a local balanced dataset for adjusting the original classifier. In the inference phase, we only keep the global encoder \(\mathbf{P}\) and the re-balanced classifier \(\mathbf{W}\).
class \(c\) with the global gradient prototype of class \(c\) in the \((t-1)\)-th round, which is the averaged gradient of training samples belonging to class \(c\) w.r.t. the classifier \(\mathbf{W}^{t-2}\) across all available clients in the last round [22]:
\[\mathbf{g}^{pro}_{\mathbf{W}^{t-2},c}=\frac{1}{|\mathcal{C}_{c}^{t-1}|}\sum_{k\in \mathcal{C}_{c}^{t-1}}\mathbf{g}^{pro}_{\mathbf{W}^{t-2},k,c}, \tag{9}\]
\[\mathbf{g}^{pro}_{\mathbf{W}^{t-2},k,c}=\nabla_{\mathbf{W}^{t-2}}L(\mathbf{P}^{t-2},\mathbf{W}^{t- 2};\mathcal{D}_{k,c}), \tag{10}\]
where \(\mathcal{C}_{c}^{t-1}\) represents the set of clients sampled in the \((t-1)\)-th round and have the training samples of class \(c\), and \(\mathcal{D}_{k,c}\) denotes all training samples of class \(c\) in \(\mathcal{D}_{k}\). Thus, it requires each client sampled in the previous round to first calculate the local gradient prototype of each class \(c\in\mathcal{L}_{k}\) on the same model \((\mathbf{P}^{t-2},\mathbf{W}^{t-2})\), return \(\{\mathbf{g}^{pro}_{\mathbf{W}^{t-2},k,c}|c\in\mathcal{L}_{k}\}\) back to the server along with other local gradients, and receive the global gradient prototypes averaged and sent by the server. Then, the final gradients on the local balanced dataset to optimize the second part of Eq. (5) is
\[\mathbf{g}^{bal_{k-1}}_{\mathbf{W}^{t-1}_{k}}=\frac{1}{|\mathcal{L}|}(\sum_{c\in \mathcal{L}_{k}^{bal}}\mathbf{g}_{\mathbf{W}^{t-1},c}+\sum_{c\in\mathcal{L}\setminus \mathcal{L}_{k}^{bal}}\mathbf{g}^{pro}_{\mathbf{W}^{t-2},c}), \tag{11}\]
where \(\mathcal{L}_{k}^{bal}\subset\mathcal{L}_{k}\) is the label set in which each class has at least \(T\) local samples and \(\mathcal{L}\) is the entire label set. Finally, the updating rule for \(\mathbf{W}^{i-1}_{k}\) is4
Footnote 4: The local classifier re-balancing starts from the 2nd round.
\[\mathbf{W}^{i}_{k}=\mathbf{W}^{i-1}_{k}-\eta_{l}\left[\mathbf{g}^{local}_{\mathbf{W}^{i-1}_{k }}+\lambda\mathbf{g}^{bal}_{\mathbf{W}^{i-1}_{k}}\frac{\|\mathbf{g}^{local}_{\mathbf{W}^{i-1}_ {k}}\|}{\|\mathbf{g}^{bal}_{\mathbf{W}^{i-1}_{k}}\|}\right], \tag{12}\]
where \(\lambda\) is a re-balance factor to control the re-balancing strength for updating \(\mathbf{W}\). In Eq. (12), we normalize the scale5 of \(\mathbf{g}^{bal}_{\mathbf{W}^{i-1}_{k}}\) at each step in order to address the unstable training caused by the constant part of \(\mathbf{g}^{pro}_{\mathbf{W}^{t-1},c}\), by making its scale consistent with the decreasing trend of the scale of real gradients during training.
Footnote 5: Here, \(\|\cdot\|\) represents the Frobenius Norm.
After training, the new model is \((\mathbf{P}^{t}_{k},\mathbf{W}^{t}_{k},\widehat{\mathbf{W}}^{t}_{k})\), and client \(k\) sends the local gradients \((\mathbf{g}_{\mathbf{P}^{t-1},k},\mathbf{g}_{\mathbf{W}^{t-1},k},\mathbf{g}_{\widehat{\mathbf{W}}^{t- 1},k})=(\mathbf{P}^{t}_{k}-\mathbf{P}^{t-1},\mathbf{W}^{t}_{k}-\mathbf{W}^{t-1},\widehat{\mathbf{ W}}^{t}_{k}-\widehat{\mathbf{W}}^{t-1})\) along with the local gradient prototypes \(\{\mathbf{g}^{pro}_{\mathbf{W}^{t-1},k,c}|c\in\mathcal{L}_{k}\}\) to the server.
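Putting Eqs. (11) and (12) together, the per-step update of \(\mathbf{W}\) can be sketched as follows (a schematic numpy version with illustrative names: `grads_real` holds the per-class gradients computed on the local real data \(\mathcal{D}^{bal}_{k,c}\), and `prototypes` the global gradient prototypes of the remaining classes):

```python
import numpy as np

def rebalance_step(W, g_local, grads_real, prototypes, lam=0.1, eta_l=0.1):
    """One update of Eq. (12): the local batch gradient plus the normalized
    balanced gradient of Eq. (11)."""
    per_class = list(grads_real.values()) + list(prototypes.values())
    g_bal = sum(per_class) / len(per_class)                  # Eq. (11), averaged over |L| classes
    scale = np.linalg.norm(g_local) / np.linalg.norm(g_bal)  # Frobenius norms
    return W - eta_l * (g_local + lam * scale * g_bal)       # Eq. (12)

dim, n_classes = 16, 10
W = np.zeros((dim, n_classes))
g_local = np.random.randn(dim, n_classes) * 0.1
grads_real = {c: np.random.randn(dim, n_classes) * 0.1 for c in range(6)}      # classes with >= T local samples
prototypes = {c: np.random.randn(dim, n_classes) * 0.1 for c in range(6, 10)}  # classes missing locally
W = rebalance_step(W, g_local, grads_real, prototypes)
print(W.shape)
```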
#### 3.2.2 Server Aggregation Stage
The server first aggregates the gradients and updates the global model as
\[\begin{split}&(\mathbf{P}^{t},\mathbf{W}^{t},\widehat{\mathbf{W}}^{t})=(\bm {P}^{t-1},\mathbf{W}^{t-1},\widehat{\mathbf{W}}^{t-1})\\ &-\eta_{s}\sum_{k\in\mathcal{C}^{t}}\frac{|\mathcal{D}_{k}|}{ \sum_{i\in\mathcal{C}^{t}}|\mathcal{D}_{i}|}(\mathbf{g}_{\mathbf{P}^{t-1},k},\mathbf{g}_{ \mathbf{W}^{t-1},k},\mathbf{g}_{\widehat{\mathbf{W}}^{t-1},k}).\end{split} \tag{13}\]
Also, the server needs to update the global gradient prototypes as
\[\mathbf{g}^{pro}_{\mathbf{W}^{t-1},c}=\begin{cases}\frac{1}{|\mathcal{C}_{c}^{t}|} \sum_{k\in\mathcal{C}_{c}^{t}}\mathbf{g}^{pro}_{\mathbf{W}^{t-1},k,c},\quad\mathcal{ C}_{c}^{t}\neq\emptyset,\\ \mathbf{g}^{pro}_{\mathbf{W}^{t-2},c},\quad\mathcal{C}_{c}^{t}=\emptyset,\end{cases} \tag{14}\]
in which the second case corresponds to the situation where none of the clients in \(\mathcal{C}^{t}\) from the current round contains samples of class \(c\). In this case, we re-use the global gradient prototype of class \(c\) from the previous round. The updated global model and global gradient prototypes are broadcast to the sampled clients in the next round.
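A sketch of this prototype bookkeeping on the server (Eq. (14)); the dictionary-based representation and names are illustrative only:

```python
import numpy as np

def update_prototypes(prev_prototypes, local_prototypes, n_classes):
    """Eq. (14): average the local gradient prototypes of each class over the
    clients that own that class; otherwise re-use last round's prototype.

    local_prototypes: list of dicts {class_id: prototype array}, one per sampled client.
    """
    new = {}
    for c in range(n_classes):
        contribs = [p[c] for p in local_prototypes if c in p]
        new[c] = sum(contribs) / len(contribs) if contribs else prev_prototypes[c]
    return new

dim, n_classes = 16, 4
prev = {c: np.zeros((dim, n_classes)) for c in range(n_classes)}
client_a = {0: np.ones((dim, n_classes)), 1: 2 * np.ones((dim, n_classes))}
client_b = {1: np.zeros((dim, n_classes)), 3: np.ones((dim, n_classes))}
protos = update_prototypes(prev, [client_a, client_b], n_classes)
print(protos[1].mean(), protos[2].mean())   # class 1 averaged (1.0), class 2 carried over (0.0)
```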
### Inference Stage
After the federated training, we only keep the re-balanced classifier \(\mathbf{W}\) and abandon \(\widehat{\mathbf{W}}\) in the inference stage:
\[y_{\text{pred}}=\operatorname*{arg\,max}_{i}[\mathbf{W}^{T}f(x;\mathbf{P})]. \tag{15}\]
## 4 Experiments and Analysis
### Experimental Settings
Datasets and Models. We conduct experiments on three popular image classification benchmarks: MNIST [1], CIFAR-10 and CIFAR-100 [15]. We follow existing studies [1, 19] to create the long-tailed versions of the training sets of the above three datasets (i.e., MNIST-LT, CIFAR-10/100-LT), and keep the test sets balanced. We first define the term _Imbalance Ratio_: \(\text{IR}=\frac{\max_{c}\{n_{c}\}}{\min_{c}\{n_{c}\}}\), which is the ratio between the maximum sample number across all classes and the minimum sample number across all classes, to reflect the imbalance degree of the global data distribution. Then, the training sample quantity of each class follows an exponential decay. We choose \(\text{IR}=10,50,100\) in our main experiments. Furthermore, we follow the existing studies [1, 19] to adopt the Dirichlet distribution \(\text{Dir}(\alpha)\) for the non-i.i.d. data partitioning, in which \(\alpha\) controls the non-i.i.d. degree. We set \(\alpha=1.0\) in our main experiments, and put the results of other \(\alpha\)s in the Appendix. We use the convolutional neural network (CNN) [18] for MNIST, and use ResNet-56 [14] for CIFAR-10/100 datasets. More details can be found in the Appendix.
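For concreteness, a generic sketch of how such a long-tailed class profile and Dirichlet partition can be generated (this follows the description above, not the authors' exact preprocessing code):

```python
import numpy as np

def long_tailed_counts(n_max, n_classes, ir):
    """Per-class sample counts with exponential decay and max/min ratio IR."""
    return [int(n_max * ir ** (-c / (n_classes - 1))) for c in range(n_classes)]

def dirichlet_partition(counts, n_clients, alpha, rng):
    """Split each class across clients with proportions drawn from Dir(alpha)."""
    partition = [[] for _ in range(n_clients)]
    for c, n_c in enumerate(counts):
        props = rng.dirichlet([alpha] * n_clients)
        for k, n_kc in enumerate((props * n_c).astype(int)):
            partition[k].append((c, int(n_kc)))   # client k gets n_kc samples of class c
    return partition

rng = np.random.default_rng(0)
counts = long_tailed_counts(n_max=5000, n_classes=10, ir=100)
print(counts)                                     # 5000, ..., 50: max/min = IR = 100
clients = dirichlet_partition(counts, n_clients=10, alpha=1.0, rng=rng)
print(clients[0][:3])
```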
Baseline Methods. We compare our method with the existing federated long-tailed learning algorithms, including the traditional FedAvg algorithm with the CrossEntropy Loss (FedAvg+CE) applied in the local training [18], Fed-Focal Loss [20], Ratio Loss [21], CLIMB [22], and the state-of-the-art method CReFF [23].
Training Details. We conduct experiments in two popular FL settings based on the ratio of clients participating in each round: (1) **Full client participation** setting: all clients participate in updating the global model in each round, and the total number of clients is 10 in this setting; (2) **Partial client participation** setting: the total number of clients is 50 but only 10 clients are randomly sampled in each round. We adopt SGDM as the optimizer for local training. The local learning rate is 0.01 for MNIST-LT and 0.1 for CIFAR-10/100-LT. The number of local epochs is 5 for all datasets. As for our method, the re-balance factor \(\lambda\) is fixed as 0.1 in all experiments, and we explore the effect of different values of \(\lambda\) in Section 5.1. The quantity threshold \(T\) for each class to create the local balanced dataset is set as 8 for MNIST-LT and CIFAR-10-LT,
and 2 for CIFAR-100-LT, and we put further discussion in Section 5.2. Each experiment is run on 3 random seeds. Complete training details (e.g., the number of communication rounds in each setting, detailed hyper-parameters of other baselines) are in the Appendix. Our code is implemented on the FedML [11] platform.6
Footnote 6: Our code will be released upon acceptance.
### Main Results
In the main paper, we report the averaged accuracy over the last 10 rounds on the balanced testing set of each dataset following existing studies [10]. We also display the averaged test accuracy on tail classes in each setting in the Appendix to show that our method can significantly bring improvement to the model's performance on tail classes. The results under the full client participation setting are in Table 1, and Table 2 displays the results under the partial client participation setting. We can draw the main conclusion from these tables as: **our method can consistently outperform the existing algorithms in all settings.**
As we can see, Fed-Focal Loss achieves lower performance than FedAvg with the CE loss in some settings, which validates the claim that directly applying centralized long-tailed learning methods cannot help to address the global class imbalance problem in FL, as it ignores the mismatch between the global and the local imbalance patterns. Ratio Loss and CLIMB apply the class-level and client-level re-weighting ideas, respectively, and gain a slight improvement compared with FedAvg. We attribute the limited improvement to the fact that, although the re-weighting practice helps the model focus more on learning the tail classes, it is not conducive to the representation learning on the abundant data of the head classes [13]. Moreover, the assumption of obtaining a global auxiliary dataset makes Ratio Loss impractical in real cases.
The superior performance of CReFF helps to validate the effectiveness of classifier re-balancing on the final performance. However, the optimization of the federated features requires massive computation on the server (especially when the number of classes is large), and the federated features from the same class may converge to be similar. Thus, the re-trained classifier faces the problem that it may overfit on the highly similar and small number of federated features (reflected in the poorer performance under smaller IR). Our method instead takes full advantage of the local real data integrated with the global gradient prototypes to locally re-balance the classifier
| Method | MNIST-LT (IR=10) | MNIST-LT (IR=50) | MNIST-LT (IR=100) | CIFAR-10-LT (IR=10) | CIFAR-10-LT (IR=50) | CIFAR-10-LT (IR=100) | CIFAR-100-LT (IR=10) | CIFAR-100-LT (IR=50) | CIFAR-100-LT (IR=100) |
|---|---|---|---|---|---|---|---|---|---|
| FedAvg+CE | 97.99 | 95.98 | 92.71 | 76.21 | 68.41 | 59.83 | 49.08 | 36.47 | 33.28 |
| Fed-Focal Loss | 97.90 | 96.14 | 92.97 | 77.92 | 61.21 | 59.86 | 48.14 | 35.51 | 30.05 |
| Ratio Loss | 97.96 | 96.20 | 92.99 | 78.58 | 68.01 | 59.27 | 48.30 | 37.62 | 31.92 |
| CLIMB | 97.89 | 95.87 | 92.71 | 78.95 | 66.25 | 57.67 | 49.27 | 36.13 | 32.18 |
| CReFF | 97.68 | 96.49 | 93.85 | 83.18 | 73.46 | 69.36 | 46.58 | 35.82 | 33.46 |
| Ours | **98.34** | **97.06** | **95.73** | **83.74** | **74.01** | **71.04** | **51.09** | **38.49** | **34.63** |

Table 1: Results under the **full client participation** setting. We report the overall test accuracy on the balanced testing set of each dataset.
| Method | MNIST-LT (IR=10) | MNIST-LT (IR=50) | MNIST-LT (IR=100) | CIFAR-10-LT (IR=10) | CIFAR-10-LT (IR=50) | CIFAR-10-LT (IR=100) | CIFAR-100-LT (IR=10) | CIFAR-100-LT (IR=50) | CIFAR-100-LT (IR=100) |
|---|---|---|---|---|---|---|---|---|---|
| FedAvg+CE | 95.51 | 91.82 | 89.92 | 60.38 | 45.15 | 40.06 | 40.81 | 24.62 | 22.08 |
| Fed-Focal Loss | 96.79 | 92.59 | 90.45 | 61.16 | 46.20 | 41.10 | 40.85 | 24.73 | 20.17 |
| Ratio Loss | 95.17 | 91.10 | 89.64 | 63.97 | 44.22 | 42.11 | 40.96 | 24.12 | 23.06 |
| CLIMB | 95.67 | 92.24 | 89.75 | 61.75 | 46.91 | 42.02 | 40.64 | 23.99 | 21.44 |
| CReFF | 96.29 | 94.16 | 92.16 | 69.38 | 60.52 | 55.63 | 39.38 | 25.42 | 24.77 |
| Ours | **97.54** | **95.17** | **93.61** | **71.68** | **61.42** | **57.11** | **42.97** | **27.73** | **25.64** |

Table 2: Results under the **partial client participation** setting. We report the overall test accuracy on the balanced testing set of each dataset.
Figure 2: The test accuracy curves on CIFAR-10-LT with IR \(=100\) under the full client participation setting. Our method achieves faster convergence speed and better performance than all existing baselines.
while maintaining the good effects of instance-balanced training on the representation learning, and consistently outperforms all previous methods by a large margin. Compared with CReFF and Ratio Loss, we do not have extra requirements except for the aggregation procedures on the server, and we produce a re-balanced classifier that has better generalization ability with the help of abundant real data.
We further display the evaluation accuracy curve after each round in CIFAR-10-LT (\(\text{IR}=100\)) under the full client participation setting in Figure 2. As we can see, **our method not only has the best converged performance, but also achieves much faster convergence speed than all baseline methods.** That is because our method re-balances the classifier at each local training step, and this makes it converge faster to the optimal balanced classifier.
### Results in Another Class Imbalance Setting
We also conduct experiments in a binary class imbalance setting in FL [21], in which three classes are randomly chosen as the tail classes and are assigned \(1/\text{IR}\) times the number of samples of the other normal/head classes. The experiments are conducted on the MNIST and CIFAR-10 datasets with \(\text{IR}=100\), and the other experimental settings are kept the same as in our main experiments. The results are in Table 3. The conclusion remains the same: **our method achieves the best performance in all cases.**
## 5 Further Explorations
In this section, we further explore the two crucial hyper-parameters of our method. The following experiments are conducted on CIFAR-10-LT with \(\text{IR}=100\) under the full client participation setting.
### Re-Balancing Strength Decides on The Convergence Trade-off
In order to solve the optimization target of Eq. (4), we update \(\mathbf{W}\) in a multi-task learning setting as in Eq. (12), where a re-balance factor \(\lambda\) controls the re-balancing strength. Here, we conduct experiments to explore the effect of different \(\lambda\)s on the model's performance, and the results are shown in Figure 3. We find that a smaller \(\lambda\) results in slower convergence but a relatively better converged model. The reason is that \(\mathbf{g}_{\mathbf{W}_{i-1}}^{bal}\) contains a part of the global gradient prototypes calculated in the previous round and is a constant when updating \(\mathbf{W}\); it adversely affects the model's convergence in the late stage of training if we are still using a large \(\lambda\) to re-balance the classifier. An interesting direction to improve our method is designing an adaptive \(\lambda\) that decays along with the training, which we leave to future work. When \(\lambda=0.0\), the addition of the two classifiers reduces to the single classifier used in FedAvg, so FedAvg is a special case of our method.
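A minimal sketch of this re-balanced local update (schematic only; the gradient names, the plain SGD step, and the hyper-parameter values are illustrative assumptions rather than the exact form of Eq. (12)):

```python
import numpy as np

def rebalanced_classifier_step(W, grad_ce, grad_bal, lr=0.01, lam=0.5):
    """One schematic local update of the classifier weights W.

    grad_ce  -- gradient of the usual instance-balanced (CE) objective
    grad_bal -- gradient of the balanced objective, built from local real data
                together with the (constant) global gradient prototypes
    lam      -- re-balance factor: lam = 0.0 recovers the plain FedAvg update,
                while a larger lam pushes W harder toward the balanced objective
    """
    return W - lr * (grad_ce + lam * grad_bal)

# toy usage with random gradients
W = np.zeros((10, 512))
g_ce, g_bal = np.random.randn(*W.shape), np.random.randn(*W.shape)
W = rebalanced_classifier_step(W, g_ce, g_bal, lam=0.5)
```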
### Local Real Data Plays An Important Role in Re-Balancing The Classifier
During the creation of the local balanced datasets, we set a threshold \(T\) to decide whether a local client owns enough data of a specific class. A larger \(T\) decreases the number of available classes whose real data can be used to calculate the gradients for classifier re-balancing, while a smaller \(T\) leads to relatively unreliable gradients for class \(c\). We then explore the effect of different \(T\)s and report the results in Figure 4. We indeed observe the expected trade-off and find that \(T=4,8\) are the most suitable choices. \(T=\infty\) means we remove the role of local real data in the classifier re-balancing and only use the global gradient prototypes instead; we find that the performance degrades greatly, which verifies the large benefit of using local real data to adjust the classifier.
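A minimal sketch of how such a threshold could be applied when building the local balanced subset (the function name and the equal-per-class sampling rule are illustrative assumptions, not the exact procedure of the paper):

```python
import random
from collections import defaultdict

def local_balanced_subset(samples, labels, T=4):
    """Keep only classes with at least T local samples and draw an equal number
    of samples from each of them; skipped classes would instead rely on the
    global gradient prototypes in the full method."""
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    eligible = {c: xs for c, xs in by_class.items() if len(xs) >= T}
    if not eligible:
        return [], []
    k = min(len(xs) for xs in eligible.values())
    xs_bal, ys_bal = [], []
    for c, xs in eligible.items():
        xs_bal.extend(random.sample(xs, k))
        ys_bal.extend([c] * k)
    return xs_bal, ys_bal
```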
Figure 4: The test accuracy of using different sample quantity thresholds for each class when creating the local balanced datasets in CIFAR-10-LT.
Figure 3: The test accuracy of using different re-balance factor \(\lambda\) in CIFAR-10-LT. Smaller \(\lambda\) leads to slower convergence speed but relatively better generalization ability.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Full Participation} & \multicolumn{2}{c}{Partial Participation} \\ \cline{2-5} & MNIST & CIFAR-10 & MNIST & CIFAR-10 \\ \hline FedAvg+CE & 93.91 & 70.91 & 91.97 & 61.27 \\ Fed-Focal Loss & 93.53 & 70.77 & 91.77 & 59.18 \\ Ratio Loss & 94.23 & 73.22 & 92.32 & 62.86 \\ CLIMB & 93.89 & 72.04 & 92.11 & 63.18 \\ CReFF & 96.70 & 78.98 & 96.01 & 71.41 \\ \hline Ours & **96.86** & **79.88** & **96.43** & **73.31** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results in the binary class imbalance setting. \(\text{IR}=100\) and the randomly chosen tail classes are 0, 7 and 8.
## 6 Conclusion
In this paper, motivated by the decoupled training idea, we propose a novel and effective classifier re-balancing algorithm for federated long-tailed learning. To overcome the lack of a public balanced dataset in FL, we propose to re-balance the classifier during local training by integrating local real data with global gradient prototypes. Furthermore, to address the contradictory optimization goals brought by performing local classifier re-balancing, we introduce a two-stream classifier architecture to help model the global data distribution. Thorough experiments verify the effectiveness of our method over strong baselines without extra data requirements.
## Ethical Statement
Our purpose is to address the optimization problem of FL on non-i.i.d. and long-tailed data and to help learn a better global model that performs well on all classes. The datasets used in our experiments are all publicly available. Moreover, our method only requires the normal gradient transmission between the server and the clients, so it does not expose local private data and raises no ethical concerns.
## Acknowledgments
This work was supported by a Tencent Research Grant. Xu Sun is the corresponding author of this paper.
|
2305.02267
|
Remarks on Nahm sums for symmetrizable matrices
|
Nahm sums are specific $q$-hypergeometric series associated with symmetric
positive definite matrices. In this paper we study Nahm sums associated with
symmetrizable matrices. We show that one direction of Nahm's conjecture, which
was proven by Calegari, Garoufalidis, and Zagier for the symmetric case, also
holds for the symmetrizable case. This asserts that the modularity of a Nahm
sum implies that a certain element in a Bloch group associated with the Nahm
sum is a torsion element. During the proof, we investigate the radial
asymptotics of Nahm sums. Finally, we provide lists of candidates of modular
Nahm sums for symmetrizable matrices based on numerical experiments.
|
Yuma Mizuno
|
2023-05-03T16:59:08Z
|
http://arxiv.org/abs/2305.02267v1
|
# Remarks on Nahm sums for symmetrizable matrices
###### Abstract
Nahm sums are specific \(q\)-hypergeometric series associated with symmetric positive definite matrices. In this paper we study Nahm sums associated with symmetrizable matrices. We show that one direction of Nahm's conjecture, which was proven by Calegari, Garoufalidis, and Zagier for the symmetric case, also holds for the symmetrizable case. This asserts that the modularity of a Nahm sum implies that a certain element in a Bloch group associated with the Nahm sum is a torsion element. During the proof, we investigate the radial asymptotics of Nahm sums. Finally, we provide lists of candidates of modular Nahm sums for symmetrizable matrices based on numerical experiments.
## 1 Introduction
The Rogers-Ramanujan identities relate infinite sum and infinite product expressions of \(q\)-series:
\[\sum_{n=0}^{\infty}\frac{q^{n^{2}}}{(q;q)_{n}}=\frac{1}{(q,q^{4};q^{5})_{ \infty}},\quad\sum_{n=0}^{\infty}\frac{q^{n^{2}+n}}{(q;q)_{n}}=\frac{1}{(q^{2}, q^{3};q^{5})_{\infty}}, \tag{1}\]
where \((x;q)_{n}\coloneqq(1-x)(1-qx)\cdots(1-q^{n-1}x)\), \((x;q)_{\infty}\coloneqq\prod_{i=0}^{\infty}(1-q^{i}x)\), and \((x_{1},\ldots,x_{n};q)_{\infty}\coloneqq(x_{1};q)_{\infty}\cdots(x_{n};q)_{\infty}\). One of the remarkable consequences of the Rogers-Ramanujan identities is that the infinite sums in (1), which are \(q\)-hypergeometric series, are also modular functions. Specifically, the vector-valued function on the upper-half plane
\[g(\tau)=\left(q^{-1/60}{\sum_{n=0}^{\infty}\frac{q^{n^{2}}}{(q;q)_{n}}},\ \ q^{11/60}{\sum_{n=0}^{\infty}\frac{q^{n^{2}+n}}{(q;q)_{n}}}\right)^{\sf T}\]
with \(q=e^{2\pi i\tau}\) has the following transformation formulas:
\[g(\tau+1)=\begin{pmatrix}\zeta_{60}^{-1}&0\\ 0&\zeta_{60}^{11}\end{pmatrix}g(\tau),\quad g(-\frac{1}{\tau})=\frac{2}{\sqrt{5}}\begin{pmatrix}\sin\frac{2\pi}{5}&\sin\frac{\pi}{5}\\ \sin\frac{\pi}{5}&-\sin\frac{2\pi}{5}\end{pmatrix}g(\tau).\]
where \(\zeta_{N}=e^{2\pi i/N}\).
In the study of rational conformal field theories, Nahm [14] considered a higher rank generalization of the infinite sums appearing in the Rogers-Ramanujan identities (1), which we call the _Nahm sum_. Let \(N\) be a natural number, and suppose that \(A\in\mathbb{Q}^{N\times N}\) is a symmetric positive definite matrix, \(b\in\mathbb{Q}^{N}\) is a vector, and \(c\in\mathbb{Q}\) is a scalar. The Nahm sum is defined by
\[f_{A,b,c}(q)\coloneqq\sum_{n\in\mathbb{N}^{N}}\frac{q^{\frac{1}{2}n^{\sf T}An +n^{\sf T}b+c}}{(q;q)_{n_{1}}\cdots(q;q)_{n_{N}}}. \tag{2}\]
In general, Nahm sums are rarely modular. However, as discussed in [14], a character of a rational conformal field theory is often expressed as a Nahm sum and is also modular. Motivated by such situations, Nahm conjectured that the modularity of Nahm sums is related
to torsion elements of the Bloch group. Let us explain the conjecture in a little more detail. The matrix \(A\) determines the _Nahm's equation_
\[1-z_{i}=\prod_{j=1}^{N}z_{j}^{A_{ij}}, \tag{3}\]
and each solution of this equation determines an element in the Bloch group \(B(F)\):
\[\xi_{A}\coloneqq\sum_{i=1}^{N}[z_{i}], \tag{4}\]
where \(F\) is the number field generated by \(z_{1},\ldots,z_{N}\). The _Nahm's conjecture_ states that, for any \(A\), the Nahm sum \(f_{A,b,c}(q)\) is modular for some \(b,c\) if and only if \(\xi_{A}\) is a torsion in \(B(F)\) for any solution \(z_{i}\) [14, 22]. Although this conjecture itself is known to be false [21, 22], Calegari, Garoufalidis, and Zagier [2] proved one direction of Nahm's conjecture: if the Nahm sum \(f_{A,b,c}(q)\) is modular, then the element (4) associated with the solution of Nahm's equation (3) in the real interval \((0,1)^{N}\) is a torsion. They construct a map
\[R_{\zeta}:B(F)/mB(F)\to F_{m}^{\times}/F_{m}^{\times m}\]
where \(m\) is an integer, \(\zeta\) is a primitive \(m\)th root of unity, and \(F_{m}=F(\zeta)\), and they show that this map is injective for sufficiently many \(m\). The one direction of Nahm's conjecture follows from this injectivity result combined with the formula of the asymptotic expansion of Nahm sums given in [5].
In this paper, we study Nahm sums associated with symmetrizable matrices, which has the following form:
\[f_{A,b,c,d}(q)\coloneqq\sum_{n\in\mathbb{N}^{N}}\frac{q^{\frac{1}{2}n^{ \mathsf{T}}ADn+n^{\mathsf{T}}b+c}}{(q^{d_{1}};q^{d_{1}})_{n_{1}}\cdots(q^{d_{N}} ;q^{d_{N}})_{n_{N}}}. \tag{5}\]
We have a new input \(d=(d_{1},\ldots,d_{N})\in\mathbb{Z}_{>0}^{N}\), and the symmetric matrix \(A\) in the original Nahm sum (2) is replaced by a symmetrizable matrix \(A\) with the symmetrizer \(D\coloneqq\operatorname{diag}(d_{1},\ldots,d_{N})\). Specifically, in (5), we require that \(AD\) is symmetric positive definite rather than \(A\) itself.
Prototypical examples of this type of Nahm sums appear in the Kanade-Russell conjecture [9] (in the form by Kursungoz [12]), which gives the mod \(9\) version of the Rogers-Ramanujan type identities. The conjecture predicts the following \(q\)-series identities:
\[f_{1}(q) \coloneqq\sum_{n_{1},n_{2}\geq 0}\frac{q^{n_{1}^{2}+3n_{1}n_{2}+3n _{2}^{2}}}{(q;q)_{n_{1}}(q^{3};q^{3})_{n_{2}}}\ \stackrel{{?}}{{=}}\frac{1}{(q,q^{3},q^{6},q^{8};q^{9})_{ \infty}}, \tag{6}\] \[f_{2}(q) \coloneqq\sum_{n_{1},n_{2}\geq 0}\frac{q^{n_{1}^{2}+3n_{1}n_{2}+3n _{2}^{2}+n_{1}+3n_{2}}}{(q;q)_{n_{1}}(q^{3};q^{3})_{n_{2}}}\ \stackrel{{?}}{{=}}\frac{1}{(q^{2},q^{3},q^{6},q^{7};q^{9})_{ \infty}},\] (7) \[f_{3}(q) \coloneqq\sum_{n_{1},n_{2}\geq 0}\frac{q^{n_{1}^{2}+3n_{1}n_{2}+3n _{2}^{2}+2n_{1}+3n_{2}}}{(q;q)_{n_{1}}(q^{3};q^{3})_{n_{2}}}\ \stackrel{{?}}{{=}}\frac{1}{(q^{3},q^{4},q^{5},q^{6};q^{9})_{ \infty}}. \tag{8}\]
The Nahm sums in the left-hand sides are given by the matrix \(A=\left[\begin{smallmatrix}2&1\\ 3&2\end{smallmatrix}\right]\) with the symmetrizer \(D=\operatorname{diag}(1,3)\). The three linear terms are given by \(b=\left[\begin{smallmatrix}0\\ 0\end{smallmatrix}\right]\), \(\left[\begin{smallmatrix}1\\ 3\end{smallmatrix}\right]\), and \(\left[\begin{smallmatrix}2\\ 3\end{smallmatrix}\right]\), respectively. Assuming these identities (6)-(8), we see that the vector-valued function
\[g(\tau)=\begin{pmatrix}q^{-1/18}f_{1}(q)\\ q^{5/18}f_{2}(q)\\ q^{11/18}f_{3}(q)\end{pmatrix}\]
satisfies the following modular transformation formula:
\[g(\tau+1)=\begin{pmatrix}\zeta_{18}^{-1}&0&0\\ 0&\zeta_{18}^{5}&0\\ 0&0&\zeta_{18}^{11}\end{pmatrix}g(\tau),\quad g(-\frac{1}{\tau})=\begin{pmatrix} \alpha_{1}&\alpha_{2}&\alpha_{4}\\ \alpha_{2}&-\alpha_{4}&-\alpha_{1}\\ \alpha_{4}&-\alpha_{1}&\alpha_{2}\end{pmatrix}g(\frac{\tau}{3}) \tag{9}\]
where \(\alpha_{k}=\frac{1}{2\sqrt{3}\sin\frac{k\pi}{9}}\). The formula (9) implies that \(g(\tau)\) is a vector-valued modular function on the congruence subgroup \(\Gamma_{0}(3)\coloneqq\left\{\left[\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right]\mid c\equiv 0\bmod 3\right\}\subseteq\mathrm{SL}(2,\mathbb{Z})\).
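The truncated series are easy to evaluate numerically. The following minimal Python sketch (an illustration added here, with ad hoc truncation orders and sample point) compares the Nahm sum on the left-hand side of (6) with the conjectural product side:

```python
def poch(x, q, n):
    """Finite q-Pochhammer symbol (x; q)_n."""
    out = 1.0
    for i in range(n):
        out *= 1.0 - x * q**i
    return out

def kr_nahm_sum(q, nmax=40):
    """Truncation of the Nahm sum on the left-hand side of (6)."""
    return sum(q**(n1*n1 + 3*n1*n2 + 3*n2*n2)
               / (poch(q, q, n1) * poch(q**3, q**3, n2))
               for n1 in range(nmax) for n2 in range(nmax))

q = 0.1
lhs = kr_nahm_sum(q)
rhs = 1.0
for i in range(100):                       # truncated infinite products
    for a in (1, 3, 6, 8):
        rhs /= 1.0 - q**(a + 9*i)
print(lhs, rhs)   # the two values should agree closely if (6) holds
```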
Nahm sums for symmetrizable matrices also appear in various areas of mathematics, such as partition identities [10, 11, 12, 20], character formulas of principal subspaces for affine Lie algebras of twisted type [1, 6, 15, 16, 17], and \(Y\)-systems on cluster algebras associated with skew-symmetrizable matrices [13].
The purpose of this paper is to demonstrate that the theory of Nahm sums for symmetric matrices can be similarly applied to Nahm sums for symmetrizable matrices. In Section 3.2, we see that one direction of Nahm's conjecture, proved by Calegari, Garoufalidis, and Zagier in [2] for the symmetric case, also holds for the symmetrizable case (Theorem 3.3). As with the symmetric case, the proof uses the injectivity of the map \(R_{\zeta}:B(F)/mB(F)\to F_{m}^{\times}/F_{m}^{\times m}\). Assuming the injectivity result, the main step of the proof is to study the asymptotic behavior of the Nahm sum at roots of unity, which is given in Section 2. We provide the explicit formula for the asymptotic expansion of Nahm sums, which is a generalization of the symmetric case given in [5]. We also need some number-theoretic properties of the constant term in the asymptotic expansion, which are provided in Proposition 3.2. The main ingredient in the constant term is the _cyclic quantum dilogarithm function_. In Appendix A, we provide some properties of the cyclic quantum dilogarithm function that we use in the proof of Proposition 3.2. Finally, in Section 4, we present some experiments on modular Nahm sums for symmetrizable matrices. We use a numerical method from [22] to search for modular Nahm sums in the rank 2 and rank 3 cases. The resulting lists of modular candidates are given in Table 1 for the rank 2 case, and Tables 2 and 3 for the rank 3 case. For rank 2 we might expect that Table 1, together with the symmetric case in [22, Table 2], exhausts all modular Nahm sums. In addition, many candidates in Table 1 are already known in the literature, and some of them are known to be modular. For rank 3, however, the number of modular candidates explodes. We should emphasize that Tables 2 and 3 are just the tip of the iceberg. Providing a complete picture and a systematic understanding are left as future tasks. The only observation on the rank 3 lists in this paper is that there seem to be "Langlands dual" pairs of modular Nahm sums. As an example, we verify numerically that two Nahm sums associated with \(A=\left[\begin{smallmatrix}1&0&1/2\\ 0&2&1\\ 1&2&2\end{smallmatrix}\right]\) and \(A^{\vee}=\left[\begin{smallmatrix}1&0&1\\ 0&2&2\\ 1/2&1&2\end{smallmatrix}\right]\), which are related by taking the transpose, are related by the modular \(S\)-transformation.
_Acknowledgment._ The author would like to thank Shunsuke Tsuchioka for fruitful discussions and for pointing out the identity (49). This work is supported by JSPS KAKENHI Grant Number JP21J00050.
## 2 Asymptotics of Nahm sums
Let \(N\) be a natural number. Suppose that \(Q=(A,b,c,d)\) is a quadruple where \(A\in\mathbb{Q}^{N\times N}\) is a matrix, \(b\in\mathbb{Q}^{N}\) is a vector, \(c\in\mathbb{Q}\) is a scalar, and \(d\in\mathbb{Z}^{N}\) is a vector, such that \(AD\) is symmetric positive definite where \(D=\mathrm{diag}(d_{1},\ldots,d_{N})\). The _Nahm sum_ associated with \(Q\) is a \(q\)-series defined by
\[f_{Q}(q)=\sum_{n\in\mathbb{N}^{N}}\frac{q^{Q(n)}}{(q^{d_{1}};q^{d_{1}})_{n_{1} }\cdots(q^{d_{N}};q^{d_{N}})_{n_{N}}} \tag{10}\]
where, by abuse of notation, we denote by
\[Q(n)=\frac{1}{2}n^{\mathsf{T}}ADn+n^{\mathsf{T}}b+c \tag{11}\]
the quadratic form associated with \(Q=(A,b,c,d)\). We say that \(Q\) is _symmetric_ if \(d_{i}=1\) for any \(i\). We also use the term _symmetrizable_ to mean not necessarily symmetric.
### Asymptotics at roots of unity
In this section, we study the asymptotics of Nahm sums \(f_{Q}(q)\) at roots of unity. We regard \(f_{Q}(q)\) as a function on the upper half-plane by setting \(q^{\lambda}=e^{2\pi i\tau\lambda}\) for any \(\lambda\in\mathbb{Q}\) and \(\tau\in\mathbb{C}\) with \(\Im\tau>0\). We will consider the limit where \(\tau\) tends to a rational number from above in the complex plane.
Let \((z_{1},\dots,z_{N})\in\mathbb{R}^{N}\) be the unique solution of the _Nahm's equation_
\[1-z_{i}=\prod_{j=1}^{N}z_{j}^{A_{ij}} \tag{12}\]
such that \(0<z_{i}<1\) for any \(i\). In fact, the existence and the uniqueness are proved in the same way as in the symmetric case (see e.g. [21, Lemma 2.1]) by using the fact that \(AD\) is positive definite. We define a real number
\[\Lambda=-\sum_{i=1}^{N}d_{i}^{-1}\mathrm{L}(z_{i}) \tag{13}\]
where
\[\mathrm{L}(z)=\mathrm{Li}_{2}(z)+\frac{1}{2}\log(z)\log(1-z)-\frac{\pi^{2}}{6} \tag{14}\]
is the Rogers dilogarithm function shifted by the constant \(\pi^{2}/6\) so that \(\mathrm{L}(1)=0\).
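The solution of the Nahm's equation in \((0,1)^{N}\) and the number \(\Lambda\) can be computed numerically. The sketch below uses the Kanade-Russell data \(A=\left[\begin{smallmatrix}2&1\\ 3&2\end{smallmatrix}\right]\), \(d=(1,3)\) as an example input; the initial guess for the root finder and the series truncation for the dilogarithm are ad hoc choices:

```python
import numpy as np
from scipy.optimize import fsolve

A = np.array([[2.0, 1.0], [3.0, 2.0]])    # symmetrizable matrix
d = np.array([1.0, 3.0])                  # symmetrizer D = diag(d)

def nahm_equation(z):
    # residual of Eq. (12): 1 - z_i - prod_j z_j^{A_ij}
    return 1.0 - z - np.prod(z**A, axis=1)

z = fsolve(nahm_equation, 0.5 * np.ones(len(d)))   # solution in (0,1)^N

def li2(x, terms=100000):                 # dilogarithm Li_2 via its series
    k = np.arange(1, terms + 1)
    return np.sum(x**k / k**2)

def rogers_L(x):                          # Eq. (14), normalised so that L(1) = 0
    return li2(x) + 0.5 * np.log(x) * np.log(1.0 - x) - np.pi**2 / 6.0

Lam = -sum(rogers_L(zi) / di for zi, di in zip(z, d))   # Eq. (13)
print(z, Lam)
```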
For an invertible matrix \(A\) and an analytic function \(f(x)=f(x_{1},\dots,x_{N})\), we consider the formal Gaussian integral:
\[\mathbf{I}_{A}[f]=\frac{\int_{\mathbb{R}^{N}}e^{-\frac{1}{2}x^{ \mathsf{T}}Ax}f(x)\,dx}{\int_{\mathbb{R}^{N}}e^{-\frac{1}{2}x^{\mathsf{T}}Ax} \,dx}. \tag{15}\]
For a positive integer \(m\), an \(m\)th root of unity \(\zeta\), and a complex number \(w\) with \(|w|<1\), we define a formal power series
\[\psi_{w,\zeta}(\nu,\varepsilon)=-\sum_{r\geq 2}\sum_{t=1}^{m}\Bigl{(}B_{r} \Bigl{(}1-\frac{t+\nu}{m}\Bigr{)}-\delta_{2,r}\frac{\nu^{2}}{m^{2}}\Bigr{)} \,\mathrm{Li}_{2-r}(\zeta^{t}w)\frac{\varepsilon^{r-1}}{r!}, \tag{16}\]
where \(B_{r}(x)\) is the Bernoulli polynomial of degree \(r\) defined by \(te^{tx}/(e^{t}-1)=\sum_{r=0}^{\infty}B_{r}(x)t^{r}/r!\), and \(\mathrm{Li}_{r}(w)=\sum_{k=1}^{\infty}w^{k}/k^{r}\) is the polylogarithm function. We then consider the formal Gaussian integral
\[I_{Q,\zeta}(k,\varepsilon)=\mathbf{I}_{\frac{1}{m}\widetilde{A}D}\left[e^{-\frac{1}{m}x^{\mathsf{T}}b\varepsilon^{1/2}-\frac{1}{m}(c+\frac{\operatorname{tr}D}{24})\varepsilon}\prod_{i=1}^{N}\exp\Bigl{(}\psi_{\zeta_{i}^{k_{i}}z_{i}^{\frac{1}{2}},\zeta_{i}}(x_{i}\varepsilon^{-1/2},d_{i}\varepsilon)\Bigr{)}\right], \tag{17}\]
for any \(k\in(\mathbb{Z}/m\mathbb{Z})^{N}\) where
\[\widetilde{A}=A+\mathrm{diag}(z/(1-z)) \tag{18}\]
and \(\zeta_{i}=\zeta^{d_{i}}\). The expression (17) has a well-defined meaning as a formal power series in \(\mathbb{C}[\![\varepsilon]\!]\).
For any coprime integers \(p\) and \(q\), we define the _Dedekind sum_ by
\[s(p,q)\coloneqq\sum_{r\bmod q}\left(\!\!\left(\frac{r}{q}\right)\!\right)\left( \!\!\left(\frac{pr}{q}\right)\!\right) \tag{19}\]
where we use the saw wave function defined by
\[((x))=\begin{cases}x-\lfloor x\rfloor-\frac{1}{2}&\text{if }x\notin\mathbb{Z} \\ 0&\text{if }x\in\mathbb{Z}.\end{cases} \tag{20}\]
For example, we have \(s(1,m)=\frac{(m-1)(m-2)}{12m}\).
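The Dedekind sum can be evaluated exactly with rational arithmetic; the short check below (added here as an illustration) reproduces the stated value of \(s(1,m)\):

```python
from fractions import Fraction
from math import floor, gcd

def saw(x):
    """Sawtooth function ((x)) of Eq. (20), evaluated exactly on rationals."""
    return Fraction(0) if x == floor(x) else x - floor(x) - Fraction(1, 2)

def dedekind_sum(p, q):
    """Dedekind sum s(p, q) of Eq. (19) for coprime integers p and q."""
    assert gcd(p, q) == 1
    return sum(saw(Fraction(r, q)) * saw(Fraction(p * r, q)) for r in range(q))

m = 7
print(dedekind_sum(1, m), Fraction((m - 1) * (m - 2), 12 * m))  # both should be 5/14
```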
We say that a positive integer \(\delta\) is a _strong denominator_ of \(Q\) if the value of \(Q(k)\) modulo \(1\) for \(k\in\mathbb{Z}^{N}\) depends only on the residue class of \(k\) modulo \(\delta\). We define the quadratic Gauss sum
\[G(Q,\alpha)=\frac{1}{\delta^{N}}\sum_{k\in(\mathbb{Z}/\delta\mathbb{Z})^{N}} \mathbf{e}(\bar{\alpha}Q(k)) \tag{21}\]
where \(\mathbf{e}(x)=e^{2\pi ix}\), \(\bar{\alpha}\) is the reduction of \(\alpha\) modulo \(\delta\), and \(\delta\) is a chosen strong denominator of \(Q\). The \(G(Q,\alpha)\) does not depend on the choice of \(\delta\).
Let \(\tilde{f}_{Q}(\tau)=f_{Q}(e^{2\pi i\tau})\). We now have the following asymptotic formula for the Nahm sum, whose proof is obtained as a minor modification of the proof for the symmetric case in [5].
**Theorem 2.1**.: _Let \(Q=(A,b,c,d)\) be a quadruple as above, and \(\delta\) be a strong denominator of \(Q\). Suppose that \(\alpha\) is a rational number whose denominator \(m\) is prime to \(\delta\) and \(d_{1},\ldots,d_{N}\). Let \(\zeta=\mathbf{e}(\alpha)\) be the corresponding \(m\)th root of unity. Let \(\theta_{i}\) be the \(m\)th root of \(z_{i}\) such that \(\theta_{i}\in(0,1)\), where \((z_{1},\ldots,z_{N})\) is the unique solution of the Nahm's equation (12) such that \(z_{i}\in(0,1)\). Then we have the asymptotic formula_
\[e^{-\frac{\Lambda}{m\varepsilon}}\tilde{f}_{Q}\Big{(}\alpha+\frac{i\varepsilon}{2\pi m}\Big{)}\sim m^{-\frac{N}{2}}\chi(d,\alpha)c(Q)G(Q,\alpha)S_{Q,\zeta}(\varepsilon) \tag{22}\]
_as \(\varepsilon\) tends to \(0\) from the right-half in the complex plane. Here \(\chi(d,\alpha)=\prod_{i=1}^{N}\chi_{i}\) is the product of the \(12\)th root of \(\zeta\) defined by \(\chi_{i}=\mathbf{e}(s(p_{i},q_{i})/2)\) where \(p_{i}/q_{i}=\alpha d_{i}\),_
\[c(Q)=(\det\widetilde{A})^{-\frac{1}{2}}\prod_{i=1}^{N}\theta_{i}^{b_{i}/d_{i} }(1-z_{i})^{\frac{1}{2}-\frac{1}{m}}, \tag{23}\]
_and_
\[S_{Q,\zeta}(\varepsilon)=\left(\prod_{i=1}^{N}D_{\zeta_{i}}(\zeta_{i}\theta_ {i})^{-\frac{1}{m}}\right)\sum_{k\in(\mathbb{Z}/m\mathbb{Z})^{N}}\zeta^{ \overline{Q(k)}}\!\!\left(\prod_{i=1}^{N}\frac{\theta_{i}^{(k^{\mathrm{T}}A)_ {i}}}{(\zeta_{i}\theta_{i};\zeta_{i})_{k_{i}}}\right)\!I_{Q,\zeta}(k, \varepsilon), \tag{24}\]
_where \(D_{\zeta}(x)\) is the cyclic quantum dilogarithm function defined by (55), and \(\overline{Q(k)}\) is the reduction of \(Q(k)\) modulo \(m\)._
**Remark 2.2**.: We have corrected the following errors in [5]: (a) the sign of \(-\frac{1}{m}x^{\mathsf{T}}b\varepsilon^{1/2}\) in (17), (b) the missing term \(\frac{\mathrm{tr}\,D}{24}\) (which reduces \(\frac{N}{24}\) for symmetric case) in (17), (c) the absence of the Dedekind sum in \(\chi(d,\alpha)\).
**Remark 2.3**.: When \(\alpha=0\), the asymptotic formula for symmetrizable Nahm sums was previously obtained in [8].
## 3 Nahm sums and Bloch groups
### Bloch groups
Let \(F\) be a number field. We denote by \(Z(F)\) the (additive) free abelian group on \(\mathbb{P}^{1}(F)=F\sqcup\{\infty\}\). We denote by \([X]\in Z(F)\) the element corresponding to \(X\in\mathbb{P}^{1}(F)\). The _Bloch group_ of \(F\) is the quotient
\[B(F)=A(F)/C(F) \tag{25}\]
where \(A(F)\) is the kernel of the map
\[d:Z(F)\to\wedge^{2}F^{\times},\quad[X]\to X\wedge(1-X) \tag{26}\]
(and \([0],[1],[\infty]\mapsto 0\)), and \(C(F)\subseteq A(F)\) is the subgroup generated by the _five-term relation_
\[[X]-[Y]+\left[\frac{Y}{X}\right]-\left[\frac{1-X^{-1}}{1-Y^{-1}} \right]+\left[\frac{1-X}{1-Y}\right] \tag{27}\]
for any \(X,Y\in\mathbb{P}^{1}(F)\) except where \(\frac{0}{0}\) or \(\frac{\infty}{\infty}\) appears in (27). There are several conventions for the definition of the Bloch group, which agree up to \(6\)-torsion. We use the definition in [2].
Let \(F_{m}\) be a field obtained by adjoining to \(F\) a primitive \(m\)th root of unity \(\zeta=\zeta_{m}\). The extension \(F_{m}/F\) is Galois, and we have an injective morphism
\[\chi:\operatorname{Gal}(F_{m}/F)\to(\mathbb{Z}/m\mathbb{Z})^{\times}\]
determined by \(\sigma(\zeta)=\zeta^{\chi(\sigma)}\) for any \(\sigma\). The group \(F_{m}^{\times}/F_{m}^{\times m}\) is equipped with the (multiplicative) \(\mathbb{Z}/m\mathbb{Z}\)-module structure given by \(k\mapsto(x\mapsto x^{k})\). We define the \(\chi^{-1}\)-eigenspace by
\[(F_{m}^{\times}/F_{m}^{\times m})^{\chi^{-1}}\coloneqq\{x\in F_{m}^{\times}/F _{m}^{\times m}\mid\forall\sigma\in\operatorname{Gal}(F_{m}/F),\ \sigma(x)=x^{\chi(\sigma)^{-1}}\}.\]
For a number field \(F\) that does not contain any non-trivial \(m\)th root of unity, Calegari-Garoufalidis-Zagier [2] defined a map
\[R_{\zeta}:B(F)/mB(F)\to F_{m}^{\times}/F_{m}^{\times m} \tag{28}\]
such that its image is contained in the \(\chi^{-1}\)-eigenspace. This map is defined by using the cyclic quantum dilogarithm function (55). They prove the following injectivity result:
**Theorem 3.1**.: _[_2_, Theorem 1.2]_ _Let \(F\) be a number field. There exists an integer \(M\) (depending only on \(F\)) such that, for any integer \(m\) prime to \(M\) and any \(m\)th root of unity \(\zeta\), the map \(R_{\zeta}\) is injective._
The proof in [2] uses the compatibility between \(R_{\zeta}\) and a Chern class map in algebraic \(K\)-theory, and the injectivity of this Chern class map.
### Nahm's conjecture for symmetrizable case
We first observe number-theoretic features of the constant term in the asymptotic expansion (22). Let \(Q=(A,b,c,d)\) be a quadruple as in Section 2. We fix a strong denominator \(\delta\) of \(Q\) such that \(\delta d_{i}^{-1}b_{i}\) and \(\delta d_{i}^{-1}A_{ij}\) are integers for any \(i,j\). Let \(F\) be a number field obtained by adjoining \(z_{1}^{1/\delta},\ldots,z_{N}^{1/\delta}\) to \(\mathbb{Q}\), where \(z_{i}\) is the solution of the Nahm's equation (12) such that
\(0<z_{i}<1\). Let \(m\) be an integer prime to \(\delta\) and \(d_{1},\dots,d_{N}\). Recall that \(F_{m}\) is a field obtained by adjoining to \(F\) a primitive \(m\)th root of unity \(\zeta\). We define a number
\[u=\biggl{(}\prod_{i=1}^{N}\theta_{i}^{b_{i}/d_{i}}(1-z_{i})^{-\frac{1}{m}}D_{ \zeta_{i}}(\zeta_{i}\theta_{i})^{-\frac{1}{m}}\biggr{)}\sum_{k\in(\mathbb{Z}/m \mathbb{Z})^{N}}\zeta^{\overline{Q(k)}}\prod_{i=1}^{N}\frac{\theta_{i}^{(k^{ \mathsf{T}}A)_{i}}}{(\zeta_{i}\theta_{i};\zeta_{i})_{k_{i}}} \tag{29}\]
which is a part of the constant term in the asymptotic formula (22). We see that the \(m\)th power of \(u\) satisfies the following properties, which is stated (without a proof) in [2, Theorem 7.1] for the symmetric case. We will provide the proof for the sake of completeness.
**Proposition 3.2**.: _We have \(u^{m}\in F_{m}\). Moreover, if \(u^{m}\neq 0\), then its image in \(F_{m}^{\times}/F_{m}^{\times m}\) belongs to the \(\chi^{-1}\)-eigenspace._
Proof.: We first see that \(u\) can be also written as
\[u=\biggl{(}\prod_{i=1}^{N}\theta_{i}^{b_{i}/d_{i}}D_{\zeta_{i}}(\theta_{i})^{ -\frac{1}{m}}\biggr{)}a_{\zeta}(\theta)\]
where we define
\[a_{\zeta}(\theta)=\sum_{k\in(\mathbb{Z}/m\mathbb{Z})^{N}}\zeta^{\overline{Q(k) }}\prod_{i=1}^{N}\frac{\theta_{i}^{(k^{\mathsf{T}}A)_{i}}}{(\theta_{i};\zeta_ {i})_{k_{i}+1}}. \tag{30}\]
This follows from the formula
\[\frac{D_{\zeta}(\zeta x)}{D_{\zeta}(x)}=\frac{(1-x)^{m}}{1-x^{m}}. \tag{31}\]
Also note that the summand in (30) is well-defined over the set \((\mathbb{Z}/m\mathbb{Z})^{N}\), which follows from the fact that \(\theta_{i}^{m}\) is a solution of the Nahm's equation (12) (see Lemma 3.1 in [4]).
Let \(H\) be the Kummer extension of \(F_{m}\) obtained by adjoining the \(m\)th roots of \(z_{1}^{1/\delta},\dots,z_{N}^{1/\delta}\) to \(F_{m}\). We see that \(u^{m}\in H\) by definition. We now see that \(u^{m}\) belongs to the smaller field \(F_{m}\) by showing that it is fixed by any automorphism \(\tau\in\operatorname{Gal}(H/F_{m})\). Let \(e_{i}\) be integers such that \(\zeta_{i}^{e_{i}}=\tau(\theta_{i})/\theta_{i}\). Then we see that
\[\tau\biggl{(}\prod_{i=1}^{N}D_{\zeta_{i}}(\theta_{i})^{-1}\biggr{)}=\prod_{i= 1}^{N}D_{\zeta_{i}}(\theta_{i})^{-1}\frac{\theta_{i}^{m(e^{\mathsf{T}}A)_{i}} }{(\theta_{i};\zeta_{i})_{e_{i}}^{m}} \tag{32}\]
by repeatedly using (31) and also using the fact that \(\theta_{i}^{m}\) is a solution of the Nahm's equation. We also see that
\[\biggl{(}\prod_{i=1}^{N}\frac{\theta_{i}^{m(e^{\mathsf{T}}A)_{i}}}{(\theta_{i };\zeta_{i})_{e_{i}}^{m}}\biggr{)}\cdot\tau(a_{\zeta}(\theta)^{m})=\biggl{(} \sum_{k\in(\mathbb{Z}/m\mathbb{Z})^{N}}\zeta^{\overline{Q(k)}}\prod_{i=1}^{N} \frac{\theta_{i}^{((k+e)^{\mathsf{T}}A)_{i}}}{(\theta_{i};\zeta_{i})_{k_{i}+ e_{i}+1}}\biggr{)}^{m}=a_{\zeta}(\theta)^{m}. \tag{33}\]
The equations (32) and (33) imply that \(\tau(u^{m})=u^{m}\).
Now we assume that \(u^{m}\neq 0\). Suppose that \(\sigma\in\operatorname{Gal}(F_{m}/F)\). Let \(q=\chi(\sigma)\in(\mathbb{Z}/m\mathbb{Z})^{\times}\), and let \(p\in\mathbb{Z}_{>0}\) be a lift of \(q^{-1}\). Let \(\tilde{\sigma}\in\operatorname{Gal}(H/F)\) be a lift of \(\sigma\), and define \(\tilde{\theta}_{i}\coloneqq\tilde{\sigma}(\theta_{i})\). Let \(s_{i}\) be integers such that \(\zeta_{i}^{s_{i}}=\tilde{\theta}_{i}/\theta_{i}\). We define an element
\[v=\biggl{(}\prod_{i=1}^{N}h_{i}\biggr{)}\cdot\frac{a_{\zeta^{q}}(\tilde{ \theta})}{a_{\zeta}(\theta)^{p}}\quad\in H^{\times} \tag{34}\]
where
\[h_{i}=\frac{\tilde{\theta}_{i}^{b_{i}/d_{i}}}{\theta_{i}^{b_{i}/d_{i}}}\cdot\frac {\theta_{i}^{p(s^{\mathsf{T}}A)_{i}}}{\theta_{i}^{(\mathsf{T}_{i}^{\mathsf{ \prime}})_{psi}}}\cdot\prod_{t=1}^{p-1}\frac{1}{(\theta_{i};\zeta_{i})_{qt}} \quad\in H^{\times}. \tag{35}\]
Then we have \(v^{m}\equiv\sigma(u^{m})/u^{mp}\bmod F_{m}^{\times m}\) by Proposition A.1 in Appendix A. Thus, to prove the proposition, it suffices to see that \(v\in F_{m}\) by showing that \(v\) is fixed by any automorphism \(\tau\in\operatorname{Gal}(H/F_{m})\). Again, let \(e_{i}\) be integers such that \(\zeta_{i}^{e_{i}}=\tau(\theta_{i})/\theta_{i}\). We see that
\[\prod_{i=1}^{N}\frac{\tau(h_{i})}{h_{i}} =\zeta^{ps^{\mathsf{T}}ADe+be^{\mathsf{T}}-pbe^{\mathsf{T}}}\prod _{i=1}^{N}\frac{(\theta_{i};\zeta_{i})_{e_{i}}^{p}(\theta_{i};\zeta_{i}^{q})_{ ps_{i}}}{(\theta_{i};\zeta_{i}^{q})_{pe_{i}}(\zeta^{e_{i}}\theta_{i};\zeta_{i}^{q})_{ ps_{i}}}\] \[=\zeta^{ps^{\mathsf{T}}ADe+be^{\mathsf{T}}-pbe^{\mathsf{T}}}\prod _{i=1}^{N}\frac{(\theta_{i};\zeta_{i})_{e_{i}}^{p}}{(\tilde{\theta}_{i};\zeta_ {i}^{q})_{pe_{i}}} \tag{36}\] \[=\zeta^{be^{\mathsf{T}}-pbe^{\mathsf{T}}}\prod_{i=1}^{N}\frac{ \tilde{\theta}_{i}^{p(e^{\mathsf{T}}A)_{i}}(\theta_{i};\zeta_{i}^{q})_{e_{i}} }{\theta_{i}^{(e^{\mathsf{T}}A)_{i}}(\tilde{\theta}_{i};\zeta_{i}^{q})_{pe_{i }}},\]
where we use (59) in the first equality. We now compute
\[\zeta^{be^{\mathsf{T}}}\biggl{(}\prod_{i=1}^{N}\frac{\tilde{ \theta}_{i}^{p(e^{\mathsf{T}}A)_{i}}}{(\tilde{\theta}_{i};\zeta_{i}^{q})_{pe_ {i}}}\biggr{)}\tau(a_{\zeta^{q}}(\tilde{\theta})) =\sum_{k\in(\mathbb{Z}/m\mathbb{Z})^{N}}\zeta^{q\overline{Q(pk)}+ pk^{\mathsf{T}}ADe+be^{\mathsf{T}}}\prod_{i=1}^{N}\frac{\tilde{\theta}_{i}^{p((k+e)^{ \mathsf{T}}A)_{i}}}{(\tilde{\theta}_{i};\zeta_{i}^{q})_{p(k+e_{i})+1}} \tag{37}\] \[=\sum_{k\in(\mathbb{Z}/m\mathbb{Z})^{N}}\zeta^{q\overline{Q(p(k-e ))}+p(k-e)^{\mathsf{T}}ADe+be^{\mathsf{T}}}\prod_{i=1}^{N}\frac{\tilde{\theta}_ {i}^{p(k^{\mathsf{T}}A)_{i}}}{(\tilde{\theta}_{i};\zeta_{i}^{q})_{pk_{i}+1}}\] \[=\zeta^{-p^{\mathsf{T}}ADe}a_{\zeta^{q}}(\tilde{\theta}).\]
Similarly, we also compute
\[\zeta^{pbe^{\mathsf{T}}}\biggl{(}\prod_{i=1}^{N}\frac{\theta_{i}^ {p(e^{\mathsf{T}}A)_{i}}}{(\theta_{i};\zeta_{i})_{e_{i}}^{p}}\biggr{)}\tau(a_{ \zeta}(\theta))^{p} =\biggl{(}\sum_{k\in(\mathbb{Z}/m\mathbb{Z})^{N}}\zeta^{\overline {Q(k)}+k^{\mathsf{T}}ADe+be^{\mathsf{T}}}\prod_{i=1}^{N}\frac{\theta_{i}^{((k+ e)^{\mathsf{T}}A)_{i}}}{(\theta_{i};\zeta_{i})_{k_{i}+e_{i}+1}}\biggr{)}^{p} \tag{38}\] \[=\biggl{(}\sum_{k\in(\mathbb{Z}/m\mathbb{Z})^{N}}\zeta^{\overline {Q(k-e)}+(k-e)^{\mathsf{T}}ADe+be^{\mathsf{T}}}\prod_{i=1}^{N}\frac{\theta_{i}^ {(k^{\mathsf{T}}A)_{i}}}{(\theta_{i};\zeta_{i})_{k_{i}+1}}\biggr{)}^{p}\] \[=\zeta^{-p^{\mathsf{T}}ADe}a_{\zeta}(\theta)^{p}.\]
Combining (34)-(38), we see that \(\tau(v)=v\), which completes the proof.
We now prove the one direction of Nahm's conjecture for symmetrizable case. We define an element in the Bloch group \(B(F)\) by
\[\xi_{A,d}=\sum_{i=1}^{N}d_{i}^{-1}[z_{i}].\]
To be precise, \(\xi_{A,d}\) itself does not belong to \(B(F)\), rather a multiple of \(\xi_{A,d}\) by at least \(\operatorname{lcm}(d_{1},\dots,d_{N})\) belongs to \(B(F)\). But we shall not concern ourselves too much with that since we will only care whether \(\xi_{A,d}\) is a torsion or not. Finally, we say that the Nahm sum \(f_{A,b,c,d}(q)\) is modular to mean that the function \(\tilde{f}(\tau)=f_{A,b,c,d}(e^{2\pi i\tau})\) is invariant with respect to some finite index subgroup of \(\operatorname{SL}(2,\mathbb{Z})\).
**Theorem 3.3**.: _Suppose that the Nahm sum \(f_{Q}(q)\) associated with \(Q=(A,b,c,d)\) is modular. Then \(\xi_{A,d}\) is a torsion in the Bloch group \(B(F)\)._
Proof.: Suppose that \(\tilde{f}(\tau)=f_{A,b,c,d}(e^{2\pi i\tau})\) is modular with respect to a finite index subgroup \(\Gamma\subseteq\operatorname{SL}(2,\mathbb{Z})\). We can assume that \(\Gamma\) is contained in the principal congruence subgroup \(\Gamma(M)\) for a fixed integer \(M\) by replacing \(\Gamma\) by its intersection with \(\Gamma(M)\).
By the asymptotic formula (22) for \(\alpha=0\), we have
\[\tilde{f}\Big{(}\frac{i\varepsilon}{2\pi}\Big{)}=e^{-\frac{\Lambda}{ \varepsilon}}(K+O(\varepsilon)) \tag{39}\]
as \(\varepsilon\) tends to \(0\) from the right, where \(K\) is an algebraic number such that \(K^{2}\in F^{\times}\). The number \(\lambda=\Lambda/(2\pi)^{2}\) must be rational since the modularity of \(\tilde{f}(\tau)\) implies that the function \(\tilde{f}(-1/\tau)\) is invariant under some power of \(\left[\begin{smallmatrix}1&1\\ 0&1\end{smallmatrix}\right]\). We denote by \(\ell\) the denominator of \(\lambda\).
For any positive real number \(h\) and any \(\left[\begin{smallmatrix}p&q\\ r&s\end{smallmatrix}\right]\in\Gamma\), taking \(\varepsilon=\frac{sh}{1-irh/2\pi}\), we find that
\[\tilde{f}\Big{(}\frac{i\varepsilon}{2\pi}\Big{)}=\tilde{f}\Big{(}\frac{pi \varepsilon/2\pi+q}{ri\varepsilon/2\pi+s}\Big{)}=\tilde{f}\Big{(}\alpha+\frac{ ih}{2\pi m}\Big{)} \tag{40}\]
where we set \(\alpha=q/s\) and \(m=s\). Comparing the asymptotics of the left and the right expressions in (40) for \(h\) tending to \(0\) on the positive real line by using (39) and (22), we have
\[K\mathbf{e}(\lambda r/m)=\mu u_{\zeta} \tag{41}\]
where \(u_{\zeta}=u\) is defined by (29), and \(\mu\) is an algebraic number such that \(\mu^{12}\in F_{m}^{\times}\). The equation (41) implies that \(u_{\zeta}^{12\ell}\in F_{m}^{\times}\). Since \(\Gamma\) is contained in \(\Gamma(M)\), we see that \(m\) is prime to \(M\).
Consequently, we see that there are infinitely many \(m\) such that \(u_{\zeta}^{12\ell}\) for a primitive \(m\)th root of unity \(\zeta\) belongs to \(F_{m}^{\times}\). By Proposition 3.2 and [2, Lemma 2.4 (e) and Remark 2.6], this implies that \(R_{\zeta}(\xi_{A,d})^{12\ell}\) is trivial in \(F_{m}^{\times}/F_{m}^{\times m}\) for infinitely many \(m\). Thus, by the injectivity result Theorem 3.1, we see that \(12\ell\xi_{A,d}\) has a trivial image in \(B(F)/mB(F)\) for infinitely many \(m\). If \(\xi_{A,d}\) is not a torsion, this contradicts the fact that \(B(F)\) is a finitely generated abelian group.
## 4 Experiments on modular Nahm sums
In this section, we present some experiments on modular Nahm sums for symmetrizable matrices by using the numerical method explained in [22]. The method aims to detect, for a given \(q\)-series \(f(q)\), whether \(q^{c}f(q)\) is likely modular for some \(c\). The actual procedure is as follows:
1. Compute \(\phi(N)=N\log f(e^{-1/N})\) for four successive values (say \(N=20,21,22,23\)).
2. Take the third difference of the values computed in (1). If this value is extremely small, then the Nahm sum \(f(q)\) is likely modular.
This method is justified by the following reasoning: if \(f(q)\) is modular, \(\phi(N)\) is approximated to high precision by a quadratic polynomial in \(N\) (see [21, Lemma 3.1]), and thus its third difference should be extremely small. For Nahm sums we actually only need three values in step (1), since we know the leading coefficient of the quadratic polynomial from the formula (13). We use this method to give lists of candidates of modular Nahm sums in rank \(2\) (Table 1), and rank \(3\) with specific symmetrizers (Table 2 for \(d=(1,1,2)\) and Table 3 for \(d=(2,2,1)\)).
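A minimal implementation of this test (added here as an illustration; the rank-1 example, truncation order, and working precision are ad hoc choices):

```python
from mpmath import mp, mpf, exp, log

mp.dps = 60   # high working precision so that a tiny third difference is visible

def nahm_sum_rank1(q, A=2, b=0, nmax=80):
    """Truncated rank-1 Nahm sum  sum_n q^(A n^2/2 + b n) / (q; q)_n."""
    total, poch = mpf(0), mpf(1)
    for n in range(nmax):
        total += q**(mpf(A) * n * n / 2 + b * n) / poch
        poch *= 1 - q**(n + 1)            # poch becomes (q; q)_{n+1}
    return total

def third_difference(f, N0=20):
    """Third difference of phi(N) = N log f(e^{-1/N}) over N0, ..., N0 + 3."""
    phi = [N * log(f(exp(mpf(-1) / N))) for N in range(N0, N0 + 4)]
    return phi[3] - 3 * phi[2] + 3 * phi[1] - phi[0]

# For the modular Rogers-Ramanujan case A = 2, b = 0 the result should be tiny;
# a generic A typically gives a visibly larger value.
print(third_difference(nahm_sum_rank1))
```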
We provide some comments on the lists. The matrix \(A\) in the right table in Table 1 is the inverse of the matrix \(A\) of the same row in the left table. More precisely, we expect that the following holds in general:
**Conjecture 4.1**.: _Suppose that the Nahm sum associated with \((A,b,c,d)\) is modular. Then the Nahm sum associated with \((A^{*},b^{*},c^{*},d^{*})\) is also modular, where_
\[A^{*}=A^{-1},\quad b^{*}=A^{-1}b,\quad c^{*}=\frac{1}{2}b^{\mathsf{T}}(AD)^{-1 }b-\frac{\operatorname{tr}D}{24}-c,\quad d^{*}=d.\]
Next, we provide case-by-case comments on several Nahm sums in Table 1.
* \(A=\left[\begin{smallmatrix}1&1/2\\ 1&1\end{smallmatrix}\right]\). For \(\sigma=0,1\), we define \[f_{-3/56,\sigma}(q) \coloneqq\sum_{\begin{subarray}{c}n_{1},n_{2}\geq 0\\ n_{1}\equiv\sigma\bmod 2\end{subarray}}\frac{q^{\frac{1}{2}n_{1}^{2}+n_{1}n_{2}+n_{2}^ {2}}}{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}}\] (42) \[f_{1/56,\sigma}(q) \coloneqq\sum_{\begin{subarray}{c}n_{1},n_{2}\geq 0\\ n_{1}\equiv\sigma\bmod 2\end{subarray}}\frac{q^{\frac{1}{2}n_{1}^{2}+n_{1}n_{2}+n_{2}^ {2}+n_{2}}}{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}}\] (43) \[f_{9/56,\sigma}(q) \coloneqq\sum_{\begin{subarray}{c}n_{1},n_{2}\geq 0\\ n_{1}\equiv\sigma\bmod 2\end{subarray}}\frac{q^{\frac{1}{2}n_{1}^{2}+n_{1}n_{2}+n_{2} ^{2}+n_{1}+n_{2}}}{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}}\] (44) so that \(f_{c,0}(q)+f_{c,1}(q)\) is the Nahm sum associated with \(A\). We conjecture that the following
Table 1: Candidates of modular Nahm sums for symmetrizable matrices of rank 2.
modular transformation formula holds:
\[g(-\frac{1}{\tau})=\begin{pmatrix}S&S\\ S&-S\end{pmatrix}g(\frac{\tau}{2}), \tag{45}\]
where \(g(\tau)=(f_{-3/56,0}(q),f_{1/56,1}(q),f_{9/56,1}(q),f_{-3/56,1}(q),f_{1/56,0}(q), f_{9/56,0}(q))^{\mathsf{T}}\) and
\[S=\begin{pmatrix}\alpha_{3}&\alpha_{2}&\alpha_{1}\\ \alpha_{2}&-\alpha_{1}&-\alpha_{3}\\ \alpha_{1}&-\alpha_{3}&\alpha_{2}\end{pmatrix},\quad\alpha_{k}=\sqrt{\frac{2 }{7}}\sin\frac{k\pi}{7} \tag{46}\]
\(\bullet\): \(A=\left[\begin{smallmatrix}2&1\\ 3&2\end{smallmatrix}\right]\). The modularity follows from the Kanade-Russell conjecture (6)-(8), as we mentioned in the introduction. \(\bullet\): \(A=\left[\begin{smallmatrix}3&2\\ 4&4\end{smallmatrix}\right]\). We have the following identities:
\[\sum_{n_{1},n_{2}\geq 0}\frac{q^{\frac{3}{2}n_{1}^{2}+4n_{1}n_{2}+4n_{2}^{2}-\frac{1}{2}n_{1}}}{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}} =\sum_{n\geq 0}\frac{q^{n^{2}}}{(q;q)_{n}}=\frac{1}{(q,q^{4};q^{5})_{\infty}}, \tag{47}\] \[\sum_{n_{1},n_{2}\geq 0}\frac{q^{\frac{3}{2}n_{1}^{2}+4n_{1}n_{2}+4n_{2}^{2}+\frac{1}{2}n_{1}+2n_{2}}}{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}} =\sum_{n\geq 0}\frac{q^{n^{2}+n}}{(q;q)_{n}} =\frac{1}{(q^{2},q^{3};q^{5})_{\infty}}. \tag{48}\]
The second equalities in (47) and (48) are the Rogers-Ramanujan identities. For the first equalities, the author learned the following proof from Shunsuke Tsuchioka. We can verify that the identity
\[\sum_{n_{1},n_{2}\geq 0}\frac{q^{\frac{3}{2}n_{1}^{2}+4n_{1}n_{2} +4n_{2}^{2}-\frac{1}{2}n_{1}}}{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}}x^{n_{1}+2n_ {2}}=\sum_{n\geq 0}\frac{q^{n^{2}}}{(q;q)_{n}}x^{n} \tag{49}\]
holds by showing that both sides satisfy the same \(q\)-difference equation, since the coefficients of \(x^{0}\) (resp. \(x^{n}\) for \(n<0\)) of both sides are \(1\) (resp. \(0\)). The first equalities in (47) and (48) are obtained by substituting \(x\) by \(1\) and \(q\), respectively, into (49). In fact, the desired \(q\)-difference equation can be found algorithmically by using the \(q\)-version of Sister Celine's technique (e.g., the qMultiSum package [18] can handle such computations).
\(\bullet\): \(A=\left[\begin{smallmatrix}3/2&1/2\\ 1&1\end{smallmatrix}\right]\). We have the following identities:
\[\sum_{\begin{subarray}{c}n_{1},n_{2}\geq 0\\ n_{1}=0\bmod 2\end{subarray}}\frac{q^{\frac{3}{4}n_{1}^{2}+n_{1}n_{2}+n_{2}^{2} -n_{1}+n_{2}}}{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}}\] \[=\sum_{\begin{subarray}{c}n_{1},n_{2}\geq 0\\ n_{1}=1\bmod 2\end{subarray}}\frac{q^{\frac{3}{4}n_{1}^{2}+n_{1}n_{2}+n_{2}^{2} -\frac{1}{2}n_{1}}}{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}}=\sum_{n\geq 0}\frac{q^{n^{2}+n} }{(q;q)_{2n+1}}=\frac{(q,q^{9},q^{10};q^{10})_{\infty}(q^{8},q^{12},q^{20})}{(q ;q)_{\infty}}, \tag{50}\] \[\sum_{\begin{subarray}{c}n_{1},n_{2}\geq 0\\ n_{1}=1\bmod 2\end{subarray}}\frac{q^{\frac{3}{4}n_{1}^{2}+n_{1}n_{2}+n_{2}^{2} -n_{1}+n_{2}}}{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}}\] \[=\sum_{\begin{subarray}{c}n_{1},n_{2}\geq 0\\ n_{1}=0\bmod 2\end{subarray}}\frac{q^{\frac{3}{4}n_{1}^{2}+n_{1}n_{2}+n_{2}^{2} -\frac{1}{2}n_{1}}}{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}}=\sum_{n\geq 0}\frac{q^{n^{2}}}{(q;q)_{2n}} =\frac{(q^{2},q^{8},q^{10};q^{10})_{\infty}(q^{6},q^{14},q^{20})}{( q;q)_{\infty}},\] (51) \[\sum_{n_{1},n_{2}\geq 0}\frac{q^{\frac{3}{4}n_{1}^{2}+n_{1}n_{2}+n_{2} ^{2}+n_{2}}}{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}} =\sum_{n\geq 0}\frac{q^{n^{2}+n}}{(q;q)_{2n}} =\frac{(q^{3},q^{7},q^{10};q^{10})_{\infty}(q^{4},q^{16},q^{20}) }{(q;q)_{\infty}}, \tag{52}\]
\[\sum_{n_{1},n_{2}\geq 1}\frac{q^{\frac{3}{2}n_{1}^{2}+n_{1}n_{2}+n_{2}^{2}+n_{2}} }{(q;q)_{n_{1}}(q^{2};q^{2})_{n_{2}}} =\sum_{n\geq 0}\frac{q^{n^{2}+2n}}{(q;q)_{2n+1}}=\frac{(q^{4},q^{6},q ^{10};q^{10})_{\infty}(q^{2},q^{18},q^{20})}{(q;q)_{\infty}}. \tag{53}\]
The rightmost equalities in (50), (51), (52), and (53) are (99), (98), (94), and (96), respectively, in Slater's list [19]. The other equalities can be easily verified in the same way as the left equalities in (47) and (48) explained earlier.
\(\bullet\): \(A=\left[\begin{smallmatrix}4&2\\ 6&4\end{smallmatrix}\right]\). The modularity follows from the Capparelli identities [3] in the form presented in [10, 11]:
\[\sum_{n_{1},n_{2}\geq 0}\frac{q^{2n_{1}^{2}+6n_{1}n_{2}+6n_{2}^{2}}}{(q;q)_{n _{1}}(q^{3};q^{3})_{n_{2}}}=(-q^{2},-q^{3},-q^{4},-q^{6};q^{6})_{\infty}.\]
\(\bullet\): \(A=\left[\begin{smallmatrix}3&1\\ 4&2\end{smallmatrix}\right]\). The modularity follows from the Göllnitz-Gordon identities in the form presented by Kursungoz [11, (21) and (22)]:
\[\sum_{n_{1},n_{2}\geq 0}\frac{q^{\frac{3}{2}n_{1}^{2}+4n_{1}n_{2}+4 n_{2}^{2}-\frac{1}{2}n_{1}}}{(q;q)_{n_{1}}(q^{4};q^{4})_{n_{2}}} =\frac{1}{(q,q^{4},q^{7};q^{8})_{\infty}},\] \[\sum_{n_{1},n_{2}\geq 0}\frac{q^{\frac{3}{2}n_{1}^{2}+4n_{1}n_{2}+4 n_{2}^{2}+\frac{3}{2}n_{1}+4n_{2}}}{(q;q)_{n_{1}}(q^{4};q^{4})_{n_{2}}} =\frac{1}{(q^{3},q^{4},q^{5};q^{8})_{\infty}}.\]
\(\bullet\): \(A=\left[\begin{smallmatrix}1&-1/2\\ -1&1\end{smallmatrix}\right]\) or \(\left[\begin{smallmatrix}1&-3/2\\ -1&1\end{smallmatrix}\right]\). If \(b=\left[\begin{smallmatrix}0\\ 0\end{smallmatrix}\right]\), the Nahm sum coincides, up to multiplication by a product of Dedekind eta functions, with a certain character of the integrable highest weight module \(L(2\Lambda_{0})\) of the affine Lie algebra of type \(D_{3}^{(2)}\) or \(D_{4}^{(3)}\). Such a character formula was conjectured in [6] and recently proved in [15] for any twisted type affine Lie algebra. Then the modularity follows from the result by Kac and Peterson [7].
Finally, we provide a single comment on Tables 2 and 3. There is no apparent relationship between the candidates in these two lists, which are associated with symmetrizers \(d=(1,1,2)\) and \(d=(2,2,1)\), except that there are several "Langlands dual" pairs:
\[\begin{pmatrix}2&2&1\\ 2&4&2\\ 2&4&3\end{pmatrix}\leftrightarrow\begin{pmatrix}2&2&2\\ 2&4&4\\ 1&2&3\end{pmatrix},\qquad\begin{pmatrix}1&1&1/2\\ 1&2&1\\ 1&2&3/2\end{pmatrix}\leftrightarrow\begin{pmatrix}1&1&1\\ 1&2&2\\ 1/2&1&3/2\end{pmatrix},\] \[\begin{pmatrix}1&0&1/2\\ 0&2&1\\ 1&2&2\end{pmatrix}\leftrightarrow\begin{pmatrix}1&0&1\\ 0&2&2\\ 1/2&1&2\end{pmatrix}.\]
These pairs consist of matrices that are transposed to each other. The second (resp. first) pair consists of the inverses of (resp. twice the inverses of) the Cartan matrices of types \(B_{3}\) and \(C_{3}\). The third pair is more mysterious. We expect that the two Nahm sums for the third pair are related by the modular \(S\)-transformation. More precisely, according to numerical experiments, we conjecture that the following modular transformation formulas hold. For the matrices \(A=\left[\begin{smallmatrix}1&0&1/2\\ 0&2&1\\ 1&2&2\end{smallmatrix}\right]\) and \(A^{\vee}=\left[\begin{smallmatrix}1&0&1\\ 0&2&2\\ 1/2&1&2\end{smallmatrix}\right]\) with symmetrizers \(d=(1,1,2)\) and \(d^{\vee}=(2,2,1)\), respectively, we define
\begin{table}
\begin{tabular}{c c c} \hline \(A\) & \(b\) & \(c\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&2&1\\ 0&2&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 0\end{bmatrix}\) & \(\frac{7}{120}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&4&2\\ 0&4&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2\end{bmatrix}\) & \(\frac{7}{24}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&4&2\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2\end{bmatrix}\) & \(\frac{7}{24}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&5&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 0\end{bmatrix}\) & \(\frac{7}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&5&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{103}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&5&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&5&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&5&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&5&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&5&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&5&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{103}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&5&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{103}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&5&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 2&8&8\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&5&4\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 3/2\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 2&8&8\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&8&2\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&8&1\\ 0&2&1\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&8&2\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&8&1\\ 0&2&1\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&8&1\\ 0&2&1\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&8&2\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&8&1\\ 0&2&1\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 1&2&2\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&8&1\\ 0&2&1\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 1&2&2\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&8&1\\ 0&2&1\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 1&2&2\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&8&1\\ 0&2&1\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{11}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 
1&8&2\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&1\\ 0&2&1\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&8\end{bmatrix}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&8&2\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&0\end{bmatrix}\) & \(\frac{47}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 0&4&2\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 2&8&1\\ 0&2&1\end{bmatrix}\) & \(\frac{47}{168}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 0&4&2\end{pmatrix}\) & \(\begin{bmatrix}1/2\\ 4\end{bmatrix}\) & \(\frac{143}{168}\) \\ \hline \end{tabular}
\end{table}
Table 2: Symmetrizable modular candidates with \(d=(1,1,2)\).
\begin{table}
\begin{tabular}{c c c} \hline \(A\) & \(b\) & \(c\) \\ \hline \(\begin{pmatrix}0\\ 0\\ 1/2\end{pmatrix}\) & \(\begin{bmatrix}0\\ 0\end{bmatrix}\) & \(-\frac{7}{88}\) \\ \(\begin{pmatrix}1&0&1\\ 0&2&2\\ 1/2&1&2\end{pmatrix}\) & \(\begin{bmatrix}1\\ 0\\ 1\end{bmatrix}\) & \(\frac{1}{88}\) \\ \(\begin{pmatrix}1&0&1\\ 0&2&2\\ 1/2&1&2\end{pmatrix}\) & \(\begin{bmatrix}1\\ 0\\ 1\end{bmatrix}\) & \(\frac{1}{88}\) \\ \(\begin{pmatrix}1&1&1\\ 1/2&1&2\end{pmatrix}\) & \(\begin{bmatrix}1\\ 0\\ -1/2\end{pmatrix}\) & \(\frac{1}{72}\) \\ \(\begin{pmatrix}1&1&1\\ 1/2&1&2\end{pmatrix}\) & \(\begin{bmatrix}1\\ -1/2\end{pmatrix}\) & \(\frac{1}{72}\) \\ \(\begin{pmatrix}1&1&1\\ 1/2&1&2\end{pmatrix}\) & \(\begin{bmatrix}1\\ -1/2\end{pmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}1/2&1&1\\ 1/2&1&2\end{pmatrix}\) & \(\begin{bmatrix}1\\ 1/2\end{pmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&4&4\\ 0&2&3\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 3/2&4\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 0&2&2\\ 1/2&1&5/2\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ -1/2\end{pmatrix}\) & \(\frac{1}{8}\) \\ \(\begin{pmatrix}1&1&1&1\\ 1/2&1&2\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ -1/2\end{pmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}1&1&1&1\\ 1&2&2\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ -1/2\end{pmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}1&2&1&2\\ 1&2&2\end{pmatrix}\) & \(\begin{bmatrix}1&2\\ 1/2&1&5/2\end{pmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&4&4\\ 0&2&3\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 3/2&4\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&1&5/2\end{pmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&4&4\\ 0&2&3\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 3/2&4\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&1&5/2\end{pmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&4&4\\ 0&2&3\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 3/2&4\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&1&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&4&4\\ 0&2&3\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 3/2&4\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&1&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&4&4\\ 0&2&3\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 3/2&4\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&1&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&4&4\\ 0&2&3\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 3/2&4\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&1&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}1&1&0\\ 1&4&4\\ 0&2&3\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 3/2&4\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}1&1&1\\ 1&2&3/2&5/2\end{bmatrix}\) & \(\begin{bmatrix}1&1\\ 1/2&3/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}2&1&0\\ 1&2&2\\ 0&1&2\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&1\end{bmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}2&1&0\\ 1&2&2\\ 0&1&2\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \hline \(\begin{pmatrix}2&2&2\\ 2&4&4\\ 1&2&3\end{pmatrix}\) & \(\begin{bmatrix}1&0\\ 3/2&4\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}2&2&2\\ 4&4&4\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&1&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}2&2&2\\ 4&4&4\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ 3/2&4\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}2&2&3\\ 3/2&4\end{bmatrix}\) & 
\(\begin{bmatrix}1&0\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}2&3&4\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}2&3&4\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1&0\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2\end{bmatrix}\) & \(\frac{1}{56}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \(\begin{bmatrix}1/2\\ 1/2&5/2\end{bmatrix}\) \\ \(\begin{pmatrix}3/2&4&4\end{bmatrix}\) & \
vector-valued functions
\[g(\tau)=\begin{pmatrix}f_{A,\begin{bmatrix}0\\ 0\\ 0\end{bmatrix},-5/88;\,0\\ \begin{pmatrix}f_{A,\begin{bmatrix}0\\ 0\end{bmatrix},-1/88;\,1\\ \end{bmatrix}}(q)\\ f_{A,\begin{bmatrix}0\\ 0\end{bmatrix},7/88;\,1\\ \end{pmatrix}}(q)+qf_{A,\begin{bmatrix}\frac{1}{2}\\ \frac{1}{3}\end{bmatrix},7/88;\,1\\ \begin{pmatrix}f_{A,\begin{bmatrix}0\\ 0\end{bmatrix},19/88;\,0\\ \end{bmatrix}}(q)\\ f_{A,\begin{bmatrix}\frac{1}{2}\\ \frac{1}{2}\end{bmatrix},35/88;\,0}(q)\\ \end{pmatrix},\ g^{\vee}(\tau)=\begin{pmatrix}f_{A^{\vee},\begin{bmatrix}0\\ 0\\ 0\end{bmatrix},-7/88}(q)\\ f_{A^{\vee},\begin{bmatrix}0\\ 0\end{bmatrix},25/88}(q)+f_{A^{\vee},\begin{bmatrix}\frac{1}{3}\\ \frac{1}{3}\end{bmatrix},25/88}(q)\\ f_{A^{\vee},\begin{bmatrix}0\\ 0\end{bmatrix},1/88}(q)\\ f_{A^{\vee},\begin{bmatrix}\frac{1}{0}\\ 0\end{bmatrix},9/88}(q)\\ f_{A^{\vee},\begin{bmatrix}\frac{1}{2}\\ \frac{1}{2}\end{bmatrix},49/88}(q)\\ \end{pmatrix},\]
where \(f_{A,b,c,d;\,\sigma}(q)\) for \(\sigma=0,1\) is the partial sum of the Nahm sum where the sum is taken over \(n\in\mathbb{N}^{3}\) such that \(n_{1}\equiv\sigma\bmod 2\), and we omit \(d\) and \(d^{\vee}\) from the notation. Then we have
\[g(\tau+1)=\text{diag}(\zeta_{88}^{-5},\zeta_{88}^{43},\zeta_{88}^{51},\zeta_{88 }^{19},\zeta_{88}^{35})g(\tau),\quad g^{\vee}(\tau+1)=\text{diag}(\zeta_{88}^{- 7},\zeta_{88}^{25},\zeta_{88}^{1},\zeta_{88}^{9},\zeta_{88}^{49})g^{\vee}(\tau),\]
and we conjecture that
\[g(-\frac{1}{\tau})=S\ g^{\vee}(\frac{\tau}{2}),\quad g^{\vee}(-\frac{1}{\tau} )=2S\ g(\frac{\tau}{2}), \tag{54}\]
where
\[S=\begin{pmatrix}\alpha_{5}&\alpha_{4}&\alpha_{3}&\alpha_{2}&\alpha_{1}\\ \alpha_{4}&\alpha_{1}&-\alpha_{2}&-\alpha_{5}&-\alpha_{3}\\ \alpha_{3}&-\alpha_{2}&-\alpha_{4}&\alpha_{1}&\alpha_{5}\\ \alpha_{2}&-\alpha_{5}&\alpha_{1}&\alpha_{3}&-\alpha_{4}\\ \alpha_{1}&-\alpha_{3}&\alpha_{5}&-\alpha_{4}&\alpha_{2}\end{pmatrix},\quad \alpha_{k}=\sqrt{\frac{2}{11}}\sin\frac{k\pi}{11}.\]
## Appendix A Cyclic quantum dilogarithm function
The _cyclic quantum dilogarithm function_ is defined by
\[D_{\zeta}(x)=\prod_{t=1}^{m-1}(1-\zeta^{t}x)^{t} \tag{55}\]
where \(\zeta\) is a primitive \(m\)th root of unity. Noting that the function \(x\mapsto x-\lfloor x\rfloor\) is periodic with period \(1\), we have the following expression:
\[D_{\zeta}(x)=\prod_{t\bmod m}(1-\zeta^{t}x)^{m(\frac{t}{m}-\left\lfloor\frac{ t}{m}\right\rfloor)}. \tag{56}\]
**Proposition A.1**.: _Let \(p,q\) be integers coprime to \(m\). Suppose that \(p>0\) and \(pq\equiv 1\bmod m\). Then we have_
\[\frac{D_{\zeta}(x)^{p}}{D_{\zeta^{q}}(x)}=\left((1-x^{m})^{p-1}\prod_{t=1}^{p- 1}\frac{1}{(x;\zeta)_{\left\lfloor\frac{mt}{p}\right\rfloor+1}}\right)^{m}. \tag{57}\]
_Moreover, we also have_
\[\frac{D_{\zeta}(x)^{p}}{D_{\zeta^{q}}(x)}\equiv\left(\prod_{t=1}^{p-1}\frac{1} {(x;\zeta)_{tq}}\right)^{m}\qquad\bmod(1-x^{m})^{m}. \tag{58}\]
Proof.: We first see that
\[\frac{D_{\zeta}(x)^{p}}{D_{\zeta^{q}}(x)}=\prod_{t=1}^{m-1}(1-\zeta^{t}x)^{m\left\lfloor \frac{tp}{m}\right\rfloor}\]
since
\[\prod_{t=1}^{m-1}(1-\zeta^{t}x)^{m(\frac{pt}{m}-\left\lfloor\frac{pt}{m}\right\rfloor)}=\prod_{t=1}^{m-1}(1-\zeta^{qt}x)^{m(\frac{pqt}{m}-\left\lfloor\frac{pqt}{m}\right\rfloor)}=\prod_{t=1}^{m-1}(1-\zeta^{qt}x)^{t}.\]
Let \(N_{r}\) for \(r=1,\ldots,p\) be the number of integers \(t\) such that \(0<t\leq m-1\) and \(\left\lfloor\frac{tp}{m}\right\rfloor\leq r-1\). Then \(N_{r}\) is exactly the number of integers \(t\) such that \(0<t<\frac{mr}{p}\), and thus we have the following formulas:
\[N_{r}=\left\lfloor\frac{mr}{p}\right\rfloor\quad\text{for }r=1,\ldots,p-1,\quad N _{p}=m-1.\]
Now (57) follows from the following computation:
\[\prod_{t=1}^{m-1}(1-\zeta^{t}x)^{\left\lfloor\frac{tp}{m}\right\rfloor} =\prod_{r=1}^{p}\frac{(x;\zeta)_{N_{r}+1}^{r-1}}{(x;\zeta)_{N_{r-1}+1}^{r-1}} =(x;\zeta)_{N_{p}+1}^{p-1}\prod_{r=1}^{p-1}\frac{1}{(x;\zeta)_{N_ {r}+1}}\] \[=(1-x^{m})^{p-1}\prod_{t=1}^{p-1}\frac{1}{(x;\zeta)_{\left\lfloor \frac{mt}{p}\right\rfloor+1}}.\]
To prove (58), it suffices to show that the sets \(M(r)\) and \(M^{\prime}(r)\) defined by
\[M(r) =\{t\in[1,p-1]\mid\left\lfloor\frac{mt}{p}\right\rfloor+1=r\}\] \[M^{\prime}(r) =\{t\in[1,p-1]\mid tq\equiv r\bmod m\}\]
have the same cardinality for all \(r=1,\ldots,m-1\), where \([1,p-1]\coloneqq\{1,2,\ldots,p-1\}\). The equation \(tq\equiv r\bmod m\) holds if and only if \(t\equiv pr\bmod m\). The number of such \(t\) is the same as the number of \(s\in[1,p-1]\) such that \(0<pr-ms<p\), which is equivalent to the condition \(\left\lfloor\frac{ms}{p}\right\rfloor=r-1\).
We also see that
\[\prod_{t=0}^{p-1}\frac{(x;\zeta)_{tq}}{(\zeta^{e}x;\zeta)_{tq}}=\prod_{t=0}^{p-1}\frac{(x;\zeta)_{e}}{(\zeta^{tq}x;\zeta)_{e}}=\frac{(x;\zeta)_{e}^{p}}{(x;\zeta^{q})_{pe}}, \tag{59}\]
for any \(e\in\mathbb{Z}\), where the left-hand side is the ratio of the \(m\)th root of the right-hand side of (58) for \(\zeta^{e}x\) and \(x\). The second equality in (59) follows from
\[\prod_{t=0}^{p-1}(\zeta^{tq}x;\zeta)_{e}=\prod_{i=0}^{e-1}\prod_{t=0}^{p-1}(1- \zeta^{tq+i}x)=\prod_{t=0}^{pe-1}(1-\zeta^{tq}x)=(x;\zeta^{q})_{pe}.\]
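Proposition A.1 can also be spot-checked numerically; the following sketch uses the illustrative choice \(m=5\), \(p=2\), \(q=3\) (so \(pq\equiv 1\bmod m\)) and a generic complex \(x\) to compare the two sides of (57):

```python
# Numerical spot check of Proposition A.1 with m = 5, p = 2, q = 3 and a generic complex x.
import numpy as np

m, p, q = 5, 2, 3
zeta = np.exp(2j * np.pi / m)   # primitive m-th root of unity
x = 0.3 + 0.2j

def cyclic_dilog(z, x, m):
    """D_z(x) = prod_{t=1}^{m-1} (1 - z^t x)^t, eq. (55)."""
    return np.prod([(1 - z**t * x) ** t for t in range(1, m)])

def poch(x, z, k):
    """(x; z)_k = prod_{j=0}^{k-1} (1 - z^j x)."""
    return np.prod([1 - z**j * x for j in range(k)])

lhs = cyclic_dilog(zeta, x, m) ** p / cyclic_dilog(zeta**q, x, m)
rhs = ((1 - x**m) ** (p - 1)
       * np.prod([1 / poch(x, zeta, m * t // p + 1) for t in range(1, p)])) ** m
print(abs(lhs - rhs))  # ~1e-15: the two sides of (57) agree
```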
|
2301.02948
|
Quantum synchronization effects induced by strong nonlinearities
|
A paradigm for quantum synchronization is the quantum analog of the
Stuart--Landau oscillator, which corresponds to a van der Pol oscillator in the
limit of weak (i.e. vanishingly small) nonlinearity. Due to this limitation,
the quantum Stuart--Landau oscillator fails to capture interesting
nonlinearity-induced phenomena such as relaxation oscillations. To overcome
this deficiency we propose an alternative model which approximates the van der
Pol oscillator to finitely large nonlinearities while remaining numerically
tractable. This allows us to uncover interesting phenomena in the deep-quantum
strongly-nonlinear regime with no classical analog, such as the persistence of
amplitude death on resonance. We also report nonlinearity-induced position
correlations in reactively coupled quantum oscillators. Such coupled
oscillations become more and more correlated with increasing nonlinearity
before reaching some maximum. Again, this behavior is absent classically. We
also show how strong nonlinearity can enlarge the synchronization bandwidth in
both single and coupled oscillators. This effect can be harnessed to induce
mutual synchronization between two oscillators initially in amplitude death.
|
Yuan Shen, Wai-Keong Mok, Changsuk Noh, Ai Qun Liu, Leong-Chuan Kwek, Weijun Fan, Andy Chia
|
2023-01-08T00:05:24Z
|
http://arxiv.org/abs/2301.02948v2
|
# Quantum synchronization effects induced by strong nonlinearities
###### Abstract
A paradigm for quantum synchronization is the quantum analog of the Stuart-Landau oscillator, which corresponds to a van der Pol oscillator in the limit of weak (i.e. vanishingly small) nonlinearity. Due to this limitation, the quantum Stuart-Landau oscillator fails to capture interesting nonlinearity-induced phenomena such as relaxation oscillations. To overcome this deficiency we propose an alternative model which approximates the van der Pol oscillator to finitely large nonlinearities while remaining numerically tractable. This allows us to uncover interesting phenomena in the deep-quantum strongly-nonlinear regime with no classical analog, such as the persistence of amplitude death on resonance. We also report nonlinearity-induced position correlations in reactively coupled quantum oscillators. Such coupled oscillations become more and more correlated with increasing nonlinearity before reaching some maximum. Again, this behavior is absent classically. We also show how strong nonlinearity can enlarge the synchronization bandwidth in both single and coupled oscillators. This effect can be harnessed to induce mutual synchronization between two oscillators initially in amplitude death.
_Introduction.--_Mathematical modelling has shown us how the immense variety and beauty of nature can be governed by nonlinear differential equations [1; 2; 3; 4]. Such equations, owing to their nonlinearity, are difficult to analyze and their application to physical processes has come to be known as nonlinear science [5; 6]. In physics, interest in nonlinear phenomena has spread to quantum-mechanical systems. Effects such as chaos [7; 8; 9; 10], stochastic resonance [11; 12; 13; 14], and coherence resonance [15; 12], are some of the better known examples. Besides fundamental research, there are also several promising applications of nonlinear dissipation, e.g. stabilizing bosonic qubits for fault-tolerant quantum computing [16; 17; 18], and enhancing the sensitivity of quantum sensors [19; 20].
A relative newcomer to the study of nonlinear effects in quantum systems is synchronization [21]. Its most elementary form consists of applying a sinusoidal force, say with amplitude \(f\), and frequency \(\Omega_{\rm d}\), to a self-sustained oscillator. Synchronization is then the modification of the oscillator frequency to \(\Omega_{\rm d}\). A prototypical model is the driven van der Pol (vdP) oscillator [22], defined by phase-space coordinates \((x,y)\) satisfying
\[x^{\prime}=y\,,\quad y^{\prime}=f\cos(\Omega_{\rm d}t)-\omega_{0}^{2}\,x-\mu \,(x^{2}-q^{2})\,y\;, \tag{1}\]
where primes denote differentiation with respect to the argument (in this case \(t\), representing time). In the absence of forcing (\(f=0\)) the oscillator is characterized by \(\omega_{0}\), and a nonlinearity parameter \(\mu\) which controls how much the oscillator is damped towards an amplitude of order \(|q|\). An important feature of the undriven vdP system is the existence of a supercritical Hopf bifurcation at \(\mu=0\), via which a stable limit cycle appears for \(\mu>0\)[22].
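For later reference, (1) is easy to integrate directly; a minimal scipy sketch (with illustrative parameter values and the drive switched off) that exhibits the relaxation oscillations mentioned below is:

```python
# Sketch: direct integration of the van der Pol equation (1); parameters are illustrative,
# and the drive is off (f = 0). For mu >> omega_0 the trajectory shows the strongly
# non-sinusoidal relaxation oscillations discussed in the text.
import numpy as np
from scipy.integrate import solve_ivp

omega0, qamp, mu, f, Omega_d = 1.0, 1.0, 5.0, 0.0, 1.0

def vdp(t, state):
    x, y = state
    return [y, f * np.cos(Omega_d * t) - omega0**2 * x - mu * (x**2 - qamp**2) * y]

sol = solve_ivp(vdp, (0.0, 100.0), [0.1, 0.0], max_step=0.01)
x_t = sol.y[0]   # slow build-up and fast jumps: relaxation oscillations
```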
At \(\mu=0\), (1) is entirely linear. This motivates one to consider the quasilinear limit of (1), defined by \(\mu\longrightarrow 0^{+}\). In this limit the vdP oscillator is well approximated by the Stuart-Landau (SL) oscillator, the steady state of which is rotationally symmetric in phase space (a circular limit cycle). This makes the SL oscillator much simpler to analyze, and has thus served as a starting point in the literature on quantum synchronization for continuous-variable systems, e.g. Refs. [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. The trade-off of course, is that effects taking place at finite values of \(\mu\) are excluded. A prominent example is relaxation oscillations in the undriven vdP oscillator1[3; 22; 43]. More effects start to appear if driving is included, such as quasiperiodicity and chaos [44; 45; 46], both of which are absent in the driven SL oscillator.
Footnote 1: In fact, vdP intended for (1) to model relaxation oscillations in an electrical circuit [39]. To observe relaxation oscillations in quantum theory one needs to quantize the exact vdP model, and it is only relatively recently that such efforts have been made [40; 41; 42].
In this work, we investigate the effects of nonlinearity in quantum oscillators by considering a more general model based on the classical Duffing-van der Pol (DvdP) oscillator. This adds \(\zeta\,x^{3}\) to \(y^{\prime}\) where \(\zeta\) is another nonlinearity parameter. To overcome the inadequacy of the SL model we propose a quantum DvdP model that remains valid when
the vdP and Duffing nonlinearities (respectively \(\mu\) and \(\zeta\)) are nonvanishing, but also not arbitrarily large. Our model is accurate up to order \((\mu/\omega_{0})^{2}\), at which the distinct signatures of strong nonlinearity appear, such as relaxation oscillations [41]. Our approach has the benefit of capturing novel nonlinear effects while evading the large computational cost of simulating quantum systems with very strong nonlinear dissipations.
We show that for a single oscillator with periodic forcing there exists a critical Duffing nonlinearity, above which further increases in \(\zeta\) enlarge the synchronization bandwidth (the amount of detuning the forcing can tolerate from the oscillator and still entrain it). This result is similar to the synchronization enhancement from the classical literature [47], but now generalized to quantum oscillators.2 In contrast, the vdP nonlinearity activates genuine quantum effects. Coupling two vdP oscillators dissipatively may lead to either amplitude death (the cessation of oscillations), or mutual synchronization. Classically, amplitude death occurs only when the two oscillators are sufficiently detuned [49]. Interestingly, we find this need not be the case for quantum oscillators. We show that two quantum vdP oscillators possessing relatively small limit cycles and nonvanishing nonlinearities can exhibit amplitude death even with zero detuning. Larger limit cycles on the other hand can mutually synchronize from a state of amplitude death if their nonlinearity is increased.
Footnote 2: It is also worth mentioning that nonlinear oscillators are of interest to quantum information too, when they are coupled to qubits. In this context the Duffing nonlinearity has been shown to both increase and stabilize the oscillator-qubit entanglement [48].
We also consider reactively coupled oscillators. Two such SL oscillators cannot develop positional correlations, and hence do not synchronize. This is true regardless of whether the oscillators are classical or quantum. We show here that at finitely large nonlinearity, position correlations behave rather differently between the classical and quantum oscillators: Two reactively coupled quantum vdP oscillators can undergo nonlinearity-induced correlations whereby their position correlation increases as they become more nonlinear. In contrast, we find that making the analogous classical oscillators more nonlinear monotonically reduces their position correlation. The nonlinearity-induced correlations in the quantum vdP oscillators are thus a consequence of both their quantum nature and strong nonlinearity.
_Model.--_For simplicity we consider here a dimensionless DvdP model in terms of the nonlinearity parameters \(\lambda\equiv\mu q^{2}/\omega_{0}r^{2}\) and \(\beta\equiv\zeta q^{2}/\omega_{0}^{2}r^{2}\) in which \(r\) is a dimensionless scale parameter:
\[\tilde{x}^{\prime}=\tilde{y}\,,\quad\tilde{y}^{\prime}=F\cos(\omega_{\rm d}\tilde{t}\,)-\tilde{x}-\lambda(\tilde{x}^{2}-r^{2})\tilde{x}^{\prime}-\beta\,\tilde{x}^{3}. \tag{2}\]
Note that \(\tilde{x}\equiv xr/q\) is now a function of \(\tilde{t}=\omega_{0}t\), and we have also included a dimensionless external force parameterized by \(F=fr/\omega_{0}^{2}q\) and \(\omega_{\rm d}=\Omega_{\rm d}/\omega_{0}\). From the approximate analysis of (2), the leading contribution to the oscillator frequency is quadratic in \(\lambda\), and linear in \(\beta\)[50], given by \(\omega\approx 1+r^{2}(3\beta/2-\lambda^{2}r^{2}/16)\). This motivates a Bogoliubov-Krylov time-average of the equations of motion up to these orders, giving [50; 51]
\[\begin{split}\alpha^{\prime}=& i\,\frac{F}{2}\cos( \omega_{\rm d}t)-i\,\alpha-i\frac{3\beta}{2}|\alpha|^{2}\alpha+\frac{\lambda}{ 2}(r^{2}-|\alpha|^{2})\alpha\\ &+i\frac{\lambda^{2}}{8}\left(r^{4}-6r^{2}|\alpha|^{2}+\frac{11}{ 2}|\alpha|^{4}\right)\alpha,\end{split} \tag{3}\]
where \(\alpha=(\tilde{x}+i\,\tilde{y})/2\). For \(F=0\), (3) predicts a limit-cycle amplitude of \(2|\alpha|=2r\) with the expected frequency shifts due to \(\lambda\) and \(\beta\). Additionally, note the first-order averaging in \(\lambda\) for \(\beta=0\) yields the SL equation. Our approximate model captures the effects of strong vdP nonlinearity of order \(\lambda^{2}\). We seek a quantum master equation \(\rho^{\prime}=\mathcal{L}\rho\) such that \(\left\langle\hat{a}\right\rangle^{\prime}=\operatorname{Tr}[\hat{a}\,\mathcal{ L}\rho]\) (with \([\hat{a},\hat{a}^{\dagger}]=\hat{1}\)) agrees with (3) in the mean-field limit [41]. It can then be shown that this is satisfied by the Lindbladian [52; 50; 53; 54]
\[\mathcal{L}=-i\,[\hat{H},\cdot]+\lambda r^{2}\mathcal{D}[\hat{a}^{\dagger}]+ \frac{\lambda}{2}\,\mathcal{D}[\hat{a}^{2}]\,, \tag{4}\]
where
\[\begin{split}\hat{H}=&\,\left(1-\frac{\lambda^{2}r ^{4}}{8}\right)\hat{a}^{\dagger}\hat{a}+\frac{3\lambda^{2}r^{2}}{8}\,\hat{a}^{ \dagger 2}\hat{a}^{2}-\frac{11\lambda^{2}}{48}\,\hat{a}^{\dagger 3}\hat{a}^{3}\\ &+\frac{3\beta}{4}\,\hat{a}^{\dagger 2}\hat{a}^{2}-\frac{F}{2} \cos(\omega_{\rm d}t)(\hat{a}+\hat{a}^{\dagger})\,.\end{split} \tag{5}\]
We have also defined \(\mathcal{D}[\hat{c}]\equiv\hat{c}\,\mathbf{\cdot}\,\hat{c}^{\dagger}-(\hat{c}^{\dagger}\hat{c}\,\mathbf{\cdot}+\cdot\hat{c}^{\dagger}\hat{c})/2\) for any \(\hat{c}\), and a dot denotes the position of \(\rho\) when acted upon by a superoperator. We remark that both the higher-order Kerr terms and the nonlinear two-photon dissipation in our proposed model can be implemented in circuit QED [17; 55]. The tunability of the limit cycle radius \(r\) allows us to access different parameter regimes of the quantum oscillator, in particular the quantum (\(r\ll 1\)) and semiclassical (\(r\approx 1\)) regimes. We have included the second-order contributions in \(\lambda\) in our model for its nonlinearity-tuning capability, since the terms linear in \(\lambda\) affect neither the limit-cycle amplitude nor the phase dynamics. The Duffing nonlinearity translates to a Kerr term in \(\mathcal{L}\). This model can be considered as an alternative to the quantum SL oscillator, but with flexibility in tuning the nonlinearity. All our numerical results for a given parameter set are obtained with a sufficiently large truncation of the Hilbert space by ensuring that the corresponding steady-state power spectrum converges.
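As an illustration of how (4) and (5) can be handled numerically, the following QuTiP sketch computes the steady state of the undriven oscillator (\(F=0\)); the truncation and parameter values are illustrative and should be increased until convergence, as described above:

```python
# Sketch (QuTiP): steady state of the undriven (F = 0) oscillator defined by (4)-(5).
# The truncation N and the values of lam, beta, r are illustrative only.
import numpy as np
from qutip import destroy, steadystate, wigner

N, lam, beta, r = 30, 0.3, 0.2, 1.0
a = destroy(N)

H = ((1 - lam**2 * r**4 / 8) * a.dag() * a
     + (3 * lam**2 * r**2 / 8) * a.dag()**2 * a**2
     - (11 * lam**2 / 48) * a.dag()**3 * a**3
     + (3 * beta / 4) * a.dag()**2 * a**2)

c_ops = [np.sqrt(lam * r**2) * a.dag(),   # one-photon gain, rate lam * r^2
         np.sqrt(lam / 2) * a**2]          # two-photon loss, rate lam / 2
rho_ss = steadystate(H, c_ops)

xvec = np.linspace(-4, 4, 201)
W = wigner(rho_ss, xvec, xvec)   # ring-shaped Wigner function: the quantum limit cycle
```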
_Nonlinearity-enhanced synchronization.--_We study first the frequency locking of the approximate quantum DvdP oscillator to a periodic force [(4) and (5)]. The synchronization bandwidth is the range of \(\omega_{\rm d}\) for which the oscillator frequency is locked to the driving frequency at
steady state. This is achieved when \(|\omega_{\rm d}-\tilde{\omega}|=0\), where \(\tilde{\omega}\) is the observed frequency of the driven oscillator, obtained from the peak of its spectrum averaged over one period of the drive [41].
Here we find the Duffing nonlinearity to enhance quantum synchronization: For a range of \(\lambda\) and a fixed \(r\), increasing \(\beta\) past a critical value widens the synchronization bandwidth linearly. This is illustrated in Fig. 1(a) where the synchronization bandwidth is plotted as a contour against \(\bar{\beta}\equiv\beta r^{2}\) and \(F/r\). The critical value of \(\bar{\beta}\) is indicated by the red dashed line, where the bandwidth is equal to its corresponding value at \(\bar{\beta}=0\). However, this enhancement does not occur for all values of the vdP nonlinearity. In Fig. 1(b) we see that an increase of \(\bar{\lambda}\equiv\lambda r^{2}\) from its value in Fig. 1(a) can ruin the gain in synchronization bandwidth due to \(\bar{\beta}\). Noting that the dissipative terms in \(\mathcal{L}\) are all proportional to \(\lambda\), this effect can be qualitatively attributed to the phase diffusion due to quantum noise, which is known to inhibit synchronization [23, 24, 27]. We can develop some understanding of the quantum DvdP by examining its classical analog. Using the method of harmonic balance on \(x\), we are able to derive the conditions for nonlinearity-enhanced synchronization analytically for the classical DvdP oscillator, given by [50]
\[\bar{\beta}>\frac{\bar{\lambda}^{2}}{3(1-\bar{\lambda}^{2})}\;,\quad 0<\bar{ \lambda}<1\;. \tag{6}\]
This shows clearly the existence of a critical value of \(\bar{\beta}\), and a finite interval of \(\bar{\lambda}\) over which the synchronization enhancement occurs. These results are consistent with Fig. 1 except for the fact that quantum noise makes the range of \(\lambda\) for synchronization enhancement in the quantum DvdP oscillator smaller compared to the classical range as seen in Fig. 1(b).
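For orientation, the classical threshold (6) is simple to evaluate; a short sketch:

```python
# Critical Duffing nonlinearity from the classical condition (6), valid for 0 < lam_bar < 1.
def beta_bar_critical(lam_bar: float) -> float:
    return lam_bar**2 / (3 * (1 - lam_bar**2))

print(beta_bar_critical(0.1))  # ~0.0034: enhancement sets in at a very small beta_bar
print(beta_bar_critical(0.5))  # ~0.11: a stronger vdP nonlinearity requires more Duffing
```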
We have also shown in Ref. [50] that increasing the vdP nonlinearity in the quantum model can only reduce the synchronization bandwidth. One can thus be certain that the synchronization enhancement seen in our model is induced by the Duffing nonlinearity. We shall see next that the vdP nonlinearity can induce synchronization in coupled oscillators from a state of amplitude death.
_Nonlinearity-induced synchronization and amplitude death.--_Two dissipatively coupled vdP oscillators (i.e. no Duffing nonlinearity) can be described by the Lindbladian
\[\mathcal{L}=\mathcal{L}_{1}+\mathcal{L}_{2}-i\,\Delta[\hat{a}_{2}^{\dagger} \hat{a}_{2},\cdot]+\eta\,\mathcal{D}[\hat{a}_{1}-\hat{a}_{2}]\;, \tag{7}\]
where \(\mathcal{L}_{k}\) (\(k=1,2\)) is the Lindbladian for oscillator \(k\), defined by setting \(\hat{a}\) to \(\hat{a}_{k}\) and \(\beta=F=0\) in (4) and (5). We have assumed the oscillators to be identical (i.e. same \(\lambda\) and \(r\)) except for an initial detuning of \(\Delta\), and denoted their coupling strength by \(\eta\).
In this case, frequency locking occurs when the observed frequencies of the two oscillators become identical at steady state. As before, we define an oscillator's observed frequency by the location of its spectral peak, except now we must use the reduced state derived by a partial trace over the two-oscillator steady state. For a fixed \(\eta\), we define the synchronization bandwidth to be the range of \(\Delta\) for which the two oscillators lock frequencies. It will also be interesting to look at position correlations in the two oscillators at steady state, defined by
\[\Sigma=\frac{\left\langle\hat{x}_{1}\hat{x}_{2}\right\rangle-\left\langle\hat {x}_{1}\right\rangle\left\langle\hat{x}_{2}\right\rangle}{\sqrt{\left[\, \left\langle\hat{x}_{1}^{2}\right\rangle-\left\langle\hat{x}_{1}\right\rangle ^{2}\,\right]\left[\,\left\langle\hat{x}_{2}^{2}\right\rangle-\left\langle \hat{x}_{2}\right\rangle^{2}\,\right]}}\;, \tag{8}\]
where \(\hat{x}_{k}=\hat{a}_{k}+\hat{a}_{k}^{\dagger}\). Note that frequency locking implies a nonzero \(\Sigma\), but not vice versa.
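A sketch of how (7) and (8) can be evaluated with QuTiP is given below; the truncation and parameter values are illustrative, and the single-oscillator Hamiltonians are those of (5) with \(\beta=F=0\):

```python
# Sketch (QuTiP): steady state of two dissipatively coupled vdP oscillators, eq. (7),
# and the position correlation Sigma of eq. (8). N, lam, r, Delta, eta are illustrative.
import numpy as np
from qutip import destroy, qeye, tensor, steadystate, expect

N, lam, r, Delta, eta = 12, 0.3, 0.5, 0.1, 0.2
a1 = tensor(destroy(N), qeye(N))
a2 = tensor(qeye(N), destroy(N))

def H_osc(a):
    # single-oscillator Hamiltonian of (5) with beta = F = 0
    return ((1 - lam**2 * r**4 / 8) * a.dag() * a
            + (3 * lam**2 * r**2 / 8) * a.dag()**2 * a**2
            - (11 * lam**2 / 48) * a.dag()**3 * a**3)

H = H_osc(a1) + H_osc(a2) + Delta * a2.dag() * a2
c_ops = [np.sqrt(lam * r**2) * a1.dag(), np.sqrt(lam / 2) * a1**2,
         np.sqrt(lam * r**2) * a2.dag(), np.sqrt(lam / 2) * a2**2,
         np.sqrt(eta) * (a1 - a2)]                 # dissipative coupling eta D[a1 - a2]
rho = steadystate(H, c_ops)

x1, x2 = a1 + a1.dag(), a2 + a2.dag()
cov = expect(x1 * x2, rho) - expect(x1, rho) * expect(x2, rho)
var1 = expect(x1 * x1, rho) - expect(x1, rho) ** 2
var2 = expect(x2 * x2, rho) - expect(x2, rho) ** 2
Sigma = np.real(cov / np.sqrt(var1 * var2))

rho1 = rho.ptrace(0)   # reduced state of oscillator 1; a Wigner function peaked at the
                       # origin signals amplitude death as defined in the text
```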
In addition to frequency locking, dissipatively coupled oscillators can also cease to oscillate. If the oscillators are classical, then this may happen for a range of \(\eta\) provided that \(\Delta\) is sufficiently large. And if both oscillators stabilize to the same phase-space point, which may be taken to be the origin without loss of generality, then the effect is termed amplitude death [49, 56]. To define amplitude death in quantum oscillators we generalize the notion of P-bifurcations from classical stochastic systems to the steady-state Wigner function of a reduced state (see Ref. [57] and other references therein). In this case, amplitude death is said to occur if the single-oscillator Wigner functions peak at the origin in quantum phase space. This approach is consistent with previous studies on amplitude death in coupled quantum oscillators [32, 33, 34, 35].
In Fig. 2 we work out regions of frequency locking and
Figure 1: Contour plot of the synchronization bandwidth for the quantum DvdP oscillator as a function of \(\bar{\beta}\equiv\beta r^{2}\) (vertical axis) and \(F/r\) (horizontal axis) with unit limit-cycle radius, i.e. \(r=1\). In this case \(\bar{\beta}=\beta\) and \(\bar{\lambda}=\lambda\). The axes are also indicated in subplot (b). (a) Illustration of synchronization enhancement for \(\bar{\lambda}=0.1\). Above a critical value of \(\bar{\beta}\), indicated by a red dashed line (obtained numerically), the synchronization bandwidth is enlarged as the Duffing nonlinearity is increased. (b) Synchronization enhancement disappears if we increase the vdP nonlinearity from \(\bar{\lambda}=0.1\) to \(\bar{\lambda}=0.5\), demonstrating the finite range of \(\bar{\lambda}\) over which the enhancement is effective.
amplitude death in the \((\eta,\Delta)\) parameter space for (7), along with \(\Sigma\), shown as a contour in Fig. 2(a) and (c). Two especially interesting scenarios are--when the limit cycles are relatively large compared to quantum noise [Fig. 2(a)]; and when they become small, being more susceptible to quantum noise [Fig. 2(c)].
In Fig. 2(a) we have indicated the boundary between frequency locking and amplitude death by a dash-dotted line, while no identifiable phenomenon occurs to the left of the solid line. Note the dash-dotted line is a Hopf-bifurcation curve because the transition from amplitude death to frequency locking is facilitated by a Hopf bifurcation. Clearly, \(\Sigma\) is larger inside the frequency-locking region. Especially significant here is the effect of the vdP nonlinearity on synchronization. Whereas in the single-oscillator case the vdP nonlinearity had only detrimental effects, it now has a constructive role by enlarging the (mutual) synchronization bandwidth. We illustrate this in Fig. 2(b) where additional frequency-locking boundaries for different values of vdP nonlinearity \(\lambda\) are plotted. It can be seen that increasing \(\lambda\) enlarges the frequency-locking region (see also Ref. [50] for some classical analysis). This means that two oscillators in a state of amplitude death with an \((\eta,\Delta)\) lying above the Hopf-bifurcation curve in Fig. 2(a) will transit suddenly to a state of synchronized oscillations when \(\lambda\) is sufficiently increased. This may be appropriately called nonlinearity-induced mutual synchronization.
Turning now to the case of a small limit cycle (\(r\ll 1\)) in Fig. 2(c), we find frequency locking to be absent while position correlations become negligible. The blue solid line delineates the boundary of amplitude death. Most striking is the persistence of amplitude death at zero detuning (i.e. \(\Delta=0\)). Classically, some frequency mismatch between the two oscillators must be present in order for amplitude death to occur [49]. This signifies a clear distinction between classical and quantum dynamics. A loss of amplitude death at \(\Delta=0\) may then be expected if we increased either \(r\) or \(\lambda\) (while holding the other constant). This is indeed what we find, as illustrated in Fig. 2(d), where such a quantum-to-semiclassical transition is captured by increasing \(\bar{\lambda}\equiv\lambda r^{2}\).
Since we have focused exclusively on the vdP nonlinearity here, we note that incorporating a Duffing nonlinearity into our model will not in fact change the frequency-locking boundary. We can understand this from the classical coupled equations of motion, where we see that \(\beta\) appears only in the phase dynamics of the oscillators, and that such terms vanish at steady state for identical oscillators (equal limit cycle radii) [50; 58].
_Nonlinearity-induced correlations.--_It is known that two reactively coupled SL oscillators cannot synchronize nor share position correlations. This is true even in the quantum case [50]. However, we show here that positional correlations between two reactively coupled quantum vdP oscillators do develop. Here we must use the
Figure 2: Regions of frequency locking and amplitude death (as defined in the text by the power spectrum and Wigner function) for two dissipatively coupled vdP oscillators [subplots (a)–(d)] along with contours of \(\Sigma\) [subplots (a) and (c)]. All subplots have \(\Delta\) on the vertical axis, and \(\eta\) on horizontal axis which we also indicate in subplot (c). (a) Large \(r\) (semiclassical regime). Note the region on the left does not correspond to any identifiable effect and is demarcated using a solid line while the boundary between frequency locking and amplitude death is a Hopf bifurcation, which we denote by a dash-dotted line. As \(r\) is increased, the classical boundary is recovered. (b) Effect of varying \(\lambda\) on synchronization for \(r=1\). Boundaries of the frequency-locking region and its corresponding \(\lambda\) are shown (i.e. the frequency-locking region is the area underneath each curve). Increasing \(\lambda\) can enlarge the synchronization bandwidth and induce frequency locking from a state of amplitude death. (c) Small \(r\) (deep quantum regime). At small \(r\), amplitude death occurs even at zero initial detuning, which is a quantum effect. Note that \(\bar{\lambda}\) in (a) and (c) are approximately equal, leaving the differences between the two plots only as a result of quantum effects. (d) Shifts in amplitude-death boundary as \(\bar{\lambda}\) is increased (quantum-to-semiclassical transition).
Figure 3: Positional correlation (8) for two reactively coupled vdP oscillators. (a) Contour of \(\Sigma\) as a function of \(\lambda\) and \(g\) for \(r=0.3\) and \(\Delta=0.05\). The bottom gap of zero correlation agrees with the SL limit. (b) Correlations along the three vertical dashed lines at \(g=0.2\), \(0.4\), and \(0.8\) in subplot (a).
exact vdP Lindbladian [41; 50], because under reactive coupling, the approximate model does not produce any off-diagonal elements in the steady state, and hence cannot generate correlations in the two oscillators. As with the dissipatively coupled system, two reactively coupled vdP oscillators can be modelled by considering two uncoupled vdP oscillators with annihilation operators \(\hat{a}_{1}\) and \(\hat{a}_{2}\), coupled by the Hamiltonian \(g(\hat{a}_{1}\hat{a}_{2}^{\dagger}+\hat{a}_{1}^{\dagger}\hat{a}_{2})\), where \(g\) is the reactive coupling strength. As before, we assume that both oscillators have the same nonlinearity \(\lambda\).
In Fig. 3(a), we generate a contour plot of \(\Sigma\) as a function of \(\lambda\) and \(g\) at \(r=0.3\) and \(\Delta=0.05\). From this we see that for a fixed \(g\), increasing the oscillator nonlinearity beyond the SL regime leads to stronger correlations. We illustrate this more clearly in Fig. 3(b) by showing how \(\Sigma\) varies as a function of \(\lambda\) for \(g=0.2\), \(0.4\), and \(0.8\), which are marked in Fig. 3(a) by vertical dashed lines. Note this also shows the existence of an optimal \(\lambda\) which maximizes \(\Sigma\). Such nonlinearity-induced correlations are absent in the corresponding classical model [50]. At a given coupling strength, the position correlation in two reactively coupled classical vdP oscillators decreases monotonically as \(\lambda\) increases [50].
_Conclusion.--_ Our work goes beyond the well-studied paradigm of weak nonlinearity in quantum synchronization, and provides the first systematic study of quantum synchronization effects for strong nonlinearity. We introduced a new quantum oscillator model which captures intriguing effects induced by two strong nonlinearities. We showed that a strong Duffing nonlinearity leads to a linear enhancement of the synchronization bandwidth in driven oscillators. We also reported genuine quantum synchronization effects exclusive to strong nonlinearity which have not been observed previously: Increasing the vdP nonlinearity enhances the synchronization bandwidth, and revives synchronization between dissipatively-coupled oscillators in amplitude death. For reactively-coupled vdP oscillators on the other hand, we find that strong nonlinearity induces position correlations which are impossible in the weakly-nonlinear limit. Our model provides a new paradigm for studying other strongly nonlinear effects such as chaos [7; 22; 59; 60].
_Acknowledgements.--_YS and WJF would like to thank the support from NRF-CRP19-2017-01. CN was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2022R1F1A1063053). WKM, AC and LCK are grateful to the National Research Foundation, Singapore and the Ministry of Education, Singapore for financial support.
|
2303.05686
|
Generative AI for Rapid Diffusion MRI with Improved Image Quality,
Reliability and Generalizability
|
Diffusion MRI is a non-invasive, in-vivo biomedical imaging method for
mapping tissue microstructure. Applications include structural connectivity
imaging of the human brain and detecting microstructural neural changes.
However, acquiring high signal-to-noise ratio dMRI datasets with high angular
and spatial resolution requires prohibitively long scan times, limiting usage
in many important clinical settings, especially for children, the elderly, and
in acute neurological disorders that may require conscious sedation or general
anesthesia. We employ a Swin UNEt Transformers model, trained on augmented
Human Connectome Project data and conditioned on registered T1 scans, to
perform generalized denoising of dMRI. We also qualitatively demonstrate
super-resolution with artificially downsampled HCP data in normal adult
volunteers. Remarkably, Swin UNETR can be fine-tuned for an out-of-domain
dataset with a single example scan, as we demonstrate on dMRI of children with
neurodevelopmental disorders and of adults with acute evolving traumatic brain
injury, each cohort scanned on different models of scanners with different
imaging protocols at different sites. We exceed current state-of-the-art
denoising methods in accuracy and test-retest reliability of rapid diffusion
tensor imaging requiring only 90 seconds of scan time. Applied to tissue
microstructural modeling of dMRI, Swin UNETR denoising achieves dramatic
improvements over the state-of-the-art for test-retest reliability of
intracellular volume fraction and free water fraction measurements and can
remove heavy-tail noise, improving biophysical modeling fidelity. Swin UNETR
enables rapid diffusion MRI with unprecedented accuracy and reliability,
especially for probing biological tissues for scientific and clinical
applications. The code and model are publicly available at
https://github.com/ucsfncl/dmri-swin.
|
Amir Sadikov, Xinlei Pan, Hannah Choi, Lanya T. Cai, Pratik Mukherjee
|
2023-03-10T03:39:23Z
|
http://arxiv.org/abs/2303.05686v2
|
# Generalized Diffusion MRI Denoising and Super-Resolution using Swin Transformers
###### Abstract
Diffusion MRI is a non-invasive, in-vivo medical imaging method able to map tissue microstructure and structural connectivity of the human brain, as well as detect changes, such as brain development and injury, not visible by other clinical neuroimaging techniques. However, acquiring high signal-to-noise ratio (SNR) datasets with high angular and spatial sampling requires prohibitively long scan times, limiting usage in many important clinical settings, especially children, the elderly, and emergency patients with acute neurological disorders who might not be able to cooperate with the MRI scan without conscious sedation or general anesthesia. Here, we propose to use a Swin UNEt TRansformers (Swin UNETR) model, trained on augmented Human Connectome Project (HCP) data and conditioned on registered T1 scans, to perform generalized denoising and super-resolution of diffusion MRI invariant to acquisition parameters, patient populations, scanners, and sites. We qualitatively demonstrate super-resolution with artificially downsampled HCP data in normal adult volunteers. Our experiments on two other unrelated datasets, one of children with neurodevelopmental disorders and one of traumatic brain injury patients, show that our method demonstrates superior denoising despite wide data distribution shifts. Further improvement can be achieved via finetuning with just one additional subject. We apply our model to diffusion tensor (\(2^{\text{nd}}\) order spherical harmonic) and higher-order spherical harmonic coefficient estimation and show results superior to current state-of-the-art methods. Our method can be used out-of-the-box or minimally finetuned to denoise and super-resolve a wide variety of diffusion MRI datasets. The code and model are publicly available at [https://github.com/ucsfncl/dmri-swin](https://github.com/ucsfncl/dmri-swin).
Keywords:Diffusion MRI Transformer DTI Microstructure
## 1 Introduction
Diffusion MRI (dMRI) can provide valuable clinical information and assess tissue microstructure; however, its low signal-to-noise ratio (SNR) results in poor diagnostic and quantitative accuracy. To improve SNR, most dMRI protocols require low angular and spatial resolution and/or long scan times, which limit
usage in many important clinical settings. Therefore, there is great interest in having short patient scan times without compromising SNR or resolution.
Several supervised methods have been proposed to denoise brain dMRI scans; however, they are limited by their lack of generalizability. Often, they work on only one b-value and a prespecified set of diffusion-encoding directions, are built to predict only one set of microstructural parameters, or are trained and validated on the same dataset, such as the Human Connectome Project (HCP) [16][10]. Diffusion data can vary widely due to different acquisition parameters, scanners, and patient populations and therefore unsupervised or self-supervised denoising methods are often preferred [5].
Here we propose to use a Swin UNEt Transformers (Swin UNETR) model [7] to denoise dMRI data conditioned on registered T1 scans. Unlike other supervised methods, which utilize a small subset of the HCP dataset (typically 40 subjects) for training [16][10], we use the full set of HCP data, training with all b-values and diffusion-encoding directions, and apply simple data augmentations, such as random flipping, rotation, scaling, and k-space downsampling. Training on a large dataset (approximately 270,000 3D scans) allows the Swin UNETR model to learn a denoising function that performs well in many settings. We validate our approach on a held-out HCP dataset as well as two unrelated datasets and show significant performance benefits over current state-of-the-art unsupervised methods. Additionally, we show that finetuning, even on only one subject, improves performance and that our approach can also super-resolve dMRI data via qualitative assessment in an HCP subject.
## 2 Methods
### Data
In our experiments, we used data from 1155 unique subjects from three different datasets.
* **HCP**: 1065 subjects from the Human Connectome Project (HCP) Young Adult dataset [18] acquired using 90 diffusion-encoding directions at b-values of b=1000, 2000, 3000 s/mm\({}^{2}\) with 1.25 mm resolution. We randomly selected 985 subjects for training, 40 subjects for validation, and 40 subjects for testing.
* **TBI**: 45 adult mild traumatic brain injury (mTBI) patients acquired two decades ago using protocols identical to [20] on a GE scanner with 55 diffusion-encoding directions at b=1000 s/mm\({}^{2}\) with a nominal resolution of 1.8 mm (0.9 mm in xy after zero-interpolation in k-space). We randomly selected 5 subjects for finetuning and 40 subjects for testing.
* **SPIN**: 45 children ages 8-12 years with neurodevelopmental disorders acquired on a Siemens scanner with 64 and 96 diffusion-encoding directions at b-values of \(b=1000,2500\) s/mm\({}^{2}\) respectively (TE=72.20 ms, TR=2420 ms, flip angle=\(85^{\circ}\)) with 2.00 mm resolution. We randomly selected 5 subjects for finetuning and 40 subjects for testing.
DMRI data was skull-stripped with Synthstrip [8], corrected for eddy current-induced distortions and subject movements with Eddy [1], and aligned to structural T1 scans with Boundary-Based Registration (BBR) [6]. T1 scans were also skull-stripped using Synthstrip and segmented using SynthSeg [2]. Co-registered T1 and dMRI scans were sampled at 1.25 mm and used as inputs to the model.
### Evaluation Strategy
For evaluation, model predictions were resampled to dMRI resolution. To evaluate diffusion tensor estimation, the mean absolute error (MAE) between the ground truth and the model predictions for the principal eigenvector (V1), fractional anisotropy (FA), axial diffusivity (AD), radial diffusivity (RD), and mean diffusivity (MD) were found. For evaluating higher order spherical harmonics, the Jensen-Shannon distance (JSD) between the ground truth and model predictions, projected onto a uniformly distributed 362 direction hemisphere, was used [3]. The minimum numbers of diffusion gradients necessary for a unique fit were chosen: 6 for diffusion tensor, 15 for 4th order spherical harmonic, and 28 for 6th order spherical harmonic using the procedure described in [17]. We compare the performance of our model with block-matching and 4D filtering (BM4D) [14] for diffusion tensor fitting and Marchenko-Pastur Principal Component Analysis (MPPCA) [19] for 4th and 6th order spherical harmonic estimation, since we found BM4D and MPPCA to perform best in these respective fields. In each case, the ground truth was found by fitting the model using all acquired diffusion gradient directions. To evaluate super-resolution, dMRI data from an HCP subject was k-space downsampled by a factor of two and then linearly upsampled to emulate a low resolution acquisition.
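The metrics above can be assembled from standard tools; the sketch below (with placeholder array names, and assuming the V1 error is reported as the sign-invariant angle between eigenvectors in degrees) illustrates the JSD and V1 computations:

```python
# Sketch of the evaluation metrics (array names are placeholders): per-voxel Jensen-Shannon
# distance between ODF amplitudes sampled on the 362-direction hemisphere, and the
# sign-invariant angular error of the principal eigenvector V1 in degrees.
import numpy as np
from scipy.spatial.distance import jensenshannon

def odf_jsd(odf_true, odf_pred, eps=1e-12):
    """odf_*: (n_voxels, 362) nonnegative ODF amplitudes on the evaluation hemisphere."""
    p = np.clip(odf_true, eps, None)
    q = np.clip(odf_pred, eps, None)
    p /= p.sum(axis=1, keepdims=True)
    q /= q.sum(axis=1, keepdims=True)
    return np.array([jensenshannon(pi, qi) for pi, qi in zip(p, q)])

def v1_angle_deg(v1_true, v1_pred):
    """v1_*: (n_voxels, 3) unit eigenvectors; antipodally symmetric angle in degrees."""
    c = np.clip(np.abs(np.sum(v1_true * v1_pred, axis=1)), 0.0, 1.0)
    return np.degrees(np.arccos(c))
```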
### Training and Implementation
The Swin UNETR model [7] is implemented using PyTorch and MONAI and its architecture is briefly illustrated in Fig. 1. The model was trained on 1 NVIDIA V100 GPU using mean-squared error loss between the model output and ground truth. To obtain ground truth dMRI data for training, a 6th order spherical harmonic was fit for each shell and projected onto the acquired directions. We chose AdamW [12] as an optimizer with a learning rate of 1e-4 and perform gradient clipping with a maximal norm of 1.0 and stochastic weight averaging [9]. To reduce memory usage, we use model checkpointing and train with 16-bit precision. Due to compute constraints, only five epochs of training were completed. During training, we first downsample the dMRI scan with a probability of 0.5 in frequency space, using a procedure similar to [13], to an anisotropic resolution between 1.25 and 3 mm and linearly upsample back to 1.25 mm resolution. Random patches of 128 x 128 x 128 are cropped from the scan and randomly flipped with a probability of 0.5 along all axes and randomly rotated by 0, 90, 180, or 270 degrees along all axes with equal probability. The input dMRI patch is normalized to have zero mean and unit variance and the input T1 patch is normalized to have zero mean and standard deviation uniformly scaled between
0.25 and 4.0. For inference, we use a sliding window approach with an overlap of 0.875. Finetuning was achieved via additional training on external data from one held-out subject out of five with a learning rate of 1e-6 for three epochs and the average result was reported.
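A condensed sketch of the training step and sliding-window inference described above is given below; it uses dummy tensors in place of the real dataloader and omits mixed precision and stochastic weight averaging for brevity:

```python
# Condensed sketch of training and inference (MONAI Swin UNETR). Data here are dummy
# tensors, and the SwinUNETR constructor arguments may differ slightly between MONAI versions.
import torch
from monai.networks.nets import SwinUNETR
from monai.inferers import sliding_window_inference

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SwinUNETR(img_size=(128, 128, 128), in_channels=2, out_channels=1).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One optimization step on a 128^3 patch (channel 0: T1, channel 1: one dMRI volume).
inp = torch.randn(1, 2, 128, 128, 128, device=device)
target = torch.randn(1, 1, 128, 128, 128, device=device)
optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(model(inp), target)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()

# Sliding-window inference over a full volume with 0.875 overlap.
with torch.no_grad():
    volume = torch.randn(1, 2, 160, 192, 160, device=device)
    denoised = sliding_window_inference(volume, roi_size=(128, 128, 128),
                                        sw_batch_size=1, predictor=model, overlap=0.875)
```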
## 3 Results and Discussion
### Comparison with Other Methods
For diffusion tensor fitting, Table 1 shows that the Swin model achieves lower MAE than BM4D in all metrics in white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) in both the HCP and TBI test datasets without finetuning. For the SPIN dataset, the Swin model outperforms BM4D, except for FA estimation in GM and CSF; however, applying no denoising outperforms both BM4D and the Swin model for MD estimation in CSF possibly because of noise due to motion left uncorrected after Eddy. Qualitative comparison in Fig. 2 is consistent with these quantitative results and shows that the Swin model is able to capture more of the finer features in the WM and GM microstructure without the excessive smoothing of BM4D.
For 4\({}^{\text{th}}\) and 6\({}^{\text{th}}\) order spherical harmonic fitting, Table 2 shows that the Swin model achieves the lowest JSD in GM and CSF across all datasets. The Swin model performs worse than MPPCA in WM for 6\({}^{\text{th}}\) order spherical harmonic fitting, even in HCP data, possibly because the model is underfitted (due to compute constraints only five training epochs could be completed) and also because the Swin model denoises one direction at a time, whereas MPPCA is able to collectively denoise all directions.
Our results were achieved with simple data augmentations, although using more extensive simulations, such as those in [15], could lead to greater generalizability. In addition, by processing each dMRI volume separately, we are able to
Figure 1: A general overview of the Swin-UNETR architecture. The input is a concatenated 3D T1 and dMRI scan, which is encoded by a Swin transformer [11] at multiple resolutions and fed into a residual convolutional neural net (CNN) decoder to reconstruct the ground truth dMRI scan. For further details, see [7].
Figure 2: Visual comparison between the ground truth (GT), no denoising (RAW), BM4D, Swin without finetuning (SWIN) for denoising on test data from HCP (top), TBI (middle), and SPIN (bottom). MAE images show the mean absolute error.
\begin{table}
\begin{tabular}{|l|l|l|r|r|r|r|} \hline Dataset & Tissue & Metric & RAW & BM4D & SWIN & SWIN-F1 \\ \hline \multirow{8}{*}{HCP} & & AD & 0.13 & 0.0848 & **0.081** & \multirow{8}{*}{**0.0496**} \\ & & FA & 0.0935 & 0.0563 & **0.0496** & \\ & WM & MD & 0.0657 & 0.0483 & **0.0451** & \\ & & RD & 0.0773 & 0.0541 & **0.0487** & \\ & & V1 & 19.7 & 13.9 & **12.6** & \\ \cline{2-6} & & AD & 0.133 & 0.0931 & **0.0793** & \\ & & FA & 0.111 & 0.0588 & **0.0484** & \\ & GM & MD & 0.0723 & 0.0683 & **0.0596** & \\ & & RD & 0.0849 & 0.0719 & **0.0625** & \\ & & V1 & 33.6 & 28.1 & **26.7** & \\ \cline{2-6} & & AD & 0.324 & 0.209 & **0.182** & \\ & & FA & 0.166 & 0.104 & **0.0709** & \\ & CSF & MD & 0.143 & 0.144 & **0.138** & \\ & & RD & 0.169 & 0.151 & **0.138** & \\ & & V1 & 43.9 & 41.9 & **40.8** & \\ \hline \multirow{8}{*}{TBI} & & AD & 0.211 & 0.206 & 0.131 & **0.13** \\ & & FA & 0.145 & 0.14 & 0.0922 & **0.0808** \\ & WM & MD & 0.0941 & 0.0907 & 0.073 & **0.0702** \\ & & RD & 0.116 & 0.111 & 0.085 & **0.076** \\ & & V1 & 26.6 & 26.1 & 22.1 & **21.0** \\ \cline{2-6} & & AD & 0.218 & 0.217 & 0.149 & **0.127** \\ & & FA & 0.181 & 0.175 & 0.123 & **0.0949** \\ & GM & MD & 0.0908 & 0.0893 & **0.0809** & 0.0817 \\ & & RD & 0.124 & 0.12 & 0.102 & **0.0948** \\ & & V1 & 38.0 & 37.6 & 36.2 & **35.0** \\ \cline{2-6} & & AD & 0.412 & 0.478 & 0.237 & **0.228** \\ & & FA & 0.203 & 0.204 & 0.126 & **0.117** \\ & CSF & MD & 0.15 & 0.164 & **0.144** & 0.146 \\ & & RD & 0.213 & 0.215 & 0.173 & **0.165** \\ & & V1 & 42.9 & 42.9 & 42.2 & **41.7** \\ \hline \multirow{8}{*}{SPIN} & & AD & 0.112 & 0.0999 & 0.0825 & **0.0767** \\ & & FA & 0.0829 & 0.0747 & 0.057 & **0.0518** \\ \cline{1-1} & WM & MD & 0.0553 & 0.0438 & 0.044 & **0.0402** \\ \cline{1-1} & & RD & 0.0657 & 0.0536 & 0.0504 & **0.0457** \\ \cline{1-1} & & V1 & 18.4 & 15.0 & 14.5 & **13.8** \\ \cline{1-1} \cline{2-6} & & AD & 0.106 & 0.0823 & 0.0859 & **0.0694** \\ \cline{1-1} & & FA & 0.0994 & **0.0508** & 0.0636 & 0.0517 \\ \cline{1-1} & GM & MD & 0.0527 & 0.0652 & 0.056 & **0.0471** \\ \cline{1-1} & & RD & 0.066 & 0.0703 & 0.0609 & **0.0529** \\ \cline{1-1} & & V1 & 30.9 & 28.8 & 28.3 & **26.9** \\ \cline{1-1} \cline{2-6} & & AD & 0.226 & 0.187 & 0.184 & **0.151** \\ \cline{1-1} & & FA & 0.15 & **0.0685** & 0.0753 & 0.0757 \\ \cline{1-1} & CSF & MD & **0.106** & 0.168 & 0.142 & 0.115 \\ \cline{1-1} & RD & 0.139 & 0.175 & 0.145 & **0.126** \\ \cline{1-1} & & V1 & 42.7 & 42.0 & 42.1 & **41.3** \\ \hline \end{tabular}
\end{table}
Table 1: MAE of FA, MD, RD, AD, and V1 estimation using six-direction HCP, SPIN, and TBI test data in white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) via no denoising (RAW), BM4D, Swin with no finetuning (SWIN), and Swin with finetuning on one subject (SWIN-F1). Best results are **bolded**.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline Dataset & Order & Shell & Tissue & RAW & MPPCA & SWIN & SWIN-F1 \\ \hline & & & WM & 0.0279 & 0.0254 & **0.0251** & \\ & & B1000 & GM & 0.0232 & 0.0198 & **0.0184** & \\ & & & CSF & 0.0461 & 0.0365 & **0.0335** & \\ \cline{2-6} & & & WM & 0.0423 & 0.0386 & **0.0383** & \\ & 4th & B2000 & GM & 0.0365 & 0.0307 & **0.0286** & \\ & & & CSF & 0.0566 & 0.0432 & **0.0392** & \\ \cline{2-6} & & & WM & 0.0522 & 0.0467 & **0.0458** & \\ & & B3000 & GM & 0.0486 & 0.0391 & **0.0364** & \\ HCP & & & CSF & 0.06 & 0.0441 & **0.0407** & \\ \cline{2-6} & & & WM & 0.0206 & **0.0193** & 0.0205 & \\ & & B1000 & GM & 0.0179 & 0.0163 & **0.0157** & \\ & & & CSF & 0.0357 & 0.0311 & **0.0288** & \\ \cline{2-6} & & & WM & 0.0309 & **0.029** & 0.0313 & \\ & 6th & B2000 & GM & 0.0284 & 0.0256 & **0.0252** & \\ & & & CSF & 0.0444 & 0.0378 & **0.0352** & \\ \cline{2-6} & & & WM & 0.0378 & **0.035** & 0.0376 & \\ & & B3000 & GM & 0.0377 & 0.0332 & **0.0327** & \\ & & & CSF & 0.0469 & 0.0391 & **0.0373** & \\ \hline & & & WM & 0.0395 & 0.0386 & **0.0354** & 0.0363 \\ & 4th & B1000 & GM & 0.0361 & 0.0351 & 0.0319 & **0.0305** \\ TBI & & & CSF & 0.0599 & 0.057 & 0.0504 & **0.0488** \\ \cline{2-6} & & & WM & 0.0269 & 0.0268 & **0.0262** & 0.0287 \\ & 6th & B1000 & GM & 0.026 & 0.0259 & **0.0244** & 0.0247 \\ & & & CSF & 0.0436 & 0.0429 & 0.0401 & **0.0397** \\ \hline & & & WM & 0.0253 & 0.0246 & 0.0242 & **0.0234** \\ & & B1000 & GM & 0.021 & 0.0201 & 0.0191 & **0.0184** \\ & 4th & & CSF & 0.0369 & 0.0341 & 0.0306 & **0.0301** \\ \cline{2-6} & & & WM & 0.0481 & 0.0462 & 0.0449 & **0.0436** \\ & & B2500 & GM & 0.0392 & 0.0352 & 0.0334 & **0.0328** \\ SPIN & & & CSF & 0.0414 & 0.0352 & 0.0314 & **0.031** \\ \cline{2-6} & & & WM & **0.0179** & 0.018 & 0.0187 & 0.0183 \\ & & B1000 & GM & 0.0154 & 0.0154 & 0.0149 & **0.0147** \\ & & CSF & 0.0273 & 0.0267 & 0.0247 & **0.0239** \\ \cline{2-6} & & & WM & 0.0344 & **0.0337** & 0.0343 & 0.0338 \\ & & B2500 & GM & 0.0301 & 0.0283 & **0.0273** & 0.0277 \\ & & & CSF & 0.0326 & 0.0295 & 0.0268 & **0.0265** \\ \hline \end{tabular}
\end{table}
Table 2: JSD between ground truth and model estimates using 15-direction (4th order) and 28-direction (6th order) HCP, SPIN, and TBI test data in white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) via no denoising (RAW), MPPCA, Swin with no finetuning (SWIN), and Swin with finetuning on one subject (SWIN-F1). Best results are **bolded**.
consider the full brain volume but do not utilize the correlation across volumes, such as the transformer patch-based approach in [10]. Due to GPU memory constraints, a trade-off exists between utilizing spatial and angular correlations. Finally, Swin transformers can efficiently capture long-range dependencies, but require more extensive pretraining when compared to CNNs, which is integral to the success of our approach[4].
### Finetuning
Finetuning on even one subject consistently led to improved denoising performance for diffusion tensor estimation, but the results were mixed in higher order spherical harmonic fitting. This could be because finetuning reduces model bias, but increases variance which can accumulate error over the 15 or 28 directions used to compute the spherical harmonics compared to the six directions used in diffusion tensor estimation. In addition, we did not experience significant performance benefits by finetuning on more than one subject. To the best of our knowledge, finetuning has never been reported in the context of dMRI denoising and merits further investigation.
### Super-Resolution
Although our model was not trained for super-resolution, it can be used to resample a dMRI dataset to 1.25 mm resolution. Qualitative comparison shows that Swin is able to capture more of the fine microstructure than BM4D in the posterior periventricular WM and avoids excessive blurring.
Figure 3: Visual comparison between the ground truth (GT), no post-processing (RAW), BM4D, and Swin without finetuning (SWIN) for super-resolution in the posterior periventricular WM of an HCP subject. Data was k-space downsampled by a factor of two and then linearly upsampled back to 1.25 mm.
|
2305.16212
|
Minimally Comparing Relational Abstract Domains
|
Value-based static analysis techniques express computed program invariants as
logical formula over program variables. Researchers and practitioners use these
invariants to aid in software engineering and verification tasks. When
selecting abstract domains, practitioners weigh the cost of a domain against
its expressiveness. However, an abstract domain's expressiveness tends to be
stated in absolute terms; either mathematically via the sub-polyhedra the
domain is capable of describing, empirically using a set of known properties to
verify, or empirically via logical entailment using the entire invariant of the
domain at each program point. Due to carry-over effects, however, the last
technique can be problematic because it tends to provide a simplistic and
imprecise comparison.
We address limitations of comparing, in general, abstract domains via logical
entailment in this work. We provide a fixed-point algorithm for including the
minimally necessary variables from each domain into the compared formula.
Furthermore, we empirically evaluate our algorithm, comparing different
techniques of widening over the Zones domain and comparing Zones to an
incomparable Relational Predicates domain. Our empirical evaluation of our
technique shows an improved granularity of comparison. It lowered the number of
more precise invariants when comparing analysis techniques, thus, limiting the
prevalent carry-over effects. Moreover, it removed undecidable invariants and
lowered the number of incomparable invariants when comparing two incomparable
relational abstract domains.
|
Kenny Ballou, Elena Sherman
|
2023-05-25T16:18:42Z
|
http://arxiv.org/abs/2305.16212v1
|
# Minimally Comparing Relational Abstract Domains
Kenny Ballou (0000-0002-6032-474X) and Elena Sherman (0000-0003-4522-9725)
Boise State University
Footnote 1: email: [email protected], [email protected]
###### Abstract
Value-based static analysis techniques express computed program invariants as logical formulas over program variables. Researchers and practitioners use these invariants to aid in software engineering and verification tasks. When selecting abstract domains, practitioners weigh the cost of a domain against its expressiveness. However, an abstract domain's expressiveness tends to be stated in absolute terms; either mathematically via the sub-polyhedra the domain is capable of describing, empirically using a set of known properties to verify, or empirically via logical entailment using the entire invariant of the domain at each program point. Due to _carry-over_ effects, however, the last technique can be problematic because it tends to provide a simplistic and imprecise comparison.
We address limitations of comparing, in general, abstract domains via logical entailment in this work. We provide a fixed-point algorithm for including the minimally necessary variables from each domain into the compared formula. Furthermore, we empirically evaluate our algorithm, comparing different techniques of widening over the Zones domain and comparing Zones to an incomparable Relational Predicates domain. Our empirical evaluation of our technique shows an improved granularity of comparison. It lowered the number of more precise invariants when comparing analysis techniques, thus, limiting the prevalent _carry-over_ effects. Moreover, it removed undecidable invariants and lowered the number of incomparable invariants when comparing two incomparable relational abstract domains.
Keywords:Static Analysis Abstract Domain Comparison Data-Flow Analysis Abstract Interpretation
## 1 Introduction
Various value-based static analysis techniques express computed program invariants as a logical formula over program variables. For example, abstract interpretation [6] uses abstract domains such as Zones [15] and Octagons [17] to describe an invariant as a set of linear integer inequalities in a restricted format. Other techniques such as symbolic execution [11] and predicate analysis combined with a symbolic component [20] do the same, only using a general linear
integer arithmetic format. These invariants are then used for program verification [4, 24], program optimization [1, 10], and for software development tasks.
Static analysis developers rarely use a computed invariant by itself, but rather compare invariants to determine the effects of new algorithms or abstract domain choices on invariant precision. For example, to evaluate the effect of tuning analyzer parameters, static analysis researchers compare invariant values \(\mathcal{I}\) and \(\overset{\sim}{\mathcal{I}}\) from the original and tuned analyzer runs, respectively. If an invariant becomes more precise, we conclude that the new technique or domain choice results in a more precise analysis. For relational domains one can use queries to an SMT solver, such as Z3 [18], to determine which invariant is more precise by checking their implication relations.
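A small z3py sketch of such an implication check, with two illustrative Zone invariants, is:

```python
# Sketch (z3py): deciding which of two relational invariants is more precise by checking
# logical entailment in both directions; the invariants are small illustrative Zone formulas.
from z3 import Ints, And, Implies, Not, Solver, unsat

w, x, y, z = Ints("w x y z")
inv_a = And(z <= x, y <= x)            # invariant from the original analyzer run
inv_b = And(z <= x, y <= x, w <= y)    # invariant from the tuned analyzer run

def entails(p, q):
    """True iff p => q is valid."""
    s = Solver()
    s.add(Not(Implies(p, q)))
    return s.check() == unsat

b_stronger = entails(inv_b, inv_a) and not entails(inv_a, inv_b)
print(b_stronger)   # True: inv_b is strictly more precise than inv_a
```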
However, to objectively measure such effects in a computed invariant after statement \(s\), \(\mathcal{I}_{s}\), we need to compare only the part of \(\mathcal{I}_{s}\) affected by the transfer function of \(s\), \(\tau_{s}\). This way, if \(\overset{\sim}{\mathcal{I}}\) has already been more precise than \(\mathcal{I}\) before \(s\) and \(\tau_{s}\) has not changed the relevant facts, then the comparison should disregard the _carry-over_ precision improvement in \(\overset{\sim}{\mathcal{I}}_{s}\).
The comparison of two relational invariants \(\mathcal{I}\) and \(\overset{\sim}{\mathcal{I}}\) involves two steps: (1) identifying a changed component of each invariant at a given statement and (2) performing minimal comparison between the changed components of \(\mathcal{I}\) and \(\overset{\sim}{\mathcal{I}}\). In our previous work [3] we addressed step (1) for the Zones domain where using data-flow analysis (DFA) information, we developed efficient algorithms that find a minimally changed set of inequalities in a Zone invariant.
In this work we target step (2), assuming that an abstract domain has some means to perform step (1) using either elementary or sophisticated algorithms. Thus, the contributions of this paper include: **(a)** development and analysis of a minimal comparison algorithm for relational abstract domains and **(b)** investigating its effect on comparisons between different widening techniques for Zones domain as well as comparison between Zones and incomparable Predicate domains with a relational component.
The rest of the paper is organized as follows. In Section 2, we provide the background, context, and motivation for our work. In Section 3, we describe our fixed-point algorithm. In Section 4, we explain our experimental setup and evaluation, and in Section 5, we examine the results of our experiments. We connect this work with previous research in Section 6. Finally, we conclude and discuss future work in Section 7.
## 2 Background and Motivation
We refer to an invariant and the corresponding abstract domain as relational if it is expressed as a conjunction of formulas over program variables, _e.g.,_ a set of linear integer inequalities. We first explain the concept of the minimal change for an invariant and then explain challenges of comparing two relational domains, and sketch how our proposed approach works.
### Minimal changes in relational abstract domains
Consider the relational invariants computed by a data-flow analysis framework using the Zones abstract domain as shown in Figure 1(a). Let us assume the analyzed code has four program variables: \(w\), \(x\), \(y\), and \(z\). Here, the incoming flow to the conditional statement has the following invariant: \(\mathcal{I}_{in}=z\leq x\wedge w\rightarrow\top\wedge y\rightarrow\top\). That is, variables \(w\) and \(y\) are unbounded while \(x\) and \(z\) are bounded by a \(\leq\) relation. The transfer function of the true branch adds the \(y\leq x\) inequality, thus making \(y\) bounded. This results in the invariant \(\mathcal{I}_{t}=z\leq x\wedge y\leq x\wedge w\rightarrow\top\). Similarly, the invariant for the false branch becomes \(\mathcal{I}_{f}=z\leq x\wedge x\leq y-1\wedge w\rightarrow\top\).
Even though \(\mathcal{I}_{f}\) and \(\mathcal{I}_{t}\) are new invariants, they inherit two unchanged inequalities \(z\leq x\) and \(w\rightarrow\top\) from \(\mathcal{I}_{in}\). This suggests that part of the previously computed invariant has not been changed by the transfer function of the conditional statement. Thus, if for some application \(\mathcal{I}_{in}\) is more precise because of \(z\leq x\) and remains more precise in \(\mathcal{I}_{t}\) because of the same inequality, such _carry-over_ precision results should be disregarded.
Previous work on determining minimal changes in a relational abstract domain [3] addresses this problem by identifying the part of the invariant affected by the statement's transfer function. For example, the minimal change algorithm for Zones [3] can compute the minimal sub-formula given the potentially changed variables \(x\) and \(y\). Specifically, the algorithm identifies only the \(y\leq x\) part of \(\mathcal{I}_{t}\) as having changed from \(\mathcal{I}_{in}\). Likewise for \(\mathcal{I}_{f}\), the algorithm identifies two inequalities, \(z\leq x\) and \(x\leq y-1\), as the changed portion of the invariant.
The minimal change algorithm can be sophisticated and accurately compute the changed part of the invariants, or it can be over-approximating and, in the worst case, return the entire invariant. In our previous work we developed an efficient collection of such algorithms for the Zones abstract domain. In this work, we assume that a relational domain has an invariant change method \(\Delta\) implemented, which takes as input an invariant and a set of updated variables and returns a portion of \(\mathcal{I}\), _e.g.,_ in our example \(\Delta(\mathcal{I}_{t},\{x,y\})=y\leq x\). The green shaded regions of an invariant in Figures 1(a) and 1(b) indicate the changed part of the state.
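To make this interface concrete, the following is a minimal Python sketch of a deliberately simple, over-approximating \(\Delta\). Representing invariants as lists of inequalities tagged with their variables is our illustration, not the tool's data structure, and a semantic algorithm such as Minimal Neighbors can return a strictly smaller sub-formula.

```
# A deliberately simple, over-approximating invariant-change function Delta:
# it keeps every inequality that mentions at least one updated variable.
def delta(invariant, updated_vars):
    """invariant: list of (inequality_text, set_of_variables) pairs."""
    updated = set(updated_vars)
    return [(ineq, vs) for ineq, vs in invariant if vs & updated]

# Example mirroring I_t from Figure 1(a).
I_t = [("z <= x", {"z", "x"}), ("y <= x", {"y", "x"}), ("w -> top", {"w"})]
print(delta(I_t, {"x", "y"}))   # keeps 'z <= x' too; a semantic Delta keeps only 'y <= x'
```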
### Comparing relational domains
Now consider the invariants in Figure 1(b), computed for the same code fragment but using an improved algorithm. This algorithm is able to compute additional information for \(\overset{\sim}{\mathcal{I}}_{in}=z\leq x\wedge w\leq y\), which is more precise than \(\mathcal{I}_{in}\) since \(\overset{\sim}{\mathcal{I}}_{in}\) constrains the values of \(w\) and \(y\). The checkmark symbol next to an invariant in Figure 1(b) indicates increased precision compared to the corresponding invariant \(\mathcal{I}\) in Figure 1(a).
When we compare using the entirety of the invariants instead of only their changed portions, the result for the false branch would be that \(\overset{\sim}{\mathcal{I}}_{f}\) is more precise than \(\mathcal{I}_{f}\). Thus, simply applying \(\Delta\) to both invariants can filter out such erroneous, _carry-over_ improvements, which we mark with a separate cross symbol in Figure 1(b).
In the case of the true branch, the sets of variables in the changed portions of the two invariants are the same. However, this is not always the case, as we can see on the false branches. There, \(\Delta(\mathcal{I}_{f},\{x,y\})=z\leq x\wedge x\leq y-1\), but \(\Delta(\widetilde{\mathcal{I}}_{f},\{x,y\})=z\leq x\wedge x\leq y-1\wedge w\leq y\) has an extra variable \(w\). In order to make a sound comparison, we need to conjoin \(w\to\top\) with the result of \(\Delta(\mathcal{I}_{f},\{x,y\})\). The challenge here is to identify the smallest necessary additions to the changed portions of the invariants to perform a sound comparison.
In the next section we present our proposed approach, which addresses this problem with a fixed-point algorithm that, in each iteration, discovers a minimal set of inequalities (modulo \(\Delta\)) in one invariant that is adequate for comparison with the changed part of the other invariant.
## 3 Approach
In this section, we explain the theoretical basis for our approach to minimally comparing relational invariants via logical entailment. We start by defining the problem and then present the algorithm that solves it. Finally, we analyze the proposed algorithm.
### Problem definition
We define the problem in a context of a DFA framework, where the framework provides a set of updated variables \(dv\) that resulted in a new invariant \(\mathcal{I}\). An abstract domain for \(\mathcal{I}\) has a function \(\Delta\) implemented, which returns a portion of \(\mathcal{I}\) that is changed by \(dv\). In the worst case \(\Delta(\mathcal{I},dv)=\mathcal{I}\), _i.e.,_ the entire invariant has been affected. In the best case \(\Delta(\mathcal{I},dv)=\emptyset\), _i.e.,_ nothing has changed. We also introduce a function \(V\) that returns the set of variables used in \(\mathcal{I}\). For example, we use it to define the following property: \(V(\Delta(\mathcal{I},dv))\subseteq V(\mathcal{I})\).
Let \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) be two relational invariants, and let \(dv_{1}\) and \(dv_{2}\) be their corresponding sets of updated variables. Then the problem of finding a minimal
Figure 1: Original Static Analysis (a) and Improved Static Analysis (b)
changed part of two invariants reduces to finding a common minimal updated set of variables \(S\) such that
\[S=V(\Delta(\mathcal{I}_{1},S))=V(\Delta(\mathcal{I}_{2},S)) \tag{1}\]
A minimal solution for such recursive definitions is commonly obtained by a fixed point iteration algorithm with initial values \(S_{0}\) set to the smallest set, which in our case is \(S_{0}=dv_{1}\cup dv_{2}\).
### Finding a common changed variable set
Algorithm 1 shows the pseudocode of the optimized fixed point computation algorithm to solve Equation 1. The algorithm takes as arguments, the updated variables for each domain, \(dv_{1}\) and \(dv_{2}\), two invariants to compare, \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). It requires basic conditions for its correctness: both invariants are described over the same set of variables and \(\Delta\) does not introduce any new variables. The output is the solution for Equation 1.
```
Require: \(V(\mathcal{I}_{1})=V(\mathcal{I}_{2})\ \wedge\ V(\Delta(\mathcal{I}_{1},dv_{1}))\subseteq V(\mathcal{I}_{1})\ \wedge\ V(\Delta(\mathcal{I}_{2},dv_{2}))\subseteq V(\mathcal{I}_{2})\)
Ensure: \(S_{1}=S_{2}\subseteq V(\mathcal{I}_{1})\)
 1: function CommonVarSet(\(dv_{1}\), \(dv_{2}\), \(\mathcal{I}_{1}\), \(\mathcal{I}_{2}\))
 2:   \(S_{1}\leftarrow V(\Delta(\mathcal{I}_{1},dv_{1}))\)
 3:   \(S_{2}\leftarrow V(\Delta(\mathcal{I}_{2},dv_{2}))\)
 4:   while \(S_{1}\neq S_{2}\) do
 5:     if \(S_{1}\supset S_{2}\) then
 6:       \(dv_{2}\leftarrow S_{1}\setminus S_{2}\)
 7:       \(S_{2}\leftarrow S_{2}\cup V(\Delta(\mathcal{I}_{2},dv_{2}))\)
 8:     else if \(S_{2}\supset S_{1}\) then
 9:       \(dv_{1}\leftarrow S_{2}\setminus S_{1}\)
10:       \(S_{1}\leftarrow S_{1}\cup V(\Delta(\mathcal{I}_{1},dv_{1}))\)
11:     else
12:       \(dv_{1}\leftarrow S_{2}\setminus S_{1}\)
13:       \(dv_{2}\leftarrow S_{1}\setminus S_{2}\)
14:       \(S_{1}\leftarrow S_{1}\cup V(\Delta(\mathcal{I}_{1},dv_{1}))\)
15:       \(S_{2}\leftarrow S_{2}\cup V(\Delta(\mathcal{I}_{2},dv_{2}))\)
16:     end if
17:   end while
18:   return \(S_{1}\)
19: end function
```
**Algorithm 1** Common minimal changed variable set
The algorithm first computes the initial changed variable sets \(S_{1}\) and \(S_{2}\) for each invariant (lines 2 and 3), as affected by the updated variables \(dv_{1}\) and \(dv_{2}\), respectively.
At line 4, the algorithm compares the two sets and if they are not equal, _i.e.,_ the fixed point has not been reached, the algorithm enters the main iteration loop. Inside the body of the loop, the algorithm first tests whether one set of variables is a proper superset of the other, lines 5 and 8.
If one of the sets is a proper superset, it only augments the smaller set as done on lines 6-7 and lines 9-10. For example, if \(S_{1}\supset S_{2}\), \(S_{2}\) is augmented by the variables which are not already in \(S_{2}\).
Afterwards, a new updated variable set is computed from the set difference of \(S_{1}\) and \(S_{2}\), line 6. Then, the changed variable set is computed as the union between the existing set \(S_{2}\) and the newly computed minimum variables, line 7. Similar computations are done for the case when \(S_{2}\supset S_{1}\), lines 9-10.
Finally, when the changed variable sets are incomparable (line 11), both changed variable sets are recomputed in a similar fashion, as described in lines 12-15. Upon the loop's termination, _i.e.,_ when \(S_{1}=S_{2}\), the algorithm returns one of the (now equal) changed variable sets, line 18.
To demonstrate how Algorithm 1 compares two invariants, consider the true-branch invariants from our example in Figure 1. There, \(\mathcal{I}_{1}=z\leq x\wedge y\leq x\wedge w\rightarrow\top\) and \(\mathcal{I}_{2}=z\leq x\wedge y\leq x\wedge w\leq y\). The updated variables are \(dv_{1}=\{x,y\}\) and \(dv_{2}=\{x,y\}\).
The algorithm computes \(\{x,y\}\) for \(S_{1}\) and \(\{w,x,y\}\) for \(S_{2}\). Since \(S_{2}\) is a proper superset of \(S_{1}\), we recompute \(S_{1}\), lines 9 and 10. Specifically, \(dv_{1}\) becomes \(\{w\}\). \(S_{1}\) is then recomputed: \(S_{1}=S_{1}\cup V(\Delta(\mathcal{I}_{1},dv_{1}))\), which results in \(S_{1}=\{x,y\}\cup\{w\}=\{w,x,y\}\). At this point, \(S_{1}=S_{2}\), terminating the loop, and the algorithm returns the set \(S_{1}=\{w,x,y\}\). Then, an SMT solver can be used to compare the logical relations of \(\Delta(\mathcal{I}_{1},S_{1})\) and \(\Delta(\mathcal{I}_{2},S_{2})\), for example, using implication relations. Or, in the case of Zones, one can use its custom equivalence engine [15].
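The fixed-point computation itself is small. A minimal Python sketch of Algorithm 1, parameterized over each domain's \(\Delta\) and variable-projection functions (the function and parameter names below are ours, not the tool's API), is:

```
# Minimal sketch of Algorithm 1. `delta1`/`delta2` are the per-domain
# invariant-change functions; `var_of` projects a sub-invariant onto the
# set of variables it mentions.
def common_var_set(dv1, dv2, inv1, inv2, delta1, delta2, var_of):
    s1 = var_of(delta1(inv1, dv1))             # lines 2-3: initial changed sets
    s2 = var_of(delta2(inv2, dv2))
    while s1 != s2:                            # line 4: fixed point not yet reached
        if s1 > s2:                            # line 5: S1 is a proper superset
            s2 |= var_of(delta2(inv2, s1 - s2))
        elif s2 > s1:                          # line 8: S2 is a proper superset
            s1 |= var_of(delta1(inv1, s2 - s1))
        else:                                  # line 11: incomparable sets
            d1, d2 = s2 - s1, s1 - s2
            s1 |= var_of(delta1(inv1, d1))
            s2 |= var_of(delta2(inv2, d2))
    return s1                                  # line 18
```

On the worked example above, the call terminates after a single extension of \(S_{1}\) and returns \(\{w,x,y\}\).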
As mentioned, under worst-case conditions Algorithm 1 returns the entire set of variables; in other words, it devolves into a full invariant comparison. This can happen if the variables within the invariant are tightly coupled with all other variables. Another situation that can cause a worst-case comparison is when an abstract domain has an ineffective \(\Delta\) function, _e.g.,_ one that only performs a basic dependency analysis such as slicing [3, 23].
Below we present termination and complexity analysis for Algorithm 1. We start with a proof sketch of termination.
Proof: We begin with the following assumptions: the variable projections of both invariants are equal, _i.e.,_ \(V(\mathcal{I}_{1})=V(\mathcal{I}_{2})\); and the invariant minimization function of each domain yields a sub-formula whose variables form a subset of the variable projection, that is, \(V(\Delta(\mathcal{I}_{1},dv_{1}))\subseteq V(\mathcal{I}_{1})\), and similarly for \(\mathcal{I}_{2}\).
At each iteration, at least one variable is added to \(S_{1}\) or \(S_{2}\). Since both sets are bounded above by the finite set \(V(\mathcal{I}_{1})=V(\mathcal{I}_{2})\), \(S_{1}\) and \(S_{2}\) reach a fixed point within a finite number of iterations. Thus, Algorithm 1 terminates.
The complexity of Algorithm 1 is \(O(N\cdot(C_{\Delta_{1}}+C_{\Delta_{2}}))\), where \(N\) is the number of variables in the program under analysis and \(C_{\Delta_{i}}\) is the complexity of the invariant minimization function of the corresponding domain. In the worst case, at each iteration the sets \(S_{1}\) and \(S_{2}\) are augmented by only a single variable from the \(\Delta\) computations. Overall, the time complexity of Algorithm 1 depends on the number of variables and on the complexity of the \(\Delta\) functions of the abstract domains.
## 4 Methodology
To determine the effectiveness of the proposed algorithm, we use it to compare invariants produced by different techniques and by different abstract domains on the same program. For each subject program, each analysis outputs invariants after each statement. Over the corpus of programs, we compute 6564 total invariants. We store the invariants as logical formulas in SMT-LIB format. We run analyses on two relational domains, Zones and Relational Predicates [20], and compare the results of a standard Zones analysis to advanced Zones analyses, and Zones analysis to Relational Predicates analysis.
The goal of the empirical evaluation is to answer the following research questions:
**RQ1**: Does our technique affect the invariant comparison between different analysis techniques for the same abstract domain?
**RQ2**: Does our technique affect the invariant comparison between two different relational domains?
**RQ3**: How effective and efficient is Algorithm 1 on real-world invariant comparisons?
We consider different analysis techniques over the Zones domain to measure the precision gained by various advanced techniques. Specifically, we consider the number of iterations performed before widening is applied, as well as the widening method employed, which ensures termination of the Zones analysis.
We then compare the most precise Zones technique to Relational Predicates [20], two incomparable domains. Our previous work [3] has shown the benefit of minimally comparing incomparable domains to demonstrate realized precision. However, in this case, we extend the invariants of the Predicates domain with a symbolic relational component.
For Relational Predicates, the minimization function is a selection based solely on notions of variable reachability, _e.g.,_ variable dependence, but it might not be minimal because of the generality of the inequalities used in the relational part. We also computed minimization over Relational Predicates using a purely connected-component concept, similar to the technique by Visser _et al._ [23]; however, the reachability-based selection performed marginally better.
We use the Minimal Neighbors (MN) minimization function from our previous work [3] for Zones which provides the smallest invariant partition given a set of changed variables. This minimization algorithm considers the semantics of the formulas under the changed variables. Using these semantics, it selects the minimal dependent substate from the logical formula representing the invariant.
#### 4.2.2 Subject programs
Our subject programs consist of 192 Java methods from previous research on the Predicates domain [20]. These methods were extracted from a wide range of real-world, open-source projects and have a high number of integer operations. The subject programs range from 1 to 1993 Jimple instructions, a three address intermediate representation. The average branch count for the methods is 6 (\(\sigma=11\)), with one method containing a maximal 56 branches. A plurality of our subject methods, 81, contain at least one loop, with one method containing 12 loops.
#### 4.2.3 Experimental platform
We execute each of the analyses on a cluster of CentOS 7 GNU/Linux compute nodes, running Linux version 3.10.0-1160.76.1, each equipped with an Intel Xeon Gold 6252 and 192 GB of system memory. We use an existing DFA static analysis tool [2, 20] implemented in the Java programming language. The analysis framework uses Soot [19, 22] version 4.2.1. Similarly, we use Z3 [18], version 4.8.17 with Java bindings to compare SMT expressions for the abstract domain states. Finally, we use Java version 11 to execute the analyses, providing the following JVM options: -Xms4g, -XX:+UseG1GC, -XX:+UseStringDeduplication, and -XX:+UseNUMA.
#### 4.2.4 Implementation
We modified an existing DFA framework such that the Zones analysis outputs its entire invariant at each program point. Each invariant is further reduced using the redundant inequality reduction technique proposed by Larsen _et al._ [12]. For all domains, unbounded variables are set to top, \(\top\), and excluded from the output expression, which further simplifies the formulas. To decide entailment between invariants, we use Z3 [18], version 4.8.17, with the linear integer arithmetic (LIA) theory for Zones comparisons and the non-linear integer arithmetic (NIA) theory for Zones to Relational Predicates comparisons.
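To make the comparison step concrete, the sketch below (helper names are ours; the actual tool is implemented in Java) classifies a pair of invariant formulas into the precision categories reported in the next section using two implication queries. The choice of LIA versus NIA theory is left to the solver configuration.

```
# Illustrative sketch of the pairwise precision classification: two
# implication queries decide whether one invariant is equivalent to,
# strictly more precise than, incomparable with, or unrelated (UNKNOWN)
# to the other.
from z3 import Solver, Implies, Not, sat, unsat

def holds(formula):
    s = Solver()            # in practice a LIA- or NIA-specific solver is used
    s.add(Not(formula))
    result = s.check()
    if result == unsat:
        return True
    if result == sat:
        return False
    return None             # Z3 returned UNKNOWN

def compare(i1, i2):
    fwd, bwd = holds(Implies(i1, i2)), holds(Implies(i2, i1))
    if fwd is None or bwd is None:
        return "?"          # undecided
    if fwd and bwd:
        return "equivalent"
    if fwd:
        return "i1 more precise"
    if bwd:
        return "i2 more precise"
    return "incomparable"
```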
#### 4.2.5 Evaluations
In total, we perform _three_ different invariant comparisons, summarized in the following list:
\(Z\preceq Z_{k=5}\)--Zones using standard widening after two iterations and Zones widening after five iterations.
\(Z\preceq Z_{ths}\)--Zones with standard widening and Zones with threshold widening.
\(Z_{ths}\prec\succ P\)--Zones with threshold widening and Relational Predicates.
In all instances of Zones sans \(Z_{k=5}\), widening happens after _two_ iterations over widening nodes. We use a generic set of thresholds for Zones based on powers of 10: \(\{0,1,10,100,1000\}\). Using a tuned set of thresholds for each program would yield better results.
We use a generic disjoint domain for the basis of the Relational Predicates, based on Collberg _et al.'s_[5] study of numerical constants in Java Programs. Specifically, the predicate domain used in this study consists of the following set of disjoint elements: \(\{(-\infty,-5]\), \((-5,-2]\), \(-1\), \(0\), \(1\), \([2,5)\), \([5,+\infty)\}\).
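For concreteness, a small sketch of the abstraction map onto this disjoint partition follows; the string labels are ours and purely for readability.

```
# Illustrative abstraction map for the disjoint predicate elements listed above.
def predicate_element(v: int) -> str:
    if v <= -5:
        return "(-inf, -5]"
    if v <= -2:
        return "(-5, -2]"
    if v == -1:
        return "-1"
    if v == 0:
        return "0"
    if v == 1:
        return "1"
    if v < 5:
        return "[2, 5)"
    return "[5, +inf)"
```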
## 5 Evaluation Results and Discussions
In this section, we present the results of our experiments and discuss their implications to the research questions posed in the previous section.
### Technique Comparisons
To answer **RQ1**, we consider the comparisons of different techniques using the Zones abstract domain. Since different techniques using the same domain create a partial ordering of their respective precision, we need only consider equivalent and less precise outcomes. To ensure correctness of our implementation, we ensured that there were no other precision outcomes.
Table 1 shows the breakdown of invariants computed by standard widening after _two_ iterations and standard widening after _five_ iterations. Comparing invariants using the entire invariant, deferred widening produces _nine_ more precise invariants. However, when using our minimized comparison technique, the slim advantage reduces to _two_ invariants.
Table 2 shows the breakdown of invariants between standard widening after two iterations and threshold widening after two iterations. Here, we see the largest gain in precision. Using the entire invariant to compare, threshold widening computes 45 more precise invariants. Again, however, the precision gain is cut by more than 50% when using minimal comparisons. The choice of thresholds could improve the precision, but for best results, the set of thresholds needs to be tailored specifically to each program.
As we can see for both \(Z\preceq Z_{k=5}\) and \(Z\preceq Z_{ths}\), our comparison technique lowers the number of invariants reported as more precise, thereby eliminating the _carry-over_ precision instances. In doing so, our technique presents a more granular picture of the precision gain that advanced techniques actually realize.
### Zones versus Relational Predicates
Table 3 shows the precision breakdown of Zones with threshold widening compared to Relational Predicates. Given that Zones and Predicates are inherently incomparable domains, we must consider all precision comparison categories. With the full invariant comparisons, Relational Predicates are more precise than Zones in about 50% of the invariants. The next largest category of invariants is
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Comparison** & \(Z\equiv Z_{k=5}\) & \(Z\prec Z_{k=5}\) \\ \hline Full & 6555 & 9 \\ \hline Minimal & 6562 & 2 \\ \hline \end{tabular}
\end{table}
Table 1: Zones \(k=2\) widening compared to Zones \(k=5\) widening
incomparable, \(\prec\succ\), which accounts for 30% of invariants. Here, Zones and Predicates are complementary, neither more nor less precise than the other. Zones and Predicates are equivalent in 19% of all invariants, and Zones is more precise in about 3% of all invariants. Finally, using the full invariant, for 21 program points the relation between the two invariants could not be established because Z3 returned \(\mathtt{UNKNOWN}\).
Our technique eliminates the undecidable results. Moreover, it dramatically reduces the number of incomparable invariants: only 4% of invariants remain incomparable. Similar to _carry-over_ precision, incomparable invariants arise when one domain computes a more precise invariant for one variable, and the other domain computes a more precise invariant for another, unrelated variable at a later program point. Considering the entire invariant then results in incomparable precision. However, by comparing only the relevant, changed variables, our technique largely disentangles the imprecision in the comparison.
The equivalent-invariant category is the next most affected: with minimal comparisons, more than half, 56%, of the computed invariants between Zones and Relational Predicates become equivalent. Relational Predicates lose 13% of their more precise invariants, and Zones gains about 1% of invariants that it computes more precisely than Relational Predicates.
By comparing only the necessary variables at each program point, our technique allows general, relational abstract domains to be compared without undecidable results. The reduction in incomparable invariants between two otherwise difficult to compare domains provides a clearer precision performance picture between the two domains.
### Iterations and variable reductions
To determine whether Algorithm 1 is efficient, **RQ3**, we use the iteration depth count to determine how many times the algorithm iterates before it reaches a stable set of variables for comparison. Over all instances of Zones comparisons, the iteration count was either _zero_ or _one_, with no outliers. That is, either Zones computed the same set of changed variables for both techniques, making the dependent sets immediately equivalent, or the set of dependent variables was captured with only a single extension, mostly to the Zones analysis using standard widening, \(Z\).
Comparing Zones to Relational Predicates, we see similar results. The average number of iterations is between _zero_ and _one_ iteration. However, we have several outliers at two iterations. Instrumentation found 12 instances of extreme outliers, 11 for _three_ iterations, and one instance of _four_ iterations. Furthermore, more variety exists in the branches for Zones versus Relational Predicates. Unlike
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline
**Comparison** & \(Z_{ths}\equiv P\) & \(Z_{ths}\prec P\) & \(Z_{ths}\succ P\) & \(Z_{ths}\prec\succ P\) & \(Z_{ths}\,?\,P\) \\ \hline Full & 1227 & 3173 & 196 & 1947 & 21 \\ \hline Minimal & 3675 & 2353 & 248 & 288 & 0 \\ \hline \end{tabular}
\end{table}
Table 3: Zones with Threshold Widening compared to Relational Predicates
comparing techniques between Zones invariants, comparing Zones to a more general, relational formula required more augmentation by each domain.
To evaluate the effectiveness of Algorithm 1, **RQ3**, we consider the proportion of variables necessary for comparison. We instrumented our algorithm to compute the proportion of variables it returns after reaching a stable set, compared to the variable projection of the incoming invariants. We plot the frequency of the proportions of variables returned by Algorithm 1 in Figure 2. In Figure 2(a), we plot variable reductions across all comparisons of Zones: standard widening after two iterations versus standard widening after five iterations, and standard widening versus threshold widening. Figure 2(b) shows the variable reductions for Zones with threshold widening versus Relational Predicates. A single bin in Figure 2, for example 0.1, represents the frequency with which Algorithm 1 needed only 10% of the variables occurring in the original invariants to adequately compare the two.
As shown in Figure 2, the large frequencies in the 0 bin show that our technique was able to remove all variables from comparison, eliminating the need to compare the two invariants at all. Comparing advanced techniques utilizing Zones shows more than 6500 such instances, and about 1850 for Zones versus Relational Predicates.
Our technique reduces the number of variables necessary for comparison by 50% or more in 90% of comparisons between techniques of Zones, and at least by 25% in 93% of comparisons. For Zones and Relational Predicates, our technique
Figure 2: Frequency plot of proportion of variables selected by Algorithm 1 which are necessary for comparing two invariants. (a) represents the frequencies of proportions when comparing techniques using Zones. (b) represents the frequencies of proportions when comparing Zones to Relational Predicates.
reduces the necessary, relevant variables by 50% or more in 80% of comparisons and by 12% in 93% of comparisons. That is, in the majority of comparisons, our technique reduces the number of variables necessary for comparing two relational domains or techniques. The quality of a domain's \(\Delta\) function affects the performance and effectiveness of Algorithm 1. We see only a few iterations in the algorithm when comparing analysis techniques utilizing Zones since we used a minimal \(\Delta\) function for Zones. However, we see an increase in iterations when comparing with a non-optimal \(\Delta\), as in Zones and Relational Predicates. That is, the quality of \(\Delta\) can have an outsized impact on the practicality of our technique. However, given the preponderance of variable reductions and low iteration counts over the corpus of methods and comparisons, we conclude that the proposed algorithm is practical and effective.
### Discussion
The evaluation results show our technique enables more precise comparison between relational abstract domain invariants. When comparing two techniques using the same domain, our minimal comparison strategy precisely captures the techniques' relative precision, disentangling accumulated, _carry-over_ effects from realized precision gains.
While we do not have a proven state minimization function for Relational Predicates, our technique still shows improvement when comparing incomparable relational abstract domains. Specifically, our comparison removes the unknown results and dramatically reduces the incomparable invariants, which makes it easier to make software engineering decisions.
The average iteration depth for Algorithm 1 shows the algorithm's efficiency and practicality. Even when using an imprecise minimization function for Relational Predicates, our technique only needed a maximum of _four_ iterations to arrive at a stable set of common variables for comparison. Moreover, in the majority of comparisons, Algorithm 1 returned a significantly smaller proportion of variables than the entirety of the variables in each invariant, demonstrating the efficacy of the technique.
## 6 Related Work
Our previous work [3] found a set of algorithms for efficiently computing \(\Delta\) for the Zones domain. Using the algorithms, it compared Zones to other non-relational domains, which in the context of data-flow analysis (DFA) and this work, have trivial \(\Delta\) functions. We extend the previous work by considering comparisons between relational abstract domains, abstracting the \(\Delta\) function for each domain.
Comparing the precision gain of new analysis techniques, or comparing the precision of newly proposed abstract domains, is a common problem in the literature. Previous work in this area generally compares precision in one of two ways.
One, the comparison is based on program properties known _a priori_ for benchmark programs [7, 8, 9, 13, 14]. Two, the comparison is based on logical entailment of computed invariants [9, 16, 20].
To the best of our knowledge, this work represents one of the first studies improving the granularity of precision characteristics for the latter categorization of relational abstract precision comparisons. We believe this work would benefit existing work which compares relational abstract domains or new analysis techniques using relational abstract domains.
## 7 Conclusion and Future Work
In this study, we defined the problem of minimally comparing relational invariants, proposed an algorithm which solves the problem, and experimentally evaluated whether the algorithm indeed solves the problem using real-world programs. Using our algorithm, we can remove the precision _carry-over_ effects advanced analysis techniques introduce, providing clear precision benefits for advanced techniques. For example, the benefits of deferred widening and threshold widening are smaller than anticipated. Moreover, our technique enables the comparison of relational abstract domains which are otherwise difficult to compare directly. Specifically, we see our technique removed the UNKNOWN invariants and dramatically reduced the incomparable invariants when comparing Zones to Relational Predicates. Finally, Algorithm 1's average iteration depth and variable reduction demonstrate the algorithm's overall practicality and usefulness when comparing analysis techniques and relational abstract domains.
#### 7.0.1 Future Work
Developing a minimization function \(\Delta\) for Relational Predicates would enable a comprehensive empirical study of the relative precision of weakly-relational numerical abstract domains compared to Predicates. Furthermore, we believe the proposed comparison technique can benefit adaptive analysis techniques that selectively choose the appropriate abstract domain during analysis. Octagons [17] are not included in this study because a minimization strategy for Octagons has not been developed. However, this is an interesting avenue to pursue, and we intend to use the technique of this work to compare Zones to Octagons, which will empirically quantify the precision gain of Octagons over Zones.
|
2307.00209
|
Image Matters: A New Dataset and Empirical Study for Multimodal
Hyperbole Detection
|
Hyperbole, or exaggeration, is a common linguistic phenomenon. The detection
of hyperbole is an important part of understanding human expression. There have
been several studies on hyperbole detection, but most of which focus on text
modality only. However, with the development of social media, people can create
hyperbolic expressions with various modalities, including text, images, videos,
etc. In this paper, we focus on multimodal hyperbole detection. We create a
multimodal detection dataset from Weibo (a Chinese social media) and carry out
some studies on it. We treat the text and image from a piece of weibo as two
modalities and explore the role of text and image for hyperbole detection.
Different pre-trained multimodal encoders are also evaluated on this downstream
task to show their performance. Besides, since this dataset is constructed from
five different topics, we also evaluate the cross-domain performance of
different models. These studies can serve as a benchmark and point out the
direction of further study on multimodal hyperbole detection.
|
Huixuan Zhang, Xiaojun Wan
|
2023-07-01T03:23:56Z
|
http://arxiv.org/abs/2307.00209v3
|
# Image Matters: A New Dataset and Empirical Study for Multimodal Hyperbole Detection
###### Abstract
Hyperbole, or exaggeration, is a common linguistic phenomenon. The detection of hyperbole is an important part of understanding human expression. There have been several studies on hyperbole detection, but most of which focus on text modality only. However, with the development of social media, people can create hyperbolic expressions with various modalities, including text, images, videos, etc. In this paper, we focus on multimodal hyperbole detection. We create a multimodal detection dataset1 from _Weibo_ (a Chinese social media) and carry out some studies on it. We treat the text and image from a piece of weibo as two modalities and explore the role of text and image for hyperbole detection. Different pre-trained multimodal encoders are also evaluated on this downstream task to show their performance. Besides, since this dataset is constructed from five different keywords, we also evaluate the cross-domain performance of different models. These studies can serve as a benchmark and point out the direction of further study on multimodal hyperbole detection.
Footnote 1: The dataset will be released to the community.
## 1 Introduction
As defined by Merriam Webster, exaggeration means "an act or instance of exaggerating something : overstatement of the truth" 2 and hyperbole has the same meaning. Though hyperbolic expressions state something beyond fact or truth, they are not considered as lies. On the contrary, hyperbole is a common way of expressing one's strong feeling or opinion. As the second most used rhetorical device (Kreuz et al., 1996), the detection of hyperbole bears great importance of understanding human language and expression.
Footnote 2: [https://www.merriam-webster.com/dictionary/](https://www.merriam-webster.com/dictionary/) exaggeration
Traditional studies focus on the text modality only. Their proposed datasets and methods consider mainly single short sentences (Troiano et al., 2018; Kong et al., 2020; Biddle et al., 2021). However, the expression of hyperbole is not limited to text. For example, we can hardly know whether the expression _"Winter comes early."_ is hyperbolic or not unless we can see whether it is indeed so cold (Figure 1(a)). Images, as widely seen on social media, serve an important role in expressing exaggerated meanings. They can serve as facts to reveal hyperbole contained in texts (Figure 1(a)) as well as express hyperbole themselves (Figure 1(b)) or together with texts (Figure 1(c)). Meanwhile, some images just serve as background information and do not help the expression of hyperbole (Figure 1(d)), which adds additional difficulty to this task.
_Weibo3_ is one of the most popular social media platforms in China and in the world. With abundant publicly available posts containing texts and images, it serves as a perfect resource for real-life multimodal hyperbole detection. We just need to automatically crawl enough posts from it and annotate them as hyperbolic or not.
Footnote 3: [https://weibo.com](https://weibo.com)
Our contributions in this paper can be summarized as follows: (1) We propose the first Chinese multimodal hyperbole detection dataset, which can be used for further study. (2) We analyze the dataset, obtain some statistical observations, and summarize how the two modalities (text and image) together express hyperbole. (3) We apply several neural methods to hyperbole detection, including unimodal and multimodal encoders, and reach the following conclusions: (3.1) The image modality is more useful than misleading. With gating and attention mechanisms, we can achieve relatively good performance. We also point out that incorporating common sense is a promising direction for achieving better results. (3.2) Compared with unimodal encoders and fusion methods, pre-trained multimodal methods perform badly on this downstream task, and we analyze the reasons. (3.3) The dataset is built upon five different and disjoint keywords with generally disjoint contents. When applied across domains, the methods do not perform well. The systematic bias between posts with different keywords is the bottleneck hindering generalization ability, especially for methods requiring deep interactions between modalities.
## 2 Related Works
### Studies on Hyperbole and Hyperbolic Detection
Not surprisingly, earlier studies on hyperbole focus more on its linguistic features. Mora (2009) made a detailed analysis of the semantic features of hyperbole and first introduced a taxonomy which generally categorizes all exaggerations along two dimensions: quantity (overstating the quantitative or objective property of something, such as time, mass, etc.) and quality (overstating the qualitative or subjective property of something, such as feeling, emotion, etc.). These two types can be subcategorized into smaller ones, but that is beyond our discussion.
Some multimodal analysis work has also been conducted. Ferre (2014) manually analyzed how hyperbole is expressed with text, video and audio modalities. He focused on gestures, facial expressions, tones, etc. His work directly pointed out that other modalities can help reveal hyperbole contained in texts or words.
For the automatic detection of hyperbole, Troiano et al. (2018) created a hyperbole detection dataset HYPO, which contains 709 hyperbolic sentences and their corresponding non-hyperbolic ones. They applied traditional machine learning methods and proposed some hand-crafted features (named QQ) to address this task.
Biddle et al. (2021) followed Troiano's work. They introduced triplet sampling and used a triplet loss to force their model to differentiate hyperbolic and non-hyperbolic sentences. They used BERT as the backbone text encoder instead of the pretrained word vectors used in Troiano's work.
As for Chinese hyperbole detection, Kong et al. (2020) followed Troiano's idea and created the first Chinese hyperbole detection dataset HYPO-cn. They deeply analyzed statistical features of the sentences in the dataset. They also manually analyzed strategies human used to express hyperbole. They compared several traditional and deep-learning methods on this dataset. They reached the conclusion that deep-learning methods outperform traditional methods on this task.
However, to the best of our knowledge, there hasn't been any study on multimodal hyperbole detection.
### Other Multimodal Tasks
Multimodal sarcasm detection has been a hot topic in recent years. Cai et al. (2019) first proposed a dataset built on Twitter, which is widely used by subsequent researchers. They also proposed a hierarchical fusion model to address this task. Xu et al. (2020) followed this work and proposed D&R Net. Liang et al. (2021) introduced in-modal and cross-modal graphs (InCrossMGs) to leverage ironic expressions from both in- and cross-modal perspectives by extracting the dependency relations of fine-grained phrases or patches within a distinct modality and across multiple modalities. Liu et al. (2022) leverage graph attention networks and image captions as knowledge enhancement.
On the other side, Castro et al. (2019) focused on video and text modalities and proposed a sarcasm detection dataset named MUStARD. MUStARD is compiled from popular TV shows and consists of audiovisual utterances annotated with sarcasm labels along with its corresponding context. Wu et al. (2021) continued on this work and proposed IWAN to model the word-level incongrurity between modalities via a scoring mechanism.
Multimodal sentiment analysis is also a widely-discussed topic. Wang et al. (2020) proposed an end-to-end fusion method based on transformers. Yang et al. (2021) introduced multi-channel graph neural networks. Xue et al. (2022), Du et al. (2022) focused on attention-based methods such as gated attention. Inspirations can be gained from these methods.
Multimodal hyperbole detection bears both similarities and differences with the tasks mentioned above. It requires the understanding and interaction of both modalities. However, the relation between the two modalities can be rather complicated. This will be discussed in Section 4.
## 3 Dataset Creation
### Hyperbole Definition
To avoid ambiguity, we need to give a clear definition to hyperbole. Based on the definition given
by Merriam Webster 4 and Troiano et al. (2018), we here define hyperbole as "an expression (words, images, etc) that goes significantly beyond fact or common sense but not taken as lies". A post is considered hyperbolic if and only if it contains hyperbolic expression.
Footnote 4: [https://www.merriam-webster.com/dictionary/exaggeration](https://www.merriam-webster.com/dictionary/exaggeration)
For example, the post "_I'm dying of heat_" is hyperbolic because normally most people do not easily die of heat. But if there is a post "_5 workers died of extreme heat_" with real pictures, the post is certainly not considered hyperbolic. However, if a post says "_100 billion workers died of extreme heat_", it is then considered neither hyperbole nor non-hyperbole. Instead, it is probably a lie. As for images, Figure 1(b) provides a perfect example. Since people do not melt however hot it is, a melting face representing the unbearably hot weather is typical hyperbole.
### Data Collection and Preprocessing
_Weibo_ is a popular Chinese social media platform. We automatically crawl about 10000 posts from _Weibo_ using five keywords: traffic jam, scenery, weather, mood, and clothing (the keywords are Chinese words; we give their English glosses here). Annotators then label each collected post as hyperbolic or non-hyperbolic according to the definition above.
Since there are many more non-hyperbolic posts than hyperbolic ones, we randomly sample from the non-hyperbolic posts and create the final dataset in which the ratio of hyperbolic to non-hyperbolic posts is approximately 50%:50%. Some basic statistics of the final dataset are listed in Table 1.
## 4 Dataset Analysis
In this section, we analyze our constructed dataset.
First, we consider the length of the text. In Chinese, the length is simply the number of characters in the text. We assume that hyperbolic posts and non-hyperbolic posts may have similar text length. But the result turns out that hyperbolic posts have significantly longer texts than non-hyperbolic ones. What's more, compared with non-hyperbolic ones, hyperbolic posts of different keywords also have more varied text length. The average text length of different keywords are shown in Table 2. As can be seen, non-hyperbolic posts have texts with relatively similar lengths among different keywords, while hyperbolic posts have quite varied text length among keywords.
We guess that there are two main reasons for this phenomenon. First, hyperbolic expressions are usually closely related to strong emotions, and people tend to use more words to fully express their strong feelings. The other reason is that some hyperbolic sentences with certain keywords bear some kind of similarity. Taking Figure 2 as an example, there are several hyperbolic posts with the keyword _scenery_ that follow this pattern to express their love towards a star. This pattern does not appear for other keywords or in most non-hyperbolic posts, which partly explains why hyperbolic posts contain longer texts and why hyperbolic posts have varied text lengths among different keywords.
Then we focus on the lexical features of hyperbolic posts. There is no doubt that some words appear more often in hyperbolic posts than in non-hyperbolic ones. To quantitatively examine this phenomenon, we calculate the smoothed chi-score [22] of each word. For a certain word \(d\), the chi-score \(K(d)\) is
\[K(d)=\frac{(A+B+C+D)(AD-BC)}{(A+C)(B+D)(A+B)(C+D)}\]
, where \(A\) is the number of hyperbolic posts that contain \(d\), \(B\) is the number of non-hyperbolic posts that contain \(d\), \(C\) is the number of hyperbolic posts that do not contain \(d\) and \(D\) is the number of non-hyperbolic posts that do not contain \(d\). To smooth it, we use
\[\tilde{A}=A+1,\tilde{B}=B+1,\tilde{C}=C+1,\tilde{D}=D+1\]
. So we have
\[\tilde{K}(d)=\frac{(\tilde{A}+\tilde{B}+\tilde{C}+\tilde{D})(\tilde{A}\tilde{ D}-\tilde{B}\tilde{C})}{(\tilde{A}+\tilde{C})(\tilde{B}+\tilde{D})(\tilde{A}+ \tilde{B})(\tilde{C}+\tilde{D})}\]
. The larger \(\tilde{K}(d)\) is, the higher the correlation between \(d\) and hyperbole. Since there are no spaces between words in Chinese, we use _Spacy_5 to segment words and remove stop words. We calculate \(\tilde{K}(d)\) for all remaining words and select the top 10 words. As can be seen in Table 3, one can easily think of hyperbolic expressions containing some of these words, such as _death_. Other words correspond to certain keywords; for example, _cold_ is closely related to the keyword _weather_. However, when observing these words alone, most of them do not contain
\begin{table}
\begin{tabular}{c c c} \hline
**Keyword** & **Total Count** & **Hyperbole** \\ \hline traffic jam & 496 & 252 \\ scenery & 402 & 204 \\ weather & 396 & 194 \\ mood & 451 & 207 \\ clothing & 442 & 198 \\ total & 2160 & 1055 \\ \hline \end{tabular}
\end{table}
Table 1: Basic statistics of our dataset
Figure 2: An example of a hyperbolic post expressing love towards a star with the keyword _scenery_. (There are many longer ones; for conciseness we choose a relatively short one here.)
hyperbolic meaning themselves, which means that one cannot determine whether a post is hyperbolic through some kind of "keywords", thus further demonstrating the difficulty of this task. It is also interesting that though we do not consider emotional exaggeration (such as the expression "Ah Ah Ah it's so cold!") as real hyperbole (according to our definition, this expression says nothing beyond fact), the word _AhAh_ is still closely associated with hyperbole. This matches our assumption above that hyperbole is related to strong emotions.
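For concreteness, a minimal sketch of the smoothed chi-score computation follows. Each post is assumed to be given as a pair of its word set and its label; word segmentation and stop-word removal (done with spaCy in our pipeline) are assumed to have happened beforehand.

```
# Minimal sketch of the smoothed chi-score used in this section.
def smoothed_chi(posts, word):
    A = B = C = D = 0
    for words, is_hyperbole in posts:
        contains = word in words
        if contains and is_hyperbole:
            A += 1          # hyperbolic posts containing the word
        elif contains:
            B += 1          # non-hyperbolic posts containing the word
        elif is_hyperbole:
            C += 1          # hyperbolic posts without the word
        else:
            D += 1          # non-hyperbolic posts without the word
    A, B, C, D = A + 1, B + 1, C + 1, D + 1   # add-one smoothing
    n = A + B + C + D
    return n * (A * D - B * C) / ((A + C) * (B + D) * (A + B) * (C + D))
```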
Finally, we consider the roles that images play in expressing hyperbole. We ask the annotators to check all the images alone, trying to determine whether the post can be classified as hyperbole from the image modality only. We find that 15.8% of the images are themselves hyperbolic (like the one in Figure 1(b)). We also need to note that some images are considered hyperbolic because they contain hyperbolic texts (Figure 3). A rough count shows that only about 4% of all hyperbolic posts belong to this category, which is acceptable. This result reveals that images mostly serve as an assistant when expressing hyperbole rather than being hyperbolic themselves. As discussed above, images can serve as ground truth to reveal hyperbole (Figure 1(a)) or give us higher confidence in determining a post as hyperbole (Figure 1(c)).
In summary, image and text can express hyperbole in the following ways: (1) one modality expresses hyperbole itself and the other modality serves as background information (_case-I_); (2) two modalities together express hyperbolic meanings, one expressing hyperbole and the other serving as ground truth to reveal this hyperbole (_case-II_); (3) two modalities together express hyperbolic meaning, one expressing hyperbole and the other offering extra confidence (_case-III_).
## 5 Empirical Studies on Hyperbole Detection
We treat the task of multimodal hyperbole detection as a binary classification task with hyperbole as positive sample. In this section, we aim to investigate and answer the following three questions related to this task:
* **RQ1:** Is the image modality more useful than misleading in multimodal hyperbole detection?
* **RQ2:** Are pre-trained multimodal encoders effective on this task?
* **RQ3:** Can typical methods achieve good performance on cross-domain circumstances?
In the next three subsections, we will conduct empirical studies to discuss the above three questions, respectively.
### Experiments on Different Modalities (RQ1)
As discussed in Section 4, images mostly serve as assistants rather than express hyperbole themselves. So it is natural to ask: is the image modality more useful than misleading in hyperbole detection? If the answer is yes, which way of utilizing images is more suitable for the task?
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & **traffic jam** & **scenery** & **weather** & **mood** & **clothing** \\ \hline
\end{table}
Table 2: Average text lengths of different keywords.
Figure 3: An example of an image with hyperbolic words in it. (The words in the image say "I hate the whole world before sleep.")
\begin{table}
\begin{tabular}{c c} \hline \hline word \(d\) & \(\tilde{K}(d)\) \\ \hline death & 0.00412 \\ cold & 0.00372 \\ bought & 0.00372 \\ leg & 0.00359 \\ cause & 0.00359 \\ fit & 0.00342 \\ AhAh & 0.00342 \\ plus & 0.00342 \\ hair & 0.00342 \\ term begins & 0.00342 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Top 10 words with highest \(\tilde{K}(d)\)
We use two unimodal encoders, BERT (Devlin et al., 2019) and ResNet50 (He et al., 2016), to encode the text and image modalities individually. BERT is one of the most popular pre-trained text encoders. Specifically, we use a BERT model pre-trained on Chinese (Cui et al., 2020) for this task. We fine-tune both encoders to achieve better performance.
After unimodal encoding, we apply different fusion methods on this task. Denote \(f_{text}\) as the feature vector of text (outputted by BERT) and \(f_{image}\) as the feature vector of image (outputted by ResNet), the most trivial idea is simply concatenating \(f_{text}\) and \(f_{image}\) (We name this method _concat_). A slight improvement on this idea is implementing a gating method (See details in appendix A. We name this method _gate_).
Attention is a widely used method that can automatically find out which parts of a feature representation are more important. We utilize both text-level and token-level features outputted by BERT as well as both image-level and patch-level features outputted by ResNet. We implement a multi-head attention mechanism to extract cross-modality fine-grained features. In more detail, we use the fine-grained features (i.e., token-level or patch-level features) of one modality as \(Q\) and the fine-grained features of the other modality as \(K\) and \(V\). Thus we obtain two cross-modality fine-grained feature representations. We use another two multi-head attention modules to further aggregate these fine-grained feature representations respectively (using the image- or text-level feature representation as \(Q\) and the corresponding cross-modality fine-grained feature representation as \(K\) and \(V\)) and obtain the encoding vector of either modality. We then concatenate these two encodings and implement a gating mechanism (we name this method _attn-gate_). The main architecture of this method is shown in Figure 4.
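A rough PyTorch sketch of the _attn-gate_ fusion head is given below. The hidden sizes, the exact gating form, and the assumption that both encoders' features are already projected to a shared dimension are simplifications of ours, not the exact configuration used in our experiments.

```
# Rough sketch of the attn-gate fusion head described above.
import torch
import torch.nn as nn

class AttnGateFusion(nn.Module):
    def __init__(self, dim=768, heads=8, num_classes=2):
        super().__init__()
        self.cross_text = nn.MultiheadAttention(dim, heads, batch_first=True)   # text tokens query image patches
        self.cross_image = nn.MultiheadAttention(dim, heads, batch_first=True)  # image patches query text tokens
        self.pool_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pool_image = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, 2)
        self.classifier = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, num_classes))

    def forward(self, text_global, text_tokens, image_global, image_patches):
        # cross-modal fine-grained features: Q from one modality, K/V from the other
        t_cross, _ = self.cross_text(text_tokens, image_patches, image_patches)
        i_cross, _ = self.cross_image(image_patches, text_tokens, text_tokens)
        # aggregate each cross-modal sequence using the global feature as the query
        t_vec, _ = self.pool_text(text_global.unsqueeze(1), t_cross, t_cross)
        i_vec, _ = self.pool_image(image_global.unsqueeze(1), i_cross, i_cross)
        t_vec, i_vec = t_vec.squeeze(1), i_vec.squeeze(1)
        # gate the two modality encodings before classification
        g = torch.sigmoid(self.gate(torch.cat([t_vec, i_vec], dim=-1)))
        fused = torch.cat([g[:, :1] * t_vec, g[:, 1:] * i_vec], dim=-1)
        return self.classifier(fused)
```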
After fusing with either of the above methods, we feed the feature vector into several fully connected layers to perform the final classification. We use cross-entropy as the loss function and Adam (Kingma and Ba, 2014) as the optimizer. We implement learning rate decay to accelerate convergence. The hyper-parameters are listed in Appendix B. All these models share the same hyper-parameters.
We report 10-fold cross-validation results of the models. We randomly divide all the posts into 10 folds of equal size and carefully ensure that each fold contains approximately equal numbers of hyperbolic and non-hyperbolic posts. For each experiment, we choose one fold for testing, randomly select one fold for validation, and use the remaining eight folds for training. The 10-fold average results are shown in Table 4.
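A minimal sketch of this split protocol is shown below (stratified by label, one fold for testing, one randomly chosen fold for validation, the remaining eight for training); the exact fold assignment used in our experiments may differ.

```
# Sketch of the stratified 10-fold protocol described above.
import random
from sklearn.model_selection import StratifiedKFold

def ten_fold_splits(posts, labels, seed=0):
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    folds = [test_idx for _, test_idx in skf.split(posts, labels)]
    rng = random.Random(seed)
    for test_i, test_idx in enumerate(folds):
        val_i = rng.choice([j for j in range(10) if j != test_i])
        train_idx = [k for j, fold in enumerate(folds)
                     if j not in (test_i, val_i) for k in fold]
        yield train_idx, folds[val_i], test_idx
```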
Some conclusions can be drawn from the results. First, with a single modality, the text-only model (BERT) performs much better than the image-only model (ResNet). This meets our analysis that most images are not hyperbolic themselves. This result also clearly shows that people tend to express hyperbole more with texts alone rather than with images alone. Therefore, although we have already discussed the importance of the image modality for humans to determine hyperbole, it is still questionable whether the image modality is more useful than misleading in this setting.
Our next result shows that the image modality is indeed more useful than misleading. Though the text modality alone can achieve a fine result, introducing the image modality, even when simply giving it the same significance as the text modality (_concat_), still clearly increases the performance of the model. Compared with _BERT_, this improvement is statistically significant. This result shows that even though in most circumstances the text alone is enough while the image alone is misleading, introducing the image modality together with the text modality is useful for determining hyperbole.
The gating mechanism is also important for increasing model performance. This insight follows from our analysis of the dataset: in most cases some features merely serve as background information, and these features should receive less focus so that they do not interfere with other useful features. As can be seen from the results, even the trivial gating mechanism slightly improves the result compared with simply
\begin{table}
\begin{tabular}{c|c c}
\hline \hline
**model** & **F1-score** & **Accuracy** \\ \hline
BERT & \(0.702_{\pm 0.174}\) & \(0.713_{\pm 0.027}\) \\
ResNet & \(0.553_{\pm 0.078}\) & \(0.495_{\pm 0.028}\) \\
Concat & \(0.734_{\pm 0.199}\) & \(0.743_{\pm 0.017}\) \\
Gate & \(0.742_{\pm 0.193}\) & \(0.749_{\pm 0.029}\) \\
**Attn-Gate** & **0.750\({}_{\pm 0.178}\)** & **0.758\({}_{\pm 0.018}\)** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Average results with 10-fold cross-validation (BERT and ResNet use only single-modality information). The subscripts show the standard deviation of each experiment.
concatenating the two modalities' outputs.
However, this trivial gating method still has its own problems. First, BERT alone already achieves relatively good results, which shows that the text modality contains more information for detecting hyperbole. Accordingly, we observe that the trivial gating method tends to give the text modality a higher priority. This systematic bias leads to insufficient utilization of the image modality.
Another problem is that _gate_, as well as _concat_, is unable to model the fine-grained relationship between text and image. Figure 5(a) gives an example; all methods except _attn-gate_ fail on this case. Since they only use the high-level features of the two modalities, they are unable to notice the connection between certain tokens (_black charcoal_) and a certain image feature (the person's skin color). In contrast, _attn-gate_ is able to extract fine-grained features across the two modalities. Moreover, the average gate values calculated by _attn-gate_ for the fused image feature and the text feature are both around \(0.5\) in this case, which shows that it recognizes the importance of both modalities. In fact, compared with the trivial _concat_ method, _attn-gate_ is statistically significantly better.
There are also cases on which all these methods fail. Figure 5(b) is a typical failure case. The reason is that these models do not know that even under such
Figure 4: The architecture of the _attn-gate_ fusion model.
Figure 5: Two examples of test cases. (Both of them are hyperbolic.)
hot weather, people still need to wear clothes. The expression "there's no need to wear any clothes" is an overstatement of how hot it is. Since the models do not have any common-sense knowledge, they are unable to recognize this kind of hyperbole, which leads to the failure.
In summary, our answer to this question (RQ1) is that images are more useful than misleading. With the gating mechanism, we can better avoid the misleading influence of either modality. The attention mechanism provides fine-grained fused features between the two modalities, which helps identify hyperboles of _case-II_ and _case-III_. However, without common sense, a model can never fully understand hyperbole; to further increase performance, the key is to introduce common-sense knowledge into the models.
### Pre-trained Multimodal Encoder Evaluation (RQ2)
Pre-trained multimodal models have proved effective in many downstream tasks, including zero-shot and few-shot learning, image-text retrieval, VQA, etc. However, they have rarely been discussed for multimodal classification tasks. In this subsection, we apply two pre-trained multimodal models, CLIP (Radford et al., 2021) and BriVL (Huo et al., 2021), to this downstream task to evaluate their performance. CLIP (Radford et al., 2021) is a popular multimodal encoder; its main idea is to use a contrastive loss to force the outputs of two unimodal encoders into one shared feature space, so that cosine similarity can measure the distance between the two modalities. BriVL (Huo et al., 2021) is another multimodal encoder that is pre-trained on Chinese. It uses a more advanced algorithm (inspired by MoCo (He et al., 2020)) that can incorporate more negative samples and thus obtain better encodings of the image and text modalities.
We still report 10-fold cross-validation results. Specifically, we use CLIP that is pre-trained on Chinese texts (Yang et al., 2022). For BriVL, we directly use the open-source code and tools6.
Footnote 6: [https://github.com/BAAI-WuDao/BriVL](https://github.com/BAAI-WuDao/BriVL)
We try two ways of using these pre-trained models. One is giving prompts (_prompt_) to the text: we add the prompts "This is hyperbole." and "This is not hyperbole." at the beginning of each text, so we obtain two texts with different prompts but the same content. We add a fully connected layer after encoding either modality. In more detail, for each case we encode one image and the two prompted texts, and try to maximize the cosine similarity between the image and the text with the correct prompt. This is a common method in image classification tasks.
Another way is to use these models simply as encoders and treat their outputs as feature vectors. We then concatenate them (_concat_) or apply the gating method (_gate_), just as described above; in this setting no prompts are added to the texts. Both ways have seldom been discussed in similar tasks such as multimodal sarcasm detection. Details of both ways are shown in Figure 6.
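The _prompt_ variant can be sketched as follows; this assumes a CLIP-style model exposing `encode_image`/`encode_text` together with its tokenizer and image preprocessing, and it shows only the similarity-based scoring, omitting the additional fully connected layers and fine-tuning. The English prompt strings stand in for the Chinese prompts we actually use.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def prompt_score(model, tokenize, preprocess, image, text,
                 pos="This is hyperbole. ", neg="This is not hyperbole. "):
    """Score a post by comparing the image with the same text under two prompts."""
    img = model.encode_image(preprocess(image).unsqueeze(0))
    txt = model.encode_text(tokenize([pos + text, neg + text]))
    img, txt = F.normalize(img, dim=-1), F.normalize(txt, dim=-1)
    sims = (img @ txt.t()).squeeze(0)   # cosine similarity to each prompted text
    return sims.softmax(dim=-1)         # [p(hyperbole), p(not hyperbole)]
```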
We fine-tune these models to achieve better results. However, the results turn out to be disappointing: as shown in Table 5, they perform much worse than most methods discussed in Section 5.1.
We analyze why the results are much worse. Radford et al. (2021) have already pointed out that CLIP performs badly at classifying abstract concepts, such as determining whether a scene is normal or not. Hyperbole detection is a typical task involving abstract concepts, so it is not surprising that _prompt_ does not work well here. Moreover, since this task requires classifying the image and the text together rather than the image itself, the prompt method is even less suitable. We believe the same reasoning applies to BriVL.
Besides, the text in this kind of task is usually not a simple description of the image; it is common for there to be no strong semantic correlation between the corresponding text and image, which is quite different from the dataset CLIP is trained on. As discussed in Radford et al. (2021) and Huo et al. (2021), CLIP performs badly under this circumstance. Huo et al. (2021) try to avoid assuming this strong correlation and assume only a weak correlation in BriVL. As can be seen, this turns out to be effective to some extent when
\begin{table}
\begin{tabular}{c|c c}
\hline \hline
**model** & **F1-score** & **Accuracy** \\ \hline
CLIP+prompt & \(0.632_{\pm 0.159}\) & \(0.642_{\pm 0.042}\) \\
CLIP+concat & \(0.584_{\pm 0.165}\) & \(0.644_{\pm 0.035}\) \\
CLIP+gate & \(0.580_{\pm 0.174}\) & \(0.642_{\pm 0.025}\) \\
BriVL+prompt & \(0.587_{\pm 0.170}\) & \(0.628_{\pm 0.025}\) \\
BriVL+concat & \(0.665_{\pm 0.165}\) & \(0.667_{\pm 0.038}\) \\
BriVL+gate & \(0.644_{\pm 0.162}\) & \(0.637_{\pm 0.031}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Results with pre-trained multimodal models.
compared with CLIP. However, some texts in this task have very little or even no explicit semantic correlation with the corresponding images. We believe that this mismatch leads to poor encodings of both modalities and causes the poor performance.
### Cross-domain Experiments (RQ3)
In this part, we briefly evaluate the methods from Section 5.1 on cross-domain hyperbole detection. As shown in Section 3.2, the posts come from five different keywords and their contents are generally disjoint. We use the posts from three keywords for training, one for validation, and one for testing. For each method, we repeat this procedure five times so that every keyword is tested once, and report the average performance. We still use the same hyper-parameters as in Appendix B. The results are shown in Table 6. Since the pre-trained multimodal models perform badly, we do not discuss them in this part.
As can be seen, all methods perform worse than in Section 5.1, which matches our expectation because cross-domain hyperbole detection is certainly a harder task. The results are still reasonable, which shows the generalization ability of the methods proposed above. Introducing the image modality still clearly increases the performance, as can be seen for _concat_. However, we are surprised to find that the gating and attention mechanisms do not bring higher performance. We take two examples (Figure 7) from the keyword "traffic jam" to illustrate the reason.
As can be seen in Figure 7(a), the sentence contains the expression _won't be stuck in a traffic jam_, which seems too absolute here. However, in this case it is probably a fact that bikes will not be stuck in traffic jams, so the post is not hyperbolic. When trained on other keywords, the models do not have such knowledge ("bikes don't get stuck in traffic jams"), so they naturally mispredict this case. This explains why all methods perform worse on this task. Similarly, screenshots of navigation software only appear in posts with the keyword "traffic jam", so the models have no idea what they are. Therefore, models requiring deep interaction between the text and image modalities may be confused even more seriously. We believe that this systematic inconsistency is the reason why the deep fusion methods perform much worse on this task.
\begin{table}
\begin{tabular}{c|c c}
\hline \hline
**model** & **F1-score** & **Accuracy** \\ \hline
BERT & \(0.652_{\pm 0.162}\) & \(0.667_{\pm 0.045}\) \\
ResNet & \(0.513_{\pm 0.079}\) & \(0.500_{\pm 0.020}\) \\
Concat & \(0.700_{\pm 0.168}\) & \(0.711_{\pm 0.028}\) \\
Gate & \(0.683_{\pm 0.161}\) & \(0.704_{\pm 0.037}\) \\
Attn-Gate & \(0.700_{\pm 0.167}\) & \(0.714_{\pm 0.043}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Results of cross-domain experiments. The subscripts show the standard deviation of each experiment.
Figure 7: Two examples of test cases (Both of them are not hyperbole).
## 6 Conclusion and Future Work
In this paper, we introduce the new task of multimodal hyperbole detection and build a dataset for our study. We point out that the different modalities can express hyperbole in mainly three ways. We conduct several experiments and show that the image modality is useful for hyperbole detection, and we highlight the importance of the gating mechanism and of deep fusion of the two modalities. Pre-trained multimodal models prove ineffective on this task. The cross-domain results show that the methods still work reasonably well, while the inconsistency between posts with different keywords is the main bottleneck of this task. For future work, we plan to make use of common-sense knowledge in our model to achieve better results.
|
2304.01447
|
Off-Policy Action Anticipation in Multi-Agent Reinforcement Learning
|
Learning anticipation in Multi-Agent Reinforcement Learning (MARL) is a
reasoning paradigm where agents anticipate the learning steps of other agents
to improve cooperation among themselves. As MARL uses gradient-based
optimization, learning anticipation requires using Higher-Order Gradients
(HOG), with so-called HOG methods. Existing HOG methods are based on policy
parameter anticipation, i.e., agents anticipate the changes in policy
parameters of other agents. Currently, however, these existing HOG methods have
only been applied to differentiable games or games with small state spaces. In
this work, we demonstrate that in the case of non-differentiable games with
large state spaces, existing HOG methods do not perform well and are
inefficient due to their inherent limitations related to policy parameter
anticipation and multiple sampling stages. To overcome these problems, we
propose Off-Policy Action Anticipation (OffPA2), a novel framework that
approaches learning anticipation through action anticipation, i.e., agents
anticipate the changes in actions of other agents, via off-policy sampling. We
theoretically analyze our proposed OffPA2 and employ it to develop multiple HOG
methods that are applicable to non-differentiable games with large state
spaces. We conduct a large set of experiments and illustrate that our proposed
HOG methods outperform the existing ones regarding efficiency and performance.
|
Ariyan Bighashdel, Daan de Geus, Pavol Jancura, Gijs Dubbelman
|
2023-04-04T01:44:19Z
|
http://arxiv.org/abs/2304.01447v1
|
# Off-Policy Action Anticipation in Multi-Agent Reinforcement Learning
###### Abstract
Learning anticipation in Multi-Agent Reinforcement Learning (MARL) is a reasoning paradigm where agents anticipate the learning steps of other agents to improve cooperation among themselves. As MARL uses gradient-based optimization, learning anticipation requires using Higher-Order Gradients (HOG), with so-called HOG methods. Existing HOG methods are based on _policy parameter anticipation_, i.e., agents anticipate the changes in policy parameters of other agents. Currently, however, these existing HOG methods have only been applied to differentiable games or games with small state spaces. In this work, we demonstrate that in the case of non-differentiable games with large state spaces, existing HOG methods do not perform well and are inefficient due to their inherent limitations related to policy parameter anticipation and multiple sampling stages. To overcome these problems, we propose Off-Policy Action Anticipation (OffPA2), a novel framework that approaches learning anticipation through action anticipation, i.e., agents anticipate the changes in actions of other agents, via off-policy sampling. We theoretically analyze our proposed OffPA2 and employ it to develop multiple HOG methods that are applicable to non-differentiable games with large state spaces. We conduct a large set of experiments and illustrate that our proposed HOG methods outperform the existing ones regarding efficiency and performance.
**Keywords:** Multi-agent reinforcement learning, Reasoning, Learning anticipation, Higher-order gradients, action anticipation
## 1 Introduction
In multi-agent systems, the paradigm of _agents' reasoning about other agents_ has been explored and researched extensively (Goodie et al., 2012; Liu and Lakemeyer, 2021). Recently, this paradigm is also being studied in the subfield of Multi-Agent Reinforcement Learning (MARL) (Wen et al., 2019, 2020; Konan et al., 2022). Generally speaking, MARL deals with several agents simultaneously learning and interacting in an environment. In the context of MARL, one reasoning strategy is anticipating the learning steps of other agents (Zhang and Lesser, 2010), i.e., learning anticipation. As MARL uses gradient-based optimization, learning anticipation naturally leads to the usage of Higher-Order Gradients (HOG), with so-called HOG methods (Letcher et al., 2019). The significance of learning anticipation
in HOG methods has been frequently shown in the literature. For instance, Look-Ahead (LA) (Zhang and Lesser, 2010; Letcher et al., 2019) uses learning anticipation to guarantee convergence in cyclic games such as matching pennies, Learning with Opponent-Learning Awareness (LOLA) (Foerster et al., 2018) employs learning anticipation to ensure cooperation in general-sum games such as Iterated Prisoner's Dilemma (IPD), and Hierarchical Learning anticipation (HLA) (Bighashdel et al., 2023) utilizes learning anticipation to improve coordination among common-interested agents in fully-cooperative games. In this study, we explore the limitations of current HOG methods and propose novel solutions so that learning anticipation can be applied to a broader range of MARL problems. In Figure 1, we provide an overview of the applicability of both existing and our proposed HOG methods.
Learning anticipation in the current HOG methods is developed based on the _policy parameter anticipation_ approach, i.e., agents anticipate the changes in policy parameters
Figure 1: Overview of HOG methods and their applicability in various game settings. Existing HOG methods (gray rectangles) are based on the policy parameter approach. These methods have been only applied to differentiable games or non-differentiable games with small state spaces. Our proposed HOG methods (blue rectangles) are developed in our novel framework of Off-Policy Action Anticipation (OffPA2) and can be applied to non-differentiable games with large state spaces.
of other agents (Zhang and Lesser, 2010; Foerster et al., 2018, 2018) (see Figure 1). In this approach, first of all, agents should either have access to other agents' exact parameters or infer other agents' parameters from state-action trajectories (Foerster et al., 2018). In many game settings, these parameters are obscured. This is problematic because when the size of the state space increases, the dimensionality of the parameter spaces increases as well, making the parameter inference problem computationally expensive. Furthermore, anticipating the changes in high-dimensional policy parameters is inefficient, whether the parameters are inferred or exact. Finally, policy parameter anticipation requires higher-order gradients with respect to policy parameters which is shown to be challenging in MARL (Foerster et al., 2018, 2022). Current HOG methods mainly assume that the games are differentiable, i.e., agents have access to gradients and Hessians (Willi et al., 2022; Letcher et al., 2019) (Figure 1a). When the games are non-differentiable, existing HOG methods employ the Stochastic Policy Gradient (SPG) theorem (Sutton and Barto, 2018) with on-policy sampling to compute the gradients with respect to the policy parameters (Foerster et al., 2018, 2018). However, estimating higher-order gradients in SPG requires either analytical approximations - since the learning step for one agent in the standard SPG theorem is independent of other agents' parameters - or multi-stage sampling, which is inefficient and comes typically with high variance, making learning unstable (Foerster et al., 2018, 2018). In this work, we aim to propose novel HOG methods that overcome the aforementioned limitations of existing HOG methods, making learning anticipation applicable to non-differentiable games with large state spaces.
To accomplish our goal, we propose Off-Policy Action Anticipation (OffPA2), a novel framework that approaches learning anticipation through action anticipation (see Figure 1). Specifically, the agents in OffPA2 anticipate the changes in actions of other agents during learning. Unlike policy parameter anticipation, action anticipation is performed in the action space, whose dimensionality is generally lower than that of the policy parameter space in MARL games with large state spaces (Lowe et al., 2017; Peng et al., 2021). Furthermore, we employ the Deterministic Policy Gradient (DPG) theorem with off-policy sampling to estimate differentiable objective functions. Consequently, higher-order gradients can be efficiently computed, while still following the standard Centralized Training and Decentralized Execution (CTDE) setting where agents can observe the other agents' actions during training (Lowe et al., 2017). We theoretically analyze our OffPA2 in terms of performance and time complexity. The proposed OffPA2 framework allows us to develop HOG methods that, unlike existing HOG methods, are applicable to non-differentiable games with large state spaces. To show this, we apply the principles of LA, LOLA, and HLA to our OffPA2 framework and develop the LA-OffPA2, LOLA-OffPA2, and HLA-OffPA2 methods, respectively. We compare our methods with existing HOG methods in well-controlled studies. By doing so, we demonstrate that the overall performance and efficiency of our proposed methods do not decrease with increasing state-space size, unlike those of existing HOG methods, which get drastically worse. Finally, we compare our methods with standard, DPG-based MARL algorithms and highlight the importance of learning anticipation in MARL. Below, we summarize our contributions.
* We propose OffPA2, a novel framework that approaches learning anticipation through action anticipation, which makes HOG methods applicable to non-differentiable games
with large state spaces. We provide theoretical analyses of the influence of our proposed action anticipation approach on performance and time complexity.
* Within our OffPA2 framework, we develop three novel methods, i.e., LA-OffPA2, LOLA-OffPA2, and HLA-OffPA2. We show that our methods outperform the existing HOG methods and state-of-the-art DPG-based approaches.
## 2 Related work
In many real-world MARL tasks, communication constraints during execution require the use of decentralized policies. In these cases, one reasoning tool is Agents Modeling Agents (AMA) (Albrecht and Stone, 2018), where agents explicitly model other agents to predict their behaviors. Although AMA traditionally assumes naive opponents with no reasoning abilities (He et al., 2016; Hong et al., 2018), recent studies have extended AMA to further consider multiple levels of reasoning where each agent considers the reasoning process of other agents to make better decisions (Wen et al., 2019, 2020). For instance, Wen et al. (2019) proposed the probabilistic recursive reasoning (PR2) update rule for MARL agents to recursively reason about other agents' beliefs. However, in these approaches, agents do not take into account the learning steps of other agents, which has shown to be important in games where interaction among self-interested agents otherwise leads to worst-case outcomes (Foerster et al., 2018). In Section 5, we conduct several experiments and compare our proposed methods with these approaches.
HOG methods, on the other hand, are a range of methods that use higher-order gradients to consider the anticipated learning steps of other agents. These include: 1) LOLA and Higher-order LOLA (HOLA), proposed by Foerster et al. (2018) to improve cooperation in Iterated Prisoner's Dilemma (IPD), 2) Look-Ahead (LA), proposed by Zhang and Lesser (2010) to guarantee convergence in cyclic games, 3) Stable Opponent Shaping (SOS), developed by Letcher et al. (2019) as an interpolation between LOLA and LA to inherit the benefits of both, 4) Consistent LOLA (COLA), proposed by Willi et al. (2022) to improve consistency in opponent shaping, 5) Hierarchical Learning Anticipation (HLA), proposed by Bighashdel et al. (2023) to improve coordination among fully-cooperative agents, 6) Consensus Optimization (CO), proposed by Bertsekas (2014) to improve training stability and convergence in zero-sum games, and 7) Symplectic Gradient Adjustment (SGA), proposed by Balduzzi et al. (2018) to improve parameter flexibility of CO.
Despite their novel ideas, most existing HOG methods have been only applied to differentiable games, where the agents have access to the exact gradients or Hessians, and only LOLA has been evaluated on non-differentiable games. Specifically, Foerster et al. (2018) employed the SPG framework to estimate the gradients in LOLA. As the standard SPG is independent of other agents' parameters, the authors relied on Taylor expansions of the expected return combined with analytical derivations of the second-order gradients. Foerster et al. (2018) indicated that this approach is not stable in learning. To solve the problem, Foerster et al. (2018) proposed an infinitely differentiable Monte Carlo estimator, referred to as DiCE, to correctly optimize the stochastic objectives with any order of gradients. Similarly to meta-learning, the agents in the DiCE framework reason about and predict the learning steps of the opponents using inner learning loops and update their parameters in outer learning loops. However, each learning loop for each agent requires a sampling stage
which is very inefficient for high-order reasoning and games with large state spaces, i.e., beyond matrix games. In Section 5, we conduct a set of experiments to closely compare our proposed OffPA2 framework with DiCE.
## 3 Problem formulation and background
We formulate the MARL setup as a Markov Game (MG) (Littman, 1994). An MG is a tuple \((\mathcal{N},\mathcal{S},\{\mathcal{A}_{i}\}_{i\in\mathcal{N}},\{\mathcal{R}_{i}\}_{i\in\mathcal{N}},\mathcal{T},\rho,\gamma)\), where \(\mathcal{N}\) is the set of agents (\(|\mathcal{N}|=n\)), \(\mathcal{S}\) is the set of states, and \(\mathcal{A}_{i}\) is the set of possible actions for agent \(i\in\mathcal{N}\). Agent \(i\) chooses its action \(a_{i}\in\mathcal{A}_{i}\) through the stochastic policy network \(\pi_{\theta_{i}}:\mathcal{S}\times\mathcal{A}_{i}\rightarrow[0,1]\), parameterized by \(\theta_{i}\), conditioning on the given state \(s\in\mathcal{S}\). Given the actions of all agents, each agent \(i\) obtains a reward \(r_{i}\) according to its reward function \(\mathcal{R}_{i}:\mathcal{S}\times\mathcal{A}_{1}\times...\times\mathcal{A}_{n}\rightarrow\mathbb{R}\). Given an initial state, the next state is produced according to the state transition function \(\mathcal{T}:\mathcal{S}\times\mathcal{A}_{1}\times...\times\mathcal{A}_{n}\times\mathcal{S}\rightarrow[0,1]\). We denote an episode of horizon \(T\) as \(\tau=(\{s^{0},a_{1}^{0},...,a_{n}^{0},r_{1}^{0},...,r_{n}^{0}\},...,\{s^{T},a_{1}^{T},...,a_{n}^{T},r_{1}^{T},...,r_{n}^{T}\})\), and the discounted return for each agent \(i\) at time step \(t\leq T\) is defined by \(G_{i}^{t}(\tau)=\sum_{l=t}^{T}\gamma^{l-t}r_{i}^{l}\), where \(\gamma\) is a predefined discount factor. The expected return given the agents' policy parameters approximates the state value function for each agent, \(V_{i}(s,\theta_{1},...,\theta_{n})=\mathbb{E}[G_{i}^{t}(\tau|s^{t}=s)]\). The goal for each agent \(i\) is to find the policy parameters, \(\theta_{i}\), that maximize the expected return given the distribution of the initial state \(\rho(s)\), denoted by the performance objective \(J_{i}=\mathbb{E}_{\rho(s)}V_{i}(s,\theta_{1},...,\theta_{n})\).
**Naive gradient ascent**. In the naive update rule, agents do not perform learning anticipation when updating their policy parameters. More specifically, each naive agent \(i\) maximizes its performance objective by updating its policy parameters in the direction of the objective's gradient
\[\nabla_{\theta_{i}}J_{i}=\mathbb{E}_{\rho(s)}\nabla_{\theta_{i}}V_{i}(s, \theta_{1},...,\theta_{n}). \tag{1}\]
**Learning With Opponent-Learning Awareness (LOLA)**. Unlike naive agents, LOLA agents modify their learning objectives by differentiating through the anticipated learning steps of the opponents (Foerster et al., 2018). Given \(n=2\) for simplicity, a first-order LOLA agent (agent One) assumes a naive opponent and uses policy parameter anticipation to optimize \(V_{1}^{\text{LOLA}}(s,\theta_{1},\theta_{2}+\Delta\theta_{2})\) where \(\Delta\theta_{2}=\mathbb{E}_{\rho(s)}\eta\nabla_{\theta_{2}}V_{2}(s,\theta_{1 },\theta_{2})\) and \(\eta\in\mathbb{R}^{+}\) is the prediction length. Using first-order Taylor expansion and by differentiating with respect to \(\theta_{1}\), the gradient adjustment for the first LOLA agent (Foerster et al., 2018) is given by
\[\nabla_{\theta_{1}}V_{1}^{\text{LOLA}}(s,\theta_{1},\theta_{2}+\Delta\theta_{ 2})\approx\nabla_{\theta_{1}}V_{1}+(\nabla_{\theta_{2}\theta_{1}}V_{1})^{ \intercal}\Delta\theta_{2}+\underbrace{(\nabla_{\theta_{1}}\Delta\theta_{2})^{ \intercal}\nabla_{\theta_{2}}V_{1}}_{\text{shaping}}, \tag{2}\]
where \(V_{1}=V_{1}(s,\theta_{1},\theta_{2})\). The rightmost term in the LOLA update allows for active shaping of the opponent's learning. This term has been proven effective in enforcing cooperation in various games, including IPD (Foerster et al., 2018, 2018). The LOLA update can be further extended to non-naive opponents, resulting in HOLA agents (Foerster et al., 2018; Willi et al., 2022).
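To make policy parameter anticipation concrete, the snippet below computes the exact (non-Taylor-expanded) first-order LOLA gradient for agent One in a differentiable game, i.e., the gradient of \(V_{1}(s,\theta_{1},\theta_{2}+\Delta\theta_{2})\), using automatic differentiation. The quadratic payoffs are purely illustrative and are not taken from any of the games considered in this paper.

```python
import torch

# Illustrative differentiable two-player game: each value function V_i is a closed-form
# function of both parameter vectors (a simple quadratic coupling, not from this paper).
def V1(th1, th2): return -(th1 ** 2).sum() + (th1 * th2).sum()
def V2(th1, th2): return -(th2 ** 2).sum() - (th1 * th2).sum()

def lola_grad_agent1(th1, th2, eta=0.1):
    # Anticipated naive step of the opponent, kept differentiable w.r.t. th1
    # (create_graph=True), so that the shaping term of Eq. (2) is included automatically.
    d_th2 = eta * torch.autograd.grad(V2(th1, th2), th2, create_graph=True)[0]
    obj = V1(th1, th2 + d_th2)                # V1 evaluated at the anticipated opponent
    return torch.autograd.grad(obj, th1)[0]   # exact first-order LOLA gradient

th1 = torch.randn(3, requires_grad=True)
th2 = torch.randn(3, requires_grad=True)
print(lola_grad_agent1(th1, th2))
```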
**Look Ahead (LA)**. LA agents assume that the opponents' learning steps cannot be influenced, i.e., cannot be shaped (Zhang and Lesser, 2010; Letcher et al., 2019). In other
words, agent One assumes that the prediction step, \(\Delta\theta_{2}\), is independent of the current optimization, i.e., \(\nabla_{\theta_{1}}\Delta\theta_{2}=0\). Therefore, the shaping term disappears, and the gradient adjustment for the first LA agent will be
\[\nabla_{\theta_{1}}V_{1}^{\text{LA}}(s,\theta_{1},\theta_{2}+\perp\Delta\theta_ {2})\approx\nabla_{\theta_{1}}V_{1}+(\nabla_{\theta_{2}\theta_{1}}V_{1})^{ \intercal}\Delta\theta_{2}, \tag{3}\]
where \(\perp\) prevents gradient flowing from \(\Delta\theta_{2}\) upon differentiation.
**Hierarchical Learning Anticipation (HLA).** Unlike LOLA and LA, HLA is proposed to improve coordination in fully cooperative games with common-interested agents (Bighashdel et al., 2023), i.e., \(\mathcal{R}_{i}=\mathcal{R}_{j}=\mathcal{R}\ \forall i,j\in\mathcal{N}\) and, consequently, \(V_{i}=V_{j}=V\ \forall i,j\in\mathcal{N}\). HLA randomly assigns the agents into hierarchy levels to specify their reasoning orders. In each hierarchy level, the assigned agent is a _leader_ of the lower hierarchy levels and a _follower_ of the higher ones, with two reasoning rules: 1) a leader knows the reasoning levels of the followers and is one level higher, and 2) a follower cannot shape the leaders and only follows their shaping plans. Concretely, if \(n=2\), and we assume that agent Two is the leader (HLA-L) and agent One is the follower (HLA-F), the gradient adjustment for the leader is:
\[\nabla_{\theta_{2}}V^{\text{HLA-L}}(s,\theta_{1}+\Delta\theta_{1},\theta_{2} )\approx\nabla_{\theta_{2}}V+(\nabla_{\theta_{1}\theta_{2}}V)^{\intercal} \Delta\theta_{1}+(\nabla_{\theta_{2}}\Delta\theta_{1})^{\intercal}\nabla_{ \theta_{1}}V, \tag{4}\]
where \(V=V(s,\theta_{1},\theta_{2})\) is the common value function, and \(\Delta\theta_{1}=\eta\nabla_{\theta_{1}}V\). The plan of the leader is to change its parameters \(\bar{\theta}_{2}=\theta_{2}+\eta\nabla_{\theta_{2}}V^{\text{HLA-L}}(s, \theta_{1}+\Delta\theta_{1},\theta_{2})\) in such a way that an optimal increase in the common value is achieved after its new parameters are taken into account by the follower. Therefore, the follower must follow the plan and adjust its parameters through
\[\nabla_{\theta_{1}}V^{\text{HLA-F}}(s,\theta_{1},\bar{\theta}_{2})\approx \nabla_{\theta_{1}}V+(\nabla_{\theta_{2}\theta_{1}}V)^{\intercal}\eta\nabla_ {\theta_{2}}V^{\text{HLA-L}}(s,\theta_{1}+\Delta\theta_{1},\theta_{2}). \tag{5}\]
## 4 Approach
In this section, we propose OffPA2, a framework designed to enable the application of HOG methods to non-differentiable games with large state spaces. To solve the problems regarding policy parameter anticipation, we propose the novel approach of action anticipation, where agents anticipate the changes in actions of other agents during learning. Furthermore, we employ the DPG theorem with off-policy sampling to estimate differentiable objective functions. Consequently, high-order gradients can be efficiently computed. Our proposed OffPA2 complies with the standard Centralized Training and Decentralized Execution (CTDE) setting in DPG, where the agents during training have access to the actions of other agents (Lowe et al., 2017).
### OffPA2: Off-policy action anticipation
We define a deterministic policy \(\mu_{\theta_{i}}:\mathcal{S}\rightarrow\mathcal{A}_{i}\), parameterized by \(\theta_{i}\) for each agent \(i\in\mathcal{N}\). Let \(Q_{i}(s,a_{1},...,a_{n})=\mathbb{E}[G_{i}^{t}(\tau|s^{t}=s,a_{i}^{t}=a_{i}\forall i \in\mathcal{N})]\) denote the state-action value function, then we have \(V_{i}(s,\theta_{1},...,\theta_{n})=Q_{i}(s,\mu_{\theta_{1}}(s),...,\mu_{\theta_ {n}}(s))\). Furthermore, we define a stochastic behavior policy for each agent \(i\) as \(\pi_{\theta_{i}}^{b}=\mu_{\theta_{i}}+SG\), where \(SG\) is a standard Gaussian
distribution. Given the behavior policies, the policy parameters can be learned off-policy, from trajectories generated by the behavior policies, i.e., \(\rho^{b}(s,\{a_{i}\}_{i\in\mathcal{N}},\{r_{i}\}_{i\in\mathcal{N}},s^{\prime})\) where \(s\) and \(s^{\prime}\) are consecutive states. Using the deterministic policy gradient theorem (Silver et al., 2014; Lowe et al., 2017), we can obtain the gradient of the performance objective for each naive agent \(i\) as \(\nabla_{\theta_{i}}J_{i}=\mathbb{E}_{\rho^{\beta}(s)}\nabla_{\theta_{i}}Q_{i}( s,\mu_{\theta_{1}}(s),...,\mu_{\theta_{n}}(s))\).
Without the loss of generality, we consider two agents, i.e., \(n=2\), and we assume that agent One wants to anticipate the learning step of agent Two, who is a naive learner. At each state \(s\sim\rho^{b}(s)\), agent One anticipates the changes in the policy parameters of agent Two as \(\Delta\theta_{2}(s)=\eta\nabla_{\theta_{2}}Q_{2}(s,\mu_{\theta_{1}}(s),\mu_{ \theta_{2}}(s))\), i.e., policy parameter anticipation. Therefore, agent One updates its policy parameters in the direction of:
\[\nabla_{\theta_{1}}J_{1}=\mathbb{E}_{\rho^{\beta}(s)}\nabla_{ \theta_{1}}Q_{1}(s,\mu_{\theta_{1}},\mu_{\theta_{2}+\Delta\theta_{2}(s)}(s)), \tag{6}\]
**Theorem 1** (action anticipation): _Using first-order Taylor expansion, the gradient of the performance objective for agent One, Eq. (6), can be approximated as:_
\[\nabla_{\theta_{1}}J_{1}\approx\mathbb{E}_{\rho^{\beta}(s)} \nabla_{\theta_{1}}\mu_{\theta_{1}}(s)\nabla_{a_{1}}Q_{1}(s,a_{1},a_{2}+\Delta a _{2})|_{a_{1}=\mu_{\theta_{1}}(s),a_{2}=\mu_{\theta_{2}}(s)}, \tag{7}\]
_where_
\[\Delta a_{2}=\hat{\eta}_{\mathit{1st}}\nabla_{a_{2}}Q_{2}(s,a_{1 },a_{2}), \tag{8}\]
_is the anticipated change of action, where \(\hat{\eta}_{\mathit{1st}}=\eta\left\|\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\right\|^{2}\in\mathbb{R}^{+}\) is the projected prediction length._
**Proof**. See Appendix A.1.
Theorem 1 indicates that agent One can anticipate the learning step of agent Two in the action space rather than in the policy parameter space. This way of reasoning has two benefits. First, in MARL games with large state spaces, the dimensionality of the action space is significantly lower than that of the policy parameter space (Lowe et al., 2017; Peng et al., 2021); the justification is that large state spaces require more complex policy networks with more parameters to properly represent all possible states. Second, action anticipation, unlike policy parameter anticipation, complies with the standard centralized training and decentralized execution (CTDE) setting in DPG. In the standard CTDE setting, the agents during training have access to the centralized state-action value functions to train the decentralized policies. Consequently, the agents are informed of other agents' actions and can perform action anticipation during training. In contrast, policy parameter anticipation additionally requires the agents to access the policy parameters of other agents.
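As an illustration, the anticipated action change of Eq. (8) can be obtained directly from a differentiable critic with automatic differentiation; the sketch below assumes a PyTorch critic for agent Two taking the state and both actions, and the projected prediction length is a placeholder value.

```python
import torch

def anticipated_opponent_action(critic_2, state, a1, a2, eta_1st=0.5):
    """Return a2 + Delta a2 as in Eq. (8). Here a2 is assumed to be the differentiable
    output of agent Two's policy network, and eta_1st is the projected prediction length
    (a placeholder value). create_graph=True keeps the result differentiable w.r.t. a1,
    so it can be reused inside agent One's higher-order update."""
    grad_a2 = torch.autograd.grad(critic_2(state, a1, a2).sum(), a2,
                                  create_graph=True)[0]
    return a2 + eta_1st * grad_a2
```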
#### 4.1.1 Influence of action anticipation on performance
Our proposed action anticipation approach employs the first-order Taylor approximation to map the anticipated learning from the policy parameter space to the action space. In other words:
\[\mu_{\theta_{2}+\Delta\theta_{2}(s)}(s)\approx a_{2}+\Delta a_{2}, \tag{9}\]
where \(a_{2}=\mu_{\theta_{2}}(s)\) and \(\Delta a_{2}=\hat{\eta}_{1\text{st}}\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\). In the theorem below, we show how this approximation influences the principles of HOG methods, which have been theoretically and experimentally researched throughout the literature (Zhang and Lesser, 2010; Foerster et al., 2018; Letcher et al., 2019; Willi et al., 2022).
**Theorem 2**: _For a sufficiently small \(\hat{\eta}_{1\text{st}}\), there exists an \(\eta^{\prime}\in\mathbb{R}^{+}\) such that_
\[\mu_{\theta_{2}+\Delta\theta^{\prime}{}_{2}(s)}(s)=a_{2}+\Delta a_{2}, \tag{10}\]
_where_
\[\begin{split}&\Delta\theta^{\prime}_{2}(s)=\eta^{\prime}\nabla_{ \theta_{2}}\mu_{\theta_{2}}(s)\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\\ & a_{2}=\mu_{\theta_{2}}(s)\\ &\Delta a_{2}=\hat{\eta}_{1\text{st}}\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\qquad\hat{\eta}_{1\text{st}}=\eta\left\|\nabla_{\theta_{2}}\mu_{\theta _{2}(s)}\right\|^{2}\qquad\eta\in\mathbb{R}^{+}\end{split} \tag{11}\]
**Proof**. See Appendix A.2.
Based on Theorem 2, action anticipation via first-order Taylor expansion and sufficiently small \(\hat{\eta}_{1\text{st}}\) only scales the prediction length as both \(\eta\) and \(\eta^{\prime}\) are non-negative numbers. The general theoretical analyses on differentiable games reveal that scaling the prediction length influences the HOG methods' convergence behaviors (Letcher et al., 2019; Zhang and Lesser, 2010). However, in our OffPA2 framework, we directly set the _projected_ prediction length, i.e., \(\hat{\eta}_{1\text{st}}\), rather than the prediction length, i.e., \(\eta\). Consequently, the resulting prediction length in the policy parameter space, i.e., \(\eta^{\prime}\), can correspond to satisfactory convergence behaviors (see Section 5.2.3).
#### 4.1.2 Computing higher-order gradients
The state-action value function in Eq (7) is generally unknown and non-differentiable. Similarly to the DPG-based algorithms (Silver et al., 2014; Lowe et al., 2017), we substitute a differentiable state-action value function \(Q_{i}(s,a_{1},...,a_{n};\omega_{i})\), parameterized by \(\omega_{i}\), in place of the true state-action value function, i.e., \(Q_{i}(s,a_{1},...,a_{n};\omega_{i})\approx Q_{i}(s,a_{1},...,a_{n})\). The parameters of the state-action value function can be obtained by minimizing the Temporal Difference (TD) error, off-policy, from episodes generated by the behavior policies (Lowe et al., 2017):
\[\mathcal{L}(\omega_{i})=\mathbb{E}_{\rho^{b}(s,\{a_{i}\}_{i\in\mathcal{N}},\{ r_{i}\}_{i\in\mathcal{N}},s^{\prime})}[(Q_{i}(s,a_{1},...,a_{n};\omega_{i})-y_{i}) ^{2}], \tag{12}\]
where \(y_{i}\) is the TD target value:
\[y_{i}=r_{i}+\gamma Q^{\prime}_{i}(s^{\prime},a^{\prime}_{1},...,a^{\prime}_{n})|_{a^{\prime}_{i}=\mu^{\prime}_{i}(s^{\prime})\;\forall i\in\mathcal{N}}, \tag{13}\]
where \(Q^{\prime}_{i}\) and \(\mu^{\prime}_{i}\) are the target state-action value and policy functions, respectively. The differentiability of objective functions in OffPA2 is particularly beneficial for HOG methods as they need to frequently compute the higher-order gradients to anticipate the agents' learning.
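For reference, one TD step on Eqs. (12)-(13) can be sketched as below, in the standard MADDPG-style off-policy fashion; all network, optimizer, and batch names are placeholders rather than our exact implementation.

```python
import torch
import torch.nn.functional as F

def critic_update(critic_i, target_critic_i, target_policies, optimizer, batch,
                  agent_i, gamma=0.95):
    """One TD step on Eq. (12) for agent i, from an off-policy replay batch containing
    (s, a_1..a_n, r_i, s'). All network and batch names here are placeholders."""
    s, actions, r_i, s_next = batch["s"], batch["actions"], batch["r"][agent_i], batch["s_next"]
    with torch.no_grad():
        next_actions = [mu_t(s_next) for mu_t in target_policies]              # a'_j = mu'_j(s')
        y = r_i + gamma * target_critic_i(s_next, *next_actions).squeeze(-1)   # TD target, Eq. (13)
    q = critic_i(s, *actions).squeeze(-1)
    loss = F.mse_loss(q, y)                                                    # TD error, Eq. (12)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```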
#### 4.1.3 Influence of action anticipation on time complexity
Apart from the differentiability of objective functions, the action anticipation approach further reduces the gradient computation complexity as it requires the anticipated changes of actions, i.e., \(\Delta a_{i}\), rather than the anticipated changes of policy parameters, i.e., \(\Delta\theta_{i}\). Assume that the policy and state-action value networks are multi-layer perceptrons (as done in most experiments), then:
**Theorem 3**: _Action anticipation, compared to policy parameter anticipation, reduces the time complexity of anticipating the learning step of a naive opponent by \(O(LN^{2})\), where \(L\) is the number of fully connected layers, and \(N\) is the number of neurons per layer in the policy and state-action value networks._
**Proof**. See Appendix A.3.
### OffPA2-based HOG methods
Having the OffPA2 framework, we can now develop HOG methods that are applicable to non-differentiable games with large state spaces. In the following sections, we develop LOLA-OffPA2, LA-OffPA2, and HLA-OffPA2 by applying the LOLA, LA, and HLA principles to our OffPA2 framework, respectively.
#### 4.2.1 Lola-OffPA2
As described in Section 3, LOLA agents predict and shape the learning steps of other agents to improve cooperation in non-team games. Given two agents (\(n=2\)) for simplicity, the LOLA-OffPA2 agent (agent One) predicts and shapes the action of the opponent (agent Two), which agent One assumes to be a naive learner. Using first-order Taylor expansion, the gradient adjustment for the first LOLA-OffPA2 agent is given by
\[\begin{split}\nabla_{\theta_{1}}J_{1}^{\text{LOLA-OffPA2}}& =\mathbb{E}_{\rho^{\beta}(s)}\nabla_{\theta_{1}}\mu_{\theta_{1}}(s )\nabla_{a_{1}}Q_{1}(s,a_{1},a_{2}+\Delta a_{2})|_{a_{1}=\mu_{\theta_{1}}(s),a _{2}=\mu_{\theta_{2}}(s)}\\ &\approx\mathbb{E}_{\rho^{\beta}(s)}\nabla_{\theta_{1}}\mu_{ \theta_{1}}(s)\left(\nabla_{a_{1}}Q_{1}+(\nabla_{a_{2}a_{1}}Q_{1})^{\intercal} \Delta a_{2}+\underbrace{(\nabla_{a_{1}}\Delta a_{2})^{\intercal}\nabla_{a_{ 2}}Q_{1}}_{\text{action shaping}},\right)\end{split} \tag{14}\]
where
\[\begin{split}&\Delta a_{2}=\hat{\eta}_{\text{1st}}\nabla_{a_{2}}Q_{ 2}(s,a_{1},a_{2})|_{a_{1}=\mu_{\theta_{1}}(s),a_{2}=\mu_{\theta_{2}}(s)}\\ & Q_{1}=Q_{1}(s,a_{1},a_{2})|_{a_{1}=\mu_{\theta_{1}}(s),a_{2}= \mu_{\theta_{2}}(s)}.\end{split} \tag{15}\]
The rightmost term in the LOLA-OffPA2 update, i.e., Eq. (14), allows for active _action shaping_ of the opponent.
In practice, we don't need to rely on Taylor expansion for the update rules in LOLA-OffPA2 as we can use an automatic differentiation engine, e.g., PyTorch autograd (Paszke et al., 2019), to directly compute the gradients. Algorithm 1 in Appendix illustrates the
LOLA-OffPA2 optimization framework for the case of \(n\) agents. At each state \(s\sim\rho^{b}(s)\) the agent \(i\in\mathcal{N}\) first anticipates the changes in actions of all agents \(j\in\{\mathcal{N}-\{i\}\}\):
\[\Delta a_{j}=\hat{\eta}_{\text{1st}}\nabla_{a_{j}}Q_{j}(s,a_{1},...,a_{n})|_{a_{i}=\mu_{\theta_{i}}(s)\;\forall i\in\mathcal{N}}. \tag{16}\]
Then, agent \(i\) updates its parameters \(\theta_{i}\) by the following gradient adjustment:
\[\nabla_{\theta_{i}}J_{i}^{\text{LOLA-OffPA2}}=\mathbb{E}_{\rho^{ \beta}(s)}\nabla_{\theta_{i}}\mu_{\theta_{i}}(s)\nabla_{a_{i}}Q_{i}(s,a_{1}+ \Delta a_{1},...,a_{i},...,a_{n}+\Delta a_{n})|_{a_{i}=\mu_{\theta_{i}}(s)\; \forall i\in\mathcal{N}}. \tag{17}\]
Equation (17) denotes the update rule for _first-order_ LOLA-OffPA2 agents that assume naive opponents. However, we can also consider a _second-order_ LOLA-OffPA2 agent that differentiates through the learning steps of first-order LOLA-OffPA2 opponents. Likewise, we can extend the update rules to include higher-order reasoning, as in HOLA (Foerster et al., 2016).
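Building on the anticipated-action helper sketched in Section 4.1, the first-order LOLA-OffPA2 actor update of Eqs. (16)-(17) can be written directly with an automatic differentiation engine, without any Taylor expansion; the names below are placeholders and the snippet omits details such as target networks and gradient clipping.

```python
import torch

def lola_offpa2_actor_update(i, policies, critics, optimizer_i, states, eta_1st=0.5):
    """First-order LOLA-OffPA2 update for agent i, Eqs. (16)-(17). Only agent i's
    optimizer steps here; the other networks' gradient buffers are cleared before
    their own updates."""
    actions = [mu(states) for mu in policies]
    shifted = []
    for j, a_j in enumerate(actions):
        if j == i:
            shifted.append(a_j)
            continue
        q_j = critics[j](states, *actions).sum()
        grad_aj = torch.autograd.grad(q_j, a_j, create_graph=True)[0]   # Eq. (16)
        shifted.append(a_j + eta_1st * grad_aj)                          # anticipated action
    # Gradients flow from the anticipated actions back to theta_i (the action-shaping term).
    loss = -critics[i](states, *shifted).mean()                          # ascend Eq. (17)
    optimizer_i.zero_grad()
    loss.backward()
    optimizer_i.step()
```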
#### 4.2.2 La-OffPA2
Similarly to the LA principles (Zhang and Lesser, 2010; Letcher et al., 2019), LA-OffPA2 agents cannot shape the opponents' learning steps, i.e., they cannot shape the opponent's actions. Consequently, in the two-agent case, we have \(\nabla_{a_{1}}\Delta a_{2}=0\). Using first-order Taylor expansion, the gradient adjustment for the first LA-OffPA2 agent (Foerster et al., 2018) is given by
\[\begin{split}\nabla_{\theta_{1}}J_{1}^{\text{LA-OffPA2}}& =\mathbb{E}_{\rho^{\beta}(s)}\nabla_{\theta_{1}}\mu_{\theta_{1}}(s )\nabla_{a_{1}}Q_{1}(s,a_{1},a_{2}+\perp\Delta a_{2})|_{a_{1}=\mu_{\theta_{1}}( s),a_{2}=\mu_{\theta_{2}}(s)}\\ &\approx\mathbb{E}_{\rho^{\beta}(s,\hat{a}_{2})}\nabla_{\theta_{1 }}\mu_{\theta_{1}}(s)\left(\nabla_{a_{1}}Q_{1}+(\nabla_{a_{2}a_{1}}Q_{1})^{ \intercal}\Delta a_{2}\right),\end{split} \tag{18}\]
where \(\perp\) prevents gradient flowing from \(\Delta a_{2}\) upon differentiation, and \(\Delta a_{2}\) and \(Q_{1}\) are defined in Eq. (15).
As in the case of LOLA-OffPA2, we can use an automatic differentiation engine to directly compute the gradients. Algorithm 2 in Appendix illustrates the LA-OffPA2 optimization framework for the case of \(n\) agents. At each state \(s\sim\rho^{b}(s)\) the agent \(i\in\mathcal{N}\) first anticipates the changes in actions of all agents \(j\in\{\mathcal{N}-\{i\}\}\) using Eq. (16). Then, agent \(i\) updates its parameters \(\theta_{i}\) by the following gradient adjustment:
\[\nabla_{\theta_{i}}J_{i}^{\text{LA-OffPA2}}=\mathbb{E}_{\rho^{ \beta}(s)}\nabla_{\theta_{i}}\mu_{\theta_{i}}(s)\nabla_{a_{i}}Q_{i}(s,a_{1}+ \perp\Delta a_{1},...,a_{i},...,a_{n}+\perp\Delta a_{n})|_{a_{i}=\mu_{\theta_{ i}}(s)\;\forall i\in\mathcal{N}}. \tag{19}\]
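Relative to the LOLA-OffPA2 sketch above, the only change needed for LA-OffPA2 is to detach the anticipated action change, which is one way to implement the stop-gradient operator \(\perp\) of Eq. (19):

```python
# In the LOLA-OffPA2 sketch, detach the anticipated action change so that agent i
# cannot shape it -- this implements the stop-gradient operator of Eq. (19):
shifted.append(a_j + (eta_1st * grad_aj).detach())
```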
#### 4.2.3 HLA-OffPA2
As previously mentioned, HLA is proposed to improve coordination in fully cooperative games with common-interested agents. To develop HLA-OffPA2, we first define \(\mathcal{M}\subseteq\mathcal{N}\) as a set of \(m=|\mathcal{M}|\) common-interested agents with a common reward function \(\mathcal{R}=\mathcal{R}_{i}=\mathcal{R}_{j}\;\forall i,j\in\mathcal{M}\). Without loss of generality, we consider \(\mathcal{M}=\mathcal{N}\), i.e., team games with a common state-action value function \(Q(s,a_{1},...,a_{m})\).
**Hierarchy level assignment.** Similarly to HLA, each agent in HLA-OffPA2 is first assigned to one of \(m\) levels, with level one as the lowest hierarchy level and level \(m\) as the
highest. Although the hierarchy level assignment can be random as proposed by Bighashdel et al. (2023), we utilize the amount of influence that agents have on others, i.e., their shaping capacity, as the indicator for the agents' hierarchy levels (see Section 5.3.3 for experimental comparisons). We define the shaping capacity of the \(i^{\text{th}}\) agent, \(\mathcal{SC}_{i}\), as the sum of the action shaping values with respect to all other agents \(j\):
\[\mathcal{SC}_{i}=\sum_{j\in\{\mathcal{M}-\{i\}\}}\left\|(\nabla_{a_{i}}\Delta a _{j})^{\intercal}\nabla_{a_{j}}Q(a_{1},...,a_{m})\right\|, \tag{20}\]
where \(\Delta a_{j}=\nabla_{a_{j}}Q(a_{1},...,a_{m})\). The agent with the highest shaping capacity is assigned to the highest hierarchy level, and so on. As HLA-OffPA2 benefits from centralized learning, the only requirement for the HLA reasoning rules is the centralized state-action value function.
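The hierarchy-level assignment by shaping capacity, Eq. (20), can be sketched as follows; the vector-Jacobian product plays the role of \((\nabla_{a_{i}}\Delta a_{j})^{\intercal}\nabla_{a_{j}}Q\), and all names are placeholders.

```python
import torch

def assign_hierarchy(policies, critic, states):
    """Rank agents by the shaping capacity of Eq. (20); the returned list orders agent
    indices from the lowest to the highest hierarchy level. `critic` is the common Q of
    the team."""
    actions = [mu(states) for mu in policies]
    q = critic(states, *actions).sum()
    grads = torch.autograd.grad(q, actions, create_graph=True)   # nabla_{a_j} Q for all j
    capacities = []
    for i, a_i in enumerate(actions):
        sc = 0.0
        for j in range(len(actions)):
            if j == i:
                continue
            # (nabla_{a_i} Delta a_j)^T nabla_{a_j} Q, computed as a vector-Jacobian product.
            vjp = torch.autograd.grad(grads[j], a_i, grad_outputs=grads[j].detach(),
                                      retain_graph=True, allow_unused=True)[0]
            if vjp is not None:
                sc += vjp.norm().item()
        capacities.append(sc)
    return sorted(range(len(policies)), key=lambda k: capacities[k])  # low -> high level
```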
**Update rules.** After the hierarchy level assignment, the agents update their policy parameters in \(m\) update stages, i.e., one for each agent, and in a top-down fashion: the agent in the highest hierarchy level updates its policy parameters first. In each update stage, the corresponding agent 1) reasons about the actions of followers (if any) in a bottom-up fashion, i.e., it reasons about the agent in the lowest hierarchy level first, 2) updates its policy parameters, and 3) updates its action for the next update stage (if any).
If we set \(m=2\) and assume that agent Two is the leader (HLA-OffPA2-L) and agent One is the follower (HLA-OffPA2-F), the gradient adjustment for the leader is
\[\begin{split}\nabla_{\theta_{2}}J^{\text{HLA-OffPA2-L}}& =\mathbb{E}_{\rho^{\beta}(s)}\nabla_{\theta_{2}}\mu_{\theta_{2}}(s) \nabla_{a_{2}}Q(s,a_{1}+\Delta a_{1},a_{2})|_{a_{1}=\mu_{\theta_{1}}(s),a_{2}= \mu_{\theta_{2}}(s)}\\ &\approx\mathbb{E}_{\rho^{\beta}(s)}\nabla_{\theta_{2}}\mu_{ \theta_{2}}(s)\left(\nabla_{a_{2}}Q+(\nabla_{a_{1}a_{2}}Q)^{\intercal}\Delta a _{1}+(\nabla_{a_{2}}\Delta a_{1})^{\intercal}\nabla_{a_{1}}Q\right),\end{split} \tag{21}\]
where
\[\begin{split}&\Delta a_{1}=\hat{\eta}_{\text{1st}}\nabla_{a_{1}}Q(s, a_{1},a_{2})|_{a_{1}=\mu_{\theta_{1}}(s),a_{2}=\mu_{\theta_{2}}(s)}\\ & Q=Q(s,a_{1},a_{2})|_{a_{1}=\mu_{\theta_{1}}(s),a_{2}=\mu_{ \theta_{2}}(s)}.\end{split} \tag{22}\]
Figure 2: An example of the parameter update stages in HLA-OffPA2, for a game with three common-interested agents, where agent 1, agent 2, and agent 3 are assigned to hierarchy level 1, hierarchy level 2, and level 3, respectively.
The shaping plan of the leader is to change its actions as
\[\bar{a}_{2}=a_{2}+\hat{\eta}_{\text{1st}}\nabla_{a_{2}}Q(s,a_{1}+\Delta a_{1},a_{ 2})|_{a_{1}=\mu_{\theta_{1}}(s),a_{2}=\mu_{\theta_{2}}(s)}, \tag{23}\]
so that an optimal increase in the common state-action value is achieved after its new actions are taken into account by the follower. Therefore, the follower adjusts its parameters through
\[\begin{split}\nabla_{\theta_{1}}J^{\text{HLA-OffPA2-F}}& =\mathbb{E}_{\rho^{\beta}(s)}\nabla_{\theta_{1}}\mu_{\theta_{1}}(s) \nabla_{a_{1}}Q(s,a_{1},\bar{a}_{2})|_{a_{1}=\mu_{\theta_{1}}(s)}\\ &\approx\mathbb{E}_{\rho^{\beta}(s)}\nabla_{\theta_{1}}\mu_{ \theta_{1}}(s)\left(\nabla_{a_{1}}Q+(\nabla_{a_{1}a_{2}}Q)^{\intercal}\hat{ \eta}_{\text{1st}}\nabla_{a_{2}}Q(s,a_{1}+\Delta a_{1},a_{2})\right).\end{split} \tag{24}\]
As in the case of LOLA-OffPA2 and LA-OffPA2, we can use an automatic differentiation engine to directly compute the gradients. To clarify the update rules in HLA-OffPA2 for \(m>2\), we demonstrate an example of update stages for three common-interested agents in Figure 2, where agent One, agent Two, and agent Three are assigned to hierarchy level 1, hierarchy level 2, and hierarchy level 3, respectively. For the case of \(m\) agents, see the HLA-OffPA2 optimization framework in Algorithm 3 in Appendix.
## 5 Experiments
In this section, we conduct a set of experiments to accomplish two main goals: 1) to indicate the benefits of learning anticipation in a broader range of MARL problems, including non-differentiable games with large state spaces, and 2) to show the advantages of our proposed action anticipation approach with respect to policy parameter anticipation.
To accomplish the first goal, we compare our proposed methods with Multi-Agent Deep Deterministic Policy Gradient (MADDPG) (Lowe et al., 2017), configured with three state-of-the-art update rules: 1) standard update rule (Lowe et al., 2017), referred to as MADDPG, 2) Centralized Policy Gradient (CPG) update rule (Peng et al., 2021), referred to as CPG-MADDPG, and 3) Probabilistic Recursive Reasoning (PR2) update rule (Wen et al., 2019), referred to as PR2-MADDPG. To achieve our second goal, we compare our proposed OffPA2-based methods with existing HOG methods that are capable of solving non-differentiable games (see Figure 1). Specifically, we compare LOLA-OffPA2 and LA-OffPA2 with LOLA-DiCE and LA-DiCE, respectively. Prior work (Foerster et al., 2018) has shown that LOLA-DiCE significantly outperforms LOLA in IPD, and, consequently, we do not compare LOLA-OffPA2 with LOLA. As both LOLA-DiCE and LA-DiCE are based on policy parameter anticipation, with these experiments, we can highlight the benefits of our novel action anticipation approach. Similarly to the implementation of Foerster et al. (2018), agents in DiCE can access the policy parameters of other agents. Since the original HLA method can only be applied to differentiable games, we do not compare HLA-OffPA2 with HLA.
Unless mentioned otherwise, we evaluate the performance of methods based on (normalized) Average Episode Reward (AER), with higher values indicating better performance. Furthermore, we assess the efficiency of HOG methods based on the Learning Anticipation Time Complexity (LATC), which for HOG method \(\mathcal{H}\) is computed as:
\[\text{LATC}(\mathcal{H})=\frac{\text{per iteration training time of }\mathcal{H}}{\text{per iteration training time of the naive version of }\mathcal{H}}-1\geq 0 \tag{25}\]
where the naive version of \(\mathcal{H}\) does not perform learning anticipation. The lower values of LATC indicate better efficiency, and LATC \(=0\) implies that learning anticipation adds zero time complexity to the algorithm. In the following sections, we separately evaluate our proposed methods and discuss the results (see Appendix B for implementation details).
### Evaluation of LA-OffPA2
We evaluate the methods on the non-differentiable version of the rotational game proposed by Zhang and Lesser (2010), and we refer to it as the Iterated Rotational Game (IRG). IRG is a one-state, two-agent, one-action (continuous) matrix game with the rewards depicted in Table 2 (for two discrete actions). However, the agents do not have access to the reward table and can only receive a reward for their joint actions. Each agent \(i\in\{1,2\}\) must choose a 1-D continuous action \(0\leq a_{i}\leq 1\), representing a probability over the two discrete actions. The game has a unique equilibrium point at \(a_{1}=a_{2}=0.5\), which is also the fixed point of the game. The rotational game was originally proposed to demonstrate the circular behavior that can emerge if the agents follow the naive gradient updates. LA agents, on the other hand, can quickly converge to the equilibrium point by considering their opponent's parameter adjustment. We evaluate the performance of the methods based on the Distance to Equilibrium (DtE), which is the Euclidean distance between the current actions and the equilibrium point.
Figure 3 shows the learning curves for LA-OffPA2 and other state-of-the-art MADDPG-based algorithms. From this figure, we find that LA-OffPA2 is the only method that converges to the equilibrium actions. These results highlight the importance of learning anticipation in IRG. To further show the effectiveness of LA-OffPA2, we compare our LA-OffPA2 method with LA-DiCE (Foerster et al., 2018) and report the results in Table 1.
\begin{table}
\begin{tabular}{l c}
Method & Distance to Equilibrium \(\downarrow\) \\ \hline
LA-DiCE & 0.09\(\pm\)0.07 \\
LA-OffPA2 (ours) & **0.03\(\pm\)0.02** \\ \hline
Method & Learning Anticipation Time Complexity \(\downarrow\) \\ \hline
LA-DiCE & 1.06 \\
LA-OffPA2 (ours) & **0.13** \\ \hline
\end{tabular}
\end{table}
Table 1: Comparison of LA principles in the frameworks of DiCE and our proposed OffPA2 in the iterated rotational game.
Figure 3: Learning curves in iterated rotational game in terms of the distance to the equilibrium point (\(\downarrow\)).
Looking at the DtE results in Table 1, it is apparent that both methods can solve the game, with LA-OffPA2 achieving slightly better results. However, if we compare the methods regarding Learning Anticipation Time Complexity (LATC), it is clear that LA-OffPA2 is significantly more efficient than LA-DiCE, which indicates the benefits of our OffPA2 framework with respect to DiCE.
### Evaluation of LOLA-OffPA2
#### 5.2.1 Iterated prisoner's dilemma
Iterated Prisoner's Dilemma (IPD) (Foerster et al., 2018a) is a five-state, two-agent, two-action game with the reward matrices depicted in Table 4. Each agent must choose between two discrete actions (cooperate or defect). The game is played for 150 time steps (\(T=150\)). In the one-shot version of the game, there is only one Nash equilibrium for the agents (Defect, Defect). In the iterated game, (Defect, Defect) is also a Nash equilibrium. However, a better equilibrium is Tit-For-Tat (TFT), where the players start by cooperating and then repeat the previous action of the opponent. LOLA agents can shape the opponent's learning to encourage cooperation and, therefore, converge to TFT (Letcher et al., 2019). We evaluate the methods' performance based on the Average Episode Reward (AER).
In Figure 4, we depict the learning curves for LOLA-OffPA2 and the MADDPG-based methods. From this figure, we find that only LOLA-OffPA2 can solve the game, which once again highlights the importance of learning anticipation. Additionally, we compare the performance of LOLA-OffPA2 with LOLA-DiCE (Foerster et al., 2018b), which is designed specifically for this game, and report the results in Table 3. Although both methods achieve high AER values, our LOLA-OffPA2 is significantly more efficient, as its LATC value is much lower than that of LOLA-DiCE.
Figure 4: Learning curves in iterated prisoner’s dilemma in terms of the average episode reward (\(\uparrow\)).
#### 5.2.2 Multi-level Exit-Room game
In the second experiment, we evaluate the capability of LOLA-OffPA2 in games with large state spaces, which is the envisioned use case for our framework. Inspired by Vinitsky et al. (2019), we propose an Exit-Room game with three levels of complexity (see Figure 5). The Exit-Room game is a grid-world variant of the IPD, with two agents (blue and red) and \(15^{2l}\) states, where \(l\in\{1,2,3\}\) is the complexity level of the game. The agents should cooperate and move toward the exit doors on the right. However, they are tempted to exit through the left doors and, in some cases, not to exit at all. In level 1, the agents have three possible actions (_move-left_, _move-right_, or _do nothing_), and the reward is computed following Vinitsky et al. (2019):
\[\begin{split}\text{reward}_{C}&=\lambda_{C}(\text{ cooperation}_{self}+\text{cooperation}_{opponent})\\ \text{reward}_{D}&=\lambda_{D}(1-\text{cooperation }_{self})\\ \text{reward}&=\text{reward}_{C}+\text{reward}_{D},\end{split} \tag{26}\]
where \(\lambda_{C}\) and \(\lambda_{D}\) are some constants, and cooperation\({}_{self}\) and cooperation\({}_{opponent}\) are the normalized distances of the agent and its opponent to the right door, respectively. In levels 2 and 3, the agents have additional _move-up_ and _move-down_ actions. In level 3, the door positions are randomly located, resulting in more complex interactions among the agents. In addition to the reward in Eq. (26), the agents receive an additional reward for approaching the doors in levels 2 and 3. Each agent receives four \(90\times 90\) RGB images representing the state observations of the last four time steps.
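For illustration, the level-1 reward of Eq. (26) can be computed as below; \(\lambda_{C}\) and \(\lambda_{D}\) are unspecified constants in the text, so the values here are placeholders, and normalizing the distances by the room width is an assumption.

```python
def exit_room_reward_level1(x_self, x_opp, room_width, lam_c=1.0, lam_d=1.0):
    """Level-1 reward of Eq. (26). Following the text, cooperation_* is the normalized
    distance of each agent to the right (cooperative) door; lam_c and lam_d are
    placeholder constants, and the normalization by the room width is an assumption."""
    coop_self = abs(room_width - x_self) / room_width   # normalized distance to the right door
    coop_opp = abs(room_width - x_opp) / room_width
    reward_c = lam_c * (coop_self + coop_opp)
    reward_d = lam_d * (1.0 - coop_self)
    return reward_c + reward_d
```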
Figure 6 compares the learning curves of LOLA-OffPA2 and the MADDPG-based methods in terms of the Normalized Average Episode Reward (NAER), which is the AER value normalized between the highest and lowest episode rewards in each game level.
Figure 5: State observation in the Exit-Room game, level one (left), level two (middle), and level three (right).
\begin{table}
\begin{tabular}{l|c c c|c c c c} \hline \hline \multicolumn{6}{c}{Normalized Average Episode Reward \(\uparrow\)} & \multicolumn{6}{c}{Learning Anticipation Time Complexity \(\downarrow\)} \\ \cline{2-9} Methods & \(l=1\) & \(l=2\) & \(l=3\) & Naïve & 1st-order & 2nd-order & 3rd-order & 4th-order \\ \hline LOLA-DiCE & 0.91\(\pm\)0.04 & 0.68\(\pm\)0.06 & 0.56\(\pm\)0.12 & 0.00 & 1.39 & 2.74 & 4.12 & 5.41 \\ LOLA-OffPA2 (ours) & **1.00\(\pm\)0.00** & **0.99\(\pm\)0.01** & **0.93\(\pm\)0.03** & 0.00 & **0.24** & **0.47** & **0.69** & **0.94** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparisons of LOLA-DiCE with our proposed LOLA-OffPA2 in the Exit-Room game, in terms of performance (normalized average return in different game levels) and efficiency (learning anticipation time complexity in different reasoning levels).
In Figure 6, we can clearly see that our LOLA-OffPA2 significantly outperforms the other methods, as it did in the IPD matrix game.
To highlight the benefits of our proposed method with respect to existing HOG methods, we compare our LOLA-OffPA2 with LOLA-DiCE in terms of performance (by comparing NAER) and training efficiency (by comparing LATC) in Table 5. Observing Table 5, it is apparent that LOLA-DiCE fails to acquire the highest rewards, particularly for the second and third levels of the game, where the state-space size is increased. The reason is that when the size of the state space increases, the LOLA-DiCE agents fail to properly approximate the higher-order gradients via sampling and, consequently, cannot perform learning anticipation to achieve a higher reward. In contrast, our proposed LOLA-OffPA2 performs better in all levels of the game in terms of NAER, and scaling from naïve to higher-order reasoning is significantly more efficient for LOLA-OffPA2 than for LOLA-DiCE. This emphasizes that we have overcome the limitations of HOG methods described in Section 1.
#### 5.2.3 Influence of the projected prediction length
Based on our findings in Theorem 2, action anticipation via first-order Taylor expansion scales the prediction length and influences the convergence behavior of HOG methods.
Figure 6: Learning curves in different complexity levels of the exit-room game in terms of the normalized average episode reward (\(\uparrow\)).
Figure 7: The influence of the projected prediction length (\(\hat{\eta}_{\text{1st}}\)) on the convergence behavior of LOLA-OffPA2 in the level-three Exit-Room game.
Here, we empirically show that by directly changing the projected prediction length \(\hat{\eta}_{\text{1st}}\), we can tune the resulting prediction length in the state space, i.e., \(\eta^{\prime}\), and consequently improve the convergence behavior. We conduct an experimental study to analyze the influence of \(\hat{\eta}_{\text{1st}}\) on the convergence behavior of LOLA-OffPA2 in the level-three Exit-Room game (see Figure 7). The experiments are repeated four times, and the mean results are reported in terms of NAER in Figure 7. It is clear from Figure 7 that by tuning \(\hat{\eta}_{\text{1st}}\), we can alter the convergence behavior of LOLA-OffPA2. Furthermore, Figure 7 demonstrates that low values of the projected prediction length (\(\hat{\eta}_{\text{1st}}=0.1\)) cancel the effect of learning anticipation in OffPA2, while high values (\(\hat{\eta}_{\text{1st}}=1.3\)) lead to instability of LOLA-OffPA2, which can be attributed to our findings in Theorem 2.
### Evaluation of HLA-OffPA2
#### 5.3.1 Particle-coordination game
To demonstrate the coordination capability of HLA-OffPA2, we propose the Particle-Coordination Game (PCG) in the Particle environment (Lowe et al., 2017). As shown in Figure 8(a), each of the two agents (purple circles) should select and approach one of the three landmarks (one gray and two green circles). An agent's selected landmark is the one closest to it. If the agents select and approach the same landmark, they receive the globally optimal reward (green landmarks) or a locally optimal reward (gray landmark). If they select and approach different landmarks, they receive the assigned miscoordination penalty (see Table 6). Each agent receives a 10-D state observation vector (velocity and position information of the agent, i.e., 4-D, and location information of the landmarks, i.e., 6-D) and selects a 5-D one-hot vector representing one of five discrete actions: _move-right_, _move-left_, _move-up_, _move-down_, and _stay_. The horizon is set to 25.
\begin{table}
\begin{tabular}{c|c|c|c} & Green (L) & Gray & Green (R) \\ \hline Green (L) & 2 & 0 & -20 \\ \hline Gray & 0 & 0.4 & 0 \\ \hline Green (R) & -20 & 0 & 2 \\ \end{tabular}
\end{table}
Table 6: Rewards in particle-coordination game.
Figure 8: Particle-coordination game. (a) Schematics of the game with two agents, i.e., purple circles, and three landmarks, i.e., gray and green circles. (b) Learning curves in terms of average episode reward (\(\uparrow\)). Best viewed in color.
The game is quite challenging, as the agents cannot see each other's locations and are therefore prone to miscoordination.
In Figure 8(b), we depict the learning curves for our HLA-OffPA2 and the other MADDPG-based methods in terms of the average episode reward (AER). As demonstrated, the MADDPG-based methods have relatively high variance in their convergence points. For instance, the MADDPG agents, which heavily benefit from exploration and randomness during policy parameter updates, can occasionally converge to the globally optimal point with the highest AER. However, the high miscoordination penalty forces the agents to choose the safest option (gray landmark), which leads to a zero reward in the worst-case scenario. From this figure, it is clear that our HLA-OffPA2 is the only method that consistently converges to the global optimum of the game, which is consistent with the reported results for HLA in fully-cooperative differentiable games (Bighashdel et al., 2023).
#### 5.3.2 Standard multi-agent games
For the final experiments, we evaluate our HLA-OffPA2 and MADDPG-based methods in three Particle environment games (Lowe et al., 2017): 1) Cooperative Navigation with three common-interested agents, 2) Physical Deception with two common-interested agents and one self-interested agent, and 3) Predator-Prey with two common-interested (predator) agents and one self-interested (prey) agent. Furthermore, we compare the methods in three games within the multi-agent Mujoco environment (Peng et al., 2021): 1) two-agent Half-Cheetah, 2) two-agent Walker, and 3) two-agent Reacher. In the mixed environments (Physical Deception and Predator-Prey), we have employed the MADDPG method for the self-interested agents in all experiments. Games' specifications are reported in Table 7.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{Particle Environment} & \multicolumn{3}{c}{Mujoco Environment} \\ \cline{2-7} & Cooperative Navigation & Physical Deception & Predator-Prey & Half-Cheetah & Walker & Reacher \\ \hline Observation & 18-D & 10-D (8-D) & 14-D (12-D) & 11-D & 11-D & 8-D \\ Action & 5-D & 5-D (5-D) & 5-D (5-D) & 3-D & 3-D & 1-D \\ Action type & discrete & discrete & discrete & continuous & continuous & continuous \\ Horizon (step) & 25 & 25 & 25 & 100 & 300 & 50 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Specifications in the standard multi-agent games. In the mixed environments, the dimensions are reported as "\(d_{1}\) (\(d_{2}\))", where \(d_{1}\) is the dimension for common-interested agents and \(d_{2}\) is the dimension for self-interested ones.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{\(\uparrow\)NAER in Particle Environment} & \multicolumn{3}{c}{\(\uparrow\)NAER in Mujoco Environment} \\ \cline{2-7} Methods & Cooperative Navigation & Physical Deception & Predator-Prey & Half-Cheetah & Walker & Reacher \\ \hline DDPG (LB) & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ C-MADDPG (UB) & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ \hline MADDPG & 0.77 & 0.61 & 0.21 & 0.86 & 0.45 & 0.02 \\ CPG-MADDPG & 0.78 & 0.67 & 0.18 & 0.88 & 0.46 & 0.05 \\ PB2-MADDPG & 0.78 & 0.54 & 0.08 & 0.85 & 0.45 & 0.01 \\ HLA-OffPA2 (ours) & **0.88** & **0.83** & **0.44** & **0.94** & **0.67** & **0.42** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Comparisons of methods in terms of the Normalized Average Episode Reward (NAER) for common-interested agents. LB: Lower Bound. UB: Upper Bound.
We created separate validation and test sets for each game that included 100 and 300 randomly generated scenarios, respectively. In each game, we save the models that have the best performance on the validation set and test them on the test set to report the results. All experiments are repeated five times, and the mean results are reported in Table 8 in terms of the Normalized AER (NAER). The normalization is done between the single-agent variant of MADDPG, i.e., DDPG (Lillicrap et al., 2016), and a fully centralized (in learning and execution) variant of MADDPG, referred to as C-MADDPG. As all of the games are non-differentiable, the original HLA method is no longer applicable.
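For clarity, the normalization behind NAER is presumably the usual min-max rescaling between these two reference points,

\[\text{NAER}=\frac{\text{AER}-\text{AER}_{\text{DDPG}}}{\text{AER}_{\text{C-MADDPG}}-\text{AER}_{\text{DDPG}}},\]

so that DDPG scores 0 and C-MADDPG scores 1 by construction, consistent with the first two rows of Table 8.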
In Table 8, we observe that our proposed HLA-OffPA2 consistently and significantly outperforms all the state-of-the-art MADDPG-based methods. Again, these results confirm that learning anticipation, and in particular our proposed HLA-OffPA2, improves coordination among common-interested agents, leading to better results.
#### 5.3.3 Ablation study on hierarchy-level assignments
We have additionally conducted an ablation study on the hierarchy-level assignment in HLA-OffPA2. Rather than iteratively sorting the agents based on their shaping capacities through Eq. (20), we randomly assigned the agents to hierarchy levels at the beginning and fixed the hierarchy levels throughout the optimization. This variant, referred to as HLA-OffPA2 (F), is evaluated and compared in Table 9. As can be seen, the proposed sorting strategy based on the shaping capacities of the agents, as done in our HLA-OffPA2, consistently improves performance.
## 6 Conclusion
In this paper, we proposed the OffPA2 framework, which enables the applicability of HOG methods to non-differentiable games with large state spaces. To demonstrate the advantages of our framework, we developed three novel HOG methods: LA-OffPA2, LOLA-OffPA2, and HLA-OffPA2. By conducting several experiments, we demonstrated that our proposed methods outperform the existing HOG methods in terms of performance and efficiency. Furthermore, we extensively compared our methods with various DPG-based methods, which do not use learning anticipation, and showed that learning anticipation improves coordination among agents and leads to higher rewards. As a result of our framework, the benefits of learning anticipation can now be used in many more MARL problems.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{\(\uparrow\)NAER in Particle Environment} & \multicolumn{3}{c}{\(\uparrow\)NAER in Mujoco Environment} \\ \cline{2-7} Methods & Cooperative Navigation & Physical Deception & Predator-Prey & Half-Cheetah & Walker & Reacher \\ \hline HLA-OffPA2 (F) & 0.85 & 0.80 & 0.38 & 0.92 & 0.63 & 0.38 \\ HLA-OffPA2 & **0.88** & **0.83** & **0.44** & **0.94** & **0.67** & **0.42** \\ \hline \hline \end{tabular}
\end{table}
Table 9: Ablation study on the hierarchy level assignments in our HLA-OffPA2 method.
## Appendix A Proofs
### Proof of Theorem 1
At each state \(s\sim\rho^{b}(s)\), agent One anticipates the changes in the policy parameters of agent Two, i.e., \(\Delta\theta_{2}(s)=\eta\nabla_{\theta_{2}}Q_{2}(s,\mu_{\theta_{1}}(s),\mu_{ \theta_{2}}(s))\), and updates the policy parameters in the direction of:
\[\nabla_{\theta_{1}}J_{1}=\mathbb{E}_{\rho^{\beta}(s)}\nabla_{\theta_{1}}Q_{1}( s,\mu_{\theta_{1}},\mu_{\theta_{2}+\Delta\theta_{2}(s)}(s)), \tag{27}\]
In order to prove Theorem 1, we need two assumptions: 1) neglecting the direct dependencies of the state-action value function on the policy parameters, and 2) first-order Taylor expansion. The first assumption is standard in off-policy reinforcement learning, including both deterministic and stochastic off-policy Actor-Critic algorithms (Silver et al., 2014; Degris et al., 2012), and justification to support this assumption is provided in Degris et al. (2012). As to the second assumption, please refer to Theorem 2 to see how this assumption influences the performance.
Given the first assumption, we can rewrite Eq. (27) as
\[\nabla_{\theta_{1}}J_{1}=\mathbb{E}_{\rho^{\beta}(s)}\nabla_{\theta_{1}}\mu_ {\theta_{1}}(s)Q_{1}(s,a_{1},\tilde{a}_{2})|_{a_{1}=\mu_{\theta_{1}}(s),\tilde {a}_{2}=\mu_{\theta_{2}+\Delta\theta_{2}(s)}(s)}, \tag{28}\]
Similarly, we can set \(\Delta\theta_{2}(s)=\eta\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\nabla_{a_{2}}Q _{2}(s,a_{1},a_{2})|_{a_{2}=\mu_{\theta_{2}}(s)}\). We now use the first-order Taylor expansion to map the anticipated gradient information to the action space:
\[\begin{split}\tilde{a}_{2}&=\mu_{\theta_{2}+\Delta \theta_{2}(s)}(s)\\ &\approx\mu_{\theta_{2}}(s)+(\Delta\theta_{2}(s))^{\intercal} \nabla_{\theta_{2}}\mu_{\theta_{2}}(s),\end{split} \tag{29}\]
Given the definition of \(\Delta\theta_{2}(s)\), we have:
\[\begin{split}\tilde{a}_{2}&\approx\mu_{\theta_{2}}(s)+\left(\eta\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\right)^{\intercal}\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\\ &=\mu_{\theta_{2}}(s)+\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\left(\eta\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\right)^{\intercal}\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\\ &=\mu_{\theta_{2}}(s)+\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\,\eta\left\|\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\right\|^{2}\\ &=\mu_{\theta_{2}}(s)+\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\,\hat{\eta}_{\rm 1st},\end{split} \tag{30}\]
where \(\left\|\cdot\right\|\) is the \(l^{2}\)-norm and we have defined the projected prediction length \(\hat{\eta}_{\rm 1st}=\eta\left\|\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\right\|^{2}\), since \(\left\|\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\right\|^{2}\) is a positive number independent of \(\theta_{1}\). Therefore:
\[\tilde{a}_{2}\approx a_{2}+\Delta a_{2}. \tag{31}\]
where we have defined \(\Delta a_{2}=\hat{\eta}_{\rm 1st}\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\). Substituting Eq. (31) into Eq. (27) yields:
\[\nabla_{\theta_{1}}J_{1}^{\rm LA}\approx\mathbb{E}_{\rho^{\beta}(s)}\nabla_{ \theta_{1}}\mu_{\theta_{1}}(s)\nabla_{a_{1}}Q_{1}(s,a_{1},a_{2}+\Delta a_{2})| _{a_{1}=\mu_{\theta_{1}}(s),a_{2}=\mu_{\theta_{2}}(s)}, \tag{32}\]
and consequently, Theorem 1 is proved.
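To make the resulting update concrete, the following is a minimal PyTorch-style sketch of the action-anticipation step of Eqs. (30)-(32) for two agents; the policy and critic modules, the batch of states, and the value of \(\hat{\eta}_{\rm 1st}\) are placeholders rather than the actual implementation.

```python
import torch

def la_policy_loss(mu_1, mu_2, q_1, q_2, states, eta_1st=0.8):
    """Sketch of first-order action anticipation (Eqs. (30)-(32)) for agent one.

    mu_i are deterministic policy networks and q_i are centralized critics
    Q_i(s, a_1, a_2); all modules here are placeholders, not the paper's code.
    """
    a1 = mu_1(states)
    a2 = mu_2(states)

    # Anticipate the opponent's learning step directly in action space:
    # Delta a_2 = eta_1st * dQ_2/da_2 at the current joint action (Eq. (31)).
    a2_leaf = a2.detach().requires_grad_(True)
    q2_sum = q_2(states, a1.detach(), a2_leaf).sum()
    delta_a2 = eta_1st * torch.autograd.grad(q2_sum, a2_leaf)[0]

    # Agent one's objective uses the anticipated opponent action a_2 + Delta a_2,
    # while the gradient flows only through a_1 = mu_1(s) as in Eq. (32).
    anticipated_a2 = (a2 + delta_a2).detach()
    return -q_1(states, a1, anticipated_a2).mean()
```

Minimizing this loss with any standard optimizer performs gradient ascent on \(J_{1}^{\rm LA}\) while treating the anticipated opponent action as a constant.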
### Proof of Theorem 2
In order to prove Theorem 2, we first need to show that:
**Lemma 4**: _If the anticipated changes are mapped from the policy parameter space to the action space using full-order Taylor expansion, there exists \(\hat{\eta}_{\text{full}}\in\mathbb{R}\) such that_
\[\mu_{\theta_{2}+\Delta\theta_{2}}(s)=\mu_{\theta_{2}}(s)+\hat{\eta}_{\text{full }}\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2}), \tag{33}\]
_where_
\[\Delta\theta_{2}=\eta\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\nabla_{a_{2}}Q_{ 2}(s,a_{1},a_{2}) \tag{34}\]
**Proof**. The full-order Taylor expansion of the anticipated gradient yields:
\[\mu_{\theta_{2}+\Delta\theta_{2}}(s)=\mu_{\theta_{2}}(s)+(\Delta\theta_{2})^{ \intercal}\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)+\frac{1}{2}(\Delta\theta_{2} )^{\intercal}H_{\mu_{\theta_{2}}}(s)\Delta\theta_{2}+O(\|\Delta\theta_{2}\|^{ 3}), \tag{35}\]
where \(H_{\mu_{\theta_{2}}}(s)\) denotes the Hessian of \(\mu_{\theta_{2}}\) at \(s\). Given that:
\[\Delta\theta_{2}=(\eta\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\nabla_{a_{2}}Q_{ 2}(s,a_{1},a_{2}))^{\intercal}\,, \tag{36}\]
we have
\[\begin{split}\mu_{\theta_{2}+\Delta\theta_{2}}(s)=&\mu_{\theta_{2}}(s)+\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\eta\left\|\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\right\|^{2}\\ &+\frac{1}{2}\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\eta^{2}\left(\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\right)^{\intercal}H_{\mu_{\theta_{2}}}(s)\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\left(\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\right)^{\intercal}\\ &+O(\eta^{3}).\end{split} \tag{37}\]
By defining
\[\begin{split} C_{1}(s)&=\left\|\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\right\|^{2}\\ C_{2}(s)&=\frac{1}{2}\left(\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\right)^{\intercal}H_{\mu_{\theta_{2}}}(s)\nabla_{\theta_{2}}\mu_{\theta_{2}}(s)\left(\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\right)^{\intercal},\end{split} \tag{38}\]
we have:
\[\begin{split}\mu_{\theta_{2}+\Delta\theta_{2}}(s)=& \mu_{\theta_{2}}(s)+\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})(\eta C_{1}(s)+ \eta^{2}C_{2}(s)+O(\eta^{3})),\end{split} \tag{39}\]
Given the definition of \(C_{2}(s)\) and the dimension constraint implied by \(\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2})\), it can be concluded that \(C_{1}(s)\in\mathbb{R}^{+}\) and \(C_{2}(s)\in\mathbb{R}\). Therefore:
\[\begin{split}\mu_{\theta_{2}+\Delta\theta_{2}}(s)=& \mu_{\theta_{2}}(s)+\hat{\eta}_{\text{full}}\nabla_{a_{2}}Q_{2}(s,a_{1},a_{2 }),\end{split} \tag{40}\]
where \(\hat{\eta}_{\text{full}}=\eta C_{1}(s)+\eta^{2}C_{2}(s)+O(\eta^{3})\in\mathbb{R}\). Consequently, we have proved Lemma 4.
If we now map the anticipated changes of policy parameters, with a prediction length \(\eta^{\prime}\in\mathbb{R}^{+}\), to the action space using full-order Taylor expansion, we have:
\[\begin{split}\mu_{\theta_{2}+\Delta\theta^{\prime}_{2}}(s)=& \mu_{\theta_{2}}(s)+\hat{\eta^{\prime}}_{\text{full}}\nabla_{a_{2}}Q_{2}(s,a_{ 1},a_{2}),\end{split} \tag{41}\]
where \(\hat{\eta^{\prime}}_{\text{full}}=\eta^{\prime}C_{1}(s)+\eta^{\prime 2}C_{2}(s)+O(\eta^{\prime 3})\). In order to prove Theorem 2, we need to find the values of \(\hat{\eta}_{\text{1st}}\) that yield:
\[\begin{split}\hat{\eta}_{\text{1st}}&=\hat{\eta^{ \prime}}_{\text{full}}\\ &=\eta^{\prime}C_{1}(s)+\eta^{\prime 2}C_{2}(s)+O(\eta^{\prime 3}), \end{split} \tag{42}\]
and at the same time \(\eta^{\prime}\in\mathbb{R}^{+}\). By neglecting \(O(\eta^{\prime 3})\) and given that \(\hat{\eta}_{\text{1st}}\in\mathbb{R}^{+}\), there are two cases to be considered:
* if \(C_{2}(s)\) is non-negative, then for any value of \(\hat{\eta}_{\text{1st}}\in\mathbb{R}^{+}\), there exists \(\eta^{\prime}\in\mathbb{R}^{+}\).
* if \(C_{2}(s)\) is negative, then for \(\hat{\eta}_{\text{1st}}<\frac{C_{1}(s)^{2}}{4|C_{2}(s)|}\), there exists \(\eta^{\prime}\in\mathbb{R}^{+}\).
Therefore, for sufficiently small \(\hat{\eta}_{\text{1st}}\), i.e., \(\hat{\eta}_{\text{1st}}<\frac{C_{1}(s)^{2}}{4|C_{2}(s)|}\), there always exists \(\eta^{\prime}\in\mathbb{R}^{+}\), and consequently, Theorem 2 is proved.
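For completeness, a short check of where this threshold comes from: neglecting the \(O(\eta^{\prime 3})\) term, we need a positive root \(\eta^{\prime}\) of \(\hat{\eta}_{\text{1st}}=\eta^{\prime}C_{1}(s)+\eta^{\prime 2}C_{2}(s)\). When \(C_{2}(s)<0\), the smaller root

\[\eta^{\prime}=\frac{C_{1}(s)-\sqrt{C_{1}(s)^{2}-4|C_{2}(s)|\,\hat{\eta}_{\text{1st}}}}{2|C_{2}(s)|}\]

is real and positive precisely when the discriminant is nonnegative, i.e., when \(\hat{\eta}_{\text{1st}}\leq C_{1}(s)^{2}/(4|C_{2}(s)|)\), which is the bound quoted above.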
### Proof of Theorem 3
As both policy and state-action value functions are approximated via neural networks, the time complexity of the gradient anticipation follows the time complexity of backpropagation in neural networks. As assumed, the policy and state-action value networks have the same number of hidden layers, \(H\), and neurons in each hidden layer, \(N\). Therefore, the backpropagation time complexity of the networks for an input state of size \(N_{s}\) and action of size \(N_{a}\) is (Lister and Stone, 1995):
* Backpropagation time complexity in the policy network: \(O(N_{s}N+(H-1)N^{2}+NN_{a})\)
* Backpropagation time complexity in the state-action value network: \(O((N_{s}+N_{a})N+(H-1)N^{2}+N)\)
Given that \(N>N_{s}+N_{a}\), the time complexity of both networks can be upper bounded by \(O(LN^{2})\), where we defined \(L=H+1\). Now we assume that agent \(i\in\mathcal{N}\) wants to anticipate the learning step of another agent \(j\in\mathcal{N}\setminus\{i\}\). In the case of policy parameter anticipation, agent \(i\) anticipates \(\Delta\theta_{j}(s)\) as:
\[\Delta\theta_{j}(s)=\eta\nabla_{\theta_{j}}\mu_{\theta_{j}}(s)\nabla_{a_{j}}Q _{j}(s,a_{1},...,a_{n})|_{a_{j}=\mu_{\theta_{j}}(s)}. \tag{43}\]
Therefore, the time complexity is \(O(LN^{2})\times O(LN^{2})\), or in other words, \(O(L^{2}N^{4})\). In the case of action anticipation, on the other hand, agent \(i\) anticipates \(\Delta a_{j}(s)\) as:
\[\Delta a_{j}=\hat{\eta}_{\text{1st}}\nabla_{a_{j}}Q_{j}(s,a_{1},...,a_{n}), \tag{44}\]
which has a complexity of \(O(LN^{2})\). Consequently, the time complexity is reduced by a factor of \(O(LN^{2})\), and Theorem 3 is proved.
```
Initialize \(\mu_{\theta_{i}}\), \(Q_{i}\), \(\mu^{\prime}_{i}\), and \(Q^{\prime}_{i}\) \(\forall i\in\mathcal{N}\), and set \(\hat{\eta}_{\text{1st}}\)
for episode = 1 to max-num-episodes do
    Receive initial state \(s\)
    for \(t=1\) to max-episode-length do
        Select action \(a_{i}\) from \(\pi^{b}_{\theta_{i}}(s)\) \(\forall i\in\mathcal{N}\)
        Execute actions \(a=\{a_{i}\}_{\forall i\in\mathcal{N}}\) and observe rewards \(r=\{r_{i}\}_{\forall i\in\mathcal{N}}\) and new state \(s^{\prime}\)
        Store the tuple \((s,a,r,s^{\prime})\) in replay buffer \(\mathcal{D}\)
        Set \(s=s^{\prime}\)
        Sample \(K\) random tuples \(\{(s^{k},a^{k},r^{k},s^{\prime k})\}_{k\in\{1,...,K\}}\) from \(\mathcal{D}\)
        for agent \(i=1\) to \(n\) do
            Set \(y^{k}_{i}=r^{k}_{i}+\gamma Q^{\prime}_{i}(s^{\prime k},a^{\prime}_{1},...,a^{\prime}_{n})|_{a^{\prime}_{n}=\mu^{\prime}_{i}(s^{\prime k})}\), for \(k\in\{1,...,K\}\)
            Update state-action value function \(Q_{i}\) by minimizing:
                \[\mathcal{L}(\omega_{i})=\frac{1}{K}\sum_{k\in\{1,...,K\}}[(Q_{i}(s^{k},a^{k}_{1},...,a^{k}_{n};\omega_{i})-y^{k}_{i})^{2}]\]
        endfor
        Set \(a^{k}_{i}=\mu_{\theta_{i}}(s^{k})\), for \(k\in\{1,...,K\}\) and \(i\in\mathcal{N}\)
        for agent \(i=1\) to \(n\) do
            for agent \(j=1\) to \(n\) do
                if \(j=i\) then continue
                Set \(\Delta a^{k}_{j}=\hat{\eta}_{\text{1st}}\frac{\partial}{\partial a^{k}_{j}}Q_{j}(s^{k},a^{k}_{1},...,a^{k}_{n})\) for \(k\in\{1,...,K\}\)
            endfor
            Update policy parameters \(\theta_{i}\) via:
                \[\nabla_{\theta_{i}}J^{\text{LOLA-OffPA2}}_{i}=\frac{1}{K}\sum_{k\in\{1,...,K\}}\nabla_{\theta_{i}}\mu_{\theta_{i}}(s^{k})\frac{\partial}{\partial a^{k}_{i}}Q_{i}(s^{k},a^{k}_{1}+\Delta a^{k}_{1},...,a^{k}_{i},...,a^{k}_{n}+\Delta a^{k}_{n})\]
        endfor
        Update \(Q^{\prime}_{i}\) and \(\mu^{\prime}_{i}\) \(\forall i\in\mathcal{N}\)
    endfor
endfor
```
**Algorithm 1** LOLA-OffPA2 for a set of \(n\) self-interested agents (\(\mathcal{N}\)).
```
Initialize \(\mu_{\theta_{i}}\), \(Q_{i}\), \(\mu^{\prime}_{i}\), and \(Q^{\prime}_{i}\) \(\forall i\in\mathcal{N}\), and set \(\hat{\eta}_{\text{1st}}\)
for episode = 1 to max-num-episodes do
    Receive initial state \(s\)
    for \(t=1\) to max-episode-length do
        Select action \(a_{i}\) from \(\pi^{b}_{\theta_{i}}(s)\) \(\forall i\in\mathcal{N}\)
        Execute actions \(a=\{a_{i}\}_{\forall i\in\mathcal{N}}\) and observe rewards \(r=\{r_{i}\}_{\forall i\in\mathcal{N}}\) and new state \(s^{\prime}\)
        Store the tuple \((s,a,r,s^{\prime})\) in replay buffer \(\mathcal{D}\)
        Set \(s=s^{\prime}\)
        Sample \(K\) random tuples \(\{(s^{k},a^{k},r^{k},s^{\prime k})\}_{k\in\{1,...,K\}}\) from \(\mathcal{D}\)
        for agent \(i=1\) to \(n\) do
            Set \(y^{k}_{i}=r^{k}_{i}+\gamma Q^{\prime}_{i}(s^{\prime k},a^{\prime}_{1},...,a^{\prime}_{n})|_{a^{\prime}_{n}=\mu^{\prime}_{i}(s^{\prime k})}\), for \(k\in\{1,...,K\}\)
            Update state-action value function \(Q_{i}\) by minimizing:
```
**Algorithm 2** LA-OffPA2 for a set of \(n\) self-interested agents (\(\mathcal{N}\)).
## Appendix B Implementation details
In this section, we describe the implementations of the methods in detail. In order to have fair comparisons between the methods, we have used policies and value functions with the same neural network architecture in all methods. Algorithms 1, 2, and 3 illustrate the optimization frameworks for LOLA-OffPA2, LA-OffPA2, and HLA-OffPA2, respectively.
**A note on partial observability**. So far, we have formulated the MARL setup as an MG, where it is assumed that the agents have access to the state space. However, in many games, the agents only receive a private state observation of the current state. In this case, the MARL setup can be formulated as a Partially Observable Markov Game (PO-MG) (Littman, 1994). A PO-MG is a tuple \((\mathcal{N},\mathcal{S},\{\mathcal{A}_{i}\}_{i\in\mathcal{N}},\{\mathcal{O}_{i}\}_{i\in\mathcal{N}},\{\mathcal{R}_{i}\}_{i\in\mathcal{N}},\mathcal{T},\{\Omega_{i}\}_{i\in\mathcal{N}},\rho,\gamma)\), where \(\mathcal{O}_{i}\) is the set of state observations for agent \(i\in\mathcal{N}\). Each agent \(i\) chooses its action \(a_{i}\in\mathcal{A}_{i}\) through the policy \(\mu_{\theta_{i}}:\mathcal{O}_{i}\rightarrow\mathcal{A}_{i}\), parameterized by \(\theta_{i}\), conditioned on the given state observation \(o_{i}\in\mathcal{O}_{i}\). After the transition to a new state, each agent \(i\) receives a private state observation through its observation function \(\Omega_{i}:\mathcal{S}\rightarrow\mathcal{O}_{i}\). In this case, the centralized state-action value function for each agent \(i\) is defined as \(Q_{i}(o_{1},...,o_{n},a_{1},...,a_{n})=\mathbb{E}[G_{i}^{t}(\tau|s^{t}=s,o_{i}=\Omega_{i}(s)\ \&\ a_{i}^{t}=a_{i}\ \ \forall i\in\mathcal{N})]\). Therefore, the proposed OffPA2 framework can be modified accordingly.
### Iterated rotational game and iterated prisoner's dilemma
We employed Multi-Layer Perceptron (MLP) networks with two hidden layers of dimension 64 for policies and value functions. In order to make the state-action value functions any-order differentiable, we used the SiLU nonlinear function (Elfwing et al., 2018) between the hidden layers. For IRG, we used the Sigmoid function in the policies to output 1-D continuous actions, and for IPD, we used the Gumbel-softmax function (Jang et al., 2017) in the policies to output two discrete actions. The algorithms are trained for 900 (in IRG) and 50 (in IPD) episodes by running the Adam optimizer (Kingma and Ba, 2015) with a fixed learning rate of 0.01. The (projected) prediction lengths in the OffPA2 and DiCE frameworks are tuned and set to 0.8 and 0.3, respectively. All experiments are repeated five times, and the results are reported in terms of mean and standard deviation.
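A minimal PyTorch sketch of this architecture is shown below; the layer widths and the SiLU/Gumbel-softmax choices follow the description above, while everything else (e.g., the Gumbel-softmax temperature) is an assumed value.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    """Two hidden layers of width 64 with SiLU; Gumbel-softmax over discrete actions."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, tau=1.0):
        # Differentiable one-hot action sample (the temperature tau is an assumption).
        return F.gumbel_softmax(self.body(obs), tau=tau, hard=True)

class Critic(nn.Module):
    """Centralized state-action value Q_i(s, a_1, ..., a_n) with the same hidden sizes."""
    def __init__(self, obs_dim, joint_action_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim + joint_action_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, *actions):
        return self.body(torch.cat((obs, *actions), dim=-1)).squeeze(-1)
```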
### Exit-Room game
Both policy and value networks consist of two parts: an encoder and a decoder. The encoders are CNN networks with three convolutional layers (\(12\times 90\times 90\to 32\times 21\times 21\to 64\times 9\times 9\to 64\times 7\times 7\)) and two fully connected layers (\(3136\to 512\to 128\)), with SiLU nonlinear functions (Elfwing et al., 2018) in between. The decoders are MLP networks with two hidden layers of dimension 64 for policies and value functions. We used the Gumbel-softmax function in the policies (Jang et al., 2017) to output the discrete actions. The algorithms are trained for 450 (in level one) and 4500 (in levels two and three) episodes by running the Adam optimizer (Kingma and Ba, 2015) with a fixed learning rate of 0.01. The (projected) prediction lengths in the OffPA2 and DiCE frameworks are tuned and set to 1 and 0.4, respectively. All experiments are repeated five times, and the results are reported in terms of mean and standard deviation.
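A possible PyTorch encoder reproducing the feature-map sizes listed above is sketched below; the kernel sizes and strides are assumptions, chosen only so that the intermediate shapes (\(32\times 21\times 21\), \(64\times 9\times 9\), \(64\times 7\times 7\), then \(3136\to 512\to 128\)) come out as stated.

```python
import torch
import torch.nn as nn

class ExitRoomEncoder(nn.Module):
    """CNN encoder for four stacked 90x90 RGB frames (12 input channels).

    Kernel sizes and strides are assumed; they are picked so that the feature
    maps are 32x21x21 -> 64x9x9 -> 64x7x7, flattened to 3136 -> 512 -> 128.
    """
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(12, 32, kernel_size=10, stride=4), nn.SiLU(),  # 90 -> 21
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.SiLU(),   # 21 -> 9
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.SiLU(),   # 9 -> 7
        )
        self.fc = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.SiLU(),
            nn.Linear(512, 128), nn.SiLU(),
        )

    def forward(self, frames):
        h = self.conv(frames)                    # (B, 64, 7, 7)
        return self.fc(h.flatten(start_dim=1))   # (B, 128)

# Shape check with a dummy batch of stacked frames.
print(ExitRoomEncoder()(torch.zeros(2, 12, 90, 90)).shape)  # torch.Size([2, 128])
```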
### Particle-coordination game
We employed MLP networks with two hidden layers of dimension 64 for policies and state-action value functions, with SiLU nonlinear functions (Elfwing et al., 2018) in between. We used the Gumbel-softmax function (Jang et al., 2017) in the policies to output the discrete actions. The algorithms are trained for 100k episodes by running the Adam optimizer (Kingma and Ba, 2015) with a fixed learning rate of 0.01. We set the projected prediction length to 0.1 for the HLA-OffPA2 agents. All experiments are repeated five times, and the results are reported in terms of mean and standard deviation.
### Standard multi-agent games
As before, we used policies and state-action value functions with the same neural network architecture in all methods. We employed MLP networks with two hidden layers (of dimension 64 for the Particle environment and 256 for the Mujoco environment) for policies and state-action value functions, with SiLU nonlinear functions (Elfwing et al., 2018). In the Particle environment, we used the Gumbel-softmax function (Jang et al., 2017) in the policies to output the discrete actions and trained the algorithms for 100k episodes by running the Adam optimizer (Kingma and Ba, 2015) with a fixed learning rate of 0.01. In the Mujoco environment, we used the Tanh function in the policies to output the continuous actions and trained the algorithms for 10k episodes by running the Adam optimizer (Kingma and Ba, 2015) with a fixed learning rate of 0.001. The projected prediction lengths for the HLA-OffPA2 agents are optimized between \(0.001-0.1\) in all games. The optimized projected prediction lengths are reported in Table 10.
|
2306.10333
|
Bounded fixed point sets and Krasnoselskii iterates of Thompson metric
nonexpansive maps
|
We consider maps defined on the interior of a normal, closed cone in a real
Banach space that are nonexpansive with respect to Thompson's metric. With mild
compactness assumptions, we prove that the Krasnoselskii iterates of such maps
converge to a fixed point when one exists. For maps that are also
order-preserving, we give simple necessary and sufficient conditions in terms
of upper and lower Collatz-Wielandt numbers for the fixed point set to be
nonempty and bounded in Thompson's metric. When the map is also real analytic,
these conditions are both necessary and sufficient for the map to have a unique
fixed point and for all iterates of the map to converge to the fixed point. We
demonstrate how these results apply to certain nonlinear matrix equations on
the cone of positive definite Hermitian matrices.
|
Brian Lins
|
2023-06-17T12:20:23Z
|
http://arxiv.org/abs/2306.10333v2
|
# Bounded fixed point sets and Krasnoselskii iterates of Thompson metric nonexpansive maps
###### Abstract.
We consider maps defined on the interior of a normal, closed cone in a real Banach space that are nonexpansive with respect to Thompson's metric. With mild compactness assumptions, we prove that the Krasnoselskii iterates of such maps converge to a fixed point when one exists. For maps that are also order-preserving, we give simple necessary and sufficient conditions in terms of upper and lower Collatz-Wielandt numbers for the fixed point set to be nonempty and bounded in Thompson's metric. When the map is also real analytic, these conditions are both necessary and sufficient for the map to have a unique fixed point and for all iterates of the map to converge to the fixed point. We demonstrate how these results apply to certain nonlinear matrix equations on the cone of positive definite Hermitian matrices.
Key words and phrases: Nonlinear Perron-Frobenius theory, normal cones, fixed points, Thompson's metric, nonexpansive maps, Krasnoselskii iteration, measures of noncompactness, Collatz-Wielandt numbers, order-preserving subhomogeneous functions, real analytic functions, algebraic Riccati equations.
fit into other classes of hyperbolic spaces where Krasnoselskii-Mann iteration has been studied [22, 27].
Iterative methods to find fixed points of Thompson's metric nonexpansive maps typically require extra assumptions so that the maps are contractions [16], or involve some kind of nonlinear relaxation chosen for a specific cone [7, Theorem 11], [9]. In Section 3, we show that if a Thompson metric nonexpansive map \(f\) has a fixed point in \(C^{\circ}\), then under mild compactness assumptions which are always satisfied in finite dimensions, the iterates of the linear relaxation \(\alpha f+(1-\alpha)\operatorname{id}\) converge to a fixed point of \(f\) for any \(0<\alpha<1\) and any starting point in \(C^{\circ}\). The compactness assumptions are stated in terms of Kuratowski's measure of noncompactness, which can be defined using either the norm or Thompson's metric. The two metrics yield different versions of Kuratowski's measure of noncompactness, each with different properties. The properties of the norm version, which we denote by \(\gamma\), are well-known (see e.g., [3]), and the properties of the version \(\tau\) defined in terms of Thompson metric were investigated in [10]. We extend these properties by proving in Theorem 2.5 that the sum of a \(\tau\)-condensing map with a \(d_{T}\)-nonexpansive map is \(\tau\)-condensing.
In Section 4, we consider Thompson metric nonexpansive maps which are also order-preserving with respect to the partial order induced by the cone. Much of what is known about the fixed point theory for such maps is due to the work of Krasnoselskii and his students Ladyzhenskii and Bahktin (see [15, Chapter 6] and the bibliographic notes there for details), and to Thompson [30]. Some additional recent results can be found in [2, 25].
In Theorem 4.1, we show that a simple sufficient condition for the existence of fixed points, involving upper and lower Collatz-Wielandt numbers, is also necessary for the fixed point set to be bounded in Thompson's metric. Although the sufficient condition for the existence of fixed points in Theorem 4.1 appears to be new, the main novelty of Theorem 4.1 is the proof that these conditions are necessary for the fixed point set to be bounded.
In Section 5 we focus on Thompson's metric nonexpansive maps which are also real analytic. We show in Theorem 5.3 that the conditions of Theorem 4.1 are necessary and sufficient for an order-preserving, real analytic, \(d_{T}\)-nonexpansive map \(f\) to have a unique fixed point in \(C^{\circ}\), and in that case the iterates \(f^{k}(x)\) converge to the fixed point for every initial point \(x\in C^{\circ}\). We also prove in Theorem 5.2 that if \(C\) is a finite dimensional closed cone and \(f:C^{\circ}\to C^{\circ}\) is real analytic and \(d_{T}\)-nonexpansive map (but not necessarily order-preserving), then \(f\) has a nonempty and bounded set of fixed points in \(C^{\circ}\) if and only if \(f\) has a unique fixed point.
Although most of the results in this paper are stated for infinite dimensional Thompson geometries, the main results are new and noteworthy even in finite dimensions. For an example, we demonstrate in Section 6 how the results of Sections 4 and 5 can be applied to find solutions of a class of nonlinear matrix equations on the cone of positive definite matrices.
## 2. Preliminaries
In what follows, we use \(A^{\circ},\overline{A}\), and \(\partial A\) to denote respectively the interior, closure, and boundary of a set \(A\). We let \(\operatorname{Fix}(f)\) denote the set of fixed points of a function \(f\).
### Cones and Thompson's metric
Let \(X\) be a real Banach space with norm \(\|\cdot\|\) and dual space \(X^{*}\). A _closed cone_ is a closed convex set \(C\subset X\) such that (i) \(\lambda C\subseteq C\) for all \(\lambda\geq 0\) and (ii) \(C\cap(-C)=\{0\}\). A closed cone \(C\) induces the following partial order on \(X\). We say that \(x\leq y\) whenever \(y-x\in C\). We will write \(x\ll y\) when \(y-x\in C^{\circ}\). For \(x,y\in C\), we let
\[M(x/y)=\inf\{\beta>0:x\leq\beta y\}\]
and
\[m(x/y)=\sup\{\alpha>0:\alpha y\leq x\}.\]
An alternative formula for \(M(x/y)\) when \(y\in C^{\circ}\) (see e.g., [19, Lemma 2.2]) is:
\[M(x/y)=\sup_{\phi\in C^{*}}\frac{\phi(x)}{\phi(y)} \tag{2.1}\]
where \(C^{*}=\{\phi\in X^{*}:\phi(x)\geq 0\text{ for all }x\in C\}\) is the _dual cone_ of \(C\).
Two elements \(x,y\in X\) are _comparable_, denoted \(x\sim y\), if there are constants \(\alpha,\beta>0\) such that \(\alpha x\leq y\leq\beta x\). Comparability is an equivalence relation on \(C\), and the equivalence classes are called the _parts_ of \(C\). If \(C\) has nonempty interior, then \(C^{\circ}\) is a part. For comparable \(x,y\in C\), _Thompson's metric_ is
\[d_{T}(x,y)=\log\left(\max\{M(x/y),M(y/x)\}\right)=\log\inf\{\beta\geq 1: \beta^{-1}x\leq y\leq\beta x\}.\]
We use \(B_{R}(x)=\{y\in C:y\sim x,\ d_{T}(x,y)<R\}\) to denote the open balls in Thompson's metric. For any \(x,y\in X\), we let \([x,y]\) denote the _order interval_
\[[x,y]=\{z\in X:x\leq z\leq y\}.\]
Observe that for any \(x\in C\),
\[\overline{B_{R}(x)}=[e^{-R}x,e^{R}x]. \tag{2.2}\]
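As a concrete illustration (specializing to the standard cone, which is not needed for the general theory), on \(\mathbb{R}^{n}_{>0}\) one has \(M(x/y)=\max_{i}x_{i}/y_{i}\), so Thompson's metric is the sup-norm distance between entrywise logarithms. A short numerical sketch:

```python
import numpy as np

def thompson_metric(x, y):
    """d_T(x, y) on the standard cone R^n_{>0}: max_i |log(x_i) - log(y_i)|."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.max(np.abs(np.log(x) - np.log(y)))

x = np.array([1.0, 2.0, 0.5])
y = np.array([2.0, 2.0, 1.0])
print(thompson_metric(x, y))   # log(2), since the largest entrywise ratio is 2

# The closed ball of radius R around x is the order interval [e^{-R} x, e^{R} x] as in (2.2):
R = np.log(2.0)
print(bool(np.all(np.exp(-R) * x <= y) and np.all(y <= np.exp(R) * x)))  # True
```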
A closed cone \(C\) in a Banach space \(X\) is _normal_ if there is a constant \(\kappa\) such that \(\|x\|\leq\kappa\|y\|\) whenever \(0\leq x\leq y\). When \(C\) is normal, Thompson's metric is a complete metric on each part of \(C\), and the topology induced by Thompson's metric is equivalent to the norm topology [30, Lemma 3]. It is apparent from the definition that Thompson metric balls are bounded in the norm topology when \(C\) is a normal cone.
Let \(x_{n}\) be a sequence in a Banach space \(X\) with a partial order induced by a closed cone \(C\). We say that \(x_{n}\) is _decreasing_ (_increasing_) if \(x_{n+1}\leq x_{n}\) (\(x_{n+1}\geq x_{n}\)) for all \(n\in\mathbb{N}\). A closed cone \(C\) is _regular_ if every decreasing sequence in \(C\) converges. All finite dimensional cones are regular and all regular cones are normal [3, Proposition 19.2].
Many important closed cones in infinite dimensional Banach spaces have empty interior. For example, the cone \(L^{p}([a,b])_{\geq 0}\) of almost everywhere nonnegative functions in \(L^{p}([a,b])\) has empty interior for all \(1\leq p<\infty\). However, \(L^{p}([a,b])_{\geq 0}\) is normal since \(0\leq f\leq g\) implies that \(\|f\|_{p}\leq\|g\|_{p}\). In fact, \(L^{p}([a,b])_{\geq 0}\) is also regular by the monotone convergence theorem.
Suppose that \(C\) is a closed cone in a Banach space \(X\). Let \(u\in C\) and let \(C_{u}\) be the part of \(C\) containing \(u\). Let
\[X_{u}=\bigcup_{k>0}[-ku,ku]\ \text{ and }\ \|x\|_{u}=\inf\{k>0:x\in[-ku,ku]\}. \tag{2.3}\]
If \(C\) is a normal cone, then \((X_{u},\|\cdot\|_{u})\) is a Banach space which is continuously embedded in \((X,\|\cdot\|)\)[3, Proposition 19.9]. Furthermore, \(\overline{C_{u}}=C\cap X_{u}\) is a normal,
closed cone with interior equal to \(C_{u}\) in \((X_{u},\|\cdot\|_{u})\). For this reason, when we are interested in the Thompson geometry on a part of a normal closed cone, we can assume without loss of generality that the part is the interior of a normal cone in some Banach space.
The following observation will be used in Section 3.
**Lemma 2.1**.: _Let \(C\) be a closed cone with nonempty interior in a Banach space. If \(f:C^{\circ}\to C^{\circ}\) is \(d_{T}\)-nonexpansive, then for any \(0<\alpha<1\) the map \(\alpha f+(1-\alpha)\operatorname{id}\) is also nonexpansive on \((C^{\circ},d_{T})\)._
Proof.: Fix \(x,y\in C^{\circ}\). By definition, \(d_{T}(x,y)=\log\inf\{\beta\geq 1:\beta^{-1}x\leq y\leq\beta x\}\). Suppose that \(d_{T}(x,y)=\log\beta\) for some \(\beta>1\). Since \(C\) is closed, this means that
\[\beta^{-1}x\leq y\leq\beta x.\]
Since \(f\) is \(d_{T}\)-nonexpansive, we also have \(d_{T}(f(x),f(y))\leq\log\beta\) so
\[\beta^{-1}f(x)\leq f(y)\leq\beta f(x)\]
Therefore
\[\beta^{-1}\left(\alpha f(x)+(1-\alpha)x\right)\leq\alpha f(y)+(1-\alpha)y\leq \beta\left(\alpha f(x)+(1-\alpha)x\right).\]
This means that
\[d_{T}(\alpha f(x)+(1-\alpha)x,\alpha f(y)+(1-\alpha)y)\leq\log\beta=d_{T}(x,y),\]
and therefore \(\alpha f+(1-\alpha)\operatorname{id}\) is nonexpansive.
### Nonexpansive maps and omega limit sets
Let \((M,d)\) be a metric space. Let \(f:M\to M\) be nonexpansive. The _orbit_ of \(x\in M\) under \(f\) is \(\mathcal{O}(x,f)=\{f^{k}(x):k\in\mathbb{N}\}\). The _omega limit set_ of \(x\) under \(f\) is
\[\omega(x,f)=\bigcap_{n\in\mathbb{N}}\overline{\{f^{k}(x):k\geq n\}}.\]
The following result combines [21, Lemma 3.1.2 and Corollary 3.1.5]. It can be traced back to a theorem of Freudenthal and Hurewicz which states that any surjective nonexpansive map from a compact metric space onto itself must be an isometry [6].
**Proposition 2.2**.: _Let \((M,d)\) be a metric space. If \(f:M\to M\) is nonexpansive and \(\mathcal{O}(x,f)\) has compact closure in \(M\), then \(\omega(x,f)\) is a nonempty compact set and \(f\) restricted to \(\omega(x,f)\) is an invertible isometry._
### Measures of noncompactness
Let \(X\) be a Banach space and let \(\mathcal{B}\) denote the bounded subsets of \(X\). _Kuratowski's measure of noncompactness_ on \((X,\|\cdot\|)\) is the function \(\gamma:\mathcal{B}\to[0,\infty)\) defined by
\[\gamma(A)=\inf\{d>0:A\text{ admits a finite cover by sets with diameter }\leq d\}.\]
This measure of noncompactness has several properties (see e.g., [3, Proposition 7.2]), of which we note the following. For any \(A,B\in\mathcal{B}\) and \(\lambda\in\mathbb{R}\),
P1. \(\gamma(A)=0\) if and only if \(\overline{A}\) is compact.
P2. \(\gamma\) is a seminorm, i.e., \(\gamma(\lambda A)=|\lambda|\gamma(A)\) and \(\gamma(A+B)\leq\gamma(A)+\gamma(B)\).
Let \(D\subseteq X\) and let \(f:D\to X\) be continuous. We say that \(f\) is \(\gamma\)-_condensing_ if \(\gamma(f(A))<\gamma(A)\) whenever \(A\subseteq D\) is bounded and \(\gamma(A)>0\). By the Darboux-Sadovskii theorem (see e.g., [3, Theorem 9.1]), if \(C\) is a nonempty, closed, bounded, convex subset of \(D\), and \(f\) is \(\gamma\)-condensing with \(f(C)\subseteq C\), then \(f\) has a fixed point in \(C\).
The following lemma is a well known application of property (P2) which we will use in the sequel.
**Lemma 2.3**.: _Let \(X\) be a Banach space and let \(D\subseteq X\). If \(f:D\to X\) is \(\gamma\)-condensing, then for any \(0<\alpha<1\) the map \(\alpha f+(1-\alpha)\operatorname{id}\) is also \(\gamma\)-condensing._
Proof.: Let \(A\) be a bounded subset of \(D\). By property (P2) of \(\gamma\),
\[\gamma(\alpha f(A)+(1-\alpha)A) \leq\gamma(\alpha f(A))+\gamma((1-\alpha)A)\] \[\leq\alpha\gamma(f(A))+(1-\alpha)\gamma(A)\] \[<\gamma(A). \text{(since $f$ is $\gamma$-condensing)}\]
Therefore \(\alpha f+(1-\alpha)\operatorname{id}\) is \(\gamma\)-condensing.
Let \(C\) be a closed cone with nonempty interior in a Banach space. Let \(\mathcal{B}_{T}\) denote the \(d_{T}\)-bounded subsets of \(C^{\circ}\) and let \(\operatorname{diam}_{T}(A)\) denote the Thompson's metric diameter of \(A\in\mathcal{B}_{T}\). _Kuratowski's measure of noncompactness_ on \((C^{\circ},d_{T})\) is
\[\tau(A)=\inf\{d>0:A\text{ has a finite cover by sets }A_{i}\text{ with }\operatorname{diam}_{T}(A_{i})\leq d\}.\]
The properties of \(\tau\) were investigated by Herzog and Kunstmann [10]. Although \(\tau\) satisfies property (P1) of \(\gamma\), it does not satisfy (P2). Instead by [10, Proposition 2.5],
P3. \(\tau(\lambda A)=\tau(A)\) for all \(\lambda>0\) and \(\tau(A+B)\leq\max\{\tau(A),\tau(B)\}\),
for any \(A,B\in\mathcal{B}_{T}\).
Let \(D\) be a subset of \(C^{\circ}\) and let \(f:D\to C^{\circ}\) be continuous. Then \(f\) is \(\tau\)-_condensing_ if \(\tau(f(A))<\tau(A)\) for every \(d_{T}\)-bounded subset \(A\subset D\) such that \(\tau(A)>0\). One advantage of \(\tau\) over \(\gamma\) is that if \(f\) is \(\tau\)-condensing, then so is \(cf\) for all \(c>0\). If \(D\) is a nonempty closed, convex, \(d_{T}\)-bounded subset of \(C^{\circ}\), and \(f:D\to D\) is continuous and \(\tau\)-condensing, then \(f\) has a fixed point in \(D\)[10, Theorem 4.1]. Here we will prove that the sum of a \(\tau\)-condensing map and a \(d_{T}\)-nonexpansive map is \(\tau\)-condensing.
**Lemma 2.4**.: _Let \(C\) be a normal, closed cone with nonempty interior in a Banach space. The map \(f(x)=x+u\) is \(\tau\)-condensing on \(C^{\circ}\) for any \(u\in C^{\circ}\)._
Proof.: On any \(d_{T}\)-bounded set \(A\subset C^{\circ}\), the map \(f\) is a strict \(d_{T}\)-contraction, i.e., there is a constant \(0<c<1\) such that \(d_{T}(f(x),f(y))\leq cd_{T}(x,y)\) for all \(x,y\in A\)[18, Theorem 5.3]. If \(\tau(A)=\delta>0\), then for any \(\epsilon>0\), \(A\) can be covered by a finite collection of sets \(A_{1},\dots,A_{n}\subseteq A\) each with \(\operatorname{diam}_{T}(A_{i})\leq\delta+\epsilon\). Then \(\operatorname{diam}_{T}(f(A_{i}))\leq c\operatorname{diam}_{T}(A_{i})\leq c( \delta+\epsilon)\), so \(\tau(f(A))\leq c(\delta+\epsilon)\) for any \(\epsilon>0\). Therefore \(\tau(f(A))\leq c\delta<\delta=\tau(A)\).
**Theorem 2.5**.: _Let \(C\) be a normal, closed cone with nonempty interior in a Banach space. If \(f:C^{\circ}\to C^{\circ}\) is \(\tau\)-condensing and \(g:C^{\circ}\to C^{\circ}\) is \(d_{T}\)-nonexpansive, then \(f+g\) is \(\tau\)-condensing._
Proof.: Let \(A\subset C^{\circ}\) be bounded in Thompson's metric and suppose that \(\tau(A)=\delta>0\). Then \(\tau(f(A))<\delta\) and \(\tau(g(A))\leq\delta\) since \(f\) is \(\tau\)-condensing and \(g\) is nonexpansive.
Since \(f(A)\) is \(d_{T}\)-bounded, we can choose \(u\in C^{\circ}\) such that \(x\geq u\) for all \(x\in f(A)\). Then for every \(x\in f(A)\) and \(0<c<1\),
\[x-cx\leq x-cu\leq x.\]
This implies that \(d_{T}(x,x-cu)\leq|\log(1-c)|\).
Since \(\tau(f(A))<\delta\), we can cover \(f(A)\) with a finite collection of sets \(B_{1},\dots,B_{n}\subset C^{\circ}\) each with \(\operatorname{diam}_{T}(B_{i})\leq\delta^{\prime}\) where \(\delta^{\prime}<\delta\). Now consider \(\operatorname{diam}_{T}(B_{i}-cu)\). Observe that if \(x,y\in B_{i}-cu\), then by the triangle inequality
\[d_{T}(x,y) \leq d_{T}(x,x+cu)+d_{T}(x+cu,y+cu)+d_{T}(y+cu,y)\] \[\leq\delta^{\prime}+2|\log(1-c)|.\]
By choosing \(c>0\) small enough, we can guarantee that \(\operatorname{diam}_{T}(B_{i}-cu)<\delta\) for every \(B_{i}\), so \(\tau(f(A)-cu)<\delta\). By Lemma 2.4, \(\tau(g(A)+cu)<\delta\). Then by property (P3) of \(\tau\),
\[\tau(f(A)+g(A)) =\tau(f(A)-cu+g(A)+cu)\] \[\leq\max\{\tau(f(A)-cu),\tau(g(A)+cu)\}<\delta.\]
Therefore \(f+g\) is \(\tau\)-condensing.
**Corollary 2.6**.: _Let \(C\) be a normal, closed cone with nonempty interior in a Banach space. If \(f:C^{\circ}\to C^{\circ}\) is \(\tau\)-condensing, then so is \(\alpha f+(1-\alpha)\operatorname{id}\) for every \(0<\alpha<1\)._
Proof.: Observe that \(\alpha f\) is \(\tau\)-condensing by property (P3) and \((1-\alpha)\operatorname{id}\) is a Thompson metric isometry, so it is nonexpansive. Therefore \(\alpha f+(1-\alpha)\operatorname{id}\) is \(\tau\)-condensing by Theorem 2.5.
## 3. Krasnoselskii iteration
**Theorem 3.1**.: _Let \(C\) be a normal, closed cone with nonempty interior in a Banach space. Let \(f:C^{\circ}\to C^{\circ}\) be \(d_{T}\)-nonexpansive and either \(\gamma\) or \(\tau\)-condensing. If \(f\) has a fixed point in \(C^{\circ}\), then for any \(0<\alpha<1\) and \(x_{0}\in C^{\circ}\), the sequence \(x_{n}\) defined by_
\[x_{n+1}=\alpha f(x_{n})+(1-\alpha)x_{n}\]
_converges to a fixed point of \(f\)._
Proof.: Let \(g=\alpha f+(1-\alpha)\operatorname{id}\). Note that \(g\) is nonexpansive by Lemma 2.1 and either \(\gamma\)-condensing by Lemma 2.3 or \(\tau\)-condensing by Corollary 2.6. Since \(f\) and therefore \(g\) both have a fixed point in \(C^{\circ}\), the orbit \(\mathcal{O}(x,g)\) is bounded in \((C^{\circ},d_{T})\) and therefore also in the norm topology. Since \(g\) is \(\gamma\) or \(\tau\)-condensing, it follows that \(\mathcal{O}(x,g)\) has compact closure by property (P1). Therefore the omega limit set \(\omega(x,g)\) is a nonempty compact subset of \(C^{\circ}\).
Choose \(y,z\in\omega(x,g)\) such that \(d_{T}(y,z)\) is maximal. We may assume without loss of generality that \(d_{T}(y,z)=\log M(z/y)\). Then there is a linear functional \(\phi\in C^{*}\) such that \(M(z/y)=\phi(z)/\phi(y)\) (see [19, Lemma 2.2]). Let \(a=\phi(y)\) and \(b=\phi(z)\) and note that \(b\geq a\) since \(0\leq d_{T}(y,z)=\log(b/a)\).
By Proposition 2.2, \(g\) is an invertible isometry on \(\omega(x,g)\). Let \(g^{-1}\) denote the inverse of \(g\) on \(\omega(x,g)\) and let \(y^{-1}=g^{-1}(y)\) and \(z^{-1}=g^{-1}(z)\). Observe that \(a\leq\phi(w)\leq b\) for all \(w\in\omega(x,g)\), otherwise \(d_{T}(w,z)\) or \(d_{T}(y,w)\) would be greater
than \(\log(b/a)\) by (2.1), but \(\log(b/a)\) is the maximal distance between pairs in \(\omega(x,g)\). In particular, \(\phi(y^{-1})\geq a\) and \(\phi(z^{-1})\leq b\).
Since \(\phi(y)=a\) and \(y=g(y^{-1})\) is a convex combination of \(y^{-1}\) and \(f(y^{-1})\), it follows that \(\phi(f(y^{-1}))\leq a\). Similarly \(\phi(f(z^{-1}))\geq b\). However,
\[\log\frac{b}{a} \leq\log\frac{\phi(f(z^{-1}))}{\phi(f(y^{-1}))}\] \[\leq d_{T}(f(y^{-1}),f(z^{-1}))\] \[\leq d_{T}(y^{-1},z^{-1}) \text{(nonexpansiveness)}\] \[=\log\frac{b}{a}. \text{(since $g$ is an isometry on $\omega(x,g)$)}\]
We conclude that \(\phi(y^{-1})=a\) and \(\phi(z^{-1})=b\).
We can repeat this argument to prove that \(\phi(z^{-k})=b\) for all \(k\in\mathbb{N}\) where \(z^{-k}=g^{-k}(z)\in\omega(x,g)\). However, there is a point \(g^{m}(x)\in\mathcal{O}(x,g)\) that is arbitrarily close to \(y\) and an \(n\in\mathbb{N}\) such that \(g^{m+n}(x)\) is arbitrarily close to \(z\). Then \(g^{n}(y)\) will be arbitrarily close to \(z\) by the nonexpansiveness of \(g\). Since \(g\) is an isometry on \(\omega(x,g)\), we have \(d_{T}(g^{n}(y),z)=d_{T}(y,z^{-n})\) arbitrarily small. But since \(\phi(y)=a\) and \(\phi(z^{-k})=b\), we have \(d_{T}(y,z^{-k})\geq\log(b/a)\) for all \(k\in\mathbb{N}\), which is a contradiction unless \(a=b\) and \(\omega(x,g)\) is a singleton.
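To illustrate Theorem 3.1 numerically on the standard cone (with a hypothetical test map, not an example from the text), the Krasnoselskii iterates of the order-preserving, subhomogeneous map \(f(x)=\sqrt{x}+u\) (entrywise) converge rapidly to its unique fixed point:

```python
import numpy as np

def thompson_metric(x, y):
    return np.max(np.abs(np.log(x) - np.log(y)))

def krasnoselskii(f, x0, alpha=0.5, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = alpha*f(x_n) + (1-alpha)*x_n until d_T stabilizes."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = alpha * f(x) + (1 - alpha) * x
        if thompson_metric(x, x_next) < tol:
            return x_next
        x = x_next
    return x

# Hypothetical order-preserving, subhomogeneous (hence d_T-nonexpansive) test map on R^2_{>0}.
u = np.array([1.0, 2.0])
f = lambda x: np.sqrt(x) + u

x_star = krasnoselskii(f, x0=np.array([10.0, 0.1]))
print(x_star, thompson_metric(f(x_star), x_star))  # fixed point, residual ~ 0
```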
A _retraction_ is a continuous map \(r\) from a topological space \(X\) to a subspace \(Y\subseteq X\) such that \(r(X)=Y\) and \(r\) restricted to \(Y\) is the identity map. The set \(Y\) is called a _retract_ of \(X\). If \(r\) is also nonexpansive, then \(Y\) is called a _nonexpansive retract_. Bruck showed that if a nonexpansive map on a Banach space satisfies a conditional fixed point property, then its fixed point set is a nonexpansive retract [1]. Later, Nussbaum showed that the fixed point set of a Thompson metric nonexpansive map is a nonexpansive retract under certain compactness assumptions [26, Theorem 4.7 and Corollary 4.1]. Here we use Theorem 3.1 to give a simple proof that the fixed point set of a condensing Thompson metric nonexpansive map is a nonexpansive retract.
**Corollary 3.2**.: _Let \(C\) be a normal, closed cone with nonempty interior in a Banach space. If \(f:C^{\circ}\to C^{\circ}\) is \(d_{T}\)-nonexpansive and either \(\gamma\) or \(\tau\)-condensing, then \(\operatorname{Fix}(f)\) is a nonexpansive retract of \(C^{\circ}\)._
Proof.: Suppose that \(\operatorname{Fix}(f)\) is nonempty. Let \(r\) be the map which assigns to any \(x\in C^{\circ}\) the limit of the sequence \(x_{n}\) defined by \(x_{0}=x\) and \(x_{n+1}=\alpha f(x_{n})+(1-\alpha)x_{n}\) with \(0<\alpha<1\). Then \(r\) is a nonexpansive retraction and its range is \(\operatorname{Fix}(f)\).
## 4. Bounded fixed point sets of order-preserving maps
Let \(X\) be a real Banach space with a partial order induced by a closed cone \(C\). Let \(D\subseteq X\). A function \(f:D\to X\) is _order-preserving_ if \(f(x)\leq f(y)\) whenever \(x\leq y\). The function \(f\) is _homogeneous_ if \(f(tx)=tf(x)\) for all \(t>0\) and \(x\in D\) and _subhomogeneous_ if \(f(tx)\leq tf(x)\) for all \(t\geq 1\) and \(x\in D\).
If \(C\) has nonempty interior and \(f:C^{\circ}\to C^{\circ}\) is order-preserving, then \(f\) is nonexpansive with respect to Thompson's metric if and only if \(f\) is subhomogeneous (see e.g., [21, Lemma 2.1.7] where this is proved in finite dimensions, but the same proof applies to infinite dimensional cones as well). If \(f(x)\geq\alpha x\) for some \(x\in C^{\circ}\)
then we will say that \(x\) is a _sub-eigenvector_ of \(f\) with _sub-eigenvalue_\(\alpha\). Similarly, if \(f(y)\leq\beta y\) for some \(y\in C^{\circ}\), then \(y\) is a _super-eigenvector_ of \(f\) with _super-eigenvalue_\(\beta\).
The _upper Collatz-Wielandt number_ for any function \(f:C^{\circ}\to C\) is
\[r(f)=\inf_{x\in C^{\circ}}M(f(x)/x)\]
and the _lower Collatz-Wielandt number_ for a function \(f:C^{\circ}\to C^{\circ}\) is
\[\lambda(f)=\sup_{x\in C^{\circ}}m(f(x)/x).\]
Observe that \(r(f)\) is the infimum of the super-eigenvalues of \(f\), and \(\lambda(f)\) is the supremum of the sub-eigenvalues of \(f\).
The main result of this section is:
**Theorem 4.1**.: _Let \(C\) be a normal, closed cone with nonempty interior in a Banach space. Let \(f:C^{\circ}\to C^{\circ}\) be order-preserving and subhomogeneous. If \(C\) is regular or if \(f\) is \(\gamma\) or \(\tau\)-condensing, then the following are equivalent._
1. \(\operatorname{Fix}(f)\) _is nonempty and bounded in_ \((C^{\circ},d_{T})\)_._
2. _There exist_ \(x,y\in C^{\circ}\) _such that_ \(f(x)\gg x\) _and_ \(f(y)\ll y\)_._
3. \(\lambda(f)>1\) _and_ \(r(f)<1\)_._
Before proving Theorem 4.1, we gather a few minor lemmas. The first lemma is known (see e.g., [3, Theorem 19.1]). The proof is simple, so we include it here for convenience.
**Lemma 4.2**.: _Let \(C\) be a normal, closed cone in a Banach space \(X\). Let \(D\) be a closed subset of \(X\) and let \(f:D\to D\) be order-preserving and continuous. Assume either that \(C\) is regular or \(f\) is \(\gamma\) or \(\tau\)-condensing. If \(f^{k}(x)\in D\) is an increasing (or decreasing) sequence that is bounded above (below) by \(y\in D\), then \(f^{k}(x)\) converges to a fixed point of \(f\) in \(D\). In particular, if \([x,y]\subseteq D\) is a nonempty order interval and \(f([x,y])\subseteq[x,y]\), then \(f\) has a fixed point in \([x,y]\)._
Proof.: If \(C\) is regular, then the sequence \(f^{k}(x)\) converges by definition. If \(f\) is either \(\gamma\) or \(\tau\)-condensing, then \(\{f^{k}(x):k\in\mathbb{N}\}\) has compact closure by property (P1) and therefore \(f^{k}(x)\) has a limit point in \(D\). Since \(f^{k}(x)\) is increasing (or decreasing), it converges to that limit point. In either case, the continuity of \(f\) guarantees that the limit of \(f^{k}(x)\) is a fixed point. If \([x,y]\subseteq D\) is a nonempty order-interval and \(f([x,y])\subseteq[x,y]\), it follows that \(f(x)\geq x\) and therefore \(f^{k}(x)\) is an increasing sequence bounded above by \(y\). Therefore \(f^{k}(x)\) converges to a fixed point in \([x,y]\).
**Lemma 4.3**.: _Let \(C\) be a normal, closed cone with nonempty interior in a Banach space. Let \(f:C^{\circ}\to C^{\circ}\) be order-preserving and subhomogeneous. In addition, suppose either that \(C\) is regular or \(f\) is \(\gamma\) or \(\tau\)-condensing. If \(u\) is a fixed point of \(f\) and \(\operatorname{Fix}(f)\) is bounded in Thompson's metric, then for any \(R>r>0\) such that \(\operatorname{Fix}(f)\subset B_{r}(u)\), there is a \(k\in\mathbb{N}\) large enough so that_
\[f^{k}(\overline{B_{R}(u)})\subset B_{r}(u).\]
_In particular, if \(f\) has a unique fixed point \(u\), then \(f^{k}(x)\) converges to \(u\) for all \(x\in C^{\circ}\)._
Proof.: Recall by (2.2) that \(\overline{B_{R}(u)}=[e^{-R}u,e^{R}u]\). Since \(u\) is a fixed point and \(f\) is subhomogeneous, it follows that \(f(e^{-R}u)\geq e^{-R}u\) and \(f(e^{R}u)\leq e^{R}u\). By Lemma 4.2, the sequences \(f^{j}(e^{-R}u)\) and \(f^{j}(e^{R}u)\) converge to fixed points of \(f\). Therefore we can choose a \(k\) large enough so that both \(f^{k}(e^{-R}u)\) and \(f^{k}(e^{R}u)\) are contained in \(B_{r}(u)\). Since \(f^{k}\) is order-preserving, it follows that
\[f^{k}(\overline{B_{R}(u)})=f^{k}([e^{-R}u,e^{R}u])\subseteq[f^{k}(e^{-R}u),f^ {k}(e^{R}u)]\]
which is contained in \(B_{r}(u)\).
_Remark 4.4_.: The observation in Lemma 4.3 that if \(f\) has a unique fixed point \(u\in C^{\circ}\), then \(f^{k}(x)\) converges to \(u\) for all \(x\in C^{\circ}\) was made in [15, Theorem 6.6], although with the stronger assumption that \(f\) is compact when \(C\) is not regular.
This next lemma shows that there is an order relation between sub-eigenvectors and super-eigenvectors.
**Lemma 4.5**.: _Let \(C\) be a closed cone with nonempty interior in a Banach space. Let \(f:C^{\circ}\to C^{\circ}\) be order-preserving and subhomogeneous. If \(f(x)\geq\alpha x\) and \(f(y)\leq\beta y\) where \(\alpha>\beta\), then \(x\ll y\)._
Proof.: Suppose that \(y-x\notin C^{\circ}\). Then there is a maximal \(0<t\leq 1\) such that \(y-tx\in C\). For that \(t\),
\[\beta y\geq f(y)\geq f(tx)\geq tf(x)\geq t\alpha x.\]
Therefore \(y-(\alpha/\beta)tx\in C\) which contradicts the maximality of \(t\).
Proof of Theorem 4.1.: (a)\(\Rightarrow\)(b). Choose \(u\in\operatorname{Fix}(f)\). Choose any \(R>r>0\) with \(r\) large enough so that \(\operatorname{Fix}(f)\) is contained in the open Thompson metric ball \(B_{r}(u)\). By Lemma 4.3, there exists \(k\in\mathbb{N}\) such that \(f^{k}(\overline{B_{R}(u)})\subset B_{r}(u)\). Let \(g=(1+\epsilon)^{-1}f\) and \(h=f+\epsilon u\) where \(\epsilon=e^{(R-r)/k}-1\). Both \(g\) and \(h\) are order-preserving, subhomogeneous functions on \(C^{\circ}\). If \(f\) is \(\gamma\)-condensing, then so are both \(g\) and \(h\) by property (P2) of \(\gamma\). Similarly, if \(f\) is \(\tau\)-condensing, then so are \(g\) and \(h\) by property (P3) of \(\tau\). A quick induction argument shows that
\[g^{k}(x)\geq(1+\epsilon)^{-k}f^{k}(x)\geq e^{r-R}f^{k}(x)\]
for all \(x\in C^{\circ}\). Since \(h(x)=f(x)+\epsilon u=f(x)+\epsilon f(u)\leq(1+\epsilon)f(x)\) for all \(x\geq u\), we can use a similar induction argument to show that
\[h^{k}(x)\leq(1+\epsilon)^{k}f^{k}(x)\leq e^{R-r}f^{k}(x)\]
for all \(x\geq u\). In particular, these inequalities imply that
\[g^{k}(x)\geq e^{r-R}f^{k}(e^{-R}u)\geq e^{-R}u\]
when \(x\geq e^{-R}u\), and
\[h^{k}(x)\leq e^{R-r}f^{k}(e^{R}u)\leq e^{R}u\]
when \(u\leq x\leq e^{R}u\).
By the above inequalities, \(g^{jk}(u)\geq e^{-R}u\) for all \(j\in\mathbb{N}\). Since \(g(u)=(1+\epsilon)^{-1}u\leq u\) and \(g\) is order-preserving, the sequence \(g^{j}(u)\) is decreasing, and it follows that \(g^{j}(u)\geq e^{-R}u\) for all \(j\in\mathbb{N}\). Therefore \(g^{j}(u)\) converges to a fixed point \(x\) of \(g\) in \([e^{-R}u,u]\) by Lemma 4.2. Similarly \(h^{jk}(u)\leq e^{R}u\) for all \(j\in\mathbb{N}\). Since \(h^{j}(u)\) is increasing, it must be bounded above by \(e^{R}u\), and so it converges to a fixed point \(y\) of \(h\) in \([u,e^{R}u]\). Then \(f(x)=(1+\epsilon)x\gg x\) and \(f(y)=y-\epsilon u\ll y\).
(b)\(\Rightarrow\)(a). If \(f(x)\gg x\) and \(f(y)\ll y\), then there exist \(\alpha>1\) and \(\beta<1\) such that \(f(x)\geq\alpha x\) and \(f(y)\leq\beta y\). So \(y\gg x\) by Lemma 4.5. Since \(f([x,y])\subset[x,y]\), Lemma 4.2 implies that \(f\) has a fixed point in \([x,y]\). In fact, all fixed points of \(f\) are contained in \([x,y]\) by Lemma 4.5 since every fixed point is both a super and sub-eigenvector of \(f\) with eigenvalue one. Therefore \(\operatorname{Fix}(f)\) is nonempty and bounded in Thompson's metric.
(b)\(\Rightarrow\)(c). This follows immediately from the definition of \(\lambda(f)\) and \(r(f)\).
(c)\(\Rightarrow\)(b). For any \(\epsilon>0\), there exists \(x,y\in C^{\circ}\) such that
\[f(x)\geq(\lambda(f)-\epsilon)x\ \text{ and }\ f(y)\leq(r(f)+\epsilon)y.\]
If \(\lambda(f)>1\) and \(r(f)<1\), then we can choose \(\epsilon\) small enough so that \(\lambda(f)-\epsilon>1\) and \(r(f)+\epsilon<1\). Then \(f(x)\gg x\) and \(f(y)\ll y\).
The next result about the spectrum of order-preserving subhomogeneous maps follows immediately by applying Theorem 4.1 to \(\mu^{-1}f\). Note that Krasnoselskii observed that the eigenvalues of a compact, order-preserving, subhomogeneous map form a continuous interval in [15, Section 6.2].
**Corollary 4.6**.: _Let \(C\) be a normal closed cone with nonempty interior in a Banach space. Let \(f:C^{\circ}\to C^{\circ}\) be order-preserving and subhomogeneous. In addition, assume one of the following: (i) \(C\) is regular, (ii) \(\lambda(f)f\) is \(\gamma\)-condensing, or (iii) \(f\) is \(\tau\)-condensing. If \(r(f)<\lambda(f)\), then for every \(r(f)<\mu<\lambda(f)\), the set of eigenvectors \(\{x\in C^{\circ}:f(x)=\mu x\}\) is nonempty and bounded in \((C^{\circ},d_{T})\)._
_Remark 4.7_.: The conditions of Theorem 4.1 are never satisfied if the map \(f\) is also homogeneous. If a homogeneous map has a fixed point \(x\in C^{\circ}\), then \(\operatorname{Fix}(f)\) cannot be bounded since it contains the ray \(\{tx:t>0\}\). Note also that \(\lambda(f)\leq r(f)\) whenever \(f\) is homogeneous. Therefore Theorem 4.1 does not help determine when order-preserving homogeneous maps have a fixed point (or eigenvector) in the interior of a cone. For more information on that topic, see [21] or for some recent results for order-preserving homogeneous maps on the standard cone \(\mathbb{R}^{n}_{\geq 0}\), see [24].
_Remark 4.8_.: One might wonder if it is possible to give similar necessary and sufficient conditions for \(\operatorname{Fix}(f)\) to be nonempty and bounded without the assumption that \(f\) is order-preserving. The answer is yes for the standard cone \(\mathbb{R}^{n}_{\geq 0}\), however the conditions are somewhat more complicated. The entrywise logarithm function is an isometry from \((\mathbb{R}^{n}_{>0},d_{T})\) onto \(\mathbb{R}^{n}\) with the supremum norm \(\|x\|_{\infty}=\max_{1\leq i\leq n}|x_{i}|\). Necessary and sufficient conditions for the fixed point set of a nonexpansive map on a finite dimensional normed space to be bounded and nonempty are given in [20]. Verifying that a \(\|\cdot\|_{\infty}\)-nonexpansive map on \(\mathbb{R}^{n}\) has a nonempty bounded fixed point set requires confirming \(2^{n}\) inequalities similar to condition (b) of Theorem 4.1. It is an open question whether similar conditions could be given for \(d_{T}\)-nonexpansive maps on other cones. The fact that condition (b) of Theorem 4.1 only requires confirming two inequality conditions demonstrates how strong the order-preserving property is.
Theorem 4.1 raises the question of how to compute the upper and lower Collatz-Wielandt numbers of an order-preserving, subhomogeneous function \(f\). Here we make some observations that can help.
Let \(C\) be a closed cone with nonempty interior in a Banach space \(X\), and suppose that \(f:C^{\circ}\to C^{\circ}\) is order-preserving and subhomogeneous. The _recession map_ of
\(f\) is the function defined by
\[f_{\infty}(x)=\lim_{t\to\infty}t^{-1}f(tx)\]
for all values \(x\in C^{\circ}\) where the limit exists. The use of recession maps to study the eigenvalues of order-preserving subhomogeneous maps can be traced back at least as far as [15, Theorem 6.11], although there the recession maps are required to be linear. See [2] for a more recent example where the recession map is used to give conditions for the existence of fixed points of order-preserving subhomogeneous maps on \(\mathbb{R}^{n}_{>0}\).
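For a concrete numerical illustration (a minimal sketch with made-up data, assuming NumPy), consider the affine map \(f(x)=c+Mx\) on the standard cone \(\mathbb{R}^{n}_{>0}\) with \(c\geq 0\) and \(M\) entrywise nonnegative; such an \(f\) is order-preserving and subhomogeneous, and its recession map is the linear part \(f_{\infty}(x)=Mx\).

```python
import numpy as np

# Affine example: f(x) = c + M x with c >= 0 and M entrywise nonnegative
# is order-preserving and subhomogeneous on R^n_{>0}; its recession map
# is the linear part f_inf(x) = M x.
M = np.array([[0.5, 0.2],
              [0.1, 0.3]])
c = np.array([1.0, 2.0])

def f(x):
    return c + M @ x

x = np.array([1.0, 1.0])
for t in [1e1, 1e3, 1e6]:
    print(t, f(t * x) / t)   # approaches M x = [0.7, 0.4] as t grows
```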
**Proposition 4.9**.: _Let \(C\) be a closed, regular cone with nonempty interior in a Banach space. Let \(f:C^{\circ}\to C^{\circ}\) be order-preserving and subhomogeneous. Then the recession map \(f_{\infty}\) exists for every \(x\in C^{\circ}\), and \(f_{\infty}:C^{\circ}\to C\) is order-preserving and homogeneous. Furthermore \(r(f)=r(f_{\infty})\)._
Proof.: Since \(f\) is subhomogeneous, \(t^{-1}f(tx)\leq s^{-1}f(sx)\) whenever \(0<s<t\). Since \(C\) is regular, it follows that \(f_{\infty}(x)=\lim_{t\to\infty}t^{-1}f(tx)\) exists for every \(x\in C^{\circ}\). For any \(\lambda>0\), we can use the substitution \(s=t\lambda\) to see that
\[f_{\infty}(\lambda x)=\lim_{t\to\infty}\tfrac{1}{t}f(t\lambda x)=\lim_{s\to \infty}\tfrac{\lambda}{s}f(sx)=\lambda f_{\infty}(x)\]
so \(f_{\infty}\) is homogeneous. If \(x\leq y\) in \(C^{\circ}\), then \(t^{-1}f(tx)\leq t^{-1}f(ty)\) for all \(t>0\), so we have \(f_{\infty}(x)\leq f_{\infty}(y)\).
Since \(f_{\infty}(x)\leq f(x)\) for all \(x\in C^{\circ}\), it follows from the definition that \(r(f_{\infty})\leq r(f)\). Choose a sequence \(x_{n}\in C^{\circ}\) such that \(f_{\infty}(x_{n})\leq\beta_{n}x_{n}\), where each \(\beta_{n}>0\) and the sequence \(\beta_{n}\) converges to \(r(f_{\infty})\). For each \(x_{n}\), there exists \(t_{n}\geq 1\) such that \(t_{n}^{-1}f(t_{n}x_{n})\leq(\beta_{n}+\tfrac{1}{n})x_{n}\), that is, \(f(t_{n}x_{n})\leq(\beta_{n}+\tfrac{1}{n})t_{n}x_{n}\), so \(r(f)\leq\beta_{n}+\tfrac{1}{n}\). Since the sequence \(\beta_{n}+\tfrac{1}{n}\) converges to \(r(f_{\infty})\), it follows that \(r(f)\leq r(f_{\infty})\) and therefore \(r(f)=r(f_{\infty})\).
If \(C\) is a normal closed cone with nonempty interior and \(f:C^{\circ}\to C^{\circ}\) is order-preserving and homogeneous, then the upper Collatz-Wielandt number \(r(f)\) is the same as the _partial cone spectral radius_
\[r_{C^{\circ}}(f)=\lim_{k\to\infty}\|f^{k}(x)\|^{1/k},\]
where \(x\in C^{\circ}\) is arbitrary and the choice of \(x\) does not affect the value of the limit [19, Lemma 4.2 and Theorem 4.6]. Unfortunately, a recession map \(f_{\infty}\) may not send \(C^{\circ}\) into itself (see (6.2) for an example). If \(f:C^{\circ}\to C\) is order-preserving and homogeneous, but \(f(C^{\circ})\) is not contained in \(C^{\circ}\), then it is still possible to use an iterative method to calculate the upper Collatz-Wielandt number. The key idea is to combine the iterative formula above with a small perturbation of \(f\).
**Proposition 4.10**.: _Let \(C\) be a closed normal cone with nonempty interior in a Banach space. Fix \(u\in C^{\circ}\) and let \(\|\cdot\|_{u}\) denote the corresponding order-unit norm defined by (2.3). If \(f:C^{\circ}\to C\) is order-preserving and homogeneous, then_
\[r(f) =\lim_{k\to\infty}\|(f+\mathrm{id})^{k}(u)\|_{u}^{1/k}-1\] \[=\inf_{k>0}\|(f+\mathrm{id})^{k}(u)\|_{u}^{1/k}-1\] \[=\lim_{k\to\infty}\|(f+\mathrm{id})^{k}(u)\|^{1/k}-1.\]
_In particular, \(r(f)<1\) if and only if there is a \(k\in\mathbb{N}\) such that \((f+\mathrm{id})^{k}(u)\ll 2^{k}u\)._
Proof.: It is clear from the definition of the upper Collatz-Wielandt number that \(r(f)+1=r(f+\operatorname{id}).\) Since the perturbed map \(f+\operatorname{id}\) is order-preserving, homogeneous, and maps \(C^{\circ}\) into \(C^{\circ}\), [19, Lemma 4.2 and Theorem 4.6] implies that
\[r(f)+1=\lim_{k\to\infty}\|(f+\operatorname{id})^{k}(x)\|^{1/k}.\]
for any \(x\in C^{\circ}\). Furthermore, in the proof of [19, Theorem 4.6] it is shown that
\[\lim_{k\to\infty}\|(f+\operatorname{id})^{k}(u)\|_{u}^{1/k}=\inf_{k>0}\|(f+ \operatorname{id})^{k}(u)\|_{u}^{1/k}=\lim_{k\to\infty}\|(f+\operatorname{id })^{k}(u)\|^{1/k}.\]
Therefore \(r(f)<1\) if and only if \(\inf_{k>0}\|(f+\operatorname{id})^{k}(u)\|_{u}^{1/k}<2\). This happens if and only if there is a \(k>0\) such that \(\|(f+\operatorname{id})^{k}(u)\|_{u}<2^{k}\) which is equivalent to \((f+\operatorname{id})^{k}(u)\ll 2^{k}u\).
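As a sanity check of this iterative formula, the following sketch (our own toy example, not taken from the sources cited above) applies it to the linear map \(f(x)=Ax\) on the standard cone \(\mathbb{R}^{n}_{\geq 0}\) with \(A\) entrywise nonnegative and order unit \(u=(1,\dots,1)\); for such a map the upper Collatz-Wielandt number coincides with the spectral radius of \(A\), so the iteration can be compared with a direct eigenvalue computation. The convergence in \(k\) is slow, so the printed value is only an approximation.

```python
import numpy as np

# Proposition 4.10 on the standard cone: approximate r(f) for f(x) = A x
# by ||(f + id)^k(u)||^{1/k} - 1 with u = (1, ..., 1).
A = np.array([[0.0, 0.4],
              [0.3, 0.0]])
u = np.ones(2)

def upper_cw_number(iters=500):
    x = u.copy()
    for _ in range(iters):
        x = A @ x + x                      # one application of f + id
    # the sup-norm is the order-unit norm associated with u
    return np.max(np.abs(x)) ** (1.0 / iters) - 1.0

print(upper_cw_number())                    # approximately 0.346
print(max(abs(np.linalg.eigvals(A))))       # spectral radius of A, for comparison
```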
We now consider how to compute the lower Collatz-Wielandt number. Some cones have an order-reversing, bijective \(d_{T}\)-isometry \(L:C^{\circ}\to C^{\circ}\). By _order-reversing_, we mean that \(L(x)\leq L(y)\) for all \(x\geq y\gg 0\). The inverse map \(x\mapsto x^{-1}\) on a symmetric cone is an order-reversing \(d_{T}\)-isometry, as is the pointwise reciprocal operator \(u\mapsto 1/u\) on the cone of positive functions in many function spaces. For finite dimensional cones, Walsh [31] proved that an order-reversing, bijective \(d_{T}\)-isometry exists if and only if \(C\) is a symmetric cone. When there is an order-reversing isometry, we can make the following simple observation, which is an immediate corollary of Proposition 4.9.
**Lemma 4.11**.: _Let \(C\) be a regular, closed cone with nonempty interior in a Banach space and let \(f:C^{\circ}\to C^{\circ}\) be order-preserving and subhomogeneous. If there is an order-reversing, bijective \(d_{T}\)-isometry \(L:C^{\circ}\to C^{\circ}\), then_
\[\lambda(f)=r((LfL)_{\infty})^{-1}\]
_where \((LfL)_{\infty}\) is the recession map of the composition \(LfL\)._
## 5. Analytic maps and uniqueness of fixed points
Let \(X,Y\) be real Banach spaces and let \(U\) be an open subset of \(X\). A function \(f:U\to Y\) is _real analytic_ if for every \(x\in U\), there is an \(r>0\) and continuous symmetric \(n\)-linear forms \(A_{n}:X^{n}\to Y\) such that \(\sum_{n=1}^{\infty}\|A_{n}\|r^{n}<\infty\) and
\[f(x+h)=f(x)+\sum_{n=1}^{\infty}A_{n}(h^{n})\]
for all \(h\) in a neighborhood of \(0\) in \(X\).
A _real analytic variety_ is a set of common zeros of a finite collection of real analytic functions on an open domain \(U\subseteq X\). The following proposition is based on a theorem of Sullivan about the structure of real analytic varieties.
**Proposition 5.1**.: _Let \(V\) be a nonempty real analytic variety defined in an open subset of a finite dimensional real normed space. If \(V\) is compact and contractible, then \(V\) consists of a single point._
Proof.: Let \(k=\dim V\) and suppose by way of contradiction that \(k>0\). A theorem of Sullivan [29, Corollary 2] asserts that \(V\) is locally homeomorphic to the topological cone over a polyhedron with even Euler characteristic. As an immediate consequence, the sum of all \(k\)-simplices in a triangulation of \(V\) is a mod \(2\) cycle [29, see comments after Corollary 2]. Since there are no simplices of dimension greater than \(k\) in a triangulation of \(V\), the sum of all \(k\)-simplices is not a boundary. Therefore the homology group \(H_{k}(V,\mathbb{Z}_{2})\neq 0\). This means that \(V\) cannot be contractible, a contradiction.
**Theorem 5.2**.: _Let \(C\) be a closed cone with nonempty interior in a finite dimensional real normed space. Let \(f:C^{\circ}\to C^{\circ}\) be real analytic and \(d_{T}\)-nonexpansive. Then \(f\) has a unique fixed point in \(C^{\circ}\) if and only if \(\operatorname{Fix}(f)\) is a nonempty and bounded subset of \((C^{\circ},d_{T})\)._
Proof.: We only need to prove that \(\operatorname{Fix}(f)\) nonempty and bounded implies that \(f\) has a unique fixed point since the converse is obvious. Observe that \(\operatorname{Fix}(f)\) is the zero set of the real analytic function \(f-\operatorname{id}\), so it is a real analytic variety. Since \(X\) is finite dimensional and we are assuming that \(\operatorname{Fix}(f)\) is bounded in \((C^{\circ},d_{T})\), it follows that \(\operatorname{Fix}(f)\) is compact. We know by Corollary 3.2 that \(\operatorname{Fix}(f)\) is contractible. Therefore Proposition 5.1 implies that \(\operatorname{Fix}(f)\) consists of a single point.
We can say more when \(f\) is also order-preserving. The proof of the next theorem uses a different technique, which allows us to remove the assumption that the cone is finite dimensional.
**Theorem 5.3**.: _Let \(C\) be a closed, normal cone with nonempty interior in a Banach space. Let \(f:C^{\circ}\to C^{\circ}\) be real analytic, order-preserving, and subhomogeneous. Suppose that \(C\) is regular or \(f\) is \(\gamma\) or \(\tau\)-condensing. Then the following are equivalent._
1. \(\operatorname{Fix}(f)\) _is nonempty and bounded in_ \((C^{\circ},d_{T})\)_._
2. _There exist_ \(x,y\in C^{\circ}\) _such that_ \(f(x)\gg x\) _and_ \(f(y)\ll y\)_._
3. \(\lambda(f)>1\) _and_ \(r(f)<1\)_._
4. \(f\) _has a unique fixed point in_ \(C^{\circ}\)_._
5. _There is a_ \(u\in C^{\circ}\) _such that_ \(\lim_{k\to\infty}f^{k}(x)=u\) _for all_ \(x\in C^{\circ}\)_._
Proof.: Theorem 4.1 proved the equivalence of (a), (b), and (c). It is obvious that (d) implies (a). Here we will prove that (a) implies (d) and that (d) and (e) are equivalent.
(a)\(\Rightarrow\)(d). Choose \(u\in\operatorname{Fix}(f)\), and choose \(R>r>0\) with \(r\) large enough so that \(\operatorname{Fix}(f)\subset B_{r}(u)\). By Lemma 4.3 there is a \(k\in\mathbb{N}\) such that \(f^{k}(\overline{B_{R}(u)})\subset B_{r}(u)\). In particular, by (2.2), we have \(f^{k}(e^{R}u)\ll e^{r}u\) and \(f^{k}(e^{-R}u)\gg e^{-r}u\).
Since the composition of two real analytic functions is real analytic [33], it follows that \(f^{k}\) is real analytic. Now, choose \(\phi\in C^{*}\setminus\{0\}\), and consider the real-valued function \(g(t)=t^{-1}\phi(f^{k}(tu))\) which is defined on the real interval \((0,\infty)\). Suppose \(0<s<t\). By (2.1),
\[\log\frac{\phi(f^{k}(tu))}{\phi(f^{k}(su))}\leq d_{T}(f^{k}(tu),f^{k}(su))\leq d _{T}(tu,su)=\log\left(\frac{t}{s}\right).\]
It follows that \(g(t)\leq g(s)\), so \(g\) is monotone decreasing. If \(g(t)=g(1)=\phi(u)\) for any \(t>0\) other than \(1\), then \(g\) will be constant on the interval between \(t\) and \(1\). Since \(g\) is real analytic, that would imply that \(g\) is constant on all of \((0,\infty)\). That cannot be the case, however, since \(f^{k}(e^{R}u)\ll e^{r}u\), which implies that
\[g(e^{R})\leq e^{-R}\phi(e^{r}u)=e^{r-R}\phi(u)<\phi(u)=g(1).\]
From this we conclude that \(g\) is strictly decreasing and therefore \(\phi(f^{k}(tu))<t\phi(u)\) for all \(t>1\) and \(\phi(f^{k}(tu))>t\phi(u)\) for all \(t<1\). These inequalities are true for all \(\phi\in C^{*}\setminus\{0\}\).
Note that \(x\in C^{\circ}\) if and only if \(\phi(x)>0\) for all \(\phi\in C^{*}\setminus\{0\}\)[3, Proposition 19.3(b)]. Since \(\phi(tu-f^{k}(tu))>0\) for all \(\phi\in C^{*}\setminus\{0\}\) and \(t>1\), we have \(f^{k}(tu)\ll tu\) when \(t>1\). Likewise, \(f^{k}(tu)\gg tu\) when \(0<t<1\). So for every \(t>1\), \(f^{k}\) maps the closed Thompson metric ball \(\overline{B_{\log t}(u)}=[t^{-1}u,tu]\) into its interior. This implies that \(d_{T}(f^{k}(x),u)<d_{T}(x,u)\) for all \(x\in C^{\circ}\). From this, we conclude that \(u\) is the only fixed point of \(f^{k}\), and so it must also be the only fixed point of \(f\) as well.
(d)\(\Leftrightarrow\)(e). If \(f\) has a unique fixed point \(u\in C^{\circ}\), then Lemma 4.3 implies that \(\lim_{k\to\infty}f^{k}(x)=u\) for all \(x\in C^{\circ}\). Conversely, if \(f^{k}(x)\) converges to \(u\in C^{\circ}\) for all \(x\in C^{\circ}\), then \(u\) is a fixed point of \(f\) since \(f\) is continuous and \(u\) is unique since the iterates of all other \(x\in C^{\circ}\) converge to \(u\).
_Remark 5.4_.: It is known that if \(X\) is a real Banach space with the fixed point property and \(f:X\to X\) is real analytic and nonexpansive, then \(f\) has a unique fixed point if and only if \(\operatorname{Fix}(f)\) is bounded and nonempty [24, Theorem 5.3].
## 6. Application to nonlinear matrix equations
Let \(\mathcal{H}\subset\mathbb{C}^{n\times n}\) denote the set of \(n\)-by-\(n\) Hermitian matrices with complex entries. Let \(\mathcal{P}\) denote the cone of positive semidefinite matrices in \(\mathcal{H}\). Let \(I\) denote the \(n\)-by-\(n\) identity matrix and let \(\|\cdot\|\) denote the spectral norm on \(\mathcal{H}\). Note that \(\mathcal{P}^{\circ}\) is the set of all positive definite matrices. Let \(L(X)=X^{-1}\) for any \(X\in\mathcal{P}^{\circ}\). The following nonlinear function \(f:\mathcal{P}^{\circ}\to\mathcal{P}\) was studied in [18, Section 8]:
\[f(X)=A+N^{*}(B+X^{-1})^{-1}N\]
where \(A,B\in\mathcal{P}\) and \(N\in\mathbb{C}^{n\times n}\). Any fixed point of \(f\) is a solution to the discrete algebraic Riccati equation \(X=A+N^{*}(B+X^{-1})^{-1}N\). Assume that \(A+N^{*}N\in\mathcal{P}^{\circ}\) so that \(f(\mathcal{P}^{\circ})\subseteq\mathcal{P}^{\circ}\). Unlike [18], we do not assume that \(A\) and \(B\) are positive definite, only positive semidefinite which is enough to guarantee that \(f\) is order-preserving and subhomogeneous on \(\mathcal{P}^{\circ}\). As a consequence of [18, Theorem 8.1], \(f\) is a strict \(d_{T}\)-contraction on \(\mathcal{P}^{\circ}\) when \(A,B\) are positive definite. With our weaker assumptions, \(f\) may not be a strict \(d_{T}\)-contraction. In this section, we will demonstrate how to calculate the recession maps \(f_{\infty}\) and \((LfL)_{\infty}\). Using the recession maps, we will give simple sufficient conditions for \(f\) to have a unique positive definite fixed point, despite the fact that \(f\) may not be a strict contraction.
First, we note the following.
**Lemma 6.1**.: _Let \(A\in\mathcal{P}\) and let \(B(t)\in\mathcal{P}\) for all \(t>0\) and suppose that \(\lim_{t\to\infty}B(t)=B(\infty)\in\mathcal{P}\) exists and \(A+B(t)\in\mathcal{P}^{\circ}\) for all \(t\in(0,\infty]\). Then_
\[\lim_{t\to\infty}(tA+B(t))^{-1}=(Q_{A}B(\infty)Q_{A})^{\dagger}\]
_where \(Q_{A}\in\mathcal{H}\) is the orthogonal projection onto the nullspace of \(A\) and \(M^{\dagger}\) denotes the Moore-Penrose pseudoinverse of a matrix \(M\)._
Proof.: Recall the Schur complement formula for the inverse of a partitioned matrix [11, Section 0.7.3]:
\[\begin{bmatrix}M_{11}&M_{12}\\ M_{21}&M_{22}\end{bmatrix}^{-1}=\begin{bmatrix}S_{1}^{-1}&-M_{11}^{-1}M_{12} S_{2}^{-1}\\ -S_{2}^{-1}M_{21}M_{11}^{-1}&S_{2}^{-1}\end{bmatrix} \tag{6.1}\]
where \(S_{1}=M_{11}-M_{12}M_{22}^{-1}M_{21}\) and \(S_{2}=M_{22}-M_{21}M_{11}^{-1}M_{12}\). Note that all of the inverses in this formula are well-defined when the partitioned matrix is positive definite [11, Theorem 7.7.6].
We can choose a basis so that \(M(t)=tA+B(t)\) can be expressed as a partitioned matrix:
\[M(t)=\begin{bmatrix}tA_{11}+B(t)_{11}&B(t)_{12}\\ B(t)_{21}&B(t)_{22}\end{bmatrix}\]
where
\[A=\begin{bmatrix}A_{11}&0\\ 0&0\end{bmatrix}\]
has \(A_{11}\) positive definite. Since \(M(t)\) is positive definite for all \(t>0\), we can apply (6.1) to \(M(t)\). Then by inspection, the limit of \(M(t)^{-1}\) as \(t\to\infty\) is
\[\begin{bmatrix}0&0\\ 0&B(\infty)_{22}^{-1}\end{bmatrix}=(Q_{A}B(\infty)Q_{A})^{\dagger}.\]
Now, we use Lemma 6.1 to compute \(f_{\infty}\) and \((LfL)_{\infty}\). For any \(X\in\mathcal{P}^{\circ}\), we have
\[f_{\infty}(X) =\lim_{t\to\infty}t^{-1}(A+N^{*}(B+(tX)^{-1})^{-1}N)\] (6.2) \[=\lim_{t\to\infty}N^{*}(tB+X^{-1})^{-1}N\] \[=N^{*}(Q_{B}X^{-1}Q_{B})^{\dagger}N\] (by Lemma 6.1)
and
\[(LfL)_{\infty}(X) =\lim_{t\to\infty}t^{-1}(A+N^{*}(B+tX)^{-1}N)^{-1}\] (6.3) \[=\lim_{t\to\infty}(tA+N^{*}(t^{-1}B+X)^{-1}N)^{-1}\] \[=(Q_{A}N^{*}X^{-1}NQ_{A})^{\dagger}\] (by Lemma 6.1)
where \(Q_{A}\) and \(Q_{B}\) are the orthogonal projections onto the nullspaces of \(A\) and \(B\), respectively.
**Proposition 6.2**.: _Let \(f(X)=A+N^{*}(B+X^{-1})^{-1}N\) where \(A,B\in\mathcal{P}\) and \(N\in\mathbb{C}^{n\times n}\) satisfies \(A+N^{*}N\in\mathcal{P}^{\circ}\). Let \(Q_{A}\) and \(Q_{B}\) denote the orthogonal projections onto the nullspaces of \(A\) and \(B\) respectively, and let \(f_{\infty}\) and \((LfL)_{\infty}\) be given by equations (6.2) and (6.3). Then \(f\) has a (necessarily unique) globally attracting fixed point in \(\mathcal{P}^{\circ}\) if and only if there are constants \(k,\ell>0\) such that_
\[\|(f_{\infty}+\mathrm{id})^{k}(I)\|<2^{k}\text{ and }\|((LfL)_{\infty}+ \mathrm{id})^{\ell}(I)\|<2^{\ell}.\]
_In particular, \(f\) has a globally attracting fixed point if_
\[\|N^{*}Q_{B}N\|<1\text{ and }\|(Q_{A}N^{*}NQ_{A})^{\dagger}\|<1.\]
Proof.: Observe that \(f\) is real analytic because the inverse operator \(L\) is real analytic in a neighborhood of any \(X\in\mathcal{P}^{\circ}\). Since \(A,B\in\mathcal{P}\) and \(A+N^{*}N\in\mathcal{P}^{\circ}\), it follows that \(f\) is an order-preserving, subhomogeneous map sending \(\mathcal{P}^{\circ}\) into itself. When \(X\in\mathcal{P}\), note that \(X\ll I\) if and only if \(\|X\|<1\). In particular, \(\|(f_{\infty}+\mathrm{id})^{k}(I)\|<2^{k}\) if and only if \((f_{\infty}+\mathrm{id})^{k}(I)\ll 2^{k}I\). By Propositions 4.9 and 4.10, there is a \(k>0\) for which this inequality holds if and only if \(r(f)<1\). Similarly, \(\|((LfL)_{\infty}+\mathrm{id})^{\ell}(I)\|<2^{\ell}\) for some \(\ell>0\) if and only if \(r((LfL)_{\infty})<1\). This is equivalent to \(\lambda(f)>1\) by Lemma 4.11. Therefore Theorem 5.3 says that \(f\) has a globally attracting fixed point in \(\mathcal{P}^{\circ}\) if and only if
\[\|(f_{\infty}+\mathrm{id})^{k}(I)\|<2^{k}\text{ and }\|((LfL)_{\infty}+\mathrm{id })^{\ell}(I)\|<2^{\ell}\]
for some \(k,\ell\in\mathbb{N}\). This condition is satisfied with \(k=\ell=1\) if
\[\|N^{*}Q_{B}N\|<1\text{ and }\|(Q_{A}N^{*}NQ_{A})^{\dagger}\|<1.\]
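The condition above can be checked numerically. The following sketch uses made-up matrices \(A\), \(B\), \(N\) (chosen only for illustration) to verify the sufficient condition and then iterates \(f\); by the proposition, the iterates should converge to the unique positive definite solution of the Riccati equation.

```python
import numpy as np

def nullspace_projection(M, tol=1e-10):
    # orthogonal projection onto the nullspace of a Hermitian PSD matrix M
    w, V = np.linalg.eigh(M)
    null = V[:, w < tol]
    return null @ null.conj().T

# Illustrative data: A and B are singular PSD matrices, and A + N*N is
# positive definite, so f maps the positive definite cone into itself.
A = np.diag([1.0, 0.0])
B = np.diag([0.0, 2.0])
N = np.array([[0.5, 0.3],
              [0.1, 1.2]])

QA, QB = nullspace_projection(A), nullspace_projection(B)
cond1 = np.linalg.norm(N.conj().T @ QB @ N, 2) < 1
cond2 = np.linalg.norm(np.linalg.pinv(QA @ N.conj().T @ N @ QA), 2) < 1
print("sufficient condition of Proposition 6.2:", cond1 and cond2)

def f(X):
    return A + N.conj().T @ np.linalg.inv(B + np.linalg.inv(X)) @ N

X = np.eye(2)
for _ in range(200):
    X = f(X)                                   # fixed-point iteration
print("Riccati residual:", np.linalg.norm(X - f(X)))
print("smallest eigenvalue of X:", np.linalg.eigvalsh(X)[0])
```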
|
2307.07566
|
Reconstruction of 3-Axis Seismocardiogram from Right-to-left and
Head-to-foot Components Using A Long Short-Term Memory Network
|
This pilot study aims to develop a deep learning model for predicting
seismocardiogram (SCG) signals in the dorsoventral direction from the SCG
signals in the right-to-left and head-to-foot directions ($\textrm{SCG}_x$ and
$\textrm{SCG}_y$). The dataset used for the training and validation of the
model was obtained from 15 healthy adult subjects. The SCG signals were
recorded using tri-axial accelerometers placed on the chest of each subject.
The signals were then segmented using electrocardiogram R waves, and the
segments were downsampled, normalized, and centered around zero. The resulting
dataset was used to train and validate a long short-term memory (LSTM) network
with two layers and a dropout layer to prevent overfitting. The network took as
input 100-time steps of $\textrm{SCG}_x$ and $\textrm{SCG}_y$, representing one
cardiac cycle, and outputted a vector that mapped to the target variable being
predicted. The results showed that the LSTM model had a mean square error of
0.09 between the predicted and actual SCG segments in the dorsoventral
direction. The study demonstrates the potential of deep learning models for
reconstructing 3-axis SCG signals using the data obtained from dual-axis
accelerometers.
|
Mohammad Muntasir Rahman, Amirtahà Taebi
|
2023-07-14T18:13:29Z
|
http://arxiv.org/abs/2307.07566v2
|
Reconstruction of 3-Axis Seismocardiogram from Right-to-left and Head-to-foot Components Using A Long Short-Term Memory Network
###### Abstract
This pilot study aims to develop a deep learning model for predicting seismocardiogram (SCG) signals in the dorsoventral direction from the SCG signals in the right-to-left and head-to-foot directions (SCG\({}_{\text{x}}\) and SCG\({}_{\text{y}}\)). The dataset used for the training and validation of the model was obtained from 15 healthy adult subjects. The SCG signals were recorded using tri-axial accelerometers placed on the chest of each subject. The signals were then segmented using electrocardiogram R waves, and the segments were downsampled, normalized, and centered around zero. The resulting dataset was used to train and validate a long short-term memory (LSTM) network with two layers and a dropout layer to prevent overfitting. The network took as input 100-time steps of SCG\({}_{\text{x}}\) and SCG\({}_{\text{y}}\), representing one cardiac cycle, and outputted a vector that mapped to the target variable being predicted. The results showed that the LSTM model had a mean square error of 0.09 between the predicted and actual SCG segments in the dorsoventral direction. The study demonstrates the potential of deep learning models for reconstructing 3-axis SCG signals using the data obtained from dual-axis accelerometers.
_Clinical relevance--_ This work contributes to the advancement of cardiovascular monitoring techniques that rely on SCG signals obtained from single- or dual-axis accelerometers.
## I Introduction
Cardiovascular diseases (CVDs) are the leading cause of death in the United States, claiming the life of one person every 34 seconds, resulting in a staggering 2,544 deaths per day based on 2020 data [1]. Beyond the devastating human toll, this places an immense strain on healthcare systems and society, with significant economic costs associated with treatment and lost productivity. The early detection of cardiac abnormalities is crucial for achieving better outcomes for patients with CVD, and improving diagnostic methods and accessibility is a key step in this direction. Current diagnostic methods, including non-invasive techniques such as electrocardiography (ECG), medical imaging, and cardiac catheterization, can aid in identifying CVDs. Advancements in technology have also led to the development of new diagnostic options, such as wearable and remote monitoring systems, which can provide continuous monitoring of patients' cardiovascular health outside of traditional healthcare settings. Seismocardiography (SCG) is another technique that noninvasively monitors cardiovascular activity by measuring cardiovascular-induced vibrations on the chest [2, 3]. These vibrations result from a range of cardiac activities, including valve opening and closing, isovolumetric contraction, blood ejection, and rapid left ventricle filling [3, 4, 5]. Unlike other non-invasive techniques such as ECG and pulse oximetry, which focus on the electrical activity of the heart and the blood oxygen level, respectively, SCG provides complementary insights into the mechanical activity of the heart [6, 7, 4]. With its ability to evaluate these mechanical activities, SCG has the potential to offer valuable diagnostic information for cardiac conditions such as heart failure, myocardial infarction, ischemia, and hemorrhage, as changes in the mechanical function of the heart can be an early indication of these diseases [8, 9, 10, 11, 12, 13]. As a result, SCG can enhance our understanding of the cardiac function and contribute to the development of more accurate diagnostic tools for patients with CVD. SCG signals are commonly measured using accelerometers that are placed on the chest surface. These signals are typically measured in three directions of right-to-left, head-to-foot, and dorsoventral. In that regard, while single or dual-axis accelerometers may be used for SCG measurement, three-axis accelerometers are more informative as they offer a more comprehensive understanding of the motion of the heart and chest wall [6].
This study aims to answer the question: "Can we generate three-axis SCG measurements using a dual-axis accelerometer?" More specifically, is it possible to generate SCG vibrations in the \(z\) direction using the measurements from the \(x\) and \(y\) axes of a dual-axis accelerometer? In this work, we propose a deep neural network model based on long short-term memory (LSTM) to predict the SCG component in the dorsoventral direction (SCG\({}_{\text{z}}\)) using the vibrations in right-to-left and head-to-foot directions (SCG\({}_{\text{x}}\), SCG\({}_{\text{y}}\)). LSTM models have been successfully applied to a wide range of time series-related problems, including but not limited to stock price prediction, energy load forecasting, weather forecasting, speech recognition, and natural language processing. In this paper, we designed a regression model based on a stacked LSTM neural network with two layers to process the SCG\({}_{\text{x}}\) and SCG\({}_{\text{y}}\) sequences corresponding to a single cardiac cycle and generate an output sequence of SCG\({}_{\text{z}}\) of the same length. The use of a stacked LSTM network can potentially improve the accuracy of the predictions by allowing the network to learn more complex temporal patterns in the data.
## II Materials and Methods
### _Long Short-Term Memory Network_
The basic building block of an LSTM is the LSTM cell, which consists of a memory cell and three gates: the input
gate, the forget gate, and the output gate. The memory cell is responsible for storing information over time and passing it forward through the sequence. The input gate controls how much new information is added to the cell state, the forget gate controls how much old information is removed from the cell state, and the output gate controls how much information from the cell state is used to compute the hidden state. The LSTM cell's computations are performed using various activation functions, such as the sigmoid function and the hyperbolic tangent function. These activation functions help to ensure that the information flow in and out of the cell is regulated and controlled, leading to better long-term memory retention and more effective learning. Fig. 1a shows a schematic diagram of an LSTM unit. At each time step \(t\), the LSTM cell takes as input the current input sequence \(x_{t}\), the previous hidden state \(h_{t-1}\), and the previous cell state \(c_{t-1}\). Using these inputs, the LSTM cell generates the current hidden state \(h_{t}\) and the updated cell state \(c_{t}\). The computations involved in this process include several gates that control the flow of information in and out of the cell and can be calculated by:
\[i_{t}=\sigma(W_{x}^{(i)}x_{t}+W_{h}^{(i)}h_{t-1}+b^{(i)}) \tag{1}\] \[f_{t}=\sigma(W_{x}^{(f)}x_{t}+W_{h}^{(f)}h_{t-1}+b^{(f)})\] (2) \[o_{t}=\sigma(W_{x}^{(o)}x_{t}+W_{h}^{(o)}h_{t-1}+b^{(o)})\] (3) \[\tilde{c}_{t}=tanh(W_{x}^{(c)}x_{t}+W_{h}^{(c)}h_{t-1}+b^{(c)}) \tag{4}\]
where \(i_{t}\), \(f_{t}\), and \(o_{t}\) are the input, forget, and output gates, respectively, and \(\tilde{c}_{t}\) is the candidate cell content. Finally, the updated cell state \(c_{t}\) and hidden state \(h_{t}\) are computed as:
\[c_{t}=f_{t}\odot c_{t-1}+i_{t}\odot\tilde{c}_{t} \tag{5}\] \[h_{t}=o_{t}\odot tanh(c_{t}) \tag{6}\]
where \(\odot\) denotes the element-wise product.
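For concreteness, a minimal NumPy sketch of a single LSTM cell step following Eqs. (1)-(6) is given below; the weight shapes and random initialization are illustrative and are not the values used in this study.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W_x, W_h, b):
    """One LSTM step following Eqs. (1)-(6); gate parameters are stored in
    dictionaries keyed by 'i', 'f', 'o', 'c'."""
    i_t = sigmoid(W_x['i'] @ x_t + W_h['i'] @ h_prev + b['i'])       # Eq. (1)
    f_t = sigmoid(W_x['f'] @ x_t + W_h['f'] @ h_prev + b['f'])       # Eq. (2)
    o_t = sigmoid(W_x['o'] @ x_t + W_h['o'] @ h_prev + b['o'])       # Eq. (3)
    c_tilde = np.tanh(W_x['c'] @ x_t + W_h['c'] @ h_prev + b['c'])   # Eq. (4)
    c_t = f_t * c_prev + i_t * c_tilde                               # Eq. (5)
    h_t = o_t * np.tanh(c_t)                                         # Eq. (6)
    return h_t, c_t

# illustrative shapes: 2 input features (SCG_x, SCG_y), 4 hidden units
rng = np.random.default_rng(0)
W_x = {g: rng.normal(size=(4, 2)) for g in 'ifoc'}
W_h = {g: rng.normal(size=(4, 4)) for g in 'ifoc'}
b = {g: np.zeros(4) for g in 'ifoc'}
h, c = lstm_cell_step(rng.normal(size=2), np.zeros(4), np.zeros(4), W_x, W_h, b)
```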
### _Study Population_
The study enrolled a total of 15 participants, of whom 4 were female and none of whom had a prior history of cardiovascular diseases (age: 25.93 \(\pm\) 10.65 years, height: 171.31 \(\pm\) 9.22 cm, weight: 74.83 \(\pm\) 22.83 kg, body mass index: 25.29 \(\pm\) 6.58 kg/m\({}^{2}\)). The study sample was diverse, with participants from various racial and ethnic backgrounds, including 53.3% White, 20% Black, 20% Asian, and 6.7% mixed. The Mississippi State University Institutional Review Board approved the study protocol.
### _Data Acquisition Protocol_
To minimize any potential movement artifacts that could affect the quality of the data, all subjects were instructed to lie supine on a bed without additional body movements. Three triaxial accelerometers (356A32, PCB Piezotronics, Depew, NY) were attached to three locations on the sternum including the manubrium, the fourth costal notch, and the xiphoid process. The accelerometer outputs were amplified using a signal conditioner (482C, PCB Piezotronics, Depew, NY) with a gain factor of 100 to increase the signal-to-noise ratio. The amplified signals were then recorded using a data acquisition system (416, iWorx Systems, Inc., Dover, NH), with a sampling frequency of 5000 Hz. A microphone was connected to the system and was tapped at the beginning and end of each recording. These taps were then located in the sound signals to identify the start and end of the intended part of the recording. An ECG module was also used to simultaneously record the ECG signal. Data were collected from all accelerometers during a 15-second breath-hold at the end of inhalation and exhalation, and 2 additional minutes of normal breathing.
### _SCG Dataset_
To prepare the dataset, the following pre-processing steps were carried out to eliminate noise from the raw signals. The first step involved applying a moving average filter to smooth the SCG signals. This step helps reduce high-frequency noise in the data, making it easier to detect underlying patterns or features. Subsequently, a band-pass filter was applied to the accelerometer outputs with cutoff frequencies of 1 and 30
Fig. 1: (a) A representation of an LSTM cell. (b) Stacked LSTM network architecture.
Hz. This eliminated the low-frequency respiration vibrations and the higher-frequency SCG components above 30 Hz.
SCG signals were then segmented using the ECG R waves as reference points. To detect the R waves, we utilized the widely-used Pan-Tompkins algorithm [14], which is known for its ability to identify the peaks corresponding to ventricular depolarization. We then computed the average duration of the cardiac cycle for each subject from the ECG RR intervals. This information was then used to determine the window size to segment the SCG signals for each subject. Specifically, we set the start of the window to 1/4 of the average cardiac cycle duration before the R wave and the end of the window to 3/4 of the average cardiac cycle duration after the R wave, resulting in consistent SCG segments. After segmenting the SCG signals for each cardiac cycle, the segments were downsampled to a fixed number of sample points (100 points in this study). This allowed us to obtain consistent segment lengths across all samples for different subjects, ensuring that the input data for the LSTM model was uniform. This is crucial because the neural network takes one segment at a time as input, and the original segments may have varied lengths for different subjects (due to different cardiac cycle durations). By treating each SCG segment as a separate sample in the dataset, the neural network can learn patterns and features specific to each cycle, which can be useful for predicting SCG signals in the \(z\)-direction.
Next, the segments were normalized to have values between -1 and 1 to ensure that the input data falls within a suitable range for activation functions. Then, the mean value was subtracted from each segment to center the signals around zero. This step was useful in mitigating any DC offset or baseline drift present in the signal that could affect the performance of the neural network. In the last step, the preprocessed and normalized SCG segments from all subjects were combined to form a single dataset, which was used for training, validating and testing the neural network model. A total of 7492 SCG segments were used for training and validation and the model was tested on 475 segments. This dataset was expected to capture the common patterns and features present in the SCG signals of the population studied, and was thus representative of the entire cohort.
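A minimal sketch of the segmentation and normalization steps described above is shown below; the function and variable names are ours, and the moving-average and band-pass filtering are assumed to have been applied already.

```python
import numpy as np
from scipy.signal import resample

def segment_scg(scg, r_peaks, n_points=100):
    """scg: one filtered SCG channel; r_peaks: ECG R-wave sample indices."""
    avg_cycle = int(np.mean(np.diff(r_peaks)))          # average cardiac cycle (samples)
    pre, post = avg_cycle // 4, (3 * avg_cycle) // 4
    segments = []
    for r in r_peaks:
        if r - pre < 0 or r + post > len(scg):
            continue                                     # skip incomplete cycles
        seg = resample(scg[r - pre:r + post], n_points)  # fixed-length segment
        seg = seg / np.max(np.abs(seg))                  # normalize to [-1, 1]
        segments.append(seg - np.mean(seg))              # center around zero
    return np.array(segments)
```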
### _Network Architecture and Training_
Fig. 1b provides an overview of the stacked LSTM network architecture employed in this study. A stacked LSTM refers to the use of multiple LSTM layers, one on top of the other, which allows the network to learn more complex patterns and relationships in the data. The network has two stacked LSTM layers, each with a hidden size of 512. The hidden size refers to the number of hidden units or neurons in the LSTM layer, which determines the capacity of the network to capture complex patterns in the data. The network processes 100 time steps of \(\text{SCG}_{\text{x}}\) and \(\text{SCG}_{\text{y}}\) as a sample input, which corresponds to approximately one cardiac cycle (as described in Sect. II-D). Utilizing dual-axis rather than single-axis SCG data may help the network capture more information, learn the relationship between the inputs (\(\text{SCG}_{\text{x}}\) and \(\text{SCG}_{\text{y}}\)) and \(\text{SCG}_{\text{z}}\), and make accurate predictions on new, unseen data. The input then flows into the stacked LSTM layers. The final LSTM layer outputs a vector \(h_{i}\), which is subsequently fed into a fully connected layer featuring 100 output neurons. This fully connected layer maps the input vector to a set of output values that correspond to the target variable predicted by the model. To prevent overfitting and improve generalization, a dropout layer is added after each LSTM layer. The dataset was divided into training, validation, and testing sets to train and evaluate the network. Since we collected data from three accelerometers for each subject, we separated the breath-hold data captured at the end of inhalation and at the end of exhalation from the accelerometer attached to the xiphoid process to test the model. The remaining data were used for training and validation. The training set, which contained 90% of the data, was used to determine the weight and bias parameters through forward and backward propagation during the training process. The validation set, which contained 10% of the data, was used to evaluate the performance of the model during the training process. We trained the network with a maximum of 1000 epochs and used an early stopping technique to prevent overfitting during training. Early stopping is a regularization technique that monitors the performance of the model on the validation set during training and stops the training when that performance no longer improves. The initial learning rate was 0.001, and it was reduced using learning rate decay when the validation performance stopped improving for a specified number of epochs. This technique helped the model converge to a better solution and avoid getting stuck in local minima.
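A sketch of this architecture and one training step is given below; the dropout rate, the choice of PyTorch, and the optimizer details beyond the stated initial learning rate are our assumptions.

```python
import torch
import torch.nn as nn

class SCGzPredictor(nn.Module):
    def __init__(self, hidden=512, steps=100, dropout=0.2):
        super().__init__()
        # two stacked LSTM layers; dropout is applied between the layers
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden,
                            num_layers=2, batch_first=True, dropout=dropout)
        self.fc = nn.Linear(hidden, steps)       # maps to 100 output samples

    def forward(self, x):                        # x: (batch, 100, 2)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])            # predicted SCG_z: (batch, 100)

model = SCGzPredictor()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

xy = torch.randn(8, 100, 2)                      # dummy SCG_x / SCG_y segments
z = torch.randn(8, 100)                          # dummy SCG_z targets
loss = criterion(model(xy), z)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```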
## III Results and Discussion
Our goal was to investigate the possibility of predicting \(\text{SCG}_{\text{z}}\) from the SCG signals in the \(x\) and \(y\) directions through the use of an LSTM neural network. We used the testing set to evaluate the model's performance. The accuracy of the model in predicting \(\text{SCG}_{\text{z}}\) based on \(\text{SCG}_{\text{x}}\) and \(\text{SCG}_{\text{y}}\) was assessed by comparing the predicted and actual SCG signal
in the \(z\) direction, using mean-square error (MSE) as the measure of accuracy. Table I shows the model's performance in terms of MSE, while Fig. 2 provides a box plot to illustrate the results for each subject's end-exhalation and end-inhalation data, as well as the total performance with all end-exhalation and end-inhalation data. The box plot shows the median, interquartile range, and outliers of the prediction in terms of MSE. The prediction MSE for each subject varied from 0.05 to 0.14, with a total MSE of 0.09 for all subjects combined. Four samples of the predicted and actual \(\text{SCG}_{\text{z}}\) segments from subjects 6 and 12 are presented in Fig. 3. The close resemblance between the predicted and actual \(\text{SCG}_{\text{z}}\) signals suggests that there is a correlation between the SCG components in the \(x\), \(y\), and \(z\) directions, and that our LSTM neural network was able to learn it.
## IV Conclusion
The study investigated whether the \(\text{SCG}_{\text{z}}\) signal can be predicted from SCG signals in the \(x\) and \(y\) directions using an LSTM neural network. Results showed that the predicted \(\text{SCG}_{\text{z}}\) closely matched the actual signal, indicating a relationship between the SCG components in the three directions. The study provides insights into the potential of using a dual-axis accelerometer to monitor SCG signals in all three directions, which can improve cardiovascular monitoring methods.
|
2307.08655
|
Multilingual Speech-to-Speech Translation into Multiple Target Languages
|
Speech-to-speech translation (S2ST) enables spoken communication between
people talking in different languages. Despite a few studies on multilingual
S2ST, their focus is the multilinguality on the source side, i.e., the
translation from multiple source languages to one target language. We present
the first work on multilingual S2ST supporting multiple target languages.
Leveraging recent advance in direct S2ST with speech-to-unit and vocoder, we
equip these key components with multilingual capability. Speech-to-masked-unit
(S2MU) is the multilingual extension of S2U, which applies masking to units
which don't belong to the given target language to reduce the language
interference. We also propose multilingual vocoder which is trained with
language embedding and the auxiliary loss of language identification. On
benchmark translation testsets, our proposed multilingual model shows superior
performance than bilingual models in the translation from English into $16$
target languages.
|
Hongyu Gong, Ning Dong, Sravya Popuri, Vedanuj Goswami, Ann Lee, Juan Pino
|
2023-07-17T17:12:44Z
|
http://arxiv.org/abs/2307.08655v1
|
# Multilingual Speech-to-Speech Translation into Multiple Target Languages
###### Abstract
Speech-to-speech translation (S2ST) enables spoken communication between people talking in different languages. Despite a few studies on multilingual S2ST, their focus is the multilinguality on the source side, i.e., the translation from multiple source languages to one target language. We present the first work on multilingual S2ST supporting multiple target languages. Leveraging recent advances in direct S2ST with speech-to-unit translation and vocoders, we equip these key components with multilingual capability. Speech-to-masked-unit (S2MU) is the multilingual extension of S2U, which applies masking to units that do not belong to the given target language to reduce language interference. We also propose a multilingual vocoder which is trained with language embeddings and an auxiliary language identification loss. On benchmark translation testsets, our proposed multilingual model shows superior performance to bilingual models in translation from English into \(16\) target languages.
Hongyu Gong, Ning Dong, Sravya Popuri, Vedanuj Goswami, Ann Lee, Juan Pino Meta AI Research, USA
{hygong, dnn, spopuri, vedanuj, annl, juancarabina}@meta.com
**Index Terms**: multilingual speech-to-speech translation
## 1 Introduction
Speech-to-speech translation consists in translating an utterance from a source language into another language, preserving the semantic meaning. Traditional methods mostly build a pipeline of automatic speech recognition (ASR), machine translation (MT) and text-to-speech (TTS) synthesis [1]. Recent research progress on direct approaches has paved the way for S2ST modeling without reliance on intermediate texts. Direct S2ST makes it possible to use only speech alignments as training data and support languages without standard writing systems [2]. The recently proposed direct approach uses discrete units learned from pre-trained HuBERT models as the bridge between source and target speech [3]. It builds a speech-to-unit (S2U) module to translate source speech to target units, and a separately trained vocoder constructs the target speech from these units.
Multilingual modeling has attracted great research interest for its scalability to an increased coverage of translation directions [4, 5]. Instead of training and maintaining numerous bilingual models, we can use one multilingual model to support multiple directions. Besides deployment efficiency, multilingual research is further motivated by enhanced translation performance [4]. Translation is a resource-intensive task; however, not all languages have abundant training resources. Multilingual training enables knowledge sharing across languages, so that one language benefits from data in other languages.
Research explorations have been made in multilingual speech-to-speech translation, but existing works focus on translation from multiple source languages into English only [6, 7]. To the best of our knowledge, this is the first study on multilingual S2ST supporting multiple target languages. We leverage the direct approach built upon S2U and vocoder, and further equip the model with multilingual capability. Modeling challenges have been identified in order to support multiple target languages. First of all, languages have different unit vocabularies, and the concatenation of multiple unit sets increases the vocabulary size and makes the unit sequence modeling harder. Empirically we observe degraded translation performance with the extended unit dictionary. Secondly, monolingual vocoders used in existing S2ST studies do not scale efficiently with the increased language coverage in the multilingual setting.
In this work, we propose a speech-to-masked-unit model to address the first challenge of the extended unit dictionary. We apply unit masking to help the model focus on the units belonging to the given language without interference from other languages. Another contribution of this work is the exploration of multilingual vocoders to synthesize speech for a family of similar languages. This effectively reduces the number of vocoders when scaling up the number of target languages. To mitigate the language interference in multilingual speech synthesis, we add language embedding to vocoder training and introduce the auxiliary loss of language identification. Empirical results demonstrate positive transfer across languages in speech synthesis, and improved speech quality of multilingual vocoders.
Our multilingual S2ST is empirically evaluated on the task of translating English into 16 languages. On the testsets from EuroPar [8], VoxPopuli [9] and FLEURS [10], the proposed multilingual models achieve consistent gains over bilingual models, with an average of \(+5.2\) and \(+2.7\) BLEU on in-domain and out-of-domain data, respectively.
## 2 Related work
**Speech-to-speech translation**. Conventional approaches to S2ST are cascaded models with texts as intermediate outputs. Source speech is translated into target texts using speech-to-text translation or the combination of speech recognition and machine translation [1]. Target texts are lastly converted to target speech via text-to-speech models. Direct S2ST models are recently proposed without the need of target texts. Translatotron 2 applies multitask learning with phoneme information [2]. Another type of direct models bridges source and target speech with units learned from acoustic models, and its framework consists of speech-to-unit and vocoder [3, 11]. Besides the advances in translation modeling, recent works explore data mining [7] and data augmentation [12] to improve the speech translation performance.
**Multilingual modeling**. Multilinguality has been studied in machine translation [4], automatic speech recognition [13], text-to-speech synthesis [14] and speech-to-text transla
tion [15]. The advantages of multilingual models are the performance improvements brought by knowledge transfer across languages and better efficiency of model training and maintenance. Instead of training multiple monolingual models, researchers train a single multilingual model supporting numerous languages. As for speech-to-speech translation, a few recent works explore multilingual modeling from multiple source languages to one target language [6, 7].
Despite positive transfer of cross-lingual knowledge, multilingual models are also faced with the challenge of language interference. It is known as the curse of multilinguality, which results in performance degradation in some language directions [16].
## 3 Model
To model speech-to-speech translation, we take advantage of the direct approach built upon speech-to-unit (S2U) and vocoder [7, 3]. Given aligned source and target speech, the target speech is transformed into a sequence of discrete units with pre-trained HuBERT model [17]. S2U model is trained to translate source speech to the corresponding target unit sequence. Vocoder is separately trained to synthesize speech from discrete units. In the stage of inference, units are predicted by S2U model from source speech, and then taken by the vocoder to synthesize target speech.
Previous studies focus on only one target language in S2ST, and it is not trivial to adapt S2U model and vocoder to multilingual setting. We propose a multilingual speech-to-masked-unit (S2MU) model as described in subsection 3.1. Multilingual vocoders are introduced to improve speech synthesis quality across languages in subsection 3.2.
### Speech-to-Masked-Unit Model
Our multilingual speech-to-masked-unit model has an encoder-decoder architecture. The overview of multilingual S2MU model is presented in Figure 1. The speech encoder consists of convolutional layers and Transformer encoder layers, and the unit decoder is essentially a Transformer decoder. There is a length adaptor to bridge the sequence length gap between encoder outputs and decoder units. The adaptor is a single convolutional layer to downsample the encoder states since encoder length is longer than unit length. Similar to previous works [18], we initialize S2U model with pretrained encoder and decoder as the initialization demonstrated performance gains.
When supporting multiple target languages, the decoder needs language information to make correct predictions. Therefore we inform the decoder by prepending a language tag to the target unit sequence. For example, "[de]" is prepended to the German units in Figure 1. Suppose that multiple languages fall into language families such as the Germanic (abbreviated as gem), Romance (rom), Slavic (slv) and Uralic (ura) families. Each language family has its own unit dictionary, and languages within the same family share units since some of their pronunciations sound similar. To distinguish units in different vocabularies, we add the family tag to their units, i.e., Germanic units "111 23 47" are converted to "gem-111, gem-23, gem-47". Family dictionaries are then concatenated to form the extended target dictionary used by the unit decoder.
The extended dictionary inevitably makes the unit prediction harder for the model, and units from other languages act as distractors in both training and inference. Empirically it is often seen that the model predicts units belonging to another language even when the target language is specified. We propose **unit masking**, which masks units from irrelevant languages in both decoder training and evaluation. It helps the model to focus on units in the target language no matter how large the extended unit vocabulary is.
Suppose that the unit dictionary is \(\mathbf{u}=\{u_{i}\}_{1\leq i\leq|V|}\) and \(|V|\) is the vocabulary size. The index set of units in language \(l\) is \(\mathbf{m}^{l}\), i.e., \(u_{i}\) belongs to language \(l\) for \(i\in\mathbf{m}^{l}\). Denote \(\mathbf{y}\) as ground truth target units, and \(\hat{\mathbf{y}}\) as the predicted likelihood over units. The training loss \(L\) of speech-to-masked-unit model is calculated over only language \(l\)'s units instead of the whole unit dictionary.
\[L=-\frac{1}{T}\sum_{j=1}^{T}\sum_{i\in\mathbf{m}^{l}}y_{j,i}\log\hat{y}_{j,i} \tag{1}\]
where \(y_{j,i}\) is a binary value indicating whether \(u_{i}\) is the \(j\)-th unit in the target sequence and \(T\) is the target length.
As for inference, the predicted likelihood of units in other languages is forced to be \(-\text{inf}\), so only units related to the target language are generated.
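The sketch below illustrates one way to implement this masking for both the training loss of Eq. (1) and decoding; the vocabulary size and per-language index sets are illustrative, and restricting the softmax normalization to the target language's units is our reading of the scheme.

```python
import torch
import torch.nn.functional as F

# illustrative extended dictionary: indices 0-3 belong to one family,
# indices 4-7 to another
lang_units = {"de": torch.tensor([0, 1, 2, 3]),
              "es": torch.tensor([4, 5, 6, 7])}

def masked_unit_loss(logits, targets, lang):
    # logits: (T, vocab); targets: (T,) ground-truth unit indices in `lang`
    mask = torch.full((logits.size(-1),), float("-inf"))
    mask[lang_units[lang]] = 0.0
    return F.cross_entropy(logits + mask, targets)   # normalized over lang's units

def masked_decode_step(logits, lang):
    mask = torch.full_like(logits, float("-inf"))
    mask[..., lang_units[lang]] = 0.0
    return torch.argmax(logits + mask, dim=-1)       # never predicts other languages

logits = torch.randn(5, 8)
targets = torch.tensor([0, 2, 1, 3, 3])
print(masked_unit_loss(logits, targets, "de"), masked_decode_step(logits, "de"))
```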
### Multilingual Vocoder
A monolingual vocoder typically consists of a HiFi-GAN generator which converts discrete units to a speech waveform, a duration predictor, and discriminators which provide feedback on the speech quality [19]. To extend it to the multilingual setting, we introduce new components to the vocoder architecture as shown in Fig. 2. Vocoders keep embedding tables to convert discrete units, speaker and language to continuous embeddings. We add the language tag to the input unit sequence, and an embedding lookup table retrieves the language embedding and prepends it to the unit and speaker embeddings. A common challenge of multilingual training is language interference, and we notice that the generated speech from a multilingual vocoder might sound like another language. To address this
Figure 1: Architecture of multilingual speech-to-speech translation model.
Figure 2: Architecture of multilingual vocoder.
issue, we add a speech language identification (LID) classifier to the generator-discriminator framework. The LID classifier built on convolutional layers takes speech signal and predicts its language. Given input \(\mathbf{x}\), the convolution layer consists of convolution operations followed by ReLU activation and LayerNorm.
\[\text{ConvLayer}(x)=\text{LayerNorm}(\text{ReLU}(\text{Conv}(\mathbf{x}))). \tag{2}\]
A linear projection layer is added on the top of LID classifier to predict the language of synthesized waveform.
\[\hat{\mathbf{y}}=\text{softmax}(\mathbf{W}\cdot\text{ConvLayer}(\text{ConvLayer}( \mathbf{x}))), \tag{3}\]
where \(\hat{\mathbf{y}}\) is predicted likelihood over languages, and \(\mathbf{W}\) is a tunable weight matrix. The LID prediction indicates how well the generated speech fits in the given language.
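A minimal sketch of such a classifier following Eqs. (2)-(3) is shown below; the kernel sizes, strides, channel counts, and the mean pooling over time before the linear projection are our assumptions.

```python
import torch
import torch.nn as nn

class ConvLayer(nn.Module):
    """Eq. (2): Conv -> ReLU -> LayerNorm over the channel dimension."""
    def __init__(self, in_ch, out_ch, kernel=5, stride=2):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel, stride=stride)
        self.norm = nn.LayerNorm(out_ch)

    def forward(self, x):                       # x: (batch, in_ch, time)
        x = torch.relu(self.conv(x))
        return self.norm(x.transpose(1, 2)).transpose(1, 2)

class LIDClassifier(nn.Module):
    """Eq. (3): stacked ConvLayers, then a linear projection over languages."""
    def __init__(self, n_langs, channels=64):
        super().__init__()
        self.layers = nn.Sequential(ConvLayer(1, channels),
                                    ConvLayer(channels, channels))
        self.proj = nn.Linear(channels, n_langs)

    def forward(self, wav):                     # wav: (batch, samples)
        h = self.layers(wav.unsqueeze(1))       # (batch, channels, time')
        return torch.softmax(self.proj(h.mean(dim=-1)), dim=-1)

lid = LIDClassifier(n_langs=4)
probs = lid(torch.randn(2, 16000))              # two dummy one-second waveforms
```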
We outline how a multilingual vocoder trains the generator together with the auxiliary modules, including the LID classifier, the duration predictor, and the Multi-Period (MPD) and Multi-Scale (MSD) Discriminators. At each step, the generator generates a waveform based on discrete units together with speaker and language information.

**Auxiliary module training.** MPD and MSD are trained to distinguish the synthetic waveform from real speech. The duration predictor is tuned to predict the duration of consecutive units. Real speech is fed to the LID classifier for language prediction.

**Generator training.** The generator is trained with multiple losses. The generated speech is compared with the reference via the L1 loss of their mel-spectrograms and discriminator features. An adversarial loss is also applied to the generator so that it learns to fool the discriminators. Lastly, the LID classifier predicts the language of the synthesized speech, and the LID loss penalizes the generator for speech which does not sound like the desired language.
## 4 Experiments
In the experiments, we focus on speech-to-speech translation from English into \(16\) languages which are grouped into \(4\) families based on their linguistic similarity.
* Germanic family: German (de) and Dutch (nl);
* Romance family: Spanish (es), French (fr), Italian (it), Portuguese (pt) and Romanian (ro);
* Slavic family: Czech (cs), Croatian (hr), Lithuanian (lt), Polish (pl), Slovak (sk) and Slovenian (sl);
* Uralic family: Estonian (et), Finnish (fi) and Hungarian (hu).
The multilingual speech alignments are provided by SpeechMatrix [7] together with useful resources including multilingual HuBERT models and vocoder training data.
### Empirical Setup
**Preprocessing**. Speech-to-unit models and vocoders rely on units extracted with HuBERT and k-means models. Given speech alignments, we transform target speech into target units, and take the aligned source speech and target units as the S2U training data. As for vocoder training, we derive units from speech data, and vocoder is trained to reconstruct speech from the corresponding units.
We reuse multilingual HuBERT models provided by SpeechMatrix to learn speech features. Each HuBERT model was trained on audios collected from a family of languages, and thus speech features of languages from the same family are in the same space. We further learn a k-means model for each family to cluster speech features. The continuous features are discretized by its cluster index assigned by the k-means model. Therefore a family of languages share the same unit vocabulary, and the number of clusters is its vocabulary size. The total vocabulary size of all languages is the sum of family vocabulary sizes.
To optimize the unit quality, previous works swept over multiple configurations of unit extraction. Following SpeechMatrix [7], we try different HuBERT layers (layer \(10\), \(11\) and \(12\)) for speech feature extraction and different cluster sizes for k-means clustering (\(800\), \(1000\), \(1500\) and \(2000\)). In each configuration with a specific HuBERT layer and cluster size, we prepare a set of family units for vocoder training. Monolingual vocoders are trained on these family units and then evaluated on speech resynthesis as in subsection 4.2. The best configuration of unit extraction is selected based on the corresponding vocoder quality. In our experiments, we choose HuBERT layer \(11\) for feature extraction in all languages, and the optimal k-means cluster sizes vary from family to family. The Germanic, Slavic and Uralic families have the best cluster size of \(1000\), while the Romance family has the best size of \(2000\).
**Evaluation**. The performance of both vocoder and S2ST models is measured by the quality of their generated speech. Since we care about semantic preservation in the output speech, we transcribe the speech into texts, which carry the semantic content, with pretrained ASR models. We reuse the ASR models in [7] for a fair comparison; these are built upon pretrained XLS-R or wav2vec2 models and finetuned on ASR datasets. These ASR models are released on HuggingFace [20] and can be indexed by the model ids summarized in the Appendix. The transcriptions of speech are finally compared with reference texts, and different metrics are applied to measure how closely they match.
For vocoder evaluation, the metric is word error rate (WER) between speech transcriptions and ground-truth texts. The lower the WER, the better the vocoder's resynthesis quality. As for speech-to-speech translation, a commonly used metric is BLEU, which reflects lexical overlap with the references. A higher BLEU score indicates better translation quality.
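For example, the WER and BLEU computations can be sketched with the jiwer and sacrebleu packages (our choice here for illustration; the exact scoring scripts may differ):

```python
import jiwer
import sacrebleu

def resynthesis_wer(reference_texts, asr_transcripts):
    """WER between ground-truth texts and ASR transcriptions of resynthesized speech (lower is better)."""
    return jiwer.wer(reference_texts, asr_transcripts)

def translation_bleu(asr_transcripts, reference_texts):
    """Corpus BLEU between ASR transcriptions of translated speech and reference texts (higher is better)."""
    return sacrebleu.corpus_bleu(asr_transcripts, [reference_texts]).score
```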
We first evaluate the quality of multilingual vocoders on speech resynthesis in subsection 4.2. Next we train multilingual speech-to-masked-unit models, and report translation quality in comparison with bilingual models in subsection 4.3.
### Multilingual Vocoder
**Dataset**. We reuse the training and evaluation data as used by vocoders 1 in [7]. Vocoder training requires high-quality speech which is collected from CSS10 [21], VoxPopuli [9] and Common Voice [22]. Table 1 summarizes vocoder data statistics.
Footnote 1: We note that SpeechMatrix vocoders use units extracted from language-specific k-means models. In our experiments, both multilingual and monolingual vocoders use units from family-specific k-means models. Language- and family-specific units lead to comparable resynthesis quality for monolingual vocoders (cf. Appendix).
We develop multilingual vocoders for each language family, and combine all vocoder data in the same family as the train set. As a comparison, we also train monolingual vocoders for each language using the same set of speech and units. When it comes to evaluation, a trained vocoder takes test units and synthesizes speech which is then transcribed by pre-trained ASR models. We report word error rate of the transcriptions compared with reference texts in Table 1.
**Hyperparameters**. The dimension of the speaker embedding is set to \(128\) for all vocoders. The multilingual vocoder has an additional \(128\)-d language embedding. The dimension of the unit embeddings controls the model capacity of the vocoders, and we try a small architecture by setting the unit dimension to \(128\) and a large architecture by increasing the unit dimension to \(256\).
The unit embeddings are upsampled by \(5\) transposed convolutional layers to match the audio sample rate. The speaker embedding is concatenated with the upsampled representations and then processed by \(15\) residual blocks which consist of dilated convolutional layers. For multilingual vocoder, when LID auxiliary loss is applied, we add an LID classifier which has two convolutional layers and a linear projection layer.
All vocoders are trained with a learning rate of \(0.0002\) and a batch size of \(16\). The training time does not differ much between monolingual and multilingual vocoders, and it takes around \(3\) days on \(8\) GPUs.
**Results**. In Table 1, we report the WER of pretrained ASR models obtained by providing the speech from the test set as input and comparing the ASR outputs with the ground-truth texts. We note that the WER metric depends on the ASR model quality, and the ASR WER serves as a lower bound on the vocoder WER. Therefore, vocoder quality is reflected more accurately by reading the vocoder WER relative to the ASR WER.
We compare vocoders of different model sizes and training recipes. "Mono-S" and "Multi-S" are monolingual and multilingual vocoders that both have the small architecture with \(128\)-d unit embeddings. "Multi-S" prepends the language id to the vocoder inputs, and its training objectives are the same as for the monolingual vocoder "Mono-S". We also try larger architectures with \(256\)-d unit embeddings, i.e., "Multi-L" in Table 1. "Multi-S/L (+LID)" are multilingual vocoders with the auxiliary LID loss as well as the language embedding.
Comparing "Mono-S" and "Multi-S (+LID)", which both have the small architecture in Table 1, we find that the multilingual model achieves comparable performance in the Germanic, Romance and Slavic families, and falls behind in the Uralic languages. Multilingual performance can be further improved when we enlarge the architecture of "Multi-S (+LID)" to "Multi-L (+LID)". Training with the LID loss achieves lower WER than "Multi-L" without LID. The best vocoders across families are large multilingual vocoders trained with the LID loss.
### English-to-Many S2ST
**S2ST data**. The multilingual dataset used in S2ST experiments is SpeechMatrix corpus with speech alignments between \(17\) languages. We use parallel speech in 16 en-xx directions to train one-to-many S2ST models. SpeechMatrix is a mined corpus and each alignment is scored by its semantic similarity [7]. We select aligned speech with scores above \(1.09\) so that we could have a decent amount of good-quality training data. Table 2 reports the statistics of parallel speech data.
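Data selection amounts to thresholding the mined alignment scores; a sketch is given below, assuming a tab-separated manifest with a 'score' column (the actual SpeechMatrix manifest layout may differ).

```python
import csv

def load_filtered_alignments(manifest_path, threshold=1.09):
    """Keep mined speech-to-speech alignments whose similarity score exceeds the threshold."""
    with open(manifest_path, newline="") as f:
        rows = list(csv.DictReader(f, delimiter="\t"))
    return [row for row in rows if float(row["score"]) > threshold]
```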
**Models**. We implement a multilingual speech-to-masked-unit model. The speech encoder is a stack of \(7\) convolutional layers and \(48\) Transformer encoder layers with layer and feed-forward dimensions of \(1024\) and \(4096\). The unit decoder is a \(12\)-layer Transformer decoder with layer and feed-forward dimensions of \(1024\) and \(4096\) respectively. A multilingual S2MU model has \(1.2\)B parameters. We initialize the speech encoder with the XLS-R model of 1B parameters [23] and initialize the unit decoder with an mBART decoder trained on English units [18].
We include two bilingual approaches proposed in recent works as baselines. One is bilingual speech-to-unit (S2U) model [18], which has the same initialized speech encoder and unit decoder as the multilingual S2MU model. Its difference from S2MU lies in the decoder vocabulary. Since a bilingual S2U model supports one translation direction, the vocabulary of S2U only contains target language-specific units. A bilingual S2U model also has \(1.2\)B parameters.
The other bilingual baseline is Textless model [7], which uses the same training and validation data as models above. In Textless model, the speech encoder has \(2\) convolutional layers and \(12\) Transformer encoder layers with \(512\)-d layer and \(2048\)-d forward embeddings. Its unit decoder consists of \(6\) Transformer decoder layers with layer and forward embedding of \(512\) and
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
**Family** & **Lang** & **Data** & **Train hours** & **ASR WER** & **Mono-S** & **Multi-S (+LID)** & **Multi-L** & **Multi-L (+LID)** \\ \hline \multirow{2}{*}{**Gem**} & de & CSS10 & 13.2 & 10.0 & 16.1 & 13.1 & 14.2 & 13.1 \\ & nl & CSS10 & 11.3 & 19.0 & 28.0 & 28.1 & 29.0 & 27.7 \\ \hline \multirow{4}{*}{**Rom**} & es & CSS10 & 23.4 & 8.4 & 11.3 & 11.7 & 11.8 & 11.1 \\ & fr & CSS10 & 17.7 & 24.0 & 30.7 & 30.2 & 28.7 & 28.6 \\ & it & VoxPopuli & 25.8 & 23.0 & 31.6 & 32.5 & 29.6 & 28.7 \\ & pt & Common Voice & 16.1 & 6.0 & 36.6 & 30.9 & 31.0 & 29.7 \\ & ro & VoxPopuli & 25.5 & 42.0 & 50.0 & 53.5 & 51.9 & 51.5 \\ \hline \multirow{4}{*}{**Slv**} & cs & VoxPopuli & 26.8 & 15.0 & 23.0 & 24.2 & 24.4 & 23.0 \\ & hr & VoxPopuli & 25.3 & 21.0 & 29.7 & 30.7 & 31.2 & 29.7 \\ & lt & VoxPopuli & 1.3 & 38.0 & 57.3 & 57.3 & 57.3 & 57.3 \\ & pl & VoxPopuli & 26.7 & 14.0 & 25.0 & 22.7 & 23.8 & 21.7 \\ & sk & VoxPopuli & 25.3 & 28.0 & 41.0 & 40.8 & 41.7 & 38.9 \\ & sl & VoxPopuli & 6.1 & 37.0 & 47.0 & 49.2 & 48.5 & 45.9 \\ \hline \multirow{4}{*}{**Ura**} & et & Common Voice & 12.0 & 14.0 & 44.1 & 47.5 & 47.9 & 45.9 \\ & fi & CSS10 & 8.3 & 2.0 & 17.8 & 18.7 & 17.7 & 16.4 \\ \cline{1-1} & hu & CSS10 & 7.9 & 21.0 & 21.0 & 28.8 & 28.0 & 24.9 \\ \hline \hline \end{tabular}
\end{table}
Table 1: WER (\(\downarrow\)) of resynthesized speech from vocoders (mono: monolingual vocoder, multi: multilingual vocoder with language embedding, multi (+LID): multilingual vocoder with language embedding and LID auxiliary loss), S and L indicate vocoder size as small or large.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Lang** & cs & de & es & et & fi & fr & hr & hu \\
**Hours** & 883 & 1,451 & 1,366 & 321 & 426 & 1,517 & 148 & 434 \\ \hline
**Lang** & it & lt & nl & pl & pt & ro & sk & sl \\
**Hours** & 1,575 & 4 & 1,231 & 942 & 988 & 521 & 593 & 46 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Training data statistics of SpeechMatrix.
\(2048\) dimensions. Textless model has \(70\)M parameters.
**Hyperparameters**. The multilingual speech-to-masked-unit model has a dropout probability of \(0.1\) and a label smoothing factor of \(0.2\). It is trained with a batch size of \(320\)k tokens and an update frequency of \(8\) on \(48\) GPUs. The learning rate is set as \(0.0001\). The total number of training steps is \(200\)k, and it takes \(10\) days to train a dense multilingual model. The checkpoints with the lowest loss on the validation set are used for S2ST evaluation. The bilingual speech-to-unit models have the same hyperparameters as the multilingual speech-to-masked-unit model, and they are trained for \(50\)k steps, which takes around \(2\) days. The checkpoints with the best validation loss are evaluated.
**Evaluation**. We have in-domain and out-of-domain test data for S2ST evaluation [7]. The in-domain testsets are collected from the EuroParl-ST (EP) and VoxPopuli (VP) corpora, whose European Parliament speech is in the same domain as our training data. FLEURS serves as out-of-domain data, and its test split is taken as test data. Following previous works on S2ST, we report ASR-BLEU as the metric of translation quality. The waveforms generated by the models are transcribed by pretrained ASR models, and then the BLEU score is calculated by comparing the transcriptions with the reference target texts.
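A sketch of the ASR-BLEU computation is shown below, using a HuggingFace ASR pipeline and sacrebleu; the model id is a placeholder for the per-language ASR models listed in the Appendix.

```python
import sacrebleu
from transformers import pipeline

def asr_bleu(generated_wav_paths, reference_texts, asr_model_id):
    """Transcribe generated waveforms with a pretrained ASR model, then score BLEU against references."""
    asr = pipeline("automatic-speech-recognition", model=asr_model_id)
    hypotheses = [asr(path)["text"] for path in generated_wav_paths]
    return sacrebleu.corpus_bleu(hypotheses, [reference_texts]).score
```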
**Results**. The monolingual vocoders used in the S2ST experiments are "Mono-S" and the multilingual vocoders are "Multi-L (+LID)", as described in subsection 4.2. Table 3 reports ASR-BLEU of S2ST models on the testsets. Bilingual S2U models perform well in high-resource directions such as en-es and en-nl, but have low BLEU in other directions. The large S2U model is known to be data-hungry and is not trained well in languages without sufficient data. As for bilingual Textless models, which have fewer parameters, they fall behind S2U models in high-resource languages, but outperform S2U in low-resource directions.
The S2MU model together with the multilingual vocoder achieves the best performance. The average gains over bilingual S2U with the monolingual vocoder are +\(5.2\) and +\(2.7\) BLEU on in-domain and out-of-domain testsets respectively. When compared with the bilingual Textless models, the average gains are +\(1.1\) and +\(1.6\) BLEU.
With the S2MU model, multilingual vocoders outperform monolingual vocoders by \(0.5\) and \(0.2\) BLEU averaged over \(16\) directions on in-domain and out-of-domain data respectively. Looking at each language direction, the BLEU gain on S2ST from the multilingual vocoder is correlated with the WER reduction on resynthesis. The BLEU gain also depends on the inference performance of S2MU. For example, the multilingual vocoder reduces the WER of Slovak (sk) by \(5\%\), but does not show much translation gain due to low-quality units.
### Analysis
Multilingual S2MU outperforms Textless models in all directions except for three Slavic languages: hr, sk and sl. These three languages have very limited training data, and multilingual training favors higher-resource languages. Even with more capacity in S2MU, these languages do not benefit from multilingual training.
When we compare the bilingual models, S2U and Textless, we find that model capacity should match the language resource size in order to optimize translation performance. For high-resource languages including it and de, S2U, with many more parameters, demonstrates gains over the smaller Textless model. As for languages with less training data, the performance of S2U drops sharply and BLEU scores are close to \(0\) in pt and ro, while the Textless model achieves higher BLEU of \(11.8\) and \(7.6\) respectively. This suggests that model capacity is a bottleneck if there is sufficient data, while data size becomes the blocker if the model is too large.
When it comes to extremely low-resource directions such as et, fi and lt, all models perform poorly. We also note that data domain matters to the translation performance. For each model, its performance is always better on in-domain sets than on out-of-domain data.
Comparing multilingual vocoders against monolingual vocoders, the gains are more obvious in language directions with high BLEU scores. For example, in en-pt translation with the S2MU model, the multilingual vocoder improves BLEU by \(3.9\) and \(1.6\) on the EP/VP and FLEURS data respectively. As for en-es translation on the EP/VP testsets, the multilingual vocoder brings +\(2.0\) BLEU with the S2MU model, and +\(1.7\) BLEU with the S2U model.
## 5 Limitations
This work proposes multilingual training techniques for speech-to-speech translation into multiple target languages. There have been extensive studies on multilinguality in tasks of machine translation and language models, which could be leveraged to further improve multilingual S2ST. In our future work, we would like to explore more research ideas such as multilingual data sampling to deal with imbalanced training data. Also this work groups languages based on their linguistic similarity. According to findings of existing literature, a better grouping could be learned with a data-driven approach to encourage cross-lingual transfer and mitigate language interference.
Furthermore, we have to concatenate multiple sets of units as the vocabulary because each HuBERT model is trained on only one language family. It is worth exploring a feature extraction model (e.g., HuBERT) supporting all languages so that we could use a single unit vocabulary shared by all languages. The shared vocabulary might better support the knowledge transfer across
\begin{table}
\begin{tabular}{c|c|c c c c c c c c c c c c c c c c} \hline \hline
**Domain** & **Model** & **Vocoder** & **cs** & **de** & **es** & **et** & **fi** & **fr** & **hr** & **hu** & **it** & **lt** & **nl** & **pl** & **pt** & **ro** & **sk** & **sl** & **avg** \\ \hline \multirow{4}{*}{EP/VP} & S2MU & Mono. & 10.5 & 15.5 & 23.1 & - & 2.6 & 19.2 & 2.6 & 1.1 & 15.0 & 0.1 & 18.6 & 10.1 & 12.3 & 8.8 & 1.2 & 4.3 & 9.7 \\ & Multi. & 10.4 & 16.2 & 25.1 & - & 2.3 & 20.0 & 2.6 & 0.9 & 14.9 & 0.1 & 19.1 & 10.6 & 16.2 & 8.7 & 1.3 & 4.4 & 10.2 \\ \cline{2-19} & S2U & Mono. & 2.9 & 13.3 & 20.1 & - & 0.0 & 12.6 & 0.0 & 0.0 & 7.0 & 0.0 & 18.8 & 0.0 & 0.0 & 0.3 & 0.0 & 0.0 & 5.0 \\ & Multi. & 2.9 & 13.8 & 21.8 & - & 0.0 & 13.1 & 0.0 & 0.0 & 7.1 & 0.0 & 19.4 & 0.0 & 0.0 & 0.3 & 0.0 & 0.0 & 5.2 \\ \cline{2-19} & Textless & Mono. & 8.2 & 10.1 & 21.9 & - & 1.9 & 19.2 & 8.4 & 1.1 & 11.5 & 0.3 & 15.1 & 8.2 & 11.8 & 7.6 & 5.7 & 5.5 & 9.1 \\ \hline \multirow{4}{*}{FLEURS} & S2MU & Mono. & 4.3 & 6.4 & 7.8 & 1.5 & 0.9 & 12.4 & 2.9 & 0.6 & 7.2 & 0.0 & 6.2 & 2.7 & 7.1 & 3.5 & 1.9 & 1.1 & 4.2 \\ & Multi. & 4.3 & 6.8 & 7.9 & 1.5 & 0.8 & 12.9 & 3.0 & 0.6 & 7.2 & 0.0 & 6.5 & 2.9 & 8.7 & 3.7 & 2.3 & 1.3 & 4.4 \\ \cline{2-19} & S2U & Mono. & 0.8 & 5.1 & 5.4 & 0.0 & 0.0 & 6.8 & 0.0 & 0.0 & 1.6 & 0.0 & 7.0 & 0.0 & 0.0 & 0.0 & 0.0 & 1.7 \\ \cline{2-19} & Multi. & 0.8 & 5.3 & 5.4 & 0.0 & 0.0 & 7.1 & 0.0 & 0.0 & 1.5 & 0.0 & 7.3 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 1.7 \\ \cline{2-19} & Textless & Mono. & 2.7 & 2.7 & 6.0 & 0.7 & 0.6 & 10.4 & 2.4 & 0.3 & 3.6 & 0.1 & 3.8 & 1.3 & 5.1 & 2.0 & 1.2 & 1.2 & 2.8 \\ \hline \hline \end{tabular}
\end{table}
Table 3: ASR-BLEU scores of S2ST models on in-domain and out-of-domain testsets.
languages, especially those from different families.
## 6 Conclusions
We developed a single multilingual model to support speech-to-speech translation into multiple target languages. We proposed vocabulary masking and multilingual vocoding to encourage knowledge transfer across languages and mitigate their interference at the same time. Empirical results demonstrated that these are useful techniques for multilingual S2ST training.
|
2302.04869
|
Reversible Vision Transformers
|
We present Reversible Vision Transformers, a memory efficient architecture
design for visual recognition. By decoupling the GPU memory requirement from
the depth of the model, Reversible Vision Transformers enable scaling up
architectures with efficient memory usage. We adapt two popular models, namely
Vision Transformer and Multiscale Vision Transformers, to reversible variants
and benchmark extensively across both model sizes and tasks of image
classification, object detection and video classification. Reversible Vision
Transformers achieve a reduced memory footprint of up to 15.5x at roughly
identical model complexity, parameters and accuracy, demonstrating the promise
of reversible vision transformers as an efficient backbone for hardware
resource limited training regimes. Finally, we find that the additional
computational burden of recomputing activations is more than overcome for
deeper models, where throughput can increase up to 2.3x over their
non-reversible counterparts. Full code and trained models are available at
https://github.com/facebookresearch/slowfast. A simpler, easy to understand and
modify version is also available at https://github.com/karttikeya/minREV
|
Karttikeya Mangalam, Haoqi Fan, Yanghao Li, Chao-Yuan Wu, Bo Xiong, Christoph Feichtenhofer, Jitendra Malik
|
2023-02-09T18:59:54Z
|
http://arxiv.org/abs/2302.04869v1
|
# Reversible Vision Transformers
###### Abstract
We present Reversible Vision Transformers, a memory efficient architecture design for visual recognition. By decoupling the GPU memory requirement from the depth of the model, Reversible Vision Transformers enable scaling up architectures with efficient memory usage. We adapt two popular models, namely Vision Transformer and Multiscale Vision Transformers, to reversible variants and benchmark extensively across both model sizes and tasks of image classification, object detection and video classification. Reversible Vision Transformers achieve a reduced memory footprint of up to **15.5\(\times\)** at roughly identical model complexity, parameters and accuracy, demonstrating the promise of reversible vision transformers as an efficient backbone for hardware resource limited training regimes. Finally, we find that the additional computational burden of recomputing activations is more than overcome for deeper models, where throughput can increase up to **2.3\(\times\)** over their non-reversible counterparts. Full code and trained models are available at [https://github.com/facebookresearch/slowfast](https://github.com/facebookresearch/slowfast). A simpler, easy to understand and modify version is also available at [https://github.com/karttikeya/minREV](https://github.com/karttikeya/minREV).
## 1 Introduction
The deep learning revolution in computer vision has rested on the bedrock of high performance hardware accelerators. Fueled by special purpose AI accelerators, the compute requirements for state-of-the-art models are growing exponentially. However, compute is only half the story. The other, and often overlooked half, is memory bandwidth bottleneck, which has been difficult to proportionally scale as compared to peak accelerator FLOPs [54]. In particular, the peak accelerator FLOPs have been increasing at a rate of \(\sim\)3.1\(\times\) every 2 years [21, 62]. However, peak bandwidth only scales at a rate of \(\sim\)1.4\(\times\) every 2 years. This disparity is exacerbated in transformers, which have been doubling in required compute roughly every three months for the past three years, resulting in a so-called memory wall [21] where both the overall model performance as well as the training speed have become tightly memory-bound [34].
As such, for bandwidth bound models, trading compute for memory through re-computation could actually be more efficient than using work-optimal algorithms [70, 71]. In the case of training neural network models, this can be achieved by re-computing activations instead of storing and then loading them from DRAM [31]. Besides training speed, scaling vision transformers in depth naturally hits the GPU memory capacity, especially in memory starved regimes such as video recognition where state-of-the-art models are often limited to batch size \(1\) due to high memory footprint of intermediate activations.
We propose Reversible Vision Transformers, a family of expressive visual recognition architectures with very favorable activation memory footprints (Figure 1) compared to their non-reversible variants. By trading-off GPU activation caching with efficient on-the-fly activation re-computation, reversible vision transformers effectively _decouple_ the activation _memory_ growth from the _depth_ of the model.
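The core idea can be illustrated with the generic two-residual-stream reversible transformation (in the style of RevNets) that such architectures build on; this sketch is illustrative and does not reproduce the exact block design used here.

```python
import torch

def reversible_forward(x1, x2, F, G):
    """Two-stream reversible block: the outputs determine the inputs exactly."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def reversible_inverse(y1, y2, F, G):
    """Recompute the inputs from the outputs on the fly instead of caching activations."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

# Sanity check with arbitrary sub-blocks standing in for the attention / MLP sub-blocks.
F_blk, G_blk = torch.nn.Linear(8, 8), torch.nn.Linear(8, 8)
x1, x2 = torch.randn(2, 8), torch.randn(2, 8)
y1, y2 = reversible_forward(x1, x2, F_blk, G_blk)
r1, r2 = reversible_inverse(y1, y2, F_blk, G_blk)
assert torch.allclose(x1, r1, atol=1e-6) and torch.allclose(x2, r2, atol=1e-6)
```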
Figure 1: **Reversible Vision Transformers** are more memory-efficient, yet powerful _reversible counterparts_ of state-of-the-art Vision Transformer (ViT) [15] and Multiscale Vision Transformer (MViT) [18] architectures with varying model complexity. Numbers in parentheses denote top-1 ImageNet performance. ResNet [28] and RegNet [58] are only shown for reference. For detailed discussion please refer to §4.1.
|
2301.02790
|
Investigations on convergence behaviour of Physics Informed Neural
Networks across spectral ranges and derivative orders
|
An important inference from Neural Tangent Kernel (NTK) theory is the
existence of spectral bias (SB), that is, low frequency components of the
target function of a fully connected Artificial Neural Network (ANN) being
learnt significantly faster than the higher frequencies during training. This
is established for Mean Square Error (MSE) loss functions with very low
learning rate parameters. Physics Informed Neural Networks (PINNs) are designed
to learn the solutions of differential equations (DE) of arbitrary orders; in
PINNs the loss functions are obtained as the residues of the conservative form
of the DEs and represent the degree of dissatisfaction of the equations. So
there has been an open question whether (a) PINNs also exhibit SB and (b) if
so, how does this bias vary across the orders of the DEs. In this work, a
series of numerical experiments are conducted on simple sinusoidal functions of
varying frequencies, compositions and equation orders to investigate these
issues. It is firmly established that under normalized conditions, PINNs do
exhibit strong spectral bias, and this increases with the order of the
differential equation.
|
Mayank Deshpande, Siddharth Agarwal, Vukka Snigdha, Arya Kumar Bhattacharya
|
2023-01-07T06:31:28Z
|
http://arxiv.org/abs/2301.02790v1
|
Investigations on convergence behaviour of Physics Informed Neural Networks across spectral ranges and derivative orders
###### Abstract
An important inference from Neural Tangent Kernel (NTK) theory is the existence of spectral bias (SB), that is, low frequency components of the target function of a fully connected Artificial Neural Network (ANN) being learnt significantly faster than the higher frequencies during training. This is established for Mean Square Error (MSE) loss functions with very low learning rate parameters. Physics Informed Neural Networks (PINNs) are designed to learn the solutions of differential equations (DE) of arbitrary orders; in PINNs the loss functions are obtained as the residues of the conservative form of the DEs and represent the degree of dissatisfaction of the equations. So there has been an open question whether (a) PINNs also exhibit SB and (b) if so, how does this bias vary across the orders of the DEs. In this work, a series of numerical experiments are conducted on simple sinusoidal functions of varying frequencies, compositions and equation orders to investigate these issues. It is firmly established that under normalized conditions, PINNs do exhibit strong spectral bias, and this increases with the order of the differential equation.
Artificial Neural Networks, Neural Tangent Kernel Theory, spectral bias, Physics Informed Neural Networks, differential equations, convergence rates.
## I Introduction
Coupled sets of Partial Differential Equations (PDEs), like the Navier-Stokes equations for Fluid Mechanics [1], Maxwell's equations for Electromagnetics [2], and analogous equations in other domains, govern the behavior and dynamic field characteristics in their respective areas. In most practical applications these domains turn out to be quite complicated and not amenable to closed form solutions. Numerical simulations of these equations, using either the Integral Equation approach [3][4] that are based fundamentally on the Green's Identities [5], or the more popular discretized form approaches [6][7], under appropriate boundary and initial conditions, have performed till date as the best-known techniques for solution of these equations to high levels of fidelity suitable to the relevant application.
Recently, the Universal Approximation capabilities of Artificial Neural Networks [8][9] have been extended to the solution of sets of PDEs under the framework of Physics Informed Neural Networks [10] and in a short period this approach has seen rapid growth both in theory and applications in multiple domains. Particularly in the context of quick turnaround times for solutions with minor changes in design and flow conditions, Physics Informed Neural Networks (PINNs) have already demonstrated their utility, see e.g. [11].
PINNs represent a specialized approach within the gamut of Artificial Neural Network (ANN) architectures and techniques, and it may be expected that the properties of ANNs will apply onto PINNs as well. Using the theory of Neural Tangent Kernel [12] (NTK) it can be mathematically deduced [13] that in the training process of a fully-connected multi-layer ANN with Mean Square Error (MSE) as the loss function, and with wide hidden layers and a very small learning rate parameter, the lower frequency components of the function being learnt will converge faster, and the higher frequencies will be acquired more slowly. That is,
the higher the frequency component, the slower it will be learnt. This phenomenon is known as Spectral Bias [14].
Now, PINNs do not use MSE and are not based on supervised learning, but instead the _residues_ of the equations in the field and at the boundaries perform the role of the loss function. So, it has been an open question if the deductions from NTK theory that map into the phenomena of spectral bias in ANNs trained with standard supervised learning approach, will be valid for PINNs.
This ambiguity on the existence of spectral bias in PINNs has led to minor speculation along the following lines:
(a) whether the solution of functions represented as differential equations will indeed exhibit spectral bias in the learning process [15]
(b) and, if (a) is true, whether the extent of variation in the rate of convergence across frequency components will increase or decrease with the order of the differential equations.
This work seeks to answer the above questions conclusively through a series of numerical experiments. In addition, it also investigates:
(c) the extent to which spectral bias varies with different activation functions, and
(d) for functions without derivatives (i.e., not expressed as differential equations) that can be solved by conventional ANNs using supervised learning and training data, the extent of difference in overall convergence rates and spectral bias between solutions obtained by the conventional approach, versus solutions obtained using a PINN framework.
The rest of this paper is organized as follows. Section II provides a brief theoretical background of PINNs. Section III provides the summary of the concept of spectral bias and its deduction from NTK theory. Section IV provides the results of numerical experimentation, where different sub-sections discuss the functions being tested and the corresponding results and observations. Conclusions are discussed in Section V.
## II Basic formulation of physics informed neural networks
We begin with a statement of the general form of sets of PDEs, as typically applicable to Fluid Mechanics and other domains with analogous sets of governing partial differential equations
\[N_{i}[u](x)=f_{i}(x),\quad\forall i\in\{1,\ldots,N_{N}\},\quad x\in D, \tag{1}\] \[C_{j}[u](x)=g_{j}(x),\quad\forall j\in\{1,\ldots,N_{C}\},\quad x\in\partial D, \tag{2}\]
where \(N_{i}[.]\) are general differential operators that are applied on functions \(u(x)\), where \(x\) is the set of independent location vectors defined over a bounded continuous domain \(D\subseteq\mathbb{R}^{d}\), \(d\in\{1,2,3,...\}\), \(i\) indexes the operators corresponding to the equations in the set, and \(u\) is a vector of dependent variables of interest representing the solution at the field points \(x\). \(C_{j}[.]\) denote constraint operators that consist of differential, linear and non-linear terms and usually cover the boundary and initial conditions, and \(j\) indexes the constraint operators. \(\partial D\) denotes a subset of the domain boundary that is needed to define the constraints. It may be noted that \(x\) can in principle denote both location and time variables, in which case (2) will extend to both boundary and initial conditions.
As an example, for 3D laminar flow described by the Navier-Stokes equations, the number of equations \(\mathrm{N}_{{}_{N}}\) is 4, of which 3 are the momentum equations and one that of continuity. The boundary conditions represented by eq. (2) depend on the flow specifics; on solid boundaries these will be the zero normal and tangential flow conditions. In this framework the components of \(u\) at any \(x\) are the 3 velocity components and the pressure (and possibly density).
We seek to approximate the solution \(u(x)\) by a neural network \(u_{net}(\mathbf{x};\,\theta)\), where \(\theta\) is the vector of parameters of the neural network, as typically represented in ML models. Indeed, if \(u_{net}(\mathbf{x};\,\theta)\) were to precisely represent the solution, then (1) & (2) could be expressed as
\[N_{i}[u_{net}(\theta)](x)-f_{i}(x)=0,\quad\forall i\in\{1,\ldots,N_{N}\},\;\;\;x\in D, \tag{3}\] \[C_{j}[u_{net}(\theta)](x)-g_{j}(x)=0,\quad\forall j\in\{1,\ldots,N_{C}\},\;\;\;x\in\partial D. \tag{4}\]
However, no ML model can generate precisely zero error, so it is reasonable to write the above equations in the form of residuals
\[r_{N}^{(i)}\left(x;u_{net}(\theta)\right)=N_{i}[u_{net}(\theta)](x)-f_{i}(x), \tag{5}\] \[r_{C}^{(j)}\left(x;u_{net}(\theta)\right)=C_{j}[u_{net}(\theta)](x)-g_{j}(x), \tag{6}\]
In (5) and (6), the ranges of \(i\) and \(j\) and the distribution of \(x\) are not restated, for brevity. Note also that the residuals are expressed as functions of the current state of the network (i.e., its parameters).
The procedure for training of the ANN, i.e. attainment of the best possible parameters \(\theta\) to enable \(\mathrm{u}_{net}(\mathbf{x};\,\theta)\) to represent (3) & (4) as closely as possible, is to minimize the net loss function
\[\mathcal{L}_{res}(\theta)=\sum_{i=1}^{\mathrm{N}_{{}_{N}}}\,\int_{\mathcal{D}} \,\lambda_{N}^{(i)}(\mathbf{x})\,\big{\|}\,r_{N}^{(i)}\big{(}\mathbf{x};u_{net }(\theta)\big{)}\big{\|}_{p}\,d\mathbf{x}+\sum_{j=1}^{\mathrm{N}_{{}_{C}}}\, \int_{\partial\mathcal{D}}\,\lambda_{\mathcal{C}}^{(I)}(\mathbf{x})\,\big{\|} \,r_{C}^{(j)}\big{(}\mathbf{x};u_{net}(\theta)\big{)}\big{\|}_{p}\,d\mathbf{x} \tag{7}\]
where \(\|\cdot\|_{p}\) denotes the p-norm, and \(\lambda_{N}^{(i)},\lambda_{C}^{(j)}\) are weight functions that control the loss interplay between the equation terms and the constraint terms, as well as across the different equation and constraint terms (represented as summations). It may be stated here that the values of these \(\lambda\)'s play a crucial role in the accuracy and the convergence of the solutions and are subjects of ongoing research. The integration symbols do not really denote integration over continuous spaces, but Monte-Carlo integration over a fairly large number of points (a cloud) selected in and on the respective volumes and surfaces.
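For a single 1-D differential equation of the kind studied in Section IV, the loss (7) reduces to a very small piece of code; the PyTorch sketch below, with scalar weights in place of the weight functions \(\lambda\), is meant only to make the construction concrete.

```python
import torch

# 1 x 100 x 100 x 1 fully connected network, as used in the experiments reported later.
net = torch.nn.Sequential(torch.nn.Linear(1, 100), torch.nn.Tanh(),
                          torch.nn.Linear(100, 100), torch.nn.Tanh(),
                          torch.nn.Linear(100, 1))

def pinn_loss(k=2, n_points=1000, lam_res=1.0, lam_bc=1.0):
    # Monte-Carlo collocation points in the domain [-pi, pi].
    x = (2 * torch.pi * torch.rand(n_points, 1) - torch.pi).requires_grad_(True)
    f = net(x)
    # Automatic differentiation gives df/dx and d^2f/dx^2.
    df = torch.autograd.grad(f, x, torch.ones_like(f), create_graph=True)[0]
    d2f = torch.autograd.grad(df, x, torch.ones_like(df), create_graph=True)[0]
    residual = d2f - torch.sin(k * x)          # residual of d^2f/dx^2 = sin(kx), cf. eq. (21)
    xb = torch.tensor([[-torch.pi], [torch.pi]])
    boundary = net(xb)                         # constraint f(-pi) = f(pi) = 0
    return lam_res * residual.pow(2).mean() + lam_bc * boundary.pow(2).mean()
```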
The PINN mechanism is illustrated in Fig. 1 below, omitting details.
## III Deduction of Spectral Bias from NTK Theory
This section first expresses the fundamental result from Neural Tangent Kernel Theory [12, 16] and from there deduces the varying convergence rates of different frequency components of the function being trained, for a conventional MSE-loss based ANN.
Let \(f(\mathbf{x},\mathbf{\theta})\) represent a scalar-valued fully connected ANN with weights \(\theta\) initialized by a Gaussian distribution. Considering a training data set \(\{\mathbf{X}_{m},\mathbf{Y}_{m}\}\) composed of \(N\) samples, one may express the inputs \(\mathbf{X}_{m}\) as \((\mathbf{x}_{i})_{i=1}^{N}\) and the corresponding outputs \(\mathbf{Y}_{m}\) as \((y_{i})_{i=1}^{N}\). If the ANN is trained using the Mean Square Error loss
function
\[\mathbf{L}\left(\theta\right)=\frac{1}{N}\sum_{i=1}^{N}\left(f\left(x_{i},\theta\right)-y_{i}\right)^{2} \tag{8}\]
with a very small value of learning rate parameter \(\eta\), then, using the derivation of Jacot et al [12, 16], one may define the Neural Tangent Kernel (NTK) operator \(\mathbf{K}\), with entries given by
\[\mathbf{K}_{i,j}=\mathbf{K}\left(\mathbf{x}_{i},\mathbf{x}_{j}\right)=\left\langle\frac{ \partial f\left(\mathbf{x}_{i},\mathbf{\theta}\right)}{\partial\theta},\frac{\partial f \left(\mathbf{x}_{j},\theta\right)}{\partial\theta}\right\rangle. \tag{9}\]
The NTK theory shows that, under the above conditions and using a gradient descent approach to training, the kernel \(\mathbf{K}\) converges to a deterministic value and does not change even if the width of the network hidden layer/s increase towards infinity.
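The kernel entries of Eq. (9) can be computed empirically for a small network by stacking per-sample parameter gradients; the following sketch does exactly that (the network width and the sampling grid are illustrative).

```python
import torch

def empirical_ntk(net, xs):
    """K[i, j] = <df(x_i)/dtheta, df(x_j)/dtheta> for a scalar-valued network, cf. Eq. (9)."""
    grads = []
    for x in xs:
        out = net(x.unsqueeze(0)).squeeze()
        g = torch.autograd.grad(out, net.parameters())
        grads.append(torch.cat([t.reshape(-1) for t in g]))
    G = torch.stack(grads)       # (N, n_params) matrix of per-sample gradients
    return G @ G.T               # (N, N) kernel matrix

net = torch.nn.Sequential(torch.nn.Linear(1, 100), torch.nn.Tanh(), torch.nn.Linear(100, 1))
xs = torch.linspace(-torch.pi, torch.pi, 32).unsqueeze(-1)   # sample points in [-pi, pi]
K = empirical_ntk(net, xs)
eigvals = torch.linalg.eigvalsh(K)   # the eigenvalues lambda_i appearing in Eqs. (12)-(16)
```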
Further, it can be shown that [17]
\[\frac{df\left(\mathbf{X}_{m},\mathbf{\theta}(t)\right)}{dt}\approx-\mathbf{K}\cdot\left(f \left(\mathbf{X}_{m},\mathbf{\theta}(t)\right)-\mathbf{Y}_{m}\right) \tag{10}\]
where \(\mathbf{\theta}(t)\) denotes the parameters of the network at iteration \(t\); the vector form of differential equation (10) may be observed. Solution of (10) may be expressed as
\[f\left(\mathbf{X}_{m},\mathbf{\theta}(t)\right)\approx\left(\mathbf{I}-e^{-\mathbf{K}t}\right)\mathbf{Y}_{m} \tag{11}\]
The kernel \(\mathbf{K}\) being square symmetric and positive semi-definite, we can express its spectral decomposition as
\[\mathbf{K}=\mathbf{Q}\Lambda\mathbf{Q}^{T} \tag{12}\]
where \(\mathbf{Q}\) is an orthogonal matrix whose \(i^{th}\) column is the eigenvector \(\mathbf{q}_{i}\) of \(\mathbf{K}\), and \(\Lambda\) is a diagonal matrix with entries \(\lambda_{i}\) as the corresponding eigenvalues. Also note that \(\mathbf{Q}^{T}=\mathbf{Q}^{-1}\), and so
\[e^{-\mathbf{K}t}=\mathbf{Q}e^{-\Lambda t}\mathbf{Q}^{T} \tag{13}\]
From (11), one may write
\[\left(f\left(\mathbf{X}_{m},\mathbf{\theta}(t)\right)-\mathbf{Y}_{m}\right)\approx-e^{-\mathbf{K}t}\mathbf{Y}_{m} \tag{14}\]
On substituting from (13), (14) yields
Figure 1: PINN training process. The NN inputs are the points cloud \(x\) at multiple time steps. The outputs are the values of the vector \(u_{net}\) or \(\hat{u}\) at each of the input points. Then _Automatic Differentiation_ is used to evaluate each of the derivative terms in all the PDEs and Constraints, which are finally converted into the different components of the Loss as in eq. (7). The residue shown is that of the NS equations.
\[\left(f(\mathbf{X}_{m},\mathbf{\theta}(t))-\mathbf{Y}_{m}\right)\approx-\mathbf{Q}e^{-\Lambda t}\mathbf{Q}^{T}\mathbf{Y}_{m}\]
which can be further written as
\[\mathbf{Q}^{T}\left(f(\mathbf{X}_{m},\mathbf{\theta}(t))-\mathbf{Y}_{m}\right)\approx-e^{-\Lambda t}\mathbf{Q}^{T}\mathbf{Y}_{m} \tag{15}\]
Equation (15) can be written in expanded form as shown below. Eq. (16) shows that the ith component of the absolute error, \(\left|\mathbf{q}_{i}^{T}\cdot\left(f(\mathbf{X}_{m},\mathbf{\theta}(t))-\mathbf{Y}_{m}\right)\right|\), will decay approximately exponentially at the rate \(\lambda_{i}\). That is, components of the target function that correspond to kernel eigenvectors with larger eigenvalues, will be learnt faster. The larger eigenvalues correspond to the larger spectral wavelengths and hence the smaller (lower) frequencies, and vice-versa, see e.g. [18] and [19]. Thus, for a fully connected ANN with MSE loss function and small learning rate parameter, in the process of training _the lower frequency components of the target function learn faster than the higher ones_.
\[\begin{bmatrix}\mathbf{q}_{1}^{T}\\ \mathbf{q}_{2}^{T}\\ \vdots\\ \mathbf{q}_{N}^{T}\end{bmatrix}\left(f(\mathbf{X}_{m},\mathbf{\theta}(t))-\mathbf{Y}_{m}\right)\approx-\begin{bmatrix}e^{-\lambda_{1}t}&&&\\ &e^{-\lambda_{2}t}&&\\ &&\ddots&\\ &&&e^{-\lambda_{N}t}\end{bmatrix}\begin{bmatrix}\mathbf{q}_{1}^{T}\\ \mathbf{q}_{2}^{T}\\ \vdots\\ \mathbf{q}_{N}^{T}\end{bmatrix}\mathbf{Y}_{m} \tag{16}\]
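The decay law in Eqs. (15)-(16) is easy to check numerically for the linearized dynamics (10): the sketch below integrates the ODE with Euler steps for a random positive semi-definite kernel and compares the error components along the eigenvectors of \(\mathbf{K}\) with \(e^{-\lambda_{i}t}\).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = rng.standard_normal((N, N))
K = A @ A.T                        # symmetric positive semi-definite kernel
Y = rng.standard_normal(N)         # targets
f = np.zeros(N)                    # network outputs, initialized at zero

lam, Q = np.linalg.eigh(K)         # K = Q diag(lam) Q^T, cf. Eq. (12)
dt, steps = 1e-3, 2000
for _ in range(steps):             # Euler integration of df/dt = -K (f - Y), Eq. (10)
    f = f - dt * K @ (f - Y)

t = dt * steps
measured = Q.T @ (f - Y)                       # error components along eigenvectors, Eq. (15)
predicted = -np.exp(-lam * t) * (Q.T @ Y)      # exponential decay at rates lambda_i, Eq. (16)
print(np.max(np.abs(measured - predicted)))    # small: each component decays at its own rate
```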
The question arises - does the above also hold for PINNs, where the loss function is different, and if yes, then how does this spectral bias vary across the orders of the differential equation that represents the target function? We perform a series of numerical experiments to address the above and related questions, and the results are reproduced and discussed in Section IV.
## IV Numerical experimentation and results
### _Equations of combined frequency terms_
A series of numerical experiments are performed on different sinusoidal functions represented as differential equations of various orders. The considered equations are represented below:
\[f(x)=\sum_{k=1}^{5}\frac{\sin(2kx)}{2k}\,,\quad x\in[-\pi,\pi] \tag{17}\] \[\frac{df(x)}{dx}=\sum_{k=1}^{5}\cos(2kx)\,,\quad x\in[-\pi,\pi] \tag{18}\] \[\frac{d^{2}f(x)}{dx^{2}}=-\sum_{k=1}^{5}(2k)\sin(2kx)\,,\quad x\in[-\pi,\pi] \tag{19}\] \[\frac{d^{3}f(x)}{dx^{3}}=-\sum_{k=1}^{5}(2k)^{2}\cos(2kx)\,,\quad x\in[-\pi,\pi] \tag{20}\]
It can be seen that eqns. (18-20) are increasing order differential equations of the function represented by (17), when the boundary conditions are set at \(f(x)=0\) at both the ends of the given range.
PINN solutions are obtained on the Nvidia Modulus framework [20] on the DGX-1 computing platform. Computations are performed for each of the equations (17-20), and while the typical plots of convergence and accuracy against the number of iterations are made, it is considered more pertinent here to show the plots of the developing solution (red curve) against the closed form solution, in blue. The plots are made for every
1000 (referred as 1 K) iterations till convergence, and only the plots at 1 K and 5 K iterations are shown, in figs. 2-7 for the eqns. (17-19). The third derivative solution did not converge; hence plots are not shown. When convergence occurs, the red and blue curves will coincide. The variation in convergence across the different derivative orders may be observed.
First, the higher frequency components (the wiggles) are captured more slowly. Second, among the three equations, the highest-order differential equation \(f_{xx}(x)\) is resolved fastest, the intermediate derivative \(f_{x}(x)\) slower, and the baseline function \(f(x)\) slowest of all.
These aspects are captured in figs. 8-11 that show frequency-magnitude plots at different iteration levels, obtained by performing FFT on the solutions displayed above. Each figure shows the 5 relevant frequencies and, at each frequency, the magnitude of the 3 solutions against that of the closed form. The varying rates of evolution of the 3 solutions can be seen.
The combined frequency functions expressed as different order differential equations (17-20) are disadvantageous for frequency variation studies as the function plots cannot precisely discriminate between frequencies, and one has to take recourse to Fourier transforms to obtain the frequency-magnitude plots. Instead, now individual functions at different order derivatives are created with only single sinusoids of different frequencies, and analyzed in the next section.
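The frequency-magnitude data behind figs. 8-11 can be obtained with a plain FFT of the sampled solution; a sketch (assuming uniform samples over \([-\pi,\pi)\)) is:

```python
import numpy as np

def frequency_magnitudes(samples, freqs=(2, 4, 6, 8, 10)):
    """Magnitudes of selected angular frequencies for a solution sampled uniformly on [-pi, pi)."""
    n = len(samples)
    spectrum = np.fft.rfft(samples)
    # The domain has length 2*pi, so FFT bin m corresponds to angular frequency m.
    return {k: 2.0 * np.abs(spectrum[k]) / n for k in freqs}

# Check on the closed-form baseline function of eq. (17): peaks of height 1/(2k) at frequencies 2k.
x = np.linspace(-np.pi, np.pi, 1024, endpoint=False)
f_exact = sum(np.sin(2 * k * x) / (2 * k) for k in range(1, 6))
print(frequency_magnitudes(f_exact))
```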
### _Functions with single frequency terms_
The functions considered are as below:
\[\frac{d^{\,2}f(x)}{dx^{\,2}}=\sin{kx}\,,\,\text{for k}=2,\,6\text{ and }10,\,\,x\in[-\pi,\pi] \tag{21}\]
which is obtained from the following function:
\[f(x)=-\frac{1}{k^{\,2}}\sin{kx}\,,\,\text{for k}=2,\,6\text{ and }10,\,\,x\in[-\pi,\pi] \tag{22}\]
\[\text{and}\,\,\,\frac{d^{\,3}f(x)}{dx^{\,3}}=\cos{kx}\,,\,\,\text{for k}=2,\,6 \text{ and }10,\,\,x\in[-\pi,\pi] \tag{23}\]
with its corresponding baseline function
\[f(x)=-\frac{1}{k^{\,3}}\sin{kx}\,,\,\text{for k}=2,\,6\text{ and }10,\,\,x\in[-\pi,\pi] \tag{24}\]
All of these functions are solved using the PINN approach within the Modulus framework. First, the results in terms of number of iterations for convergence for each of the frequency cases are presented in the table below:
Table 2 clearly shows that low frequency cases are converging faster, irrespective of derivative order. However, there is no clear pattern across derivative orders for a given frequency, and the second order differential equation converges faster than either the third order or the baseline function. To obtain a better insight into the rates of convergence across frequencies and derivative orders, figs. 12-20 plot the function values for all 9 cases, 3 derivative orders at 3 different frequencies. Again, in each case the function at 5 K iterations (red) is plotted against the closed form solution (blue).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{**Activation function**} & \multicolumn{4}{c|}{**Number of iterations at which convergence is achieved for the function, in thousands (K)**} \\ \cline{2-5} & \(f(x)\) (eq. 17) & \(f_{x}(x)\) (eq. 18) & \(f_{xx}(x)\) (eq. 19) & \(f_{xxx}(x)\) (eq. 20) \\ \hline tanh(.) & 18 & 10 & 5 & No convergence \\ \hline swish(.) & 110 & 17 & 12 & No convergence \\ \hline \end{tabular}
\end{table}
Table 1: Convergences of functions of combined sinusoids
Figure 16: PINN solution at 5 K iterations of \(f_{xx}(x)\) (eq. (21)) (red) against exact solution (blue), when frequency \(k\) is 6.
Figure 14: PINN solution at 5 K iterations of \(f_{xx}(x)\) (eq. (22)) (red) against exact solution (blue), when frequency \(k\) is 10. Note this case did not converge at all.
Figure 12: PINN solution at 5 K iterations of \(f_{xx}(x)\) (eq. (22)) (red) against exact solution (blue), when frequency \(k\) is 2.
Figure 17: PINN solution at 5 K iterations of \(f_{xx}(x)\) (eq. (21)) (red) against exact solution (blue), when frequency \(k\) is 10.
Figure 18: PINN solution at 5 K iterations of \(f_{xx}(x)\) (eq. (23)) (red) against exact solution (blue), when frequency \(k\) is 2.
Figure 13: PINN solution at 5 K iterations of \(f_{xx}(x)\) (eq. (22)) (red) against exact solution (blue), when frequency \(k\) is 6.
Figure 15: PINN solution at 5 K iterations of \(f_{xx}(x)\) (eq. (21)) (red) against exact solution (blue), when frequency \(k\) is 2. Note that this case had already converged at 2 K iterations.
Carrying forward the discussion on inter-frequency convergences across derivative orders, no clear behavioral pattern is observable up to this point. Referring back to eqns. (21-23), it can be seen that while the baseline function (22) comes with a damping coefficient of \(\nicefrac{{1}}{{k^{2}}}\) on the RHS, the other two higher-derivative equations have a coefficient of just \(1\) (i.e., neither damping nor amplification). The next set of investigations checked whether this could have any effect.
The equation solved for is
\[f(x)=-\sin kx\text{, for k = 2, 6 and 10, }x\in[-\pi,\pi] \tag{25}\]
where, compared with (22), the damping coefficient has been removed. The solutions at different frequencies with (25) are compared against solutions obtained with (22). The results are presented in Table 3 below:
The results for normalized coefficients are highlighted in bold and are possibly the most significant finding of the investigations reported in this work. Convergences are significantly improved - across all frequencies - and in fact now we see a clear pattern that is presented in Table 4:
Three trends related to PINNs are clearly observable in the above Table 4:
1. First, the more established result, namely that low frequencies are more easily resolved
2. Second, _higher the derivative, the more effort has to be made for attaining converged solution_
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{**Frequency, k**} & \multicolumn{3}{c|}{**Number of iterations at which convergence is achieved for the function, in thousands (K)**} \\ \cline{2-4} & \(f(x)\) (eq. 25) & \(f_{xx}(x)\) (eq. 21) & \(f_{xxx}(x)\) (eq. 23) \\ \hline
2 & 2 & 2 & 11 \\ \hline
6 & 13 & 18 & 81 \\ \hline
10 & 18 & 50 & No convergence \\ \hline \end{tabular}
\end{table}
Table 4: Convergences of functions of single sinusoids at different frequencies and derivative orders, activation function tanh(.), when all functions are normalized
3. The variance (spread) in convergence rates across frequencies _increases with increasing orders of the differential equations._
The last point can be clearly seen in Table 4; the ratio of the number of iterations needed for convergence between the highest and lowest frequencies is 9 for the baseline function, 25 for the 2nd derivative, and indefinitely large for the 3rd derivative.
The above finding overturns the interpretations that were made in Sec. IV.A, namely, that higher derivatives seem to converge faster in PINNs. Even at the stage of coming to this conclusion, there was a discrepancy at the third order equation (20), which did not converge. That equation has high magnitude coefficients, \(\left(2k\right)^{2}\), which became very high at the higher frequencies, thus inhibiting convergence. Also, the baseline function (17) coefficients were small. The coefficients were closer to 1 for the first and second derivatives, which exhibited good convergence. So those observations were _related more to the magnitudes of the coefficients associated with different frequency terms at different orders of the differential equations_ than to the equation orders themselves.
It is pertinent at this point to show one more figure aptly demonstrating the last significant result. Fig. 21 shows solution obtained at 5 K iterations, for frequency k = 6 on eq. (25), that is the baseline function with normalized coefficients.
Finally, we discuss observations from one more experiment, though these are on expected lines. Eq. (22), i.e., the baseline equation with damped coefficients, has been solved using PINNs as reported above. But the derivative-free equation can be naturally solved on fully-connected conventional ANNs in supervised learning mode, MSE cost function, with training data extracted from the closed form. The question is, how does convergence speed compare against the PINN solution, at different frequencies.
The results shown in Table 5 are as expected, i.e., the conventional ANNs are more than one order of magnitude faster. This implies that, if one needs to create a network for a known equation that is derivative-free and has sufficient data to solve in supervised learning mode, there is absolutely no need to use PINNs.
The architecture of all ANNs (PINNs and Conventional) considered here used a common 1 x 100 x 100 x 1 pattern, i.e., 2 hidden layers with 100 nodes each. In all cases, only one input node (\(x\)) and one output (\(y\) or \(f(x)\)). Activation function was tanh(.) in all cases except the experiments performed on swish(.). Adam optimizer was used, and learning rate started with 0.005 and gradually reduced with increasing iterations.
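In code, the common architecture and optimizer settings can be sketched as follows; the exact learning-rate schedule is an assumption (only its starting value and the fact that it is gradually reduced are stated), and torch.nn.SiLU is used here as the swish(.) activation.

```python
import torch

def build_net(activation=torch.nn.Tanh):
    # Common 1 x 100 x 100 x 1 fully connected architecture for both PINNs and conventional ANNs.
    return torch.nn.Sequential(torch.nn.Linear(1, 100), activation(),
                               torch.nn.Linear(100, 100), activation(),
                               torch.nn.Linear(100, 1))

net = build_net()                              # tanh(.); use build_net(torch.nn.SiLU) for swish(.)
optimizer = torch.optim.Adam(net.parameters(), lr=0.005)
# Learning rate gradually reduced with increasing iterations (decay schedule assumed for illustration).
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9995)
```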
Each run was online tracked for convergence of various parameters, both residuals and boundary conditions. Simultaneously the divergence between simulation result and exact solution was also tracked. In
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{**Frequency, k**} & \multicolumn{2}{c|}{**Number of iterations at which convergence is achieved for the function**} \\ \cline{2-3} & \(f(x)\) (eq. 22), for PINN & \(f(x)\) (eq. 22), for conventional ANN \\ \hline
2 & 6000 & 440 \\ \hline
6 & 246000 & 2000 \\ \hline
10 & No convergence & 3600 \\ \hline \end{tabular}
\end{table}
Table 5: Convergences of baseline function _f(x)_ of single sinusoids at different frequencies, activation function tanh(.), for PINNs against supervised learning on conventional ANNs with MSE cost function
almost all cases, even though the convergence parameters reduced to low values, it took more iterations for simulation result to match with exact solution. Only under the latter condition was the solution considered to be converged. The cases reported as "no convergence" were actually found to be diverging with increasing iterations.
Considering the large number of cases, multiple runs for repeatability were made in just a few cases. Only very minor variations were observed.
A clear observation that can be made across all tests performed, is that convergence is best when the coefficient of the forcing term at RHS is close to one. Hence, for drawing conclusions on convergence rates across frequency components and derivative orders, one needs to compare uniformly under this condition. This has been reported exactly in Table 4. There is considerable scope for analysis as to the reasons for the above observations, as well as the conclusion, i.e., higher derivatives need more effort to attain convergence. The work reported here does not get into this analysis.
## V Conclusions
With the objective of investigating if the phenomena of spectral bias exists in PINNs and its possible variation across orders of differential equations, a series of numerical experiments were conducted on simple sinusoidal functions of different frequencies, compositions and derivative orders.
The conclusions from these experiments are the following:
* The most expected one, that the low frequencies are most easily resolved
* The higher the derivative, the more effort has to be made for attaining converged solution
* As a logical corollary of the above two, that the variance (spread) in convergence rates across frequencies increases with increasing orders of the differential equations.
###### Acknowledgements.
The authors acknowledge and express their gratitude for the support received from Dr. Yang Juntao and Dr. Simon See of Nvidia, Singapore, in negotiating some of the aspects of the Modulus framework.
|
2307.06678
|
The Frobenius transform of a symmetric function
|
We define an abelian group homomorphism $\mathscr{F}$, which we call the
Frobenius transform, from the ring of symmetric functions to the ring of the
symmetric power series. The matrix entries of $\mathscr{F}$ in the Schur basis
are the restriction coefficients $r_\lambda^\mu = \dim
\operatorname{Hom}_{\mathfrak{S}_n}(V_\mu, \mathbb{S}^\lambda \mathbb{C}^n)$,
which are known to be nonnegative integers but have no known combinatorial
interpretation. The Frobenius transform satisfies the identity
$\mathscr{F}\{fg\} = \mathscr{F}\{f\} \ast \mathscr{F}\{g\}$, where $\ast$ is
the Kronecker product.
We prove for all symmetric functions $f$ that $\mathscr{F}\{f\} =
\mathscr{F}_{\mathrm{Sur}}\{f\} \cdot (1 + h_1 + h_2 + \cdots)$, where
$\mathscr{F}_{\mathrm{Sur}}\{f\}$ is a symmetric function with the same degree
and leading term as $f$. Then, we compute the matrix entries of
$\mathscr{F}_{\mathrm{Sur}}\{f\}$ in the complete homogeneous, elementary, and
power sum bases and of $\mathscr{F}^{-1}_{\mathrm{Sur}}\{f\}$ in the complete
homogeneous and elementary bases, giving combinatorial interpretations of the
coefficients where possible. In particular, the matrix entries of
$\mathscr{F}^{-1}_{\mathrm{Sur}}\{f\}$ in the elementary basis count words with
a constraint on their Lyndon factorization.
As an example application of our main results, we prove that $r_\lambda^\mu =
0$ if $|\lambda \cap \hat\mu| < 2|\hat\mu| - |\lambda|$, where $\hat\mu$ is the
partition formed by removing the first part of $\mu$. We also prove that
$r_\lambda^\mu = 0$ if the Young diagram of $\mu$ contains a square of side
length greater than $2^{\lambda_1 - 1}$, and this inequality is tight.
|
Mitchell Lee
|
2023-07-13T10:55:34Z
|
http://arxiv.org/abs/2307.06678v4
|
# The Frobenius transform of a symmetric function
###### Abstract.
We define an abelian group homomorphism \(\mathscr{F}\), which we call the _Frobenius transform_, from the ring of symmetric functions to the ring of the symmetric power series. The matrix entries of \(\mathscr{F}\) in the Schur basis are the _restriction coefficients_\(r_{\lambda}^{\mu}=\dim\operatorname{Hom}_{\mathfrak{S}_{n}}(V_{\mu}, \mathbb{S}^{\lambda}\mathbb{C}^{n})\), which are known to be nonnegative integers but have no known combinatorial interpretation. The Frobenius transform satisfies the identity \(\mathscr{F}\{fg\}=\mathscr{F}\{f\}\ast\mathscr{F}\{g\}\), where \(\ast\) is the Kronecker product.
We prove for all symmetric functions \(f\) that \(\mathscr{F}\{f\}=\mathscr{F}_{\operatorname{Sur}}\{f\}\cdot(1+h_{1}+h_{2}+\dots)\), where \(\mathscr{F}_{\operatorname{Sur}}\{f\}\) is a symmetric function with the same degree and leading term as \(f\). Then, we compute the matrix entries of \(\mathscr{F}_{\operatorname{Sur}}\) in the complete homogeneous, elementary, and power sum bases and of \(\mathscr{F}_{\operatorname{Sur}}^{-1}\) in the complete homogeneous and elementary bases, giving combinatorial interpretations of the coefficients where possible. In particular, the matrix entries of \(\mathscr{F}_{\operatorname{Sur}}^{-1}\) in the elementary basis count words with a constraint on their Lyndon factorization.
As an example application of our main results, we prove that \(r_{\lambda}^{\mu}=0\) if \(|\lambda\cap\mu^{\prime}|<2|\mu^{\prime}|-|\lambda|\), where \(\mu^{\prime}\) is the partition formed by removing the first part of \(\mu\). We also prove that \(r_{\lambda}^{\mu}=0\) if the Young diagram of \(\mu\) contains a square of side length greater than \(2^{\lambda_{1}-1}\), and this inequality is tight.
## 1. Introduction
Let \(n\geq 0\) and let \(\lambda\) be a partition with at most \(n\) parts. There is a corresponding irreducible \(GL_{n}(\mathbb{C})\)-module: the Schur module \(\mathbb{S}^{\lambda}\mathbb{C}^{n}\). Because the symmetric group \(\mathfrak{S}_{n}\) embeds in \(GL_{n}(\mathbb{C})\) by permutation matrices, one may ask: how does the restriction of \(\mathbb{S}^{\lambda}\mathbb{C}^{n}\) to \(\mathfrak{S}_{n}\) decompose into irreducible \(\mathfrak{S}_{n}\)-modules?
In other words, let \(\lambda\) and \(\mu\) be partitions and let \(n=|\mu|\). What is the value of the _restriction coefficient_
\[r_{\lambda}^{\mu}=\dim\operatorname{Hom}_{\mathfrak{S}_{n}}(V_{\mu},\mathbb{ S}^{\lambda}\mathbb{C}^{n}),\]
where \(V_{\mu}\) is the Specht module corresponding to the partition \(\mu\)? This problem, called the _restriction problem_, has held considerable recent interest [2, 7, 15, 16, 19, 21]. However, there remains no known combinatorial interpretation for \(r_{\lambda}^{\mu}\).
Let \(\Lambda\) be the ring of symmetric functions in the variables \(x_{1},x_{2},x_{3},\dots\) and let \(\overline{\Lambda}\) be the ring of symmetric power series in \(x_{1},x_{2},x_{3},\dots\). In this paper, we will consider the abelian group homomorphism \(\mathscr{F}\colon\Lambda\to\overline{\Lambda}\) defined on the basis \(\{s_{\lambda}\}\) of Schur functions by
\[\mathscr{F}\{s_{\lambda}\}=\sum_{\mu}r_{\lambda}^{\mu}s_{\mu}.\]
Equivalently, \(\mathscr{F}\{s_{\lambda}\}\) is the result of applying the Frobenius character map to the representation \(\bigoplus_{n}\mathbb{S}^{\lambda}\mathbb{C}^{n}\) of \(\bigoplus_{n}\mathbb{C}[\mathfrak{S}_{n}]\). For this reason, we call \(\mathscr{F}\) the _Frobenius transform_. It encodes all information about all the restriction coefficients.
Section 3 will cover the basic properties of the Frobenius transform, many of which are implicit in the work of Orellana and Zabrocki. For example, for any symmetric functions \(f,g\), we have \(\mathscr{F}\{fg\}=\mathscr{F}\{f\}*\mathscr{F}\{g\}\), where \(*\) is the Kronecker product of symmetric functions. Moreover, for any \(f\in\Lambda\), there exists \(\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}\in\Lambda\) with the same degree and leading term as \(f\) such that \(\mathscr{F}\{f\}=\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}\cdot(1+h_{1}+h_{2} +\cdots)\). We refer to the map \(\mathscr{F}_{\mathrm{Sur}}\colon\Lambda\to\Lambda\) as the _surjective Frobenius transform_. Because the surjective Frobenius transform preserves degree and leading term, it has an inverse, which we denote by \(\mathscr{F}_{\mathrm{Sur}}^{-1}\).
The _induced trivial character basis_\(\{\tilde{h}_{\lambda}\}_{\lambda}\) and the _irreducible character basis_\(\{\tilde{s}_{\lambda}\}_{\lambda}\), which were introduced by Orellana and Zabrocki in 2021 [19], can be defined in terms of the inverse surjective Frobenius transform. Namely, \(\tilde{h}_{\lambda}=\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{h_{\lambda}\right\}\) and \(\tilde{s}_{\lambda}=\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{(1+h_{1}+h_{2}+ \cdots)^{\perp}s_{\lambda}\right\}\), where \(f^{\perp}\colon\Lambda\to\Lambda\) denotes the operator adjoint under the Hall inner product to multiplication by \(f\).
In Section 4, we will use the Frobenius transform to study _stable restriction coefficients_; that is, the limits
\[a_{\lambda}^{\mu}=\lim_{n\to\infty}r_{\lambda}^{(n-|\mu|,\mu_{1},\ldots,\mu_{ \ell(\mu)})},\]
which exist for all \(\lambda,\mu\) by a classical result of Littlewood [9]. In 2019, Assaf and Speyer found a formula for \(a_{\lambda}^{\mu}\) and for the entries \(b_{\lambda}^{\mu}\) of the inverse matrix \([b_{\lambda}^{\mu}]=[a_{\lambda}^{\mu}]^{-1}\)[2]. We will provide an alternative proof of these formulas using the Frobenius transform. In Theorem 4.2, we will broadly summarize the known relationships between the five kinds of restriction coefficients considered in this paper.
In Section 5, we will prove the following theorem, which shows how to write \(\mathscr{F}_{\mathrm{Sur}}\colon\Lambda\to\Lambda\) as a sum of operators of the form \(fg^{\perp}\).
**Theorem 1.1**.: _Let \(f\) be a symmetric function. Then_
\[\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}=\sum_{\lambda}s_{\lambda}(s_{ \lambda}[h_{2}+h_{3}+h_{4}+\cdots])^{\perp}f,\]
_where the sum is over all partitions \(\lambda\)._
Since Theorem 1.1 involves plethysm and there is no known simple formula for the plethysm of Schur functions, it is not well-suited for general computation. We will, however, use it to prove that many restriction coefficients \(r_{\lambda}^{\mu}\) and stable restriction coefficients \(a_{\lambda}^{\mu}\) vanish.
**Theorem 1.2**.: _Let \(\lambda,\mu\) be partitions. If \(r_{\lambda}^{\mu}>0\), then \(|\lambda\cap\mu^{\prime}|\geq 2|\mu^{\prime}|-|\lambda|\), where \(\mu^{\prime}=(\mu_{2},\ldots,\mu_{\ell(\mu)})\) is the partition formed by removing the first part of \(\mu\)._
**Theorem 1.3**.: _Let \(\lambda,\mu\) be partitions. If \(a_{\lambda}^{\mu}>0\), then \(|\lambda\cap\mu|\geq 2|\mu|-|\lambda|\)._
In Section 6, we will compute \(\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}\) when \(f\) is a complete homogeneous, elementary, or power sum symmetric function. We have \(\mathscr{F}_{\mathrm{Sur}}\left\{e_{n}\right\}=e_{n}\) and \(\mathscr{F}_{\mathrm{Sur}}\left\{p_{n}\right\}=\sum_{d|n}p_{d}\) for \(n\geq 1\). More generally, with \(\mathbb{N}=\{0,1,2,\ldots\}\):
**Theorem 1.4**.: _Let \(\lambda\) be a partition and let \(\ell=\ell(\lambda)\) be its length._
1. _Then_ \[\mathscr{F}_{\operatorname{Sur}}\left\{h_{\lambda}\right\}=\sum_{M}\prod_{j\in \mathbb{N}^{\ell}}h_{M(j)}\] _where the sum is over all functions_ \(M\colon\mathbb{N}^{\ell}\to\mathbb{N}\) _such that_ \(M(0,\ldots,0)=0\) _and_ \(\sum_{j\in\mathbb{N}^{\ell}}j_{i}M(j)=\lambda_{i}\) _for_ \(i=1,\ldots,\ell\)_._
2. _Then_ \[\mathscr{F}_{\mathrm{Sur}}\left\{e_{\lambda}\right\}=\sum_{M}\prod_{j\in\{0,1\}^ {\ell}}\begin{cases}h_{M(j)}&\text{if $j_{1}+\cdots+j_{\ell}$ is even}\\ e_{M(j)}&\text{if $j_{1}+\cdots+j_{\ell}$ is odd}\end{cases}\] _where the sum is over all functions_ \(M\colon\{0,1\}^{\ell}\to\mathbb{N}\) _such that_ \(M(0,\ldots,0)=0\) _and_ \(\sum_{j\in\{0,1\}^{\ell}}j_{i}M(j)=\lambda_{i}\) _for_ \(i=1,\ldots,\ell\)_._
3. _Then_ \[\mathscr{F}_{\mathrm{Sur}}\left\{p_{\lambda}\right\}=\sum_{\pi}\prod_{U\in\pi} \left(\sum_{d|\gcd\{\lambda_{i}\colon\,i\in U\}}d^{|U|-1}p_{d}\right)\] _where the outer sum is over all partitions_ \(\pi\) _of_ \(\{1,\ldots,\ell\}\) _into nonempty sets._
A statement equivalent to parts (a) and (b) of this theorem has appeared previously in the work of Orellana and Zabrocki [17, Equation (6)].
Theorem 1.4 has the following interesting consequence. For any partition \(\mu\), denote by \(D(\mu)\) the size of the Durfee square of \(\mu\); that is, \(D(\mu)\) is the largest integer \(d\) such that \(\mu_{d}\geq d\)[1, Chapter 8].
**Theorem 1.5**.: _Let \(\mu\) be a partition and let \(k\geq 1\) be an integer. The following are equivalent:_
1. _There exists a partition_ \(\lambda\) _such that_ \(\lambda_{1}\leq k\) _and_ \(r_{\lambda}^{\mu}>0\)_._
2. \(D(\mu)\leq 2^{k-1}\)_._
In particular, \(r_{\lambda}^{\mu}=0\) if \(D(\mu)>2^{\lambda_{1}-1}\).
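The Durfee square criterion is easy to test by machine. As a minimal illustration in plain Python (the helper names here are ad hoc, not notation from the paper), one can compute \(D(\mu)\) directly from the definition and check the vanishing condition of Theorem 1.5:

```python
def durfee(mu):
    """Size of the Durfee square of a partition mu, given as a weakly decreasing tuple."""
    return sum(1 for i, part in enumerate(mu, start=1) if part >= i)

def forced_to_vanish(lam, mu):
    """True when Theorem 1.5 forces r_lam^mu = 0, i.e. when D(mu) > 2**(lam_1 - 1)."""
    return durfee(mu) > 2 ** (lam[0] - 1)

# Example: D((4, 3, 3, 1)) = 3 > 2**(2 - 1), so r_lam^mu = 0 for every lam with lam_1 = 2.
assert durfee((4, 3, 3, 1)) == 3
assert forced_to_vanish((2, 1), (4, 3, 3, 1))
```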
Finally, in Section 7, we will compute \(\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{e_{\lambda}\right\}\) and \(\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{h_{\lambda}\right\}\) (Theorem 7.7). In particular, we will prove that
\[\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{e_{\lambda}\right\}=\sum_{\mu}(-1)^{| \lambda|-|\mu|}L_{\lambda}^{\mu}e_{\mu},\]
where \(L_{\lambda}^{\mu}\) is a nonnegative integer with an explicit combinatorial interpretation involving Lyndon words (Corollary 7.9).
## Acknowledgements
The author thanks Rosa Orellana and Mike Zabrocki for helpful correspondence.
## 2. Preliminaries
Apart from the definition of the restriction coefficients \(a_{\lambda}^{\mu}\) (Definition 2.10 below), all the definitions in this section can be found in any standard reference on the theory of symmetric functions [5][13, Chapter I][22, Chapter 7].
Let \(\Lambda=\Lambda_{\mathbb{Z}}\) denote the ring of symmetric functions over \(\mathbb{Z}\) in the variables \(x_{1},x_{2},x_{3},\ldots\). For \(n\geq 0\), let \(\Lambda_{n}\) denote the subgroup of \(\Lambda\) consisting of all symmetric functions that are homogeneous of degree \(n\). Let \(\overline{\Lambda}\) denote the ring of symmetric formal power series (that is, formal sums \(\sum_{n}f_{n}\), where each \(f_{n}\in\Lambda_{n}\)). Let \(\langle\cdot,\cdot\rangle\colon\Lambda\times\overline{\Lambda}\to\mathbb{Z}\) denote the Hall inner product.
For any partition \(\lambda=(\lambda_{1},\ldots,\lambda_{\ell})\), define the _length_\(\ell(\lambda)=\ell\) and the _size_\(|\lambda|=\lambda_{1}+\cdots+\lambda_{\ell}\). Let \(m_{i}(\lambda)\) be the number of times the part \(i\) appears in \(\lambda\), and let \(z_{\lambda}=\prod_{i}i^{m_{i}(\lambda)}(m_{i}(\lambda))!\). Let \(\lambda^{T}\) denote the dual of \(\lambda\). Let \(m_{\lambda},e_{\lambda},h_{\lambda},p_{\lambda},s_{\lambda}\in\Lambda\) denote the monomial, elementary, homogeneous, power sum, and Schur symmetric functions respectively. Let \(V_{\lambda}\) denote the corresponding _Specht module_, which is
an irreducible \(\mathfrak{S}_{|\lambda|}\)-module. Let \(\chi_{\lambda}\) denote the character of \(V_{\lambda}\). Let \(\mathbb{S}^{\lambda}\) denote the corresponding _Schur functor_, which is an endofunctor of the category of vector spaces over \(\mathbb{C}\).
For any partitions \(\lambda,\mu\), define the _intersection_\(\lambda\cap\mu\) by \(\ell(\lambda\cap\mu)=\min(\ell(\lambda),\ell(\mu))\) and \((\lambda\cap\mu)_{i}=\min(\lambda_{i},\mu_{i})\). That is, it is the partition whose Young diagram is the intersection of the Young diagrams of \(\lambda\) and \(\mu\). Let us say that \(\lambda\subset\mu\) if \(\lambda\cap\mu=\lambda\).
Let \(\omega\colon\Lambda\to\Lambda\) be the ring homomorphism given by \(\omega(p_{k})=(-1)^{k-1}p_{k}\). Recall that \(\omega\) is an involution and that for all partitions \(\lambda\), we have \(\omega(h_{\lambda})=e_{\lambda}\) and \(\omega(s_{\lambda})=s_{\lambda^{T}}\).
**Definition 2.1**.: The _Lyndon symmetric function_\(L_{n}\) is given by
\[L_{n}=\frac{1}{n}\sum_{d|n}\mu(d)p_{d}^{n/d}\]
for \(n\geq 1\), where \(\mu\colon\{1,2,3,\ldots\}\to\{-1,0,1\}\) is the Möbius function.
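As a small computational aside (not part of the development that follows), the coefficients of \(L_{n}\) in the power sum basis can be tabulated straight from this definition. The sketch below is plain Python with a hand-rolled Möbius function; the dictionary key \(d\) stands for the monomial \(p_{d}^{n/d}\).

```python
from fractions import Fraction

def mobius(n):
    """Moebius function mu(n), computed by trial-division factorization."""
    result = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # n has a square factor
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def lyndon_coefficients(n):
    """Coefficients of L_n: a dict {d: c} meaning L_n = sum over d | n of c * p_d^(n/d)."""
    return {d: Fraction(mobius(d), n) for d in range(1, n + 1) if n % d == 0}

# Example: L_6 = (1/6)(p_1^6 - p_2^3 - p_3^2 + p_6).
assert lyndon_coefficients(6) == {1: Fraction(1, 6), 2: Fraction(-1, 6),
                                  3: Fraction(-1, 6), 6: Fraction(1, 6)}
```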
**Definition 2.2**.: Let \(f\in\overline{\Lambda}\). The _skewing operator_\(f^{\perp}\colon\Lambda\to\Lambda\) is the adjoint to multiplication by \(f\) under the Hall inner product:
\[\langle g,f^{\perp}h\rangle=\langle fg,h\rangle.\]
**Definition 2.3**.: Let \(f\in\overline{\Lambda}\). We say that the Schur function \(s_{\lambda}\)_appears_ in \(f\) if \(\langle s_{\lambda},f\rangle\neq 0\). We say that \(f\) is _Schur positive_ if \(\langle s_{\lambda},f\rangle\geq 0\) for all partitions \(\lambda\).
Let \(C_{n}\) denote the space of all \(\mathbb{C}\)-valued class functions on \(\mathfrak{S}_{n}\). Let \(R_{n}\) denote the additive group of all virtual characters on \(\mathfrak{S}_{n}\). In other words, \(R_{n}\) is the subgroup of \(C_{n}\) generated by the irreducible characters \(\chi_{\lambda}\).
**Definition 2.4**.: The \(n\)th _Frobenius character map_ is the map \(\operatorname{ch}_{n}\colon R_{n}\to\Lambda_{n}\) defined by
\[\operatorname{ch}_{n}(\chi)=\frac{1}{n!}\sum_{w\in\mathfrak{S}_{n}}\chi(w)p_{ c(w)},\]
where \(c(w)\) denotes the cycle type of \(w\).
It is well-known that \(\operatorname{ch}_{n}\) is an isomorphism and that \(\operatorname{ch}_{n}(\chi_{\lambda})=s_{\lambda}\) for all \(\lambda\) with \(|\lambda|=n\).
**Definition 2.5**.: The _Kronecker product_ is the unique bilinear operator \(*\colon\Lambda\times\Lambda\to\Lambda\) satisfying \(p_{\lambda}*p_{\mu}=\delta_{\lambda\mu}z_{\lambda}p_{\lambda}\) for all \(\lambda,\mu\). It extends to a bilinear operator \(*\colon\overline{\Lambda}\times\overline{\Lambda}\to\overline{\Lambda}\) by continuity.
The Frobenius character map and the Kronecker product are related in the following way: for any \(\chi_{1},\chi_{2}\in R_{n}\), we have \(\operatorname{ch}(\chi_{1}\chi_{2})=\operatorname{ch}(\chi_{1})*\operatorname {ch}(\chi_{2})\).
For any \(f,g\in\overline{\Lambda}\), let \(f[g]\) denote the plethysm of \(f\) by \(g\). This is well-defined as long as \(f\in\Lambda\) or \(g\) has no constant term.
**Proposition 2.6** (Plethystic Addition Formula, [10, Section 3.2]).: _Let \(\lambda\) be a partition and let \(f,g\in\overline{\Lambda}\). Then_
\[s_{\lambda}[f+g]=\sum_{\mu}s_{\lambda/\mu}[f]s_{\mu}[g],\]
_where the sum is over all partitions \(\mu\)._
**Definition 2.7**.: Let \(t\) be a variable. Let
\[H(t) =\sum_{n}h_{n}t^{n}=\prod_{i}\frac{1}{1-x_{i}t}\ =\exp\left(\sum_{k}\frac{p_{k} }{k}t^{k}\right)\qquad\qquad\in\Lambda[\![t]\!]\] \[E(t) =\sum_{n}e_{n}t^{n}=\prod_{i}(1+x_{i}t)=\exp\left(\sum_{k}\frac{p_ {k}}{k}(-1)^{k-1}t^{k}\right)\in\Lambda[\![t]\!].\]
Let \(H=H(1)=1+h_{1}+h_{2}+\cdots\) and let \(H_{+}=h_{1}+h_{2}+h_{3}+\cdots=H-1\).
It is clear that \(E(t)=\frac{1}{H(-t)}\).
**Definition 2.8**.: Let \(\lambda,\mu,\nu\) be partitions. The _Littlewood-Richardson coefficient_\(c_{\lambda\mu}^{\nu}\) is the Hall inner product \(\langle s_{\nu},s_{\lambda}s_{\mu}\rangle\).
**Definition 2.9**.: Let \(\lambda,\mu,\nu\) be partitions. Define \(g_{\lambda\mu\nu}=\langle s_{\nu},s_{\lambda}*s_{\mu}\rangle\).
**Definition 2.10**.: Let \(\lambda,\mu\) be partitions and let \(n=|\mu|\). Then, the Schur module \(\mathbb{S}^{\lambda}\mathbb{C}^{n}\) can be considered as an \(\mathfrak{S}_{n}\)-module, with \(\mathfrak{S}_{n}\) acting on \(\mathbb{C}^{n}\) by permutation matrices. The _restriction coefficient_
\[r_{\lambda}^{\mu}=\dim\operatorname{Hom}_{\mathfrak{S}_{n}}(V_{\mu},\mathbb{ S}^{\lambda}\mathbb{C}^{n})\]
is the multiplicity of the Specht module \(V_{\mu}\) in \(\mathbb{S}^{\lambda}\mathbb{C}^{n}\).
## 3. The Frobenius transform: definition and basic properties
Recall from the introduction that it is a long-standing open problem to find a combinatorial interpretation of \(r_{\lambda}^{\mu}\). As a potential way to approach this problem, we now define the Frobenius transform, which is the primary object of study in this paper.
**Definition 3.1**.: The _Frobenius transform_ is the abelian group homomorphism \(\mathscr{F}\colon\Lambda\to\overline{\Lambda}\) defined on the basis \(\{s_{\lambda}\}\) by
\[\mathscr{F}\{s_{\lambda}\}=\sum_{\mu}r_{\lambda}^{\mu}s_{\mu},\]
where the sum is over all partitions \(\mu\).
_Remark 3.2_.: By a classical result of Littlewood [9], we have
\[r_{\lambda}^{\mu}=\langle s_{\lambda},s_{\mu}[H]\rangle.\]
Hence, \(\mathscr{F}\) is adjoint to plethysm by \(H\) under the Hall inner product.
Here is the reason for calling \(\mathscr{F}\) the Frobenius transform. Let \(n\geq 0\) and let \(\lambda\) be any partition. Then \(\mathbb{S}^{\lambda}\mathbb{C}^{n}\), considered as an \(\mathfrak{S}_{n}\)-module, can be expressed as a direct sum of Specht modules:
\[\bigoplus_{|\mu|=n}r_{\lambda}^{\mu}V_{\mu}=\mathbb{S}^{\lambda}\mathbb{C}^{n}.\]
Taking the character of both sides and applying the Frobenius character map, we obtain
\[\sum_{|\mu|=n}r_{\lambda}^{\mu}s_{\mu}=\operatorname{ch}_{n}(\chi_{\mathbb{S}^ {\lambda}\mathbb{C}^{n}}). \tag{1}\]
In other words, the degree \(n\) part of \(\mathscr{F}\{s_{\lambda}\}\) is equal to the Frobenius character of \(\mathbb{S}^{\lambda}\mathbb{C}^{n}\).
**Example 3.3**.: Let \(r\geq 0\). We will compute \(\mathscr{F}\{e_{r}\}\). First, \(e_{r}=s_{\lambda}\), where \(\lambda=(1^{r})\). Hence, for any \(n\), the degree \(n\) part of \(\mathscr{F}\{e_{r}\}\) is the Frobenius character of
\[\mathbb{S}^{\lambda}\mathbb{C}^{n}=\wedge^{r}\mathbb{C}^{n}=\operatorname{ Ind}_{\mathfrak{S}_{r}\times\mathfrak{S}_{n-r}}^{\mathfrak{S}_{n}}(V_{(1^{r})} \otimes V_{(n-r)}),\]
considered as an \(\mathfrak{S}_{n}\)-module. Thus, it is equal to \(e_{r}h_{n-r}\)[22, Proposition 7.18.2]. Taking the sum over all \(n\) yields \(\mathscr{F}\{e_{r}\}=e_{r}\cdot H\).
### The Frobenius transform and representations of combinatorial categories
The purpose of this subsection is to provide an alternate perspective on the Frobenius transform. This subsection is not essential to the proofs of our main results, so the reader may skip it. For a more complete introduction to the representation theory of categories, see Wiltshire-Gordon's 2016 PhD thesis [23].
In what follows, let \(\operatorname{Vect}_{\mathbb{C}}\) be the category whose objects are vector spaces over \(\mathbb{C}\) and whose morphisms are linear transformations. (The ground field \(\mathbb{C}\) can be replaced by any algebraically closed field of characteristic \(0\).)
**Definition 3.4**.: Let \(\mathcal{C}\) be a category. A \(\mathcal{C}\)_-module_ (over the ground field \(\mathbb{C}\)) is a functor \(M[\bullet]\colon\mathcal{C}\to\operatorname{Vect}_{\mathbb{C}}\). The _category of \(\mathcal{C}\)-modules_ is the functor category \(\operatorname{Mod}^{\mathcal{C}}=(\operatorname{Vect}_{\mathbb{C}})^{ \mathcal{C}}\).
We say that a \(\mathcal{C}\)-module \(M\) is _finite-dimensional_ if \(M[U]\) is finite-dimensional for all \(U\in\operatorname{Ob}(\mathcal{C})\).
If \(M\) and \(N\) are \(\mathcal{C}\)-modules, we may form the _direct sum_\(M\oplus N\) by \((M\oplus N)[U]=M[U]\oplus N[U]\).
Let \(\operatorname{Bij}\) be the category whose objects are finite sets and whose morphisms are bijections and let \(M\) be a \(\operatorname{Bij}\)-module. (Previous authors have referred to Bij-modules as _linear species_ or _tensor species_[8, 14].) For \(n\geq 0\), denote by \(M[n]\) the vector space \(M[[n]]=M[\{1,\ldots,n\}]\), which is an \(\mathfrak{S}_{n}\)-module. Then \(M\) is uniquely determined, up to natural isomorphism, by the sequence \(M[0],M[1],M[2],\ldots\) of symmetric group modules; namely,
\[M[U]=\mathbb{C}\operatorname{Bij}([n],U)\otimes_{\mathfrak{S}_{n}}M[n] \tag{2}\]
for all finite sets \(U\) with \(|U|=n\). Additionally, if \(M\) is finite-dimensional, then we may form the series
\[\operatorname{ch}(M)=\sum_{n}\operatorname{ch}_{n}(\chi_{M[n]})\in\overline{\Lambda}\]
which we call the _Frobenius character of \(M\)_. It also uniquely determines \(M\) up to natural isomorphism.
Not every symmetric power series can be written in the form \(\operatorname{ch}(M)\), where \(M\) is a finite-dimensional \(\operatorname{Bij}\)-module. (Every symmetric power series can be written as the Frobenius character of a _virtual \(\operatorname{Bij}\)-module_, but we will not define virtual \(\operatorname{Bij}\)-modules in this article.) However, it is still helpful to think of \(\operatorname{ch}\) as a partial correspondence between (isomorphism classes of) finite-dimensional \(\operatorname{Bij}\)-modules and symmetric power series. Many concepts from the theory of symmetric power series have analogous concepts in the theory of \(\operatorname{Bij}\)-modules. For example, in Definition 3.10 below, we will define the _product_ \(M\cdot N\) of two \(\operatorname{Bij}\)-modules \(M,N\). Via the Frobenius character, this is analogous to the product of symmetric power series in the sense that \(\operatorname{ch}(M\cdot N)=\operatorname{ch}(M)\operatorname{ch}(N)\) (Proposition 3.11).
In what follows, we will find the construction in the theory of \(\operatorname{Bij}\)-modules which is analogous to the Frobenius transform. More precisely, let \(M\) be a \(\operatorname{Bij}\)-module
such that \(M[n]=0\) for all but finitely many \(n\). Then \(\operatorname{ch}(M)\) is in fact a symmetric function, so its Frobenius transform \(\mathscr{F}\{\operatorname{ch}(M)\}\) is well-defined. We will construct a Bij-module whose Frobenius character is \(\mathscr{F}\{\operatorname{ch}(M)\}\). Before we do, we need the following definitions.
**Definition 3.5** ([23, Definition 2.6.1]).: Let \(F\colon\mathcal{C}\to\mathcal{D}\) be a functor. The _pullback functor_\(F^{*}\colon\operatorname{Mod}^{\mathcal{D}}\to\operatorname{Mod}^{ \mathcal{C}}\) is given by \(F^{*}M=M\circ F\).
**Definition 3.6** ([23, Proposition 2.6.2]).: Let \(F\colon\mathcal{C}\to\mathcal{D}\) be a functor. The _left Kan extension functor_\(F_{!}\colon\operatorname{Mod}^{\mathcal{C}}\to\operatorname{Mod}^{ \mathcal{D}}\) is the left adjoint to the pullback \(F^{*}\), if it exists:
\[\operatorname{Hom}_{\operatorname{Mod}^{\mathcal{D}}}(F_{!}M,N)= \operatorname{Hom}_{\operatorname{Mod}^{\mathcal{C}}}(M,F^{*}N).\]
Let \(\operatorname{Fun}\) be the category whose objects are finite sets and whose morphisms are functions. Clearly, \(\operatorname{Bij}\) is a subcategory of \(\operatorname{Fun}\); let \(\iota\colon\operatorname{Bij}\to\operatorname{Fun}\) be the inclusion functor.
**Proposition 3.7**.: _The left Kan extension \(\iota_{!}\colon\operatorname{Mod}^{\operatorname{Bij}}\to\operatorname{Mod}^{ \operatorname{Fun}}\) exists. Moreover, let \(M\) be a \(\operatorname{Bij}\)-module. Then the left Kan extension \(\iota_{!}M\) is given on finite sets \(U\) by_
\[(\iota_{!}M)[U]=\bigoplus_{n}\mathbb{C}U^{n}\otimes_{\mathfrak{S}_{n}}M[n].\]
_Here \(\mathbb{C}U^{n}\otimes_{\mathfrak{S}_{n}}M[n]\) denotes the quotient of the tensor product \(\mathbb{C}U^{n}\otimes M[n]\) by the relation that \(a\otimes wb\sim aw\otimes b\) for all \(a\in\mathbb{C}U^{n}\), \(b\in M[n]\), and \(w\in\mathfrak{S}_{n}\)._
Proof.: This follows from [23, Proposition 2.6.7].
_Remark 3.8_.: Joyal refers to \(\iota_{!}M\colon\operatorname{Fun}\to\operatorname{Vect}_{\mathbb{C}}\) as the _analytic functor_ corresponding to the tensor species \(M\)[8, Definition 4.2].
We are now ready to show that \(\iota^{*}\iota_{!}\) is the construction analogous to the Frobenius transform.
**Proposition 3.9**.: _Let \(M\) be a finite-dimensional \(\operatorname{Bij}\)-module such that \(M[n]=0\) for all but finitely many \(n\). Then \(\iota^{*}\iota_{!}M\) is finite-dimensional and \(\mathscr{F}\{\operatorname{ch}(M)\}=\operatorname{ch}(\iota^{*}\iota_{!}M)\)._
Proof.: For any partition \(\lambda\), define the \(\operatorname{Bij}\)-module \(M_{\lambda}\) by
\[(M_{\lambda})[n]=\begin{cases}V_{\lambda}&\text{if }n=|\lambda|\\ 0&\text{otherwise}\end{cases},\]
extending to all of \(\operatorname{Bij}\) using (2).
We have that \(M\) can be written as a direct sum of the \(M_{\lambda}\). Since \(\iota^{*}\) and \(\iota_{!}\) preserve direct sums, it is enough to prove the proposition for \(M=M_{\lambda}\). Then \(\operatorname{ch}(M)=\operatorname{ch}_{n}(\chi_{\lambda})=s_{\lambda}\).
On the other hand, by Proposition 3.7, we have for \(m\geq 0\) that
\[(\iota^{*}\iota_{!}M)[m] =\bigoplus_{n}\mathbb{C}[m]^{n}\otimes_{\mathfrak{S}_{n}}M[n]\] \[=\bigoplus_{n}(\mathbb{C}^{m})^{\otimes n}\otimes_{\mathfrak{S}_{ n}}M[n]\] \[=(\mathbb{C}^{m})^{\otimes|\lambda|}\otimes_{\mathfrak{S}_{| \lambda|}}V_{\lambda}.\]
By Schur-Weyl duality, this is isomorphic as a \(GL_{m}\)-module to \(\mathbb{S}^{\lambda}\mathbb{C}^{m}\). Hence, as an \(\mathfrak{S}_{m}\)-module, it decomposes into irreducibles as
\[(\iota^{*}\iota_{!}M)[m]=\bigoplus_{|\mu|=m}r_{\lambda}^{\mu}V_{\mu}.\]
It follows that
\[\operatorname{ch}(\iota^{*}\iota_{!}M)=\sum_{\mu}r_{\lambda}^{\mu}s_{\mu}= \mathscr{F}\{s_{\lambda}\}=\mathscr{F}\{\operatorname{ch}(M)\}\]
as desired.
We will occasionally make use of the following definition in later remarks.
**Definition 3.10** ([8, Section 4.1]).: Let \(M,N\) be Bij-modules. Define the _product_\(M\cdot N\) to be the Bij-module given by
\[(M\cdot N)[U]=\bigoplus_{\begin{subarray}{c}U_{1}\cup U_{2}=U\\ U_{1}\cap U_{2}=\emptyset\end{subarray}}M[U_{1}]\otimes N[U_{2}].\]
**Proposition 3.11** ([14, Proposition 2.1]).: _Let \(M\), \(N\) be finite-dimensional Bij-modules. Then_
\[\operatorname{ch}(M\cdot N)=\operatorname{ch}(M)\operatorname{ch}(N).\]
### The Frobenius transform and evaluation at roots of unity
Let us now describe the expansion of \(\mathscr{F}\{s_{\lambda}\}\) in the power sum basis. In order to simplify the description, we will do it one degree at a time. In this subsection, we will write \(f_{n}\) to mean the degree \(n\) part of \(f\).
By (1) and Definition 2.5, we have
\[(\mathscr{F}\{s_{\lambda}\})_{n}=\operatorname{ch}_{n}(\chi_{\mathbb{S}^{ \lambda}\mathbb{C}^{n}})=\frac{1}{n!}\sum_{w\in\mathfrak{S}_{n}}\chi_{\mathbb{ S}^{\lambda}\mathbb{C}^{n}}(w)p_{c(w)}. \tag{3}\]
It is well-known [5, Section 8.3] that the character of \(\mathbb{S}^{\lambda}\mathbb{C}^{n}\) as a \(GL_{n}\)-module is the Schur function \(s_{\lambda}\) itself. In other words, if \(g\in GL_{n}(\mathbb{C})\) has eigenvalues \(x_{1},\ldots,x_{n}\), then \(\chi_{\mathbb{S}^{\lambda}\mathbb{C}^{n}}(g)\) is equal to the evaluation \(s_{\lambda}(x_{1},\ldots,x_{n})\).
For \(w\in\mathfrak{S}_{n}\), we have \(\chi_{\mathbb{S}^{\lambda}\mathbb{C}^{n}}(w)=s_{\lambda}(x_{1},\ldots,x_{n})\), where \(x_{1},\ldots,x_{n}\) are the eigenvalues of the permutation matrix \(P_{w}\). Let \(\mu=(\mu_{1},\ldots,\mu_{\ell})\) be the cycle type of \(w\). Then the eigenvalues of \(P_{w}\) are the roots of unity
\[1,\exp\left(\frac{2\pi i}{\mu_{1}}\right),\exp\left(\frac{4\pi i }{\mu_{1}}\right),\ldots,\exp\left(\frac{2(\mu_{1}-1)\pi i}{\mu_{1}}\right),\] \[\vdots\] \[1,\exp\left(\frac{2\pi i}{\mu_{\ell}}\right),\exp\left(\frac{4\pi i }{\mu_{\ell}}\right),\ldots,\exp\left(\frac{2(\mu_{\ell}-1)\pi i}{\mu_{\ell}} \right).\]
Following Orellana and Zabrocki [18, 19], let \(\Xi_{\mu}\in\mathbb{C}^{n}\) denote this sequence. We have that \(\chi_{\mathbb{S}^{\lambda}\mathbb{C}^{n}}(w)\) is the result of evaluating \(s_{\lambda}\) at these roots of unity:
\[\chi_{\mathbb{S}^{\lambda}\mathbb{C}^{n}}(w)=s_{\lambda}\left(\Xi_{\mu}\right).\]
Now, let us group the terms on the right-hand side of (3) according to the cycle type \(\mu=c(w)\). The number of permutations \(w\in\mathfrak{S}_{n}\) with cycle type \(\mu\) is \(\frac{n!}{z_{\mu}}\), so
\[(\mathscr{F}\{s_{\lambda}\})_{n} =\frac{1}{n!}\sum_{w\in\mathfrak{S}_{n}}\chi_{\mathbb{S}^{\lambda}\mathbb{C}^{n}}(w)p_{c(w)}\] \[=\frac{1}{n!}\sum_{|\mu|=n}\frac{n!}{z_{\mu}}s_{\lambda}(\Xi_{\mu})p_{\mu}\] \[=\sum_{|\mu|=n}s_{\lambda}(\Xi_{\mu})\frac{p_{\mu}}{z_{\mu}}.\]
Taking the sum over all \(n\), we obtain
\[\mathscr{F}\{s_{\lambda}\}=\sum_{\mu}s_{\lambda}(\Xi_{\mu})\frac{p_{\mu}}{z_{ \mu}}.\]
Finally, by linearity, we may extend this result to any \(f\in\Lambda\). We have proved the following.
**Proposition 3.12**.: _Let \(f\in\Lambda\). Then_
\[\mathscr{F}\{f\}=\sum_{\mu}f(\Xi_{\mu})\frac{p_{\mu}}{z_{\mu}}.\]
In the notation of Orellana and Zabrocki [18], this proposition can be written as
\[\mathscr{F}\{f\}=\phi_{0}(f)+\phi_{1}(f)+\phi_{2}(f)+\cdots.\]
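Proposition 3.12 also gives a practical way to compute \(\mathscr{F}\{f\}\) degree by degree when \(f\) is given in the power sum basis, using the evaluation \(p_{k}(\Xi_{\mu})=\sum_{d\mid k}dm_{d}(\mu)\) that is worked out in Section 6. The following is a minimal Python sketch (the function names are ad hoc, not notation from the paper); a symmetric function is represented as a dictionary mapping partitions, written as weakly decreasing tuples, to coefficients.

```python
from collections import Counter
from fractions import Fraction
from math import factorial, prod

def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def z(mu):
    """z_mu = prod_i i^{m_i(mu)} * m_i(mu)!."""
    return prod(i ** m * factorial(m) for i, m in Counter(mu).items())

def p_at_Xi(k, mu):
    """p_k evaluated at Xi_mu, i.e. sum_{d | k} d*m_d(mu): add up the parts of mu dividing k."""
    return sum(part for part in mu if k % part == 0)

def frobenius_transform_degree(f, n):
    """Degree-n part of F{f} in the power sum basis, via Proposition 3.12.

    f is a dict {partition tuple: coefficient} representing f = sum_lam f[lam] * p_lam.
    Returns a dict {mu: coefficient of p_mu} over all partitions mu of n.
    """
    out = {}
    for mu in partitions(n):
        value = sum(c * prod(p_at_Xi(k, mu) for k in lam) for lam, c in f.items())
        out[mu] = Fraction(value, z(mu))
    return out

# Example: the degree-2 part of F{p_1} is p_{1,1} = p_1^2 (equivalently s_2 + s_{1,1}).
assert frobenius_transform_degree({(1,): 1}, 2) == {(2,): 0, (1, 1): 1}
```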
### The Frobenius transform and the Kronecker product
The Frobenius transform relates the ordinary product of symmetric functions to the Kronecker product in the following way.
**Proposition 3.13** (cf. [18, Section 2.3]).: _Let \(f,g\in\Lambda\). Then \(\mathscr{F}\{fg\}=\mathscr{F}\{f\}*\mathscr{F}\{g\}\)._
We provide two different proofs of Proposition 3.13: a category-theoretic proof using Proposition 3.9 and a direct computational proof using Proposition 3.12.
First proof.: Because both sides of the desired equation are bilinear, we may assume that there exist Bij-modules \(M,N\) such that \(\operatorname{ch}(M)=f\) and \(\operatorname{ch}(N)=g\). By [14, Proposition 2.1], we have \(\operatorname{ch}(M\cdot N)=\operatorname{ch}(M)\operatorname{ch}(N)=fg\), where \(M\cdot N\) is the product as defined in Definition 3.10.
By [8, Equation 2.1(ii)], we have \(\iota^{*}\iota_{!}(M\cdot N)=(\iota^{*}\iota_{!}M)\otimes_{\mathbb{C}}(\iota ^{*}\iota_{!}N)\), where \(\otimes_{\mathbb{C}}\) denotes the object-wise tensor product of Bij-modules. Applying \(\operatorname{ch}\) to both sides, we obtain \(\mathscr{F}\{fg\}=\mathscr{F}\{f\}*\mathscr{F}\{g\}\), as desired.
Second proof.: By Proposition 3.12, we have
\[\mathscr{F}\{f\}*\mathscr{F}\{g\} =\left(\sum_{\mu}f(\Xi_{\mu})\frac{p_{\mu}}{z_{\mu}}\right)* \left(\sum_{\mu}g(\Xi_{\mu})\frac{p_{\mu}}{z_{\mu}}\right)\] \[=\sum_{\mu}f(\Xi_{\mu})g(\Xi_{\mu})\frac{p_{\mu}}{z_{\mu}}\] \[=\mathscr{F}\{fg\}\]
as desired.
**Corollary 3.14**.: _Let \(\lambda,\mu,\nu\) be partitions. Then_
\[\sum_{\nu^{\prime}}r_{\nu^{\prime}}^{\nu}c_{\lambda\mu}^{\nu^{\prime}}=\sum_{ \lambda^{\prime},\mu^{\prime}}r_{\lambda}^{\lambda^{\prime}}r_{\mu}^{\mu^{ \prime}}g_{\lambda^{\prime}\mu^{\prime}\nu}.\]
Proof.: In Proposition 3.13, take \(f=s_{\lambda}\) and \(g=s_{\mu}\). Then take the Hall inner product of both sides with \(s_{\nu}\).
### The surjective Frobenius transform
**Proposition 3.15**.: _Let \(f\in\Lambda\). Then there exists a symmetric function \(\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}\in\Lambda\) such that \(\mathscr{F}\{f\}=\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}\cdot H\). Moreover, \(\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}\) has the same degree and leading term as \(f\)._
For example, in Example 3.3 we showed that \(\mathscr{F}\{e_{r}\}=e_{r}\cdot H\). Thus, \(\mathscr{F}_{\mathrm{Sur}}\left\{e_{r}\right\}=e_{r}\). (Note, however, that in general, \(\mathscr{F}_{\mathrm{Sur}}\) does not preserve the property of being homogeneous.) Again, we provide two separate proofs of this proposition: a category-theoretic proof and a direct computational proof.
First proof.: Because both sides of the desired equation are linear in \(f\), we may assume that there exists a Bij-module \(M\) such that \(\mathrm{ch}(M)=f\).
Let \(\mathrm{Sur}\) be the category whose objects are finite sets and whose morphisms are surjections, and let \(\kappa\colon\mathrm{Bij}\to\mathrm{Sur}\) be the inclusion functor. We claim that the choice \(\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}=\mathrm{ch}(\kappa^{*}\kappa_{!}M)\) satisfies the desired properties. For this, we will show that there is a natural isomorphism
\[\iota^{*}\iota_{!}M=\kappa^{*}\kappa_{!}M\cdot\mathbb{C}E, \tag{4}\]
where \(\iota\) and \(\cdot\) are defined as in Section 3.1 and \(\mathbb{C}E\) is the Bij-module given by \((\mathbb{C}E)[U]=\mathbb{C}\) for all finite sets \(U\).
By Proposition 3.7, we have
\[(\iota^{*}\iota_{!}M)[U]=\bigoplus_{n}\mathbb{C}U^{n}\otimes_{\mathfrak{S}_{n }}M[n]. \tag{5}\]
Think of \(U^{n}\) as the set of all functions \([n]\to U\). By grouping those functions by their image \(V\), we may write \(U^{n}\) as a disjoint union:
\[U^{n}=\sum_{V\subset U}\mathrm{Sur}([n],V).\]
Substituting into (5), we obtain
\[(\iota^{*}\iota_{!}M)[U] =\bigoplus_{n}\mathbb{C}\left(\sum_{V\subset U}\mathrm{Sur}([n], V)\right)\otimes_{\mathfrak{S}_{n}}M[n]\] \[=\bigoplus_{n}\left(\bigoplus_{V\subset U}\mathbb{C}\,\mathrm{Sur }([n],V)\right)\otimes_{\mathfrak{S}_{n}}M[n]\] \[=\bigoplus_{V\subset U}\bigoplus_{n}\mathbb{C}\,\mathrm{Sur}([n], V)\otimes_{\mathfrak{S}_{n}}M[n]\] \[=\bigoplus_{V\subset U}(\kappa^{*}\kappa_{!}M)[V]\]
where the last step follows from [23, Proposition 2.6.7]. Recognizing the latter as \((\kappa^{*}\kappa_{!}M\cdot\mathbb{C}E)[U]\), we have shown (4). Applying \(\mathrm{ch}\) to both sides and using Propositions 3.9 and 3.11 yields \(\mathscr{F}\{f\}=\mathrm{ch}(\kappa^{*}\kappa_{!}M)\cdot H\).
It remains to show that \(\operatorname{ch}(\kappa^{*}\kappa_{!}M)\) has the same degree and leading term as \(f\). For this, we will prove that \((\kappa^{*}\kappa_{!}M)[m]\) and \(M[m]\) are isomorphic as \(\mathfrak{S}_{m}\)-modules for all \(m\geq\deg f\). Consider the natural isomorphism
\[(\kappa^{*}\kappa_{!}M)[U]=\bigoplus_{n}\mathbb{C}\operatorname{Sur}([n],U) \otimes_{\mathfrak{S}_{n}}M[n]\]
with \(U=[m]\). In each summand, the factor \(\mathbb{C}\operatorname{Sur}([n],[m])\) vanishes when \(n<m\) and the factor \(M[n]\) vanishes when \(n>\deg f\). If \(m\geq\deg f\), this means that every term vanishes except possibly the term \(n=m\). Hence,
\[(\kappa^{*}\kappa_{!}M)[m] =\mathbb{C}\operatorname{Sur}([m],[m])\otimes_{\mathfrak{S}_{m} }M[m]\] \[=\mathbb{C}\mathfrak{S}_{m}\otimes_{\mathfrak{S}_{m}}M[m]\] \[=M[m]\]
as desired.
Second proof.: We claim that the choice
\[\mathscr{F}_{\operatorname{Sur}}\left\{f\right\}=\sum_{\mu}\langle f,s_{\mu} [H_{+}]\rangle s_{\mu}\]
satisfies the desired properties. In other words, we may take \(\mathscr{F}_{\operatorname{Sur}}\) to be the adjoint to plethysm by \(H_{+}\) under the Hall inner product.
By Remark 3.2, we have
\[\mathscr{F}\{f\} =\sum_{\lambda}\langle f,s_{\lambda}[H]\rangle s_{\lambda}\] \[=\sum_{\lambda}\langle f,s_{\lambda}[H_{+}+1]\rangle s_{\lambda}.\]
By the plethystic addition formula (Proposition 2.6), we have
\[\mathscr{F}\{f\}=\sum_{\lambda,\mu}\langle f,s_{\mu}[H_{+}]s_{\lambda/\mu}[1] \rangle s_{\lambda}.\]
We have
\[s_{\lambda/\mu}[1]=s_{\lambda/\mu}(1,0,0,\ldots)=\begin{cases}1&\text{if $ \lambda/\mu$ is a horizontal strip}\\ 0&\text{otherwise}\end{cases},\]
so
\[\mathscr{F}\{f\} =\sum_{\begin{subarray}{c}\lambda,\mu\\ \lambda/\mu\text{ h. strip}\end{subarray}}\langle f,s_{\mu}[H_{+}]\rangle s_{\lambda}\] \[=\sum_{\mu}\langle f,s_{\mu}[H_{+}]\rangle\left(\sum_{ \begin{subarray}{c}\lambda\\ \lambda/\mu\text{ h. strip}\end{subarray}}s_{\lambda}\right)\] \[=\sum_{\mu}\langle f,s_{\mu}[H_{+}]\rangle s_{\mu}\cdot H,\]
where the last equality follows from the Pieri rule.
Now, the only thing left to show is that \(f\) has the same degree and leading term as
\[\sum_{\mu}\langle f,s_{\mu}[H_{+}]\rangle s_{\mu}.\]
This follows directly from the observation that if \(|\mu|\geq\deg f\), then \(\langle f,s_{\mu}[H_{+}]\rangle=\langle f,s_{\mu}\rangle\).
We refer to \(\mathscr{F}_{\mathrm{Sur}}\colon\Lambda\to\Lambda\) as the _surjective Frobenius transform_. Clearly, it is invertible:
**Corollary 3.16**.: _There exists a two-sided inverse \(\mathscr{F}_{\mathrm{Sur}}^{-1}\colon\Lambda\to\Lambda\) of \(\mathscr{F}_{\mathrm{Sur}}\)._
Proof.: Define \(\mathcal{M}\colon\Lambda\to\Lambda\) by \(\mathcal{M}\{f\}=f-\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}\). By Proposition 3.15, we have
\[\deg(\mathcal{M}\{f\})<\deg f\]
for any \(f\in\Lambda\setminus\{0\}\). Hence, \(\mathcal{M}^{k}\{f\}=0\) for any \(k>\deg f\).
Define \(\mathscr{F}_{\mathrm{Sur}}^{-1}\colon\Lambda\to\Lambda\) by
\[\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{f\right\}=f+\mathcal{M}\{f\}+\mathcal{M }^{2}\{f\}+\cdots.\]
This is well-defined by the above, and it is easy to check that it is the two-sided inverse of \(\mathscr{F}_{\mathrm{Sur}}\).
Like the ordinary Frobenius transform, the surjective Frobenius transform can be described in terms of its matrix entries in the Schur basis.
**Definition 3.17**.: Let \(\lambda,\mu\) be partitions. Define the _surjective restriction coefficient_
\[t_{\lambda}^{\mu}=\langle\mathscr{F}_{\mathrm{Sur}}\left\{s_{\lambda}\right\}, s_{\mu}\rangle\]
and define the _inverse surjective restriction coefficient_
\[u_{\lambda}^{\mu}=\langle\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{s_{\lambda} \right\},s_{\mu}\rangle.\]
By the above, we have \(t_{\lambda}^{\mu}=u_{\lambda}^{\mu}=\delta_{\lambda\mu}\) for \(|\mu|\geq|\lambda|\).
## 4. Stable restriction
Define the _stable restriction coefficients_\(a_{\lambda}^{\mu}\) as follows. For any partition \(\mu=(\mu_{1},\ldots,\mu_{\ell})\) and any \(n\geq\mu_{1}+|\mu|\), define \(\mu^{(n)}=(n-|\mu|,\mu_{1},\ldots,\mu_{\ell})\). Then, for any partitions \(\lambda,\mu\), the stable restriction coefficient \(a_{\lambda}^{\mu}\) is defined by the following limit, which exists by a classical result of Littlewood [9]:
\[a_{\lambda}^{\mu}=\lim_{n\to\infty}r_{\lambda}^{\mu(n)}.\]
Littlewood also showed that \(a_{\lambda}^{\mu}=\delta_{\lambda\mu}\) if \(|\lambda|\leq|\mu|\), so the infinite matrix \([a_{\lambda}^{\mu}]\), with rows and columns indexed by partitions in increasing order of size, is upper unitriangular. In 2019, Assaf and Speyer [2] found the following formula for the entries \(b_{\lambda}^{\mu}\) of the inverse matrix \([b_{\lambda}^{\mu}]=[a_{\lambda}^{\mu}]^{-1}\):
**Theorem 4.1** ([2, Theorem 2]).: _Let \(\lambda,\mu\) be partitions. Then_
\[b_{\lambda}^{\mu}=(-1)^{|\lambda|-|\mu|}\langle s_{\lambda^{T}},s_{\mu^{T}}[L_ {1}+L_{2}+L_{3}+\cdots]\cdot H\rangle.\]
_In particular, \((-1)^{|\lambda|-|\mu|}b_{\lambda}^{\mu}\) is a nonnegative integer._
In this section, we will provide an alternative proof of Theorem 4.1. In fact, we will prove similar plethystic formulas for all five kinds of restriction coefficients defined so far. These formulas have all been collected into Theorem 4.2 below.
**Theorem 4.2**.: _Let \(\lambda,\mu\) be partitions with \(|\lambda|=m\) and \(|\mu|=n\)._
1. _The restriction coefficient_ \(r_{\lambda}^{\mu}\) _is given by_ \[r_{\lambda}^{\mu} =\langle\mathscr{F}\{s_{\lambda}\},s_{\mu}\rangle\] \[=\langle s_{\lambda},s_{\mu}[H]\rangle.\]
2. _The surjective restriction coefficient_ \(t_{\lambda}^{\mu}\) _is given by_ \[t_{\lambda}^{\mu} =\langle\mathscr{F}_{\mathrm{Sur}}\left\{s_{\lambda}\right\},s_{ \mu}\rangle\] \[=\langle s_{\lambda},s_{\mu}[H_{+}]\rangle.\]
3. _The inverse surjective restriction coefficient_ \(u_{\lambda}^{\mu}\) _is given by_ \[u_{\lambda}^{\mu} =\langle\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{s_{\lambda}\right\}, s_{\mu}\rangle\] \[=\langle s_{\lambda},s_{\mu}[\omega(L_{1})-\omega(L_{2})+\omega(L _{3})-\cdots]\rangle\] \[=(-1)^{m-n}\langle s_{\lambda^{T}},s_{\mu^{T}}[L_{1}+L_{2}+L_{3}+ \cdots]\rangle.\]
4. _The stable restriction coefficient_ \(a_{\lambda}^{\mu}\) _is given by_ \[a_{\lambda}^{\mu} =\langle H^{\perp}\mathscr{F}_{\mathrm{Sur}}\left\{s_{\lambda} \right\},s_{\mu}\rangle\] \[=\langle\mathscr{F}_{\mathrm{Sur}}\left\{s_{\lambda}\right\},s_{ \mu}\cdot H\rangle\] \[=\langle s_{\lambda},(s_{\mu}\cdot H)[H_{+}]\rangle.\]
5. _The inverse stable restriction coefficient_ \(b_{\lambda}^{\mu}\) _is given by_ \[b_{\lambda}^{\mu} =\langle\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{(H^{\perp})^{-1}s_{ \lambda}\right\},s_{\mu}\rangle\] \[=\langle s_{\lambda},s_{\mu}[\omega(L_{1})-\omega(L_{2})+\omega(L _{3})-\cdots]\cdot(1-e_{1}+e_{2}-e_{3}+\cdots)\rangle\] \[=(-1)^{m-n}\langle s_{\lambda^{T}},s_{\mu^{T}}[L_{1}+L_{2}+L_{3}+ \cdots]\cdot H\rangle.\]
Before proving Theorem 4.2, we restate a basic result of plethystic calculus.
**Lemma 4.3** (Negation Rule, [10, Theorem 6]).: _Let \(f\in\Lambda\) and \(g\in\overline{\Lambda}\). If \(f\) is homogeneous, then_
\[f[-g]=(-1)^{\deg f}(\omega(f))[g].\]
Proof of Theorem 4.2.:
1. The first equality is true by definition. The second follows from Remark 3.2.
2. The first equality is true by definition. The second is demonstrated in the second proof of Proposition 3.15.
3. The first equality is true by definition. For the second, Cadogan showed in 1971 that \(\omega(L_{1})-\omega(L_{2})+\omega(L_{3})-\cdots\) is the plethystic inverse of \(H_{+}\)[3]. In part (b), we showed that \(\mathscr{F}_{\mathrm{Sur}}\) is adjoint to plethysm by \(H_{+}\). Hence, \(\mathscr{F}_{\mathrm{Sur}}^{-1}\) is adjoint to plethysm by \(\omega(L_{1})-\omega(L_{2})+\omega(L_{3})-\cdots\), as desired. For the third, by Lemma 4.3 and the associativity of plethysm, we have \[\langle s_{\lambda},s_{\mu}[\omega(L_{1})-\omega(L_{2})+\omega( L_{3})-\cdots]\rangle\] \[= \langle s_{\lambda},s_{\mu}[-(L_{1}+L_{2}+L_{3}+\cdots)[-p_{1}]]\rangle\] \[= \langle s_{\lambda},(s_{\mu}[-(L_{1}+L_{2}+L_{3}+\cdots)])[-p_{1}]\rangle\] \[= \langle s_{\lambda}[-p_{1}],s_{\mu}[-(L_{1}+L_{2}+L_{3}+\cdots)]\rangle\] \[= \langle(-1)^{m}\omega(s_{\lambda})[p_{1}],(-1)^{n}\omega(s_{\mu} )[L_{1}+L_{2}+L_{3}+\cdots]\rangle\] \[= (-1)^{m-n}\langle s_{\lambda^{T}},s_{\mu^{T}}[L_{1}+L_{2}+L_{3}+ \cdots]\rangle\]
as desired.
4. By part (a), we have (6) \[a_{\lambda}^{\mu}=\lim_{n\to\infty}\langle\mathscr{F}\{s_{\lambda}\},s_{\mu^{(n)}} \rangle=\lim_{n\to\infty}\langle\mathscr{F}_{\mathrm{Sur}}\,\{s_{\lambda}\} \cdot H,s_{\mu^{(n)}}\rangle.\] Now, we claim that for any \(f\in\Lambda\), we have (7) \[\lim_{n\to\infty}\langle f\cdot H,s_{\mu^{(n)}}\rangle=\langle f,s_{\mu}\cdot H\rangle.\] By linearity, it is enough to show (7) when \(f=s_{\nu}\) for some partition \(\nu\). In that case, by the Pieri rule, \[\lim_{n\to\infty}\langle f\cdot H,s_{\mu^{(n)}}\rangle=\lim_{n\to\infty} \begin{cases}1&\text{if $\mu^{(n)}/\nu$ is a horizontal strip}\\ 0&\text{otherwise}\end{cases}.\] For \(n\) sufficiently large, it is easy to see that \(\mu^{(n)}/\nu\) is a horizontal strip if and only if \(\nu/\mu\) is a horizontal strip. So the above limit is equal to \[\begin{cases}1&\text{if $\nu/\mu$ is a horizontal strip}\\ 0&\text{otherwise}\end{cases},\] which is just \(\langle f,s_{\mu}\cdot H\rangle\). Hence, (7) indeed holds. Substituting (7) into (6) with \(f=\mathscr{F}_{\mathrm{Sur}}\,\{s_{\lambda}\}\), we obtain \[a_{\lambda}^{\mu}=\langle\mathscr{F}_{\mathrm{Sur}}\,\{s_{\lambda}\}\,,s_{\mu} \cdot H\rangle.\] The result now follows from the fact that \(\mathscr{F}_{\mathrm{Sur}}\) is adjoint to plethysm by \(H_{+}\).
5. The first and second equalities follow from the definition of \(b_{\lambda}^{\mu}\) and from (d). For the third equality, we again use Lemma 4.3, together with the fact that plethysm by \(-p_{1}\) is an isometry and a ring automorphism: \[\langle s_{\lambda},s_{\mu}[\omega(L_{1})-\omega(L_{2})+\omega(L _{3})-\cdots]\cdot(1-e_{1}+e_{2}-e_{3}+\cdots)\rangle\] \[= \langle s_{\lambda},s_{\mu}[-(L_{1}+L_{2}+L_{3}+\cdots)][-p_{1}] \cdot(1-e_{1}+e_{2}-e_{3}+\cdots)\rangle\] \[= \langle s_{\lambda}[-p_{1}],s_{\mu}[-(L_{1}+L_{2}+L_{3}+\cdots)] \cdot(1-e_{1}+e_{2}-e_{3}+\cdots)[-p_{1}]\rangle\] \[= \langle(-1)^{m}\omega(s_{\lambda})[p_{1}],(-1)^{n}\omega(s_{\mu} )[L_{1}+L_{2}+L_{3}+\cdots]\cdot H\rangle\] \[= (-1)^{m-n}\langle s_{\lambda^{T}},s_{\mu^{T}}[L_{1}+L_{2}+L_{3}+ \cdots]\cdot H\rangle,\] as desired.
## 5. An expansion of the Frobenius transform
In 1999, Zabrocki showed that every abelian group homomorphism from \(\Lambda\) to itself can be written as a sum of operators of the form \(fg^{\perp}\) where \(f,g\in\Lambda\)[24, Corollary 4.11]. In this section, we will prove the following theorem, which shows how to write \(\mathscr{F}_{\mathrm{Sur}}\) in this way.
**Theorem 1.1**.: _Let \(f\) be a symmetric function. Then_
\[\mathscr{F}_{\mathrm{Sur}}\,\{f\}=\sum_{\lambda}s_{\lambda}(s_{\lambda}[h_{2} +h_{3}+h_{4}+\cdots])^{\perp}f,\]
_where the sum is over all partitions \(\lambda\)._
_Remark 5.1_.: The symmetric power series \(s_{\lambda}[h_{2}+h_{3}+h_{4}+\cdots]\in\overline{\Lambda}\) appearing in Theorem 1.1 contains only terms of degree at least \(2|\lambda|\). Hence, the degree of the summand \(s_{\lambda}(s_{\lambda}[h_{2}+h_{3}+h_{4}+\cdots])^{\perp}f\) is at most \(\deg(f)-|\lambda|\). In particular, it vanishes if \(|\lambda|>\deg(f)\), so the sum in Theorem 1.1 is finite.
Proof of Theorem 1.1.: Let \(\mu\) be an arbitrary partition. It suffices to show that
\[\left\langle s_{\mu},\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}\right\rangle= \left\langle s_{\mu},\sum_{\lambda}s_{\lambda}(s_{\lambda}[h_{2}+h_{3}+h_{4}+ \cdots])^{\perp}f\right\rangle. \tag{8}\]
By Theorem 4.2(b) and Proposition 2.6, we have
\[\left\langle s_{\mu},\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}\right\rangle =\left\langle s_{\mu}[H_{+}],f\right\rangle\] \[=\left\langle\sum_{\lambda}s_{\mu/\lambda}s_{\lambda}[h_{2}+h_{3} +h_{4}+\cdots],f\right\rangle\] \[=\left\langle\sum_{\lambda}s_{\lambda}[h_{2}+h_{3}+h_{4}+\cdots] s_{\lambda}^{\perp}s_{\mu},f\right\rangle\] \[=\left\langle s_{\mu},\sum_{\lambda}s_{\lambda}(s_{\lambda}[h_{2} +h_{3}+h_{4}+\cdots])^{\perp}f\right\rangle,\]
completing the proof of (8) and of the theorem.
Now, we will use Theorem 4.2 and Theorem 1.1 to study the vanishing of the surjective restriction coefficients \(t_{\lambda}^{\mu}\), the restriction coefficients \(r_{\lambda}^{\mu}\), and the stable restriction coefficients \(a_{\lambda}^{\mu}\).
**Theorem 5.2**.: _Let \(\lambda,\mu\) be partitions. If \(t_{\lambda}^{\mu}>0\), then \(|\lambda\cap\mu|\geq 2|\mu|-|\lambda|\)._
Proof.: By the definition of the surjective restriction coefficients, the Schur function \(s_{\mu}\) appears in \(\mathscr{F}_{\mathrm{Sur}}\left\{s_{\lambda}\right\}\). By Theorem 1.1, there exists a partition \(\nu\) such that \(s_{\mu}\) appears in
\[s_{\nu}(s_{\nu}[h_{2}+h_{3}+h_{4}+\cdots])^{\perp}s_{\lambda}.\]
Hence, there exists a partition \(\rho\) such that \(s_{\rho}\) appears in \((s_{\nu}[h_{2}+h_{3}+h_{4}+\cdots])^{\perp}s_{\lambda}\) and \(s_{\mu}\) appears in \(s_{\nu}s_{\rho}\).
Since \(s_{\rho}\) appears in \((s_{\nu}[h_{2}+h_{3}+h_{4}+\cdots])^{\perp}s_{\lambda}\), we have that \(\rho\subset\lambda\) and
\[|\rho|\leq|\lambda|-2|\nu|. \tag{9}\]
Since \(s_{\mu}\) appears in \(s_{\nu}s_{\rho}\), we have that \(\rho\subset\mu\) and
\[|\mu|=|\nu|+|\rho|. \tag{10}\]
Combining (9) and (10), we obtain \(|\rho|\geq 2|\mu|-|\lambda|\). Now, we have \(\rho\subset\lambda\cap\mu\), so
\[|\lambda\cap\mu|\geq|\rho|\geq 2|\mu|-|\lambda|,\]
as desired.
**Theorem 1.2**.: _Let \(\lambda,\mu\) be partitions. If \(r_{\lambda}^{\mu}>0\), then \(|\lambda\cap\mu^{\prime}|\geq 2|\mu^{\prime}|-|\lambda|\), where \(\mu^{\prime}=(\mu_{2},\ldots,\mu_{\ell(\mu)})\) is the partition formed by removing the first part of \(\mu\)._
Proof.: By the Pieri rule and the definition of \(\mathscr{F}_{\mathrm{Sur}}\), we have
\[r_{\lambda}^{\mu}=\sum_{\begin{subarray}{c}\nu\\ \mu/\nu\text{ is a horizontal strip}\end{subarray}}t_{\lambda}^{\nu}.\]
Hence, there exists a partition \(\nu\) such that \(\mu/\nu\) is a horizontal strip and \(t_{\lambda}^{\nu}>0\). By Theorem 5.2, we have
\[|\lambda\cap\nu|\geq 2|\nu|-|\lambda|.\]
Now, \(\mu^{\prime}\subset\nu\), so
\[|\lambda\cap\mu^{\prime}| \geq|\lambda\cap\nu|-(|\nu|-|\mu^{\prime}|)\] \[\geq(2|\nu|-|\lambda|)-(|\nu|-|\mu^{\prime}|)\] \[=(2|\mu^{\prime}|-|\lambda|)+(|\nu|-|\mu^{\prime}|)\] \[\geq 2|\mu^{\prime}|-|\lambda|,\]
as desired.
**Theorem 1.3**.: _Let \(\lambda,\mu\) be partitions. If \(a_{\lambda}^{\mu}>0\), then \(|\lambda\cap\mu|\geq 2|\mu|-|\lambda|\)._
Proof.: By the Pieri rule and Theorem 4.2(d), we have
\[a_{\lambda}^{\mu}=\sum_{\begin{subarray}{c}\nu\\ \nu/\mu\text{ is a horizontal strip}\end{subarray}}t_{\lambda}^{\nu}.\]
Hence, there exists a partition \(\nu\) such that \(\nu/\mu\) is a horizontal strip and \(t_{\lambda}^{\nu}>0\). By Theorem 5.2, we have
\[|\lambda\cap\nu|\geq 2|\nu|-|\lambda|.\]
Now, \(\mu\subset\nu\), so
\[|\lambda\cap\mu| \geq|\lambda\cap\nu|-(|\nu|-|\mu|)\] \[\geq(2|\nu|-|\lambda|)-(|\nu|-|\mu|)\] \[=(2|\mu|-|\lambda|)+(|\nu|-|\mu|)\] \[\geq 2|\mu|-|\lambda|,\]
as desired.
## 6. Computations of the surjective Frobenius transform
In this section, we will compute \(\mathscr{F}_{\operatorname{Sur}}\left\{f\right\}\) for various symmetric functions \(f\). Recall from the introduction:
**Theorem 1.4**.: _Let \(\lambda\) be a partition and let \(\ell=\ell(\lambda)\) be its length._
1. _Then_ \[\mathscr{F}_{\operatorname{Sur}}\left\{h_{\lambda}\right\}=\sum_{M}\prod_{j \in\mathbb{N}^{\ell}}h_{M(j)}\] _where the sum is over all functions_ \(M\colon\mathbb{N}^{\ell}\to\mathbb{N}\) _such that_ \(M(0,\dots,0)=0\) _and_ \(\sum_{j\in\mathbb{N}^{\ell}}j_{i}M(j)=\lambda_{i}\) _for_ \(i=1,\dots,\ell\)_._
2. _Then_ \[\mathscr{F}_{\operatorname{Sur}}\left\{e_{\lambda}\right\}=\sum_{M}\prod_{j \in\{0,1\}^{\ell}}\begin{cases}h_{M(j)}&\text{if $j_{1}+\dots+j_{\ell}$ is even}\\ e_{M(j)}&\text{if $j_{1}+\dots+j_{\ell}$ is odd}\end{cases}\] _where the sum is over all functions_ \(M\colon\{0,1\}^{\ell}\to\mathbb{N}\) _such that_ \(M(0,\dots,0)=0\) _and_ \(\sum_{j\in\{0,1\}^{\ell}}j_{i}M(j)=\lambda_{i}\) _for_ \(i=1,\dots,\ell\)_._
3. _Then_ \[\mathscr{F}_{\operatorname{Sur}}\left\{p_{\lambda}\right\}=\sum_{\pi}\prod_{U \in\pi}\left(\sum_{d|\gcd\{\lambda_{i}\colon\,i\in U\}}d^{|U|-1}p_{d}\right)\] _where the outer sum is over all partitions_ \(\pi\) _of_ \(\{1,\dots,\ell\}\) _into nonempty sets._
**Example 6.1**.: Let us use Theorem 1.4(a) to compute \(\mathscr{F}_{\rm Sur}\left\{h_{2,2}\right\}\). First, we list all the functions \(M\colon\mathbb{N}^{2}\to\mathbb{N}\) such that \(M(0,0)=0\) and \(\sum_{j\in\mathbb{N}^{2}}jM(j)=(2,2)\). There are nine such functions \(M_{1},\ldots,M_{9}\). Here are all of their nonzero values.1
Footnote 1: For readers who are familiar with the language of multisets and multiset partitions [19], it can be helpful to remember that such functions \(M\) are in bijection with multiset partitions of \(\{\!\!\{1,1,2,2\}\!\!\}\). The multiset partition corresponding to the function \(M\) contains \(M(j)\) copies of \(\{\!\!\{1^{j_{1}},2^{j_{2}}\}\!\!\}\) for all \(j\in\mathbb{N}^{2}\). For example, the function \(M_{6}\) corresponds to the multiset partition \(\{\!\!\{1,1\}\!\!\},\{\!\!\{2\}\!\!\},\{\!\!\{2\}\!\!\}\).
\[M_{1}(2,2)=1\] \[M_{2}(1,1)=2\] \[M_{3}(2,1)=1\qquad M_{3}(0,1)=1\] \[M_{4}(1,2)=1\qquad M_{4}(1,0)=1\] \[M_{5}(2,0)=1\qquad M_{5}(0,2)=1\] \[M_{6}(2,0)=1\qquad M_{6}(0,1)=2\] \[M_{7}(0,2)=1\qquad M_{7}(1,0)=2\] \[M_{8}(1,1)=1\qquad M_{8}(1,0)=1\qquad M_{8}(0,1)=1\] \[M_{9}(1,0)=2\qquad M_{9}(0,1)=2\]
Thus
\[\mathscr{F}_{\rm Sur}\left\{h_{2,2}\right\} =\underbrace{h_{1}}_{M_{1}}+\underbrace{h_{2}}_{M_{2}}+ \underbrace{h_{1}^{2}}_{M_{3}}+\underbrace{h_{1}^{2}}_{M_{4}}+\underbrace{h_{ 1}^{2}}_{M_{5}}+\underbrace{h_{1}h_{2}}_{M_{6}}+\underbrace{h_{1}h_{2}}_{M_{7 }}+\underbrace{h_{1}^{3}}_{M_{8}}+\underbrace{h_{2}^{2}}_{M_{9}}\] \[=h_{1}+h_{2}+3h_{1,1}+2h_{2,1}+h_{1,1,1}+h_{2,2}.\]
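The enumeration above is mechanical, so it can be delegated to a short program. The following Python sketch (with ad hoc helper names, not notation from the paper) performs a depth-first search over the finitely many relevant lattice points \(j\) and reproduces the expansion of Theorem 1.4(a); each key records the indices appearing in a product \(\prod_{j}h_{M(j)}\) as a weakly decreasing tuple.

```python
from collections import Counter
from itertools import product

def fsur_h(lam):
    """F_Sur{h_lam} in the h-basis: Counter {sorted tuple of h-indices: coefficient}, per Theorem 1.4(a)."""
    points = [j for j in product(*(range(part + 1) for part in lam)) if any(j)]
    result = Counter()

    def search(idx, remaining, parts):
        if idx == len(points):
            if not any(remaining):
                result[tuple(sorted(parts, reverse=True))] += 1
            return
        j = points[idx]
        max_mult = min(r // ji for r, ji in zip(remaining, j) if ji > 0)
        for m in range(max_mult + 1):
            rest = tuple(r - m * ji for r, ji in zip(remaining, j))
            search(idx + 1, rest, parts + ([m] if m > 0 else []))

    search(0, tuple(lam), [])
    return result

# Reproduces Example 6.1: F_Sur{h_{2,2}} = h_1 + h_2 + 3h_{1,1} + 2h_{2,1} + h_{1,1,1} + h_{2,2}.
assert fsur_h((2, 2)) == Counter({(1, 1): 3, (2, 1): 2, (1,): 1, (2,): 1,
                                  (1, 1, 1): 1, (2, 2): 1})
```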
**Example 6.2**.: Let us use Theorem 1.4(b) to compute \(\mathscr{F}_{\rm Sur}\left\{e_{5,3}\right\}\). First, we list all the functions \(M\colon\{0,1\}^{2}\to\mathbb{N}\) such that \(M(0,0)=0\) and \(\sum_{j\in\{0,1\}^{2}}jM(j)=(5,3)\). There are four such functions \(M_{1},M_{2},M_{3},M_{4}\). Here are all of their nonzero values.
\[M_{1}(1,0)=5\qquad M_{1}(0,1)=3\] \[M_{2}(1,1)=1\qquad M_{2}(1,0)=4\qquad M_{2}(0,1)=2\] \[M_{3}(1,1)=2\qquad M_{3}(1,0)=3\qquad M_{3}(0,1)=1\] \[M_{4}(1,1)=3\qquad M_{4}(1,0)=2\]
Thus
\[\mathscr{F}_{\rm Sur}\left\{e_{5,3}\right\}=\underbrace{e_{5}e_{3}}_{M_{1}}+ \underbrace{h_{1}e_{4}e_{2}}_{M_{2}}+\underbrace{h_{2}e_{3}e_{1}}_{M_{3}}+ \underbrace{h_{3}e_{2}}_{M_{4}}.\]
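The same search adapts to Theorem 1.4(b), with the parity of \(j_{1}+\cdots+j_{\ell}\) deciding whether a factor is recorded as an \(h\) or an \(e\). A sketch, again with ad hoc naming conventions:

```python
from collections import Counter
from itertools import product

def fsur_e(lam):
    """F_Sur{e_lam}: Counter {(h-indices, e-indices): coefficient}, following Theorem 1.4(b)."""
    points = [j for j in product((0, 1), repeat=len(lam)) if any(j)]
    result = Counter()

    def search(idx, remaining, h_parts, e_parts):
        if idx == len(points):
            if not any(remaining):
                key = (tuple(sorted(h_parts, reverse=True)),
                       tuple(sorted(e_parts, reverse=True)))
                result[key] += 1
            return
        j = points[idx]
        max_mult = min(r for r, ji in zip(remaining, j) if ji)
        for m in range(max_mult + 1):
            rest = tuple(r - m * ji for r, ji in zip(remaining, j))
            if m == 0:
                search(idx + 1, rest, h_parts, e_parts)
            elif sum(j) % 2 == 0:            # even weight: contributes h_m
                search(idx + 1, rest, h_parts + [m], e_parts)
            else:                            # odd weight: contributes e_m
                search(idx + 1, rest, h_parts, e_parts + [m])

    search(0, tuple(lam), [], [])
    return result

# Reproduces Example 6.2: F_Sur{e_{5,3}} = e_5 e_3 + h_1 e_4 e_2 + h_2 e_3 e_1 + h_3 e_2.
assert fsur_e((5, 3)) == Counter({((), (5, 3)): 1, ((1,), (4, 2)): 1,
                                  ((2,), (3, 1)): 1, ((3,), (2,)): 1})
```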
**Example 6.3**.: Let us use Theorem 1.4(c) to compute \(\mathscr{F}_{\rm Sur}\left\{p_{15,10,6}\right\}\). Take \(\lambda=(15,10,6)\) and \(\ell=3\). There are five partitions of \([\ell]\) into nonempty sets: \(\{\{1,2,3\}\}\)
\(\{\{1,2\},\{3\}\}\), \(\{\{1,3\},\{2\}\}\), \(\{\{2,3\},\{1\}\}\), and \(\{\{1\},\{2\},\{3\}\}\). Thus,
\[\mathscr{F}_{\mathrm{Sur}}\left\{p_{\lambda}\right\}= \left(\sum_{d|\gcd(\lambda_{1},\lambda_{2},\lambda_{3})}d^{2}p_{d}\right)\] \[+\left(\sum_{d|\gcd(\lambda_{1},\lambda_{2})}dp_{d}\right)\left( \sum_{d|\lambda_{3}}p_{d}\right)\] \[+\left(\sum_{d|\gcd(\lambda_{1},\lambda_{3})}dp_{d}\right)\left( \sum_{d|\lambda_{2}}p_{d}\right)\] \[+\left(\sum_{d|\gcd(\lambda_{2},\lambda_{3})}dp_{d}\right)\left( \sum_{d|\lambda_{1}}p_{d}\right)\] \[+\left(\sum_{d|\lambda_{1}}p_{d}\right)\left(\sum_{d|\lambda_{2}} p_{d}\right)\left(\sum_{d|\lambda_{3}}p_{d}\right)\] \[=p_{1}+(p_{1}+5p_{5})(p_{1}+p_{2}+p_{3}+p_{6})\] \[\quad+(p_{1}+3p_{3})(p_{1}+p_{2}+p_{5}+p_{10})\] \[\quad+(p_{1}+2p_{2})(p_{1}+p_{3}+p_{5}+p_{15})\] \[\quad+(p_{1}+p_{3}+p_{5}+p_{15})(p_{1}+p_{2}+p_{5}+p_{10})(p_{1}+ p_{2}+p_{3}+p_{6}).\]
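Theorem 1.4(c) can likewise be evaluated mechanically by iterating over set partitions of \(\{1,\ldots,\ell\}\) and over divisors of the block gcds. The following plain Python sketch (function names are ad hoc) returns the expansion of \(\mathscr{F}_{\mathrm{Sur}}\left\{p_{\lambda}\right\}\) in the power sum basis; the small check at the end is a direct application of the formula to \(\lambda=(2,2)\).

```python
from collections import Counter
from functools import reduce
from itertools import product
from math import gcd, prod

def set_partitions(elements):
    """Yield all partitions of a list into nonempty blocks (as lists of lists)."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        for i in range(len(partition)):                     # put `first` into an existing block
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        yield [[first]] + partition                         # or give `first` its own block

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def fsur_p(lam):
    """F_Sur{p_lam} in the power sum basis: Counter {sorted tuple of p-indices: coefficient}."""
    result = Counter()
    for pi in set_partitions(list(range(len(lam)))):
        block_gcds = [reduce(gcd, (lam[i] for i in block)) for block in pi]
        for choice in product(*(divisors(g) for g in block_gcds)):
            coeff = prod(d ** (len(block) - 1) for d, block in zip(choice, pi))
            result[tuple(sorted(choice, reverse=True))] += coeff
    return result

# Direct application of the formula: F_Sur{p_{2,2}} = p_1 + 2p_2 + p_{1,1} + 2p_{2,1} + p_{2,2}.
assert fsur_p((2, 2)) == Counter({(1,): 1, (2,): 2, (1, 1): 1, (2, 1): 2, (2, 2): 1})
```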
Proof of Theorem 1.4.:
1. Let \(t_{1},\ldots,t_{\ell}\) be variables and let \[f=H(t_{1})\cdots H(t_{\ell})\in\Lambda[\![t_{1},\ldots,t_{\ell}]\!].\]
We will use Proposition 3.12 to compute
\[\mathscr{F}\{f\}\in\overline{\Lambda}[\![t_{1},\ldots,t_{\ell}]\!].\]
For any partition \(\mu\), we have
\[f(\Xi_{\mu}) =\prod_{i=1}^{\ell}H(t_{i})(\Xi_{\mu})\] \[=\prod_{i=1}^{\ell}\prod_{j=1}^{\ell(\mu)}\prod_{k=0}^{\mu_{j}-1} \frac{1}{1-t_{i}\exp(2\pi ik/\mu_{j})}\] \[=\prod_{i=1}^{\ell}\prod_{j=1}^{\ell(\mu)}\frac{1}{1-t_{i}^{\mu_{j }}}.\]
Hence, by Proposition 3.12, we have
\[\mathscr{F}\{f\}=\sum_{\mu}\left(\prod_{i=1}^{\ell}\prod_{j=1}^{\ell(\mu)} \frac{1}{1-t_{i}^{\mu_{j}}}\right)\frac{p_{\mu}}{z_{\mu}}.\]
We may recognize the right-hand side as a product of exponentials, and then evaluate the product as follows.
\[\mathscr{F}\{f\} =\prod_{k}\exp\left(\frac{p_{k}}{k}\prod_{i=1}^{\ell}\frac{1}{1-t_{ i}^{k}}\right)\] \[=\prod_{k}\exp\left(\frac{p_{k}}{k}\sum_{j\in\mathbb{N}^{\ell}}(t_ {1}^{j_{1}}\cdots t_{\ell}^{j_{\ell}})^{k}\right)\] \[=\prod_{j\in\mathbb{N}^{\ell}}\exp\left(\sum_{k}\frac{p_{k}}{k}(t_ {1}^{j_{1}}\cdots t_{\ell}^{j_{\ell}})^{k}\right)\] \[=\prod_{j\in\mathbb{N}^{\ell}}H(t_{1}^{j_{1}}\cdots t_{\ell}^{j_{ \ell}}).\]
Hence,
\[\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}=\frac{\mathscr{F}\{f\}}{H}=\prod_{ j\in\mathbb{N}^{\ell}\backslash\{(0,\dots,0)\}}H(t_{1}^{j_{1}}\cdots t_{\ell}^{j_ {\ell}}).\]
Finally, the result follows from taking the coefficient of \(t_{1}^{\lambda_{1}}\cdots t_{\ell}^{\lambda_{\ell}}\) on both sides.
2. Let \(t_{1},\dots,t_{\ell}\) be variables and let \[f=E(t_{1})\cdots E(t_{\ell})\in\Lambda[\![t_{1},\dots,t_{\ell}]\!].\]
We will use Proposition 3.12 to compute
\[\mathscr{F}\{f\}\in\overline{\Lambda}[\![t_{1},\dots,t_{\ell}]\!].\]
For any partition \(\mu\), we have
\[f(\Xi_{\mu}) =\prod_{i=1}^{\ell}E(t_{i})(\Xi_{\mu})\] \[=\prod_{i=1}^{\ell}\prod_{j=1}^{\ell(\mu)}\prod_{k=0}^{\mu_{j}-1} (1+t_{i}\exp(2\pi ik/\mu_{j}))\] \[=\prod_{i=1}^{\ell}\prod_{j=1}^{\ell(\mu)}(1-(-t_{i})^{\mu_{j}}).\]
Hence, by Proposition 3.12, we have
\[\mathscr{F}\{f\}=\sum_{\mu}\left(\prod_{i=1}^{\ell}\prod_{j=1}^{\ell(\mu)}(1-(-t_{i})^{\mu_{j}})\right)\frac{p_{\mu}}{z_{\mu}}.\]
We may recognize the right-hand side as a product of exponentials, and then evaluate the product as follows.
\[\mathscr{F}\{f\} =\prod_{k}\exp\left(\frac{p_{k}}{k}\prod_{i=1}^{\ell}(1-(-t_{i})^{k })\right)\] \[=\prod_{k}\exp\left(\frac{p_{k}}{k}\sum_{j\in\{0,1\}^{\ell}}(-1)^{ j_{1}+\cdots+j_{\ell}}((-t_{1})^{j_{1}}\cdots(-t_{\ell})^{j_{\ell}})^{k}\right)\] \[=\prod_{j\in\{0,1\}^{\ell}}\exp\left(\sum_{k}\frac{p_{k}}{k}((-1) ^{j_{1}+\cdots+j_{\ell}})^{k-1}(t_{1}^{j_{1}}\cdots t_{\ell}^{j_{\ell}})^{k}\right)\] \[=\prod_{j\in\{0,1\}^{\ell}}\begin{cases}H(t_{1}^{j_{1}}\cdots t_{ \ell}^{j_{\ell}})&\text{if $j_{1}+\cdots+j_{\ell}$ is even}\\ E(t_{1}^{j_{1}}\cdots t_{\ell}^{j_{\ell}})&\text{if $j_{1}+\cdots+j_{\ell}$ is odd} \end{cases}\]
Hence,
\[\mathscr{F}_{\text{Sur}}\left\{f\right\}=\frac{\mathscr{F}\{f\}}{H}=\prod_{j \in\{0,1\}^{\ell}\setminus\{(0,\ldots,0)\}}\begin{cases}H(t_{1}^{j_{1}}\cdots t _{\ell}^{j_{\ell}})&\text{if $j_{1}+\cdots+j_{\ell}$ is even}\\ E(t_{1}^{j_{1}}\cdots t_{\ell}^{j_{\ell}})&\text{if $j_{1}+\cdots+j_{\ell}$ is odd} \end{cases}.\]
Finally, the result follows from taking the coefficient of \(t_{1}^{\lambda_{1}}\cdots t_{\ell}^{\lambda_{\ell}}\) on both sides.
3. By Proposition 3.12, we have (11) \[\mathscr{F}\{p_{\lambda}\}=\sum_{\mu}p_{\lambda}(\Xi_{\mu})\frac{p_{\mu}}{z_{\mu}}.\]
For all \(k\), we have
\[p_{k}(\Xi_{\mu})=\sum_{d|k}dm_{d}(\mu),\]
so (11) becomes
\[\mathscr{F}\{p_{\lambda}\}=\sum_{\mu}\prod_{i=1}^{\ell}\left(\sum_{d|\lambda_ {i}}dm_{d}(\mu)\right)\frac{p_{\mu}}{z_{\mu}}. \tag{12}\]
Let us say that a function \(\mathbf{d}\colon[\ell]\to\mathbb{N}\) is _permissible_ if \(\mathbf{d}(i)\mid\lambda_{i}\) for all \(i\). The product on the right-hand side of (12) can be expanded into a sum over all permissible functions:
\[\mathscr{F}\{p_{\lambda}\} =\sum_{\mu}\sum_{\mathbf{d}\text{ permissible}}\left(\prod_{i=1}^{\ell} \mathbf{d}(i)m_{\mathbf{d}(i)}(\mu)\right)\frac{p_{\mu}}{z_{\mu}} \tag{13}\] \[=\sum_{\mu}\sum_{\mathbf{d}\text{ permissible}}\left(\prod_{d}(dm_{ d}(\mu))^{|\mathbf{d}^{-1}(d)|}\right)\frac{p_{\mu}}{z_{\mu}}.\]
For any \(k\), let \((x)_{k}=x(x-1)\cdots(x-k+1)\) denote the falling factorial. It is well-known [6, Chapter 6.1] that for any \(n\), the monomial \(x^{n}\) can be written as a linear combination
\[x^{n}=\sum_{k}\genfrac{\{}{\}}{0.0pt}{}{n}{k}(x)_{k}\]
of falling factorials, where the coefficient \(\genfrac{\{}{\}}{0.0pt}{}{n}{k}\) (a _Stirling number of the second kind_) is the number of partitions of \([n]\) into \(k\) nonempty sets. Let us use this to rewrite the factor \((m_{d}(\mu))^{|\mathbf{d}^{-1}(d)|}\) appearing in (13). We obtain
\[\mathscr{F}\{p_{\lambda}\}=\sum_{\mu}\sum_{\mathbf{d}\text{ permissible}}\left(\prod_{d}d^{| \mathbf{d}^{-1}(d)|}\sum_{k}\genfrac{\{}{\}}{0.0pt}{}{|\mathbf{d}^{-1}(d)|}{k }(m_{d}(\mu))_{k}\right)\frac{p_{\mu}}{z_{\mu}}. \tag{14}\]
Given a permissible function \(\mathbf{d}\colon[\ell]\to\mathbb{N}\) and a partition \(\pi\) of \([\ell]\) into nonempty sets, let us say that \(\pi\) is _level_ with respect to \(\mathbf{d}\) if \(\mathbf{d}(i)=\mathbf{d}(j)\) whenever \(i\) and \(j\) are in the same part of \(\pi\). If \(\pi\) is level with respect to \(\mathbf{d}\), we may define the function \(\widetilde{\mathbf{d}}\colon\pi\to\mathbb{N}\) by taking \(\widetilde{\mathbf{d}}(U)\) to be the common value of \(\mathbf{d}(i)\) for \(i\in U\).
Suppose that \(\mathbf{d}\) is a fixed permissible function and \(\{k_{d}\}_{d}\) is any sequence. Then it is easy to see that the product
\[\prod_{d}\left\{\genfrac{}{}{0.0pt}{}{|\mathbf{d}^{-1}(d)|}{k_{d}}\right\}\]
is equal to the number of set partitions \(\pi\) that are level with respect to \(\mathbf{d}\) and which satisfy \(|\widetilde{\mathbf{d}}^{-1}(d)|=k_{d}\) for all \(d\). Using this fact, we may expand the product in (14) into a sum over all set partitions \(\pi\) that are level with respect to \(\mathbf{d}\):
\[\mathscr{F}\{p_{\lambda}\}=\sum_{\mu}\sum_{\mathbf{d}\text{ permissible}}\sum_{\pi\text{ level}}\left(\prod_{d}d^{|\mathbf{d}^{-1}(d)|}(m_{d}(\mu))_{|\widetilde{\mathbf{d}}^{-1}(d)|}\right)\frac{p_{\mu}}{z_{\mu}}.\]
We will simplify this expression by switching the order of summation so that the sum over \(\pi\) is all the way on the outside. To do so, given a partition \(\pi\) of \([\ell]\) into nonempty sets, we will now describe the set of all permissible functions \(\mathbf{d}\colon[\ell]\to\mathbb{N}\) such that \(\pi\) is level with respect to \(\mathbf{d}\). These are exactly the functions given by \(\mathbf{d}(i)=\widetilde{\mathbf{d}}(U)\) for all \(U\in\pi\) and \(i\in U\), where \(\widetilde{\mathbf{d}}\colon\pi\to\mathbb{N}\) is any function satisfying
\[\widetilde{\mathbf{d}}(U)\mid\gcd\{\lambda_{i}\colon i\in U\}\]
for all \(U\in\pi\). Let us call such functions \(\pi\)_-permissible_. Then
\[\mathscr{F}\{p_{\lambda}\}=\sum_{\pi}\sum_{\widetilde{\mathbf{d}}\text{ $\pi$-permissible}}\sum_{\mu}\left(\prod_{d}d^{|\mathbf{d}^{-1}(d)|}(m_{d}( \mu))_{|\widetilde{\mathbf{d}}^{-1}(d)|}\right)\frac{p_{\mu}}{z_{\mu}}.\]
Now, let us evaluate the inner sum over \(\mu\). Expanding \(p_{\mu}\) and \(z_{\mu}\) gives
\[\mathscr{F}\{p_{\lambda}\} =\sum_{\pi}\sum_{\widetilde{\mathbf{d}}\ \pi\text{-permissible}}\sum_{\mu}\left( \prod_{d}d^{|\mathbf{d}^{-1}(d)|}(m_{d}(\mu))_{|\widetilde{\mathbf{d}}^{-1}(d) |}\frac{p_{d}^{m_{d}(\mu)}}{d^{m_{d}(\mu)}(m_{d}(\mu))!}\right)\] \[=\sum_{\pi}\sum_{\widetilde{\mathbf{d}}\ \pi\text{-permissible}}\prod_{d} \left(d^{|\mathbf{d}^{-1}(d)|}\sum_{m=0}^{\infty}(m)_{|\widetilde{\mathbf{d}}^ {-1}(d)|}\frac{p_{d}^{m}}{d^{m}m!}\right)\] \[=\sum_{\pi}\sum_{\widetilde{\mathbf{d}}\ \pi\text{-permissible}}\prod_{d} \left(d^{|\mathbf{d}^{-1}(d)|}\sum_{m=|\widetilde{\mathbf{d}}^{-1}(d)|}^{ \infty}\frac{p_{d}^{m}}{d^{m}(m-|\widetilde{\mathbf{d}}^{-1}(d)|)!}\right)\] \[=\sum_{\pi}\sum_{\widetilde{\mathbf{d}}\ \pi\text{-permissible}}\prod_{d} \left(d^{|\mathbf{d}^{-1}(d)|}\left(\frac{p_{d}}{d}\right)^{|\widetilde{ \mathbf{d}}^{-1}(d)|}\exp\left(\frac{p_{d}}{d}\right)\right)\] \[=\left(\sum_{\pi}\sum_{\widetilde{\mathbf{d}}\ \pi\text{-permissible}}\prod_{d} \left(d^{|\mathbf{d}^{-1}(d)|}\left(\frac{p_{d}}{d}\right)^{|\widetilde{ \mathbf{d}}^{-1}(d)|}\right)\right)\cdot H\] \[=\left(\sum_{\pi}\sum_{\widetilde{\mathbf{d}}\ \pi\text{-permissible}}\prod_{d} \left(d^{|\mathbf{d}^{-1}(d)|-|\widetilde{\mathbf{d}}^{-1}(d)|}\cdot p_{d}^{| \widetilde{\mathbf{d}}^{-1}(d)|}\right)\right)\cdot H.\]
We will rewrite the remaining product as a product over \(U\in\pi\) instead of over \(d\in\mathbb{N}\). Each \(U\in\pi\) with \(\widetilde{\mathbf{d}}(U)=d\) contributes \(1\) to \(|\widetilde{\mathbf{d}}^{-1}(d)|\) and contributes \(|U|\) to \(|\mathbf{d}^{-1}(d)|\). So we obtain
\[\mathscr{F}\{p_{\lambda}\}=\left(\sum_{\pi}\sum_{\widetilde{\mathbf{d}}\ \pi\text{-permissible}}\prod_{U\in\pi}(\widetilde{\mathbf{d}}(U))^{|U|-1}p_{ \widetilde{\mathbf{d}}(U)}\right)\cdot H.\]
Given that \(\widetilde{\mathbf{d}}\) is \(\pi\)-permissible, the possible values of \(\widetilde{\mathbf{d}}(U)\) are exactly those \(d\in\mathbb{N}\) that divide \(\gcd\{\lambda_{i}\colon i\in U\}\). The choice of \(\widetilde{\mathbf{d}}(U)\) can be made independently for each \(U\in\pi\). Hence, we may factor the innermost sum, which finally yields
\[\mathscr{F}\{p_{\lambda}\}=\left(\sum_{\pi}\prod_{U\in\pi}\left(\sum_{d|\gcd \{\lambda_{i}\colon i\in U\}}d^{|U|-1}p_{d}\right)\right)\cdot H.\]
The result follows from dividing both sides of this equation by \(H\).
_Remark 6.4_.: Theorem 1.4(a) has an alternate, "purely combinatorial" proof. It involves exhibiting a combinatorial species \(E_{\lambda}\colon\mathrm{Bij}\to\mathrm{Fun}\) with \(\mathrm{ch}(\mathbb{C}E_{\lambda})=h_{\lambda}\), and then computing \(\iota^{*}\iota_{!}E_{\lambda}\) in terms of known combinatorial species. The details for this proof are forthcoming in a separate paper.
_Remark 6.5_.: One consequence of Theorem 1.4(c) which is not obvious _a priori_ is that the matrix entries of \(\mathscr{F}_{\mathrm{Sur}}\) in the power sum basis are all nonnegative integers. The same is not true of \(\mathscr{F}\); for example, \(\mathscr{F}\{1\}=H=1+p_{1}+\frac{1}{2}(p_{2}+p_{1}^{2})+\cdots\) certainly has some non-integer coefficients.
To illustrate the utility of Theorem 1.4, we now restate and prove Theorem 1.5 about the vanishing of restriction coefficients. Recall from the introduction that
\(D(\mu)\) is the size of the Durfee square of \(\mu\); that is, the largest integer \(d\) such that \(\mu_{d}\geq d\).
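For example, if \(\mu=(4,3,1)\) then \(D(\mu)=2\): indeed \(\mu_{2}=3\geq 2\), while \(\mu_{3}=1<3\).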
**Theorem 1.5**.: _Let \(\mu\) be a partition and let \(k\geq 1\) be an integer. The following are equivalent:_
1. _There exists a partition_ \(\lambda\) _such that_ \(\lambda_{1}\leq k\) _and_ \(r_{\lambda}^{\mu}>0\)_._
2. \(D(\mu)\leq 2^{k-1}\)_._
Before proceeding to the proof, we need a few lemmas.
**Lemma 6.6**.: _For any \(k\geq 0\), we have_
\[\operatorname{span}\{e_{\lambda}\colon\ell(\lambda)\leq k\}=\operatorname{ span}\{s_{\lambda}\colon\lambda_{1}\leq k\}.\]
_(Here \(\operatorname{span}(S)\) refers to the additive subgroup of \(\Lambda\) generated by \(S\).)_
Proof.: By the Pieri rule, every \(e_{\lambda}\) with \(\ell(\lambda)\leq k\) is a linear combination of Schur functions \(s_{\mu}\), where \(\mu\) is the union of at most \(k\) vertical strips. Hence
\[\operatorname{span}\{e_{\lambda}\colon\ell(\lambda)\leq k\}\subset \operatorname{span}\{s_{\lambda}\colon\lambda_{1}\leq k\}.\]
By the dual Jacobi-Trudi identity, every \(s_{\lambda}\) with \(\lambda_{1}\leq k\) can be written as the determinant of a \(k\times k\) matrix whose entries are elementary symmetric functions \(e_{r}\). Hence
\[\operatorname{span}\{s_{\lambda}\colon\lambda_{1}\leq k\}\subset \operatorname{span}\{e_{\lambda}\colon\ell(\lambda)\leq k\}.\]
The result follows.
**Lemma 6.7**.: _Let \(\lambda\) be a partition and let \(d\geq 0\). Then \(D(\lambda)\leq d\) if and only if there exist partitions \(\mu\), \(\nu\) with \(\ell(\mu),\ell(\nu)\leq d\) such that \(s_{\lambda}\) appears in \(h_{\mu}e_{\nu}\)._
Proof.: For the "only if" direction, assume that \(D(\lambda)\leq d\). Take
\[\mu =(\lambda_{1},\ldots,\lambda_{D(\lambda)})\] \[\nu =(\lambda_{1}^{T}-D(\lambda),\ldots,\lambda_{D(\lambda)}^{T}-D( \lambda)).\]
Clearly \(\ell(\mu),\ell(\nu)\leq d\). Also, the Young diagram of \(\lambda\) can be decomposed into a union of horizontal strips of lengths \(\mu_{1},\ldots,\mu_{\ell(\mu)}\) and vertical strips of lengths \(\nu_{1},\ldots,\nu_{\ell(\nu)}\), so \(s_{\lambda}\) appears in \(h_{\mu}e_{\nu}\) by the Pieri rule.
For the "if" direction, it easily follows from induction on \(\ell(\nu)\) and the Pieri rule that if \(s_{\lambda}\) appears in \(h_{\mu}e_{\nu}\), then \(\lambda_{\ell(\mu)+1}<\ell(\nu)+1\) (where we take \(\lambda_{i}=0\) for \(i>\ell(\lambda)\)). Assuming that \(\ell(\mu),\ell(\nu)\leq d\), we get \(\lambda_{d+1}<d+1\), so \(D(\lambda)\leq d\) as desired.
Proof of Theorem 1.5.: First, observe that by multiplying both sides by \(H\), Theorem 1.4(b) can be written in the following form, using the Frobenius transform instead of the surjective Frobenius transform. For any partition \(\lambda\) with \(\ell(\lambda)\leq k\), we have
\[\mathscr{F}\{e_{\lambda}\}=\sum_{M}\prod_{j\in\{0,1\}^{k}}\begin{cases}h_{M(j )}&\text{if $j_{1}+\cdots+j_{\ell}$ is even}\\ e_{M(j)}&\text{if $j_{1}+\cdots+j_{\ell}$ is odd}\end{cases} \tag{15}\]
where the sum is over all functions \(M\colon\{0,1\}^{k}\to\mathbb{N}\) such that \(\sum_{j\in\{0,1\}^{k}}j_{i}M(j)=\lambda_{i}\) for \(i=1,\ldots,k\) (where we take \(\lambda_{i}=0\) for \(i>\ell(\lambda)\)). We do not require that \(M(0,\ldots,0)=0\), so the sum in (15) is infinite.
Consider the following statements:
1. (A) There exists a partition \(\lambda\) such that \(\lambda_{1}\leq k\) and \(r_{\lambda}^{\mu}>0\).
2. (C\({}_{1}\)) There exists a partition \(\lambda\) such that \(\lambda_{1}\leq k\) and \(\langle\mathscr{F}\{s_{\lambda}\},s_{\mu}\rangle\neq 0\).
3. (C\({}_{2}\)) There exists a partition \(\lambda\) such that \(\ell(\lambda)\leq k\) and \(\langle\mathscr{F}\{e_{\lambda}\},s_{\mu}\rangle\neq 0\).
4. (C\({}_{3}\)) There exists a function \(M\colon\{0,1\}^{k}\to\mathbb{N}\) such that \(s_{\mu}\) appears in \[\prod_{j\in\{0,1\}^{k}}\begin{cases}h_{M(j)}&\text{if $j_{1}+\cdots+j_{\ell}$ is even}\\ e_{M(j)}&\text{if $j_{1}+\cdots+j_{\ell}$ is odd}\end{cases}.\]
5. (C\({}_{4}\)) There exist partitions \(\nu,\nu^{\prime}\) such that \(\ell(\nu),\ell(\nu^{\prime})\leq 2^{k-1}\) and \(s_{\mu}\) appears in \(h_{\nu}e_{\nu^{\prime}}\).
6. (B) \(D(\mu)\leq 2^{k-1}\).
By the definition of the Frobenius transform, (A) is equivalent to (C\({}_{1}\)). By the linearity of \(\mathscr{F}\) and Lemma 6.6, (C\({}_{1}\)) is equivalent to (C\({}_{2}\)). Since each term of (15) is Schur positive, \(s_{\mu}\) appears in the sum if and only if it appears in one or more of its terms, so (C\({}_{2}\)) is equivalent to (C\({}_{3}\)). By taking the parts of \(\nu\) to be the nonzero values of \(M(j)\) for \(j_{1}+\cdots+j_{\ell}\) even and taking the parts of \(\nu^{\prime}\) to be the nonzero values of \(M(j)\) for \(j_{1}+\cdots+j_{\ell}\) odd, we see that (C\({}_{3}\)) is equivalent to (C\({}_{4}\)). By Lemma 6.7, (C\({}_{4}\)) is equivalent to (B). Putting it all together, (A) is equivalent to (B), as desired.
## 7. Computations of the inverse surjective Frobenius transform
We will now compute \(\mathscr{F}_{\text{Sur}}^{-1}\left\{e_{\lambda}\right\}\) and \(\mathscr{F}_{\text{Sur}}^{-1}\left\{h_{\lambda}\right\}\). In order to state our formulas, first we must recall some definitions from combinatorics on words. For a more complete introduction, see [11, Chapter 5].
**Definition 7.1**.: Let \(A\) be a set. A _word_ over the alphabet \(A\) is a sequence \(w=w_{1}\cdots w_{n}\) with \(w_{1},\ldots,w_{n}\in A\). Given a letter \(a\in A\), we write \(m_{a}(w)\) to denote the number of times the letter \(a\) appears in \(w\).
**Definition 7.2** ([12]).: Let \(A\) be a totally ordered set. We say that a nonempty word \(w=w_{1}\cdots w_{n}\) over the alphabet \(A\) is a _Lyndon word_ if it is lexicographically less than its suffix \(w_{i}\cdots w_{n}\) for \(i=2,\ldots,n\). Let \(\operatorname{Lyndon}(A)\) be the set of all Lyndon words over the alphabet \(A\).
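For instance, over the totally ordered alphabet \(A=\{1,2\}\), the Lyndon words of length at most \(3\) are \(1\), \(2\), \(12\), \(112\), and \(122\).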
**Theorem 7.3** (Witt's Formula [20]).: _Let \(\ell>0\) and let \(t_{1},\ldots,t_{\ell}\) be variables. For any word \(w=i_{1}\cdots i_{n}\) over \([\ell]\), denote by \(t^{w}\) the product_
\[t_{i_{1}}\cdots t_{i_{n}}=\prod_{i=1}^{\ell}t_{i}^{m_{i}(w)}.\]
_Then the evaluation \((L_{1}+L_{2}+L_{3}+\cdots)(t_{1},\ldots,t_{\ell})\) is equal to_
\[\sum_{w\in\operatorname{Lyndon}([\ell])}t^{w}.\]
**Theorem 7.4** (Chen-Fox-Lyndon Theorem [4]).: _Let \(A\) be a totally ordered set. Any word \(w\) over the alphabet \(A\) has a unique Lyndon factorization; that is, an expression as a (lexicographically) non-increasing concatenation of Lyndon words._
**Definition 7.5**.: Let \(w\) be a word over a totally ordered alphabet. Define \(\pi(w)\) to be the partition obtained by listing the number of times each Lyndon word appears in the Lyndon factorization of \(w\), and then sorting the resulting positive numbers in decreasing order.
**Example 7.6**.: If \(A=\{1,2\}\) and \(w=212121211111\), then the Lyndon factorization of \(w\) is \(w=(2)(12)(12)(12)(1)(1)(1)(1)(1)\). The Lyndon words appearing in this factorization are \(2\), \(12\), and \(1\), which appear once, three times, and five times, respectively, so \(\pi(w)=(5,3,1)\).
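As a quick computational aid (not part of the argument), the factorization in Example 7.6 can be reproduced with Duval's algorithm; the following is a minimal Python sketch, with function names of our own choosing.

```python
from collections import Counter

def lyndon_factorization(w):
    # Duval's algorithm: returns the Chen-Fox-Lyndon factorization of w
    # as a list of Lyndon words, in (lexicographically non-increasing) order.
    factors, i, n = [], 0, len(w)
    while i < n:
        k, j = i, i + 1
        while j < n and w[k] <= w[j]:
            k = i if w[k] < w[j] else k + 1
            j += 1
        while i <= k:
            factors.append(w[i:i + j - k])
            i += j - k
    return factors

def pi_w(w):
    # The partition pi(w) of Definition 7.5: multiplicities of the distinct
    # Lyndon factors, sorted in decreasing order.
    return sorted(Counter(lyndon_factorization(w)).values(), reverse=True)

print(lyndon_factorization("212121211111"))  # ['2', '12', '12', '12', '1', '1', '1', '1', '1']
print(pi_w("212121211111"))                  # [5, 3, 1]
```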
Now, we are ready to compute \(\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{e_{\lambda}\right\}\) and \(\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{h_{\lambda}\right\}\). The cleanest way to state our results is to use the series \(H(t)\) from Definition 2.7.
**Theorem 7.7**.: _Let \(\ell>0\) and let \(t_{1},\ldots,t_{\ell}\) be variables._
1. _For any word_ \(w=i_{1}\cdots i_{n}\) _over_ \([\ell]\)_, denote by_ \(t^{w}\) _the product_ \[t_{i_{1}}\cdots t_{i_{n}}=\prod_{i=1}^{\ell}t_{i}^{m_{i}(w)}.\] _Then_ \[\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{\frac{1}{\prod_{i=1}^{\ell}H(t_{i})} \right\}=\frac{1}{\prod_{w\in\mathrm{Lyndon}([\ell])}H(t^{w})}.\]
2. _For any word_ \(w=(i_{1},j_{1})\cdots(i_{n},j_{n})\) _over_ \([\ell]^{2}\) _(ordered lexicographically), denote by_ \(t^{w}\) _the product_ \[(t_{i_{1}}t_{j_{1}})\cdots(t_{i_{n}}t_{j_{n}})=\prod_{i,j=1}^{\ell}(t_{i}t_{j })^{m_{(i,j)}(w)}.\] _Then_ \[\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{\prod_{i=1}^{\ell}H(t_{i})\right\}= \frac{\prod_{w\in\mathrm{Lyndon}([\ell])}H(t^{w})}{\prod_{w\in\mathrm{Lyndon}( [\ell]^{2})}H(t^{w})}.\]
Before we proceed to the proof of Theorem 7.7, we will prove some corollaries that illustrate how to use it.
**Corollary 7.8**.: _For any \(r\), we have_
\[\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{h_{r}\right\}=\sum_{k=0}^{\lfloor r/2 \rfloor}(-1)^{k}h_{r-2k}e_{k}.\]
Proof.: Take \(\ell=1\) in Theorem 7.7(b). In this case, \(1\) is the only Lyndon word over \([1]\) and \((1,1)\) is the only Lyndon word over \([1]^{2}\). So, with \(t=t_{1}\),
\[\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{H(t)\right\}=\frac{H(t)}{H(t^{2})}=H(t) E(-t^{2}).\]
The result follows from taking the coefficient of \(t^{r}\).
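For instance, the first few cases of Corollary 7.8 read
\[\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{h_{1}\right\}=h_{1},\qquad\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{h_{2}\right\}=h_{2}-e_{1},\qquad\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{h_{3}\right\}=h_{3}-h_{1}e_{1},\qquad\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{h_{4}\right\}=h_{4}-h_{2}e_{1}+e_{2}.\]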
**Corollary 7.9**.: _Let \(\lambda=(\lambda_{1},\ldots,\lambda_{\ell})\) be a sequence of nonnegative integers (not necessarily weakly decreasing). Then_
\[\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{e_{\lambda}\right\}=\sum_{w\in W}(-1)^{ \left\lvert\lambda\right\rvert-\left\lvert\pi(w)\right\rvert}e_{\pi(w)},\]
_where \(W\) is the set of all words \(w\) over \([\ell]\) such that \(m_{i}(w)=\lambda_{i}\) for all \(i\in[\ell]\)._
Proof.: First, the right-hand side of Theorem 7.7(a) can be written
\[\prod_{w\in\mathrm{Lyndon}([\ell])}\left(\sum_{r=0}^{\infty}(-1)^{r}e_{r}(t^{w})^{r}\right)=\sum_{\mathbf{r}}\prod_{w\in\mathrm{Lyndon}([\ell])}(-1)^{\mathbf{r}(w)}e_{\mathbf{r}(w)}(t^{w})^{\mathbf{r}(w)} \tag{16}\]
where the sum is over all finitely supported functions \(\mathbf{r}\colon\operatorname{Lyndon}([\ell])\to\mathbb{N}\). By the Chen-Fox-Lyndon theorem (Theorem 7.4), there is a bijection
\[\{\text{words over }[\ell]\}\underset{\phi}{\longleftrightarrow}\{\text{ finitely supported functions }\operatorname{Lyndon}([\ell])\to\mathbb{N}\},\]
where \((\phi(w))(w^{\prime})\) is the number of times that \(w^{\prime}\) appears in the Lyndon factorization of \(w\). Using this bijection, we can rewrite (16) as a sum over all words \(w\) over \([\ell]\):
\[\mathscr{F}_{\operatorname{Sur}}^{-1}\left\{\frac{1}{\prod_{i=1}^{\ell}H(t_{i })}\right\}=\sum_{w}(-1)^{|\pi(w)|}e_{\pi(w)}t^{w}.\]
The result follows from taking the coefficient of \(t_{1}^{\lambda_{1}}\cdots t_{\ell}^{\lambda_{\ell}}\).
**Corollary 7.10**.: _Let \(\ell\geq 0\). Then_
\[\mathscr{F}_{\operatorname{Sur}}^{-1}\left\{e_{1}^{\ell}\right\}=e_{1}(e_{1} -1)\cdots(e_{1}-\ell+1).\]
Proof.: Take \(\lambda=(1^{\ell})\) in Corollary 7.9. Then \(W=\mathfrak{S}_{\ell}\). Moreover, the Lyndon factorization of any \(w\in\mathfrak{S}_{\ell}\) contains only distinct Lyndon words, one beginning with each left-to-right minimum of \(w\) (that is, each letter \(w_{i}\) such that \(w_{i}<w_{j}\) for all \(j<i\)). So \(\pi(w)=(1^{k})\), where \(k\) is the number of left-to-right minima of \(w\). It follows that
\[\mathscr{F}_{\operatorname{Sur}}^{-1}\left\{e_{1}^{\ell}\right\}=\sum_{k}(-1)^{\ell-k}\genfrac{[}{]}{0.0pt}{}{\ell}{k}e_{1}^{k},\]
where \(\genfrac{[}{]}{0.0pt}{}{\ell}{k}\) (a _Stirling number of the first kind_) is the number of permutations \(w\in\mathfrak{S}_{\ell}\) with \(k\) left-to-right minima. This is well-known [6, Chapter 6.1] to be equal to \(e_{1}(e_{1}-1)\cdots(e_{1}-\ell+1)\).
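For example, when \(\ell=2\), the permutation \(12\) has one left-to-right minimum and \(21\) has two, so the sum above gives \(\mathscr{F}_{\operatorname{Sur}}^{-1}\left\{e_{1}^{2}\right\}=-e_{1}+e_{1}^{2}=e_{1}(e_{1}-1)\), as claimed.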
More generally, we have the following.
**Corollary 7.11**.: _Let \(\lambda=(\lambda_{1},\ldots,\lambda_{\ell})\) be a sequence of nonnegative integers (not necessarily weakly decreasing) and let \(k\geq 0\). For any word \(w\) over \(\{0\}\cup[\ell]\), write \(p_{+}(w)\) to denote the longest prefix of \(w\) that does not contain the letter \(0\). Then_
\[\mathscr{F}_{\operatorname{Sur}}^{-1}\left\{e_{\lambda}e_{1}^{k}\right\}= \left(\sum_{w\in W}(-1)^{|\lambda|-|\pi(p_{+}(w))|}e_{\pi(p_{+}(w))}\right) \cdot e_{1}(e_{1}-1)\cdots(e_{1}-k+1),\]
_where \(W\) is the set of all words \(w\) over \(\{0\}\cup[\ell]\) such that \(m_{0}(w)=k\) and \(m_{i}(w)=\lambda_{i}\) for all \(i\in[\ell]\)._
Proof.: Let
\[\lambda^{\prime}=(\underbrace{1,\ldots,1}_{k},\lambda_{1},\ldots,\lambda_{ \ell})\]
and let \(W^{\prime}\) be the set of all words \(w\) over \([k+\ell]\) such that \(m_{i}(w)=\lambda_{i}^{\prime}\) for all \(i\in[k+\ell]\). By Corollary 7.9, we have
\[\mathscr{F}_{\operatorname{Sur}}^{-1}\left\{e_{\lambda}e_{1}^{k}\right\}=\sum_ {w\in W^{\prime}}(-1)^{|\lambda|+k-|\pi(w)|}e_{\pi(w)}. \tag{17}\]
Now, define the function
\[\phi\colon W^{\prime}\to W\times\mathfrak{S}_{k}\]
as follows. For any word \(w\in W^{\prime}\), let \(\phi_{1}(w)\in W\) be the word formed from \(w\) by replacing all the letters \(1,\ldots,k\) with \(0\) and replacing all copies of the letters \(k+1,\ldots,k+\ell\) with \(1,\ldots,\ell\) respectively. Let \(\phi_{2}(w)\in\mathfrak{S}_{k}\) be the word formed
from \(w\) by deleting all copies of the letters \(k+1,\ldots,k+\ell\). It is easy to see that \(\phi=(\phi_{1},\phi_{2})\) is a bijection. Moreover, for all \(w\in W^{\prime}\), we have that \(\pi(w)\) is the concatenation of \(\pi(p_{+}(\phi_{1}(w)))\) and \(\pi(\phi_{2}(w))\). Hence, we may factor (17):
\[\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{e_{\lambda}e_{1}^{k}\right\}=\left(\sum _{w\in W}(-1)^{|\lambda|-|\pi(p_{+}(w))|}e_{\pi(p_{+}(w))}\right)\left(\sum_{w \in\mathfrak{S}_{k}}(-1)^{k-|\pi(w)|}e_{|\pi(w)|}\right).\]
As in the proof of Corollary 7.10, the second factor is equal to \(e_{1}(e_{1}-1)\cdots(e_{1}-k+1)\). The result follows.
**Corollary 7.12**.: _Let \(f\in\Lambda\) and \(k\geq 0\). Then \(e_{1}(e_{1}-1)\cdots(e_{1}-k+1)\) divides \(f\) if and only if \(e_{1}^{k}\) divides \(\mathscr{F}_{\mathrm{Sur}}\left\{f\right\}\)._
Proof.: Let \(n=\deg f\). Let \(I\) be the set of all symmetric functions of degree at most \(n\) that are divisible by \(e_{1}(e_{1}-1)\cdots(e_{1}-k+1)\) and let \(J\) be the set of all symmetric functions of degree at most \(n\) that are divisible by \(e_{1}^{k}\). Now, \(J\) is spanned by symmetric functions of the form \(e_{\lambda}e_{1}^{k}\), where \(\lambda\) is a partition with \(|\lambda|\leq n-k\). By Corollary 7.11, we have \(\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{J\right\}\subset I\). Since \(I\) and \(J\) have the same dimension (and \(\Lambda/J\) is torsion-free), it follows that \(\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{J\right\}=I\). Thus, \(\mathscr{F}_{\mathrm{Sur}}\left\{I\right\}=J\). The result follows.
We are almost ready to prove Theorem 7.7. First, we will restate some lemmas from Loehr and Remmel's 2011 "exposé" on plethysm [10].
**Lemma 7.13** ([10, Example 1]).: _There is a unique binary operation \(\bullet[\bullet]\colon\Lambda\times\mathbb{Z}[\![t_{1},\cdots,t_{\ell}]\!] \to\mathbb{Z}[\![t_{1},\cdots,t_{\ell}]\!]\) satisfying the following properties._
1. _For any fixed_ \(g\in\mathbb{Z}[\![t_{1},\cdots,t_{\ell}]\!]\)_, the function_ \(\bullet[g]\colon\Lambda\to\mathbb{Z}[\![t_{1},\cdots,t_{\ell}]\!]\) _is a ring homomorphism._
2. _For any fixed_ \(k>0\)_, the function_ \(p_{k}[\bullet]\colon\mathbb{Z}[\![t_{1},\cdots,t_{\ell}]\!]\to\mathbb{Z}[\![t _{1},\cdots,t_{\ell}]\!]\) _is a ring homomorphism which preserves summable infinite series._
3. _For any_ \(k\) _and_ \(i\)_, we have_ \(p_{k}[t_{i}]=t_{i}^{k}\)_._
We refer to the operation from Lemma 7.13 as _plethysm_, because it is closely related to the plethysm of symmetric functions \(\bullet[\bullet]\colon\Lambda\times\Lambda\to\Lambda\) mentioned in Section 2. For example, the two operations are related by a kind of associative property:
**Lemma 7.14** (Associativity of Plethysm, [10, Theorem 5]).: _Let \(f,g\in\Lambda\) and \(h\in\mathbb{Z}[\![t_{1},\cdots,t_{\ell}]\!]\). Then_
\[f[g[h]]=(f[g])[h]\in\mathbb{Z}[\![t_{1},\cdots,t_{\ell}]\!].\]
If \(g\) has positive integer coefficients, then the plethysm \(f[g]\) can be described as an evaluation:
**Lemma 7.15** (Monomial Substitution Rule, [10, Theorem 7]).: _Let \(f\in\Lambda\) and let \(M_{1},M_{2},M_{3},\ldots\in\mathbb{Z}[\![t_{1},\cdots,t_{\ell}]\!]\) be a (finite or infinite) sequence of monic monomials. Then_
\[f\left[\sum_{n}M_{n}\right]=f(M_{1},M_{2},M_{3},\ldots),\]
_where the right-hand side denotes the evaluation of \(f\) at \(M_{1},M_{2},M_{3},\ldots\)._
In general, plethysm can be expressed as a Hall inner product:
**Lemma 7.16**.: _Let \(M_{1},M_{2},M_{3},\ldots\in\mathbb{Z}[t_{1},\cdots,t_{\ell}]\) be a (finite or infinite) sequence of monic monomials and let \(a_{1},a_{2},a_{3},\ldots\in\mathbb{Z}\) be a sequence of the same length. Suppose that the series \(\sum_{n}a_{n}M_{n}\) is summable in \(\mathbb{Z}[\![t_{1},\cdots,t_{\ell}]\!]\). Then for any \(f\in\Lambda\), we have_
\[f\left[\sum_{n}a_{n}M_{n}\right]=\left\langle f,\prod_{n}H(M_{n})^{a_{n}} \right\rangle. \tag{18}\]
Proof.: Fix \(M_{1},M_{2},M_{3},\ldots\) and let \(a_{1},a_{2},a_{3},\ldots\) vary. Given any fixed monomial \(M\in\mathbb{Z}[t_{1},\ldots,t_{\ell}]\), it is easy to see that the coefficient of \(M\) on either side of (18) is a polynomial in \(a_{1},a_{2},a_{3},\ldots\). Hence, we may assume that \(a_{n}\geq 0\) for all \(n\). Then by replacing each \(M_{n}\) with \(a_{n}\) copies of \(M_{n}\), we may make the even stronger assumption that \(a_{n}=1\) for all \(n\). In other words, we wish to prove
\[f\left[\sum_{n}M_{n}\right]=\left\langle f,\prod_{n}H(M_{n})\right\rangle. \tag{19}\]
By linearity, it suffices to show Equation (19) in the case that \(f=m_{\lambda}\) is a monomial symmetric function. Then, since \(\langle m_{\lambda},h_{\mu}\rangle=\delta_{\lambda\mu}\), the right-hand side of the equation becomes \(m_{\lambda}(M_{1},M_{2},M_{3},\ldots)\). The result follows from the monomial substitution rule (Lemma 7.15).
Proof of Theorem 7.7.: In what follows, let
\[L=L_{1}+L_{2}+L_{3}+\cdots\in\overline{\Lambda}\]
and
\[\tilde{L}=\omega(L_{1})-\omega(L_{2})+\omega(L_{3})-\cdots\in\overline{ \Lambda}.\]
1. Let \(\overline{\omega}\colon\Lambda\to\Lambda\) be the involution given by \[\overline{\omega}(f)=f[-p_{1}]=(-1)^{\deg f}\omega(f)\] for all homogeneous \(f\in\Lambda\). Clearly, \(\overline{\omega}\) is a ring automorphism and \[\overline{\omega}(H(t))=E(-t)=\frac{1}{H(t)}.\] We wish to show that (20) \[(\overline{\omega}\circ\mathscr{F}_{\mathrm{Sur}}^{-1}\circ\overline{\omega}) \left\{\prod_{i=1}^{\ell}H(t_{i})\right\}=\prod_{w\in\mathrm{Lyndon}([\ell])} H(t^{w}).\] To do so, let \(f\in\Lambda\) be arbitrary. It is sufficient to show that each side of (20) has the same Hall inner product with \(f\). By Theorem 4.2(c), \(\overline{\omega}\circ\mathscr{F}_{\mathrm{Sur}}^{-1}\circ\overline{\omega}\) is adjoint to
plethysm by \(L\), so
\[\left\langle f,(\overline{\omega}\circ\mathscr{F}_{\mathrm{Sur}}^{-1} \circ\overline{\omega})\left\{\prod_{i=1}^{\ell}H(t_{i})\right\}\right\rangle =\left\langle f[L],\prod_{i=1}^{\ell}H(t_{i})\right\rangle\] \[=(f[L])[t_{1}+\cdots+t_{\ell}]\] \[=f[L[t_{1}+\cdots+t_{\ell}]]\] \[=f[L(t_{1},\ldots,t_{\ell})]\] \[=f\left[\sum_{w\in\mathrm{Lyndon}([\ell])}t^{w}\right]\] \[=\left\langle f,\prod_{w\in\mathrm{Lyndon}([\ell])}H(t^{w}) \right\rangle,\]
where we used Lemma 7.16, Lemma 7.14, Lemma 7.15, and Theorem 7.3. This completes the proof of (20) and of part (a) of the theorem.
(b) Again, let \(f\in\Lambda\) be arbitrary. It is sufficient to show that each side of the equation has the same Hall inner product with \(f\). By Theorem 4.2(c), \(\mathscr{F}_{\mathrm{Sur}}^{-1}\) is adjoint to plethysm by \(\tilde{L}\), so
\[\left\langle f,\mathscr{F}_{\mathrm{Sur}}^{-1}\left\{\prod_{i=1} ^{\ell}H(t_{i})\right\}\right\rangle =\left\langle f[\tilde{L}],\prod_{i=1}^{\ell}H(t_{i})\right\rangle\] \[=(f[\tilde{L}])[t_{1}+\cdots+t_{\ell}] \tag{21}\] \[=f[\tilde{L}[t_{1}+\cdots+t_{\ell}]]\]
Now, let us describe the monomials that appear in the plethysm \(\tilde{L}[t_{1}+\cdots+t_{\ell}]\). By the definition of Lyndon symmetric functions, we have
\[\tilde{L} =\sum_{n}(-1)^{n-1}\omega(L_{n})\] \[=\sum_{n}\frac{(-1)^{n-1}}{n}\sum_{d|n}\mu(d)\omega(p_{d}^{n/d})\] \[=\sum_{n}\frac{(-1)^{n-1}}{n}\sum_{d|n}\mu(d)(-1)^{(d-1)n/d}p_{d} ^{n/d}\] \[=\sum_{n}\frac{1}{n}\sum_{d|n}\mu(d)(-1)^{n/d-1}p_{d}^{n/d}.\]
Now, the term \((-1)^{n/d-1}\) is equal to \(-1\) if \(d\) divides \(\frac{n}{2}\) and \(1\) otherwise. Hence,
\[\tilde{L}=\sum_{n}\frac{1}{n}\left(\sum_{d|n}\mu(d)p_{d}^{n/d}-2\sum_{d|\frac{ n}{2}}\mu(d)p_{d}^{n/d}\right)\]
where the sum over \(d\mid\frac{n}{2}\) is understood to be empty if \(n\) is odd. Splitting this into two sums and then performing the change of variables \(\frac{n}{2}\to n\) in the second, we
obtain
\[\tilde{L} =\sum_{n}\frac{1}{n}\sum_{d|n}\mu(d)p_{d}^{n/d}-\sum_{n}\frac{2}{n} \sum_{d|\frac{n}{2}}\mu(d)p_{d}^{n/d}\] \[=\sum_{n}\frac{1}{n}\sum_{d|n}\mu(d)p_{d}^{n/d}-\sum_{n}\frac{1}{n} \sum_{d|n}\mu(d)p_{d}^{2n/d}\] \[=L-L[p_{1}^{2}].\]
By Lemma 7.14 and Theorem 7.3,
\[\tilde{L}[t_{1}+\cdots+t_{\ell}] =L[t_{1}+\cdots+t_{\ell}]-(L[p_{1}^{2}])[t_{1}+\cdots+t_{\ell}]\] \[=L[t_{1}+\cdots+t_{\ell}]-L[p_{1}^{2}[t_{1}+\cdots+t_{\ell}]]\] \[=L[t_{1}+\cdots+t_{\ell}]-L[(t_{1}+\cdots+t_{\ell})^{2}]\] \[=\sum_{w\in\text{Lyndon}([\ell])}t^{w}-\sum_{w\in\text{Lyndon}( [\ell]^{2})}t^{w}.\]
Substituting into (21) and using Lemma 7.16 one last time, we finally obtain
\[\left\langle f,\mathscr{F}_{\text{Sur}}^{-1}\left\{\prod_{i=1}^{ \ell}H(t_{i})\right\}\right\rangle =f\left[\sum_{w\in\text{Lyndon}([\ell])}t^{w}-\sum_{w\in\text{ Lyndon}([\ell]^{2})}t^{w}\right]\] \[=\left\langle f,\frac{\prod_{w\in\text{Lyndon}([\ell])}H(t^{w})} {\prod_{w\in\text{Lyndon}([\ell]^{2})}H(t^{w})}\right\rangle,\]
completing the proof.
|
2306.16742
|
Nodal solutions with synchronous sign changing components and Constant
sign solutions for singular Gierer-Meinhardt type system
|
We establish the existence of three solutions for singular semilinear
elliptic system, two of which are of opposite constant-sign. Under a strong
singularity effect, the third solution is nodal with synchronous sign
components. The approach combines sub-supersolutions method and Leray-Schauder
topological degree involving perturbation argument.
|
Abdelkrim Moussaoui
|
2023-06-29T07:34:27Z
|
http://arxiv.org/abs/2306.16742v1
|
Nodal solutions with synchronous sign changing components and constant sign solutions for singular Gierer-Meinhardt type system
###### Abstract.
We establish the existence of three solutions for a singular semilinear elliptic system, two of which are of opposite constant sign. Under a strong singularity effect, the third solution is nodal with synchronous sign components. The approach combines the sub-supersolution method and Leray-Schauder topological degree theory, involving a perturbation argument.
Key words and phrases: Singularity, Gierer-Meinhardt system, perturbation, sub-supersolutions, topological degree theory. 2020 Mathematics Subject Classification: 35J75, 35J62, 35J92.
## 1. Introduction
Let \(\Omega\) be a bounded domain in \(\mathbb{R}^{N}\) (\(N\geq 2\)) with a smooth boundary \(\partial\Omega\). We consider the following system of semilinear elliptic equations
(P) \[\left\{\begin{array}{ll}-\Delta u+u=sgn(v)\frac{|u|^{\alpha_{1}}}{|v|^{ \beta_{1}}}&\mbox{in }\Omega\\ -\Delta v+v=sgn(u)\frac{|u|^{\alpha_{2}}}{|v|^{\beta_{2}}}&\mbox{in }\Omega\\ u,v=0&\mbox{on }\partial\Omega,\end{array}\right.\]
where \(\Delta\) stands for the Laplacian differential operator on \(\mathcal{H}^{1}_{0}(\Omega)\), \(sgn(\cdot)\) denotes the sign function and the exponents \(\alpha_{i}\in(-1,1)\) and \(\beta_{i}\in(0,1)\) satisfy the following condition
\[\alpha_{i}+\beta_{i}<1\ \mbox{ and }\ 0>\alpha_{i}-\beta_{i}>-1,\mbox{ for }i=1,2. \tag{1.1}\]
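For instance, the choice \(\alpha_{i}=-\frac{1}{4}\), \(\beta_{i}=\frac{1}{2}\) (\(i=1,2\)) satisfies (1.1).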
System (P) exhibits a singularity that, without loss of generality, is located at zero. This makes any study of sign properties of solutions of (P) difficult, especially for solutions that change sign, since they inevitably pass through the singularity. From a structural perspective, (P) is closely related to the Gierer-Meinhardt system, which originally arose in studies of biological pattern formation, describing the coupled activator-inhibitor behavior of many systems in cell biology and physiology [11, 14, 19]. It is characterized by
\[\left\{\begin{array}{ll}-d_{1}\Delta u+a_{1}u=\frac{u^{\alpha_{1}}}{v^{ \beta_{1}}}&\mbox{in }\Omega\\ -d_{2}\Delta v+a_{2}v=\frac{u^{\alpha_{2}}}{v^{\beta_{2}}}&\mbox{in }\Omega\end{array}\right. \tag{1.2}\]
subject to Neumann boundary conditions \(\frac{\partial u}{\partial\eta}=\frac{\partial v}{\partial\eta}=0\) on \(\partial\Omega\), where \(u\) and \(v\) represent the scaled activator and inhibitor concentrations, respectively, \(d_{1},d_{2}\) are diffusion coefficients with \(d_{1}\ll d_{2}\) and the exponents \(\alpha_{i},\beta_{i}\in\mathbb{R}\) satisfy the relations
\[\beta_{1}\alpha_{2}<\left(1-\alpha_{1}\right)\left(1-\beta_{2}\right)\ \ \text{with}\ \ \alpha_{i}\geq 0\geq\beta_{i},\ i=1,2.\]
Depending on the boundedness of the diffusion coefficient \(d_{2}\), the existence, stability and/or dynamics of spike solutions have been widely studied for system (1.2). We refer to [9, 12, 29, 30, 33] when \(d_{2}\rightarrow+\infty\), while in the bounded case \(d_{2}<+\infty\) we quote [13, 32, 31, 34, 35]. The case \(d_{1}=d_{2}=1\) for the Neumann system (1.2) was recently examined in [26], establishing the existence of at least three positive solutions. In the whole space \(\Omega=\mathbb{R}^{N}\), existence, uniqueness and structural properties of solutions for Gierer-Meinhardt type systems are studied in [27] for \(N\geq 3\), in [4, 5] for \(N=1,2\), and in [16, 17] for \(N=3\).
The Gierer-Meinhardt system (1.2) has interesting and challenging mathematical properties, especially with Dirichlet boundary conditions, when the nonlinear terms become singular near the boundary. In this context, numerous papers are devoted to the study of quantitative and qualitative properties of solutions of system (P) subject to Dirichlet boundary conditions \(u,v=0\) on \(\partial\Omega\). Along this direction, among many other interesting works, we refer the interested reader to [3, 8, 28] and the references therein.
The passage from (1.2) to (P) introduces a dependence of the nonlinearities on the sign of the components of the solutions. Admittedly, this further complicates our study, since the structure of system (P) switches from the cooperative case to the competitive one, depending on the sign of the solutions considered (see, e.g., [1, 7, 10, 15, 21, 22, 23]). However, the involvement of the sign of the solution components promotes the emergence of nonpositive solutions of (P), such as nodal solutions, which have rarely been studied for singular systems. By definition, a nodal solution is neither positive nor negative. Thus, in the scalar case, it is necessarily a sign changing function. In the context of system (P), however, the concept of a nodal solution is more nuanced, since it incorporates several types of solutions depending on the sign of their components. Actually, [25] is the only paper that has considered this issue for singular systems. There, the existence of nodal solutions is established for a class of semilinear singular systems by means of a trapping region formed by specific sign changing sub-supersolution pairs. Exploiting spectral properties of the Laplacian operator as well as adequate truncations, it is shown in [25] that nodal solutions vanish only on negligible sets. This is an essential point enabling the investigation of nodal solutions for singular problems. Hence, by a solution of problem (P) we mean \((u,v)\in\mathcal{H}^{1}_{0}(\Omega)\times\mathcal{H}^{1}_{0}(\Omega)\) such that \(u\) and/or \(v\) vanish only on sets of measure zero
and
\[\int_{\Omega}(\nabla u\nabla\varphi_{1}+u\varphi_{1})\ \mathrm{d}x = \int_{\Omega}sgn(v)\frac{|u|^{\alpha_{1}}}{|v|^{\beta_{1}}}\varphi_{1}\ \mathrm{d}x,\] \[\int_{\Omega}(\nabla v\nabla\varphi_{2}+v\varphi_{2})\ \mathrm{d}x = \int_{\Omega}sgn(u)\frac{|u|^{\alpha_{2}}}{|v|^{\beta_{2}}}\varphi_{2}\ \mathrm{d}x,\]
for all \(\varphi_{i}\in\mathcal{H}^{1}_{0}(\Omega)\), for \(i=1,2\), provided the integrals in the right-hand side of the above identities exist.
The aim of this work is to establish a multiplicity result for the singular system (P) with precise sign information. We provide three solutions of system (P), two of which are of opposite constant sign. The sign property of the third solution is closely related to the structure of system (P), which, in turn, depends both on the sign of the exponents \(\alpha_{i},\beta_{i}\) and on the sign of the components of solutions of (P). Specifically, when \(\alpha_{i}\leq 0\), besides the uniqueness of the positive and negative solutions, we show that the third solution is nodal with synchronous sign changing components. To our knowledge, this topic is new. Nodal solutions with such a property have never been discussed for systems, whether singular or regular (without singularities), even for those with a variational structure.
The main result is stated as follows.
**Theorem 1**.: _Under assumption (1.1), problem (P) has at least three nontrivial solutions: \((u_{+},v_{+})\in int\mathcal{C}^{1}_{+}(\overline{\Omega})\times int\mathcal{C}^{1}_{+}(\overline{\Omega})\), \((u_{-},v_{-})\in-int\mathcal{C}^{1}_{+}(\overline{\Omega})\times-int\mathcal{C}^{1}_{+}(\overline{\Omega})\) and \((u_{*},v_{*})\in\mathcal{H}^{1}_{0}(\Omega)\times\mathcal{H}^{1}_{0}(\Omega)\). If \(\alpha_{i}\leq 0\) then the opposite constant sign solutions \((u_{+},v_{+})\) and \((u_{-},v_{-})\) are unique and the third solution \((u_{*},v_{*})\) is nodal with synchronous sign components, that is,_
\[u_{*}v_{*}>0\ \ \text{a.e. in }\Omega. \tag{1.3}\]
The proof of Theorem 1 combines the sub-supersolution method and topological degree theory. By choosing suitable functions and adjusting adequate constants, we construct two opposite constant sign sub-supersolution pairs, on the basis of which positive and negative rectangles are formed. The latter provide a localization of a positive and a negative solution \((u_{+},v_{+})\) and \((u_{-},v_{-})\) of (P), whose existence is derived from the sub-supersolution theorem for singular systems in [15, Theorem 2]. When \(\alpha_{i}\leq 0\) in (1.1), the uniqueness of the solutions \((u_{+},v_{+})\) and \((u_{-},v_{-})\) is established by a monotonicity argument.
The third solution of (P) is obtained via topological degree theory. It is located in the region between the positive and the negative rectangles. This is achieved by first introducing a parameter \(\varepsilon>0\) in (P), thus producing a regularized system (P\({}_{\varepsilon}\)) whose study is relevant for problem (P). Then, we prove that the degree on a ball \(\mathcal{B}_{R_{\varepsilon}}\), encompassing all potential solutions of (P\({}_{\varepsilon}\)), is \(0\), while the degree on \(\mathcal{B}_{R_{\varepsilon}}\) with the region located between the aforementioned positive and negative rectangles excluded is equal to \(1\). By the excision property of the Leray-Schauder degree, this leads to the existence of a nontrivial solution \((u_{\varepsilon},v_{\varepsilon})\) of (P\({}_{\varepsilon}\)). Here, it is important to note that, unlike [7, 24, 26], the independence of the radius \(R_{\varepsilon}\) of \(\varepsilon\) is not required. Then, through a priori estimates, the dominated convergence theorem as well as the \(S_{+}\)-property of the negative Laplacian, we may pass to the limit as \(\varepsilon\to 0\) in (P\({}_{\varepsilon}\)). This leads to a solution \((u_{*},v_{*})\) of (P) which, according to its localization, does not coincide with the above mentioned solutions \((u_{+},v_{+})\) and \((u_{-},v_{-})\). Thus, \((u_{*},v_{*})\) is a third solution of (P). Furthermore, when \(\alpha_{i}\leq 0\) in (1.1), the uniqueness of the aforementioned constant-sign solutions forces \((u_{*},v_{*})\) to be nodal in the sense that both components \(u_{*}\) and \(v_{*}\) are nontrivial and cannot both be of the same constant sign. Moreover, exploiting the sign-coupled structure of system (P), we conclude that \(u_{*}\) and \(v_{*}\) are both sign changing, with synchronous signs.
The rest of this article is organized as follows. Section 2 deals with the existence of solutions of the regularized system (P\({}_{\varepsilon}\)), while Section 3 provides the multiplicity result for system (P).
## 2. An auxiliary system
In the sequel, the Banach spaces \(\mathcal{H}^{1}_{0}(\Omega)\) and \(L^{2}(\Omega)\) are equipped with the usual norms \(\|\cdot\|_{1,2}\) and \(\|\cdot\|_{2}\), respectively. We also utilize the Hölder spaces \(C^{1}(\overline{\Omega})\) and \(C^{1,\tau}(\overline{\Omega})\), \(\tau\in(0,1)\), as well as the order cone \(\mathcal{C}^{1}_{+}(\overline{\Omega})=\{w\in C^{1}(\overline{\Omega}):w(x)\geq 0\) for all \(x\in\overline{\Omega}\}\), which has a non-empty interior described as follows:
\[int\mathcal{C}^{1}_{+}(\overline{\Omega})=\{w\in\mathcal{C}^{1}_{+}(\overline {\Omega}):w(x)>0\text{ for all }x\in\overline{\Omega}\}.\]
Hereafter, we denote by \(d(x)\) the distance from a point \(x\in\overline{\Omega}\) to the boundary \(\partial\Omega\), where \(\overline{\Omega}=\Omega\cup\partial\Omega\) is the closure of \(\Omega\subset\mathbb{R}^{N}\). For \(w_{1},w_{2}\in\mathcal{C}^{1}(\overline{\Omega})\), the notation \(w_{1}\ll w_{2}\) means that
\[w_{1}(x)<w_{2}(x)\,\,\,\forall x\in\Omega\,\,\,\text{and}\,\,\,\frac{ \partial w_{2}}{\partial\eta}<\frac{\partial w_{1}}{\partial\eta}\,\,\,\text {on}\,\,\,\partial\Omega,\]
where \(\eta\) denotes the outward normal to \(\partial\Omega\).
Let \(y_{i},z_{i}\in int\mathcal{C}^{1}_{+}(\overline{\Omega})\)\((i=1,2)\) be the unique solutions of the Dirichlet problems
\[-\Delta y_{i}(x)+y_{i}(x)=d(x)^{\alpha_{i}-\beta_{i}}\text{ in }\Omega,\,\,\,y_{i}=0\text{ on }\partial\Omega, \tag{2.1}\]
\[-\Delta z_{i}(x)+z_{i}(x)=\left\{\begin{array}{ll}d(x)^{\alpha_{i}-\beta_{ i}}&\text{in }\,\,\,\Omega\backslash\overline{\Omega}_{\delta},\\ -1&\text{in }\,\,\,\Omega_{\delta},\end{array}\right.,\,\,\,z_{i}=0\,\,\,\, \text{on}\,\,\,\partial\Omega, \tag{2.2}\]
which are known to satisfy
\[c^{-1}d(x)\leq z_{i}(x)\leq y_{i}(x)\leq cd(x)\text{ in }\Omega, \tag{2.3}\]
where \(c>1\) is a constant and
\[\Omega_{\delta}=\left\{x\in\Omega:d(x)<\delta\right\},\]
with a fixed \(\delta>0\) sufficiently small (see, e.g., [6]).
Let \(\phi_{1}\) be the positive eigenfunction defined by
\[-\Delta\phi_{1}+\phi_{1}=\lambda_{1}\phi_{1}\,\,\,\,\text{in}\,\,\Omega,\,\, \,\,\phi_{1}=0\,\,\,\,\text{on}\,\,\,\partial\Omega,\]
where \(\lambda_{1}\) is the principal eigenvalue characterized by
\[\lambda_{1}=\inf_{w\in\mathcal{H}^{1}_{0}(\Omega)\setminus\{0\}}\frac{\int_{ \Omega}(|\nabla w|^{2}+|w|^{2})\,\mathrm{d}x}{\int_{\Omega}|w|^{2}\,\mathrm{d}x}. \tag{2.4}\]
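Although it plays no role in the proofs, it may be useful to keep a concrete picture of \(\phi_{1}\) and \(\lambda_{1}\) in mind. The following is a minimal numerical sketch (plain NumPy, written only for this exposition) for the one-dimensional model case \(\Omega=(0,1)\), where the exact principal eigenvalue of \(-u''+u\) with Dirichlet conditions is \(\pi^{2}+1\); the grid size is an arbitrary choice.

```python
import numpy as np

# One-dimensional illustration of (2.4): approximate the principal eigenpair
# of -u'' + u on (0, 1) with Dirichlet boundary conditions by finite
# differences.  The exact principal eigenvalue is pi^2 + 1, with positive
# eigenfunction sin(pi x).
n = 400                                   # number of interior grid points (arbitrary)
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2 + np.eye(n)
vals, vecs = np.linalg.eigh(A)            # eigenvalues in ascending order
lam1, phi1 = vals[0], vecs[:, 0]
phi1 *= np.sign(phi1.sum())               # fix the overall sign so that phi1 > 0
print(lam1, np.pi**2 + 1)                 # both approximately 10.8696
print(bool((phi1 > 0).all()))             # True: the principal eigenfunction has constant sign
```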
We will make use of topological degree theory to obtain a third solution of system (P). However, the singular terms in system (P) prevent the degree calculation from being well defined. This is mainly due to the difficulty of obtaining estimates from below for solutions of (P). To overcome this difficulty, we perturb system (P) by introducing a parameter \(\varepsilon>0\). This gives rise to a regularized system for (P) whose study is relevant to our initial problem.
For \(\varepsilon\in(0,1)\), we state the regularized system
\[\mathrm{(P_{\varepsilon})}\qquad\left\{\begin{array}{ll}-\Delta u+u=sgn(v) \frac{(|u|+\varepsilon)^{\alpha_{1}}}{(|v|+\varepsilon)^{\beta_{1}}}&\text{in } \Omega\\ -\Delta v+v=sgn(u)\frac{(|u|+\varepsilon)^{\alpha_{2}}}{(|v|+\varepsilon)^{ \beta_{2}}}&\text{in }\Omega\\ u,v=0&\text{on }\partial\Omega.\end{array}\right.\]
The existence result regarding problem (P\({}_{\varepsilon}\)) is stated as follows.
**Theorem 2**.: _Under assumption (1.1), system (P\({}_{\varepsilon}\)) admits nontrivial solutions \((u_{\varepsilon},v_{\varepsilon})\) in \(\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{\Omega})\) satisfying_
\[-C^{-1}z_{1}(x)\leq u_{\varepsilon}(x)\leq C^{-1}z_{1}(x),\ \forall x\in\Omega, \tag{2.5}\]
\[-C^{-1}z_{2}(x)\leq v_{\varepsilon}(x)\leq C^{-1}z_{2}(x),\ \forall x\in\Omega, \tag{2.6}\]
_for all \(\varepsilon\in(0,1)\) and all \(C>1\). Moreover, there exists \((u_{*},v_{*})\in\mathcal{H}^{1}_{0}(\Omega)\times\mathcal{H}^{1}_{0}(\Omega)\), solution of problem (P), within_
\[(u_{*},v_{*})\in[-C^{-1}z_{1},C^{-1}z_{1}]\times[-C^{-1}z_{2},C^{-1}z_{2}],\]
_such that_
\[u_{\varepsilon}\to u_{*}\ \text{and}\ v_{\varepsilon}\to v_{*}\ \text{in}\ \mathcal{H}^{1}_{0}(\Omega)\ \text{as}\ \varepsilon\to 0. \tag{2.7}\]
### Topological degree results
We shall study the homotopy class of problem
\[\mathrm{(P^{t}_{\varepsilon,\theta})}\qquad\left\{\begin{array}{ll}-\Delta u +u=\mathrm{F^{t}_{1,\theta}(x,}u,v)\ \text{in}\ \Omega,\\ -\Delta v+v=\mathrm{F^{t}_{2,\theta}(x,}u,v)\ \text{in}\ \Omega,\\ u,v=0\ \ \text{on}\ \partial\Omega,\end{array}\right.\]
with
\[\mathrm{F^{t}_{1,\theta}(x,}u,v)=t\ sgn(v)\frac{(|u|+\varepsilon)^{\alpha_{1}} }{(|v|+\varepsilon)^{\beta_{1}}}+(1-t)sgn((1-\theta)v)\ (1+\theta\lambda_{1}u^{+}), \tag{2.8}\]
\[\mathrm{F^{t}_{2,\theta}(x,}u,v)=t\ sgn(u)\frac{(|u|+\varepsilon)^{\alpha_{2} }}{(|v|+\varepsilon)^{\beta_{2}}}+(1-t)sgn((1-\theta)u)\ (1+\theta\lambda_{1}v^{+}), \tag{2.9}\]
for \(\varepsilon\in(0,1)\), for \(t\in[0,1]\), where \(\theta\) is a constant such that \(\theta\in\{0,1\}\) and \(s^{+}:=\max\{0,s\}\), for \(s\in\mathbb{R}\).
It is worth noting that all solutions \((u,v)\in\mathcal{H}^{1}_{0}(\Omega)\times\mathcal{H}^{1}_{0}(\Omega)\) of \((\mathrm{P}^{t}_{\varepsilon,\theta})\) satisfy
\[u(x),v(x)\neq 0\ \ \text{for a.e.}\ x\in\Omega, \tag{2.10}\]
for all \(t\in[0,1]\), all \(\varepsilon\in(0,1)\) and for \(\theta\in\{0,1\}\). This is due to the fact that \((0,0)\) cannot be a solution of \((\mathrm{P}^{t}_{\varepsilon,\theta})\) because \(\mathrm{F}^{t}_{i,\theta}(x{,}0,0)\neq 0\) as well as the fact that "a.e. in \(\Omega\)" is an equivalence relation in \(L^{1}(\Omega)\).
**Remark 1**.: _The decoupled system \((\mathrm{P}^{0}_{\varepsilon,1})\) (that is \((\mathrm{P}^{t}_{\varepsilon,\theta})\) for \(t=0\) and \(\theta=1\)) which reads as_
\[(\mathrm{P}^{0}_{\varepsilon,1})\qquad\left\{\begin{array}{l}-\Delta u+u= \mathrm{F}^{0}_{1,1}(x{,}u,v)=1+\lambda_{1}u^{+}\ \text{in}\ \Omega\\ -\Delta v+v=\mathrm{F}^{0}_{2,1}(x{,}u,v)=1+\lambda_{1}v^{+}\ \text{in}\ \Omega\\ u,v=0\ \ \text{on}\ \partial\Omega,\end{array}\right.\]
_does not admit solutions \((u,v)\in\mathcal{H}^{1}_{0}(\Omega)\times\mathcal{H}^{1}_{0}(\Omega)\), for all \(\varepsilon\in(0,1)\). This is due to [20, Proposition 9.64] with \(p=2\) and \(\beta(x),\xi(x),h(x)\equiv 1\)._
In the sequel, we denote by \(\mathcal{B}_{R_{\varepsilon}}\) and \(\mathcal{B}_{z}\) the balls in \(\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{\Omega})\), centered at the origin, defined by
\[\mathcal{B}_{R_{\varepsilon}}:=\left\{(u,v)\in\mathcal{C}^{1}(\overline{ \Omega})\times\mathcal{C}^{1}(\overline{\Omega}):\|u\|_{C^{1}(\overline{\Omega })}+\|v\|_{C^{1}(\overline{\Omega})}<R_{\varepsilon}\right\},\]
\[\mathcal{B}_{z}:=\left\{(u,v)\in\mathcal{B}_{R_{\varepsilon}}:-C^{-1}z_{1} \leq u\leq C^{-1}z_{1},\ \ -C^{-1}z_{2}\leq v\leq C^{-1}z_{2}\right\},\]
where, without loss of generality, we assumed that \(R_{\varepsilon}>\max_{i=1,2}\left\|z_{i}\right\|_{\infty},\) for all \(\varepsilon\in(0,1).\) It is readily seen that \(\mathcal{B}_{z}\) is an open set in \(\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{\Omega})\).
The next result shows that solutions of problem \((\mathrm{P}^{t}_{\varepsilon,\theta})\) cannot occur outside the ball \(\mathcal{B}_{R_{\varepsilon}}\).
**Proposition 1**.: _Assume (1.1) holds. Then, there is a constant \(R_{\varepsilon}>0\) such that every solution \((u,v)\) of \((\mathrm{P}^{t}_{\varepsilon,\theta})\) belongs to \(\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{\Omega})\) and satisfies_
\[\|u\|_{\mathcal{C}^{1}(\overline{\Omega})}\,,\|v\|_{\mathcal{C}^{1}(\overline{ \Omega})}<R_{\varepsilon}, \tag{2.11}\]
_for all \(t\in(0,1]\), all \(\varepsilon\in(0,1)\) and for \(\theta\in\{0,1\}\). Moreover, if \(\theta=0\), then all positive solutions \((u_{+},v_{+})\) and all negative solutions \((u_{-},v_{-})\) of \((\mathrm{P}^{t}_{\varepsilon,0})\) (\((\mathrm{P}^{t}_{\varepsilon,\theta})\) with \(\theta=0\)) satisfy_
\[C^{-1}z_{1}(x)\ll u_{+}(x)\ \ \text{and}\ \ u_{-}(x)\ll-C^{-1}z_{1}(x),\ \ \forall x\in\Omega, \tag{2.12}\] \[C^{-1}z_{2}(x)\ll v_{+}(x)\ \ \text{and}\ \ v_{-}(x)\ll-C^{-1}z_{2}(x),\ \ \forall x\in\Omega, \tag{2.13}\]
_for a constant \(C>1\) large._
Proof.: We begin by proving (2.11). If \(\alpha_{i}\leq 0\) in (1.1), by (2.8) and (2.9) we have
\[|\mathrm{F}^{t}_{1,\theta}(x,u,v)|\leq\varepsilon^{\alpha_{1}-\beta_{1}}+\lambda_{1}|u|\ \ \text{and}\ \ |\mathrm{F}^{t}_{2,\theta}(x,u,v)|\leq\varepsilon^{\alpha_{2}-\beta_{2}}+\lambda_{1}|v|.\]
Then, the regularity result [20, Corollary 8.13] together with the compact embedding \(\mathcal{C}^{1,\tau}(\overline{\Omega})\subset\mathcal{C}^{1}(\overline{ \Omega})\) show that all solutions \((u,v)\) of \((\mathrm{P}^{t}_{\varepsilon,\theta})\) are bounded in \(\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{\Omega})\) and satisfy (2.11).
Let us examine the case when \(\alpha_{i}>0\) in (1.1), \(i=1,2\). By contradiction suppose that for every \(n\in\mathbb{N}\), there exist \(t_{n}\in(0,1]\) and a solution \((u_{n},v_{n})\) of \((\mathrm{P}^{t_{n}}_{\varepsilon,\theta})\) such that
\[t_{n}\to t\in(0,1]\ \ \text{and}\ \ \|u_{n}\|_{\mathcal{C}^{1}(\overline{ \Omega})},\|v_{n}\|_{\mathcal{C}^{1}(\overline{\Omega})}\to\infty\ \ \text{as}\ n\to\infty.\]
Without loss of generality we may assume that
\[\gamma_{n}:=\|u_{n}\|_{\mathcal{C}^{1}(\overline{\Omega})}\to\infty\ \text{as}\ n\to\infty. \tag{2.14}\]
Denote
\[\tilde{u}_{n}:=\frac{u_{n}}{\gamma_{n}}\in\mathcal{C}^{1}(\overline{\Omega}) \ \text{with}\ \|\tilde{u}_{n}\|_{\mathcal{C}^{1}(\overline{\Omega})}=1,\ \text{for all}\ n\in\mathbb{N}. \tag{2.15}\]
Problem \((\mathrm{P}^{t_{n}}_{\varepsilon,\theta})\) results in
\[\begin{array}{l}-\Delta\tilde{u}_{n}+\tilde{u}_{n}=\frac{1}{\gamma_{n}} \mathrm{F}^{t_{n}}_{1,\theta}(x,u_{n},v_{n})\\ =\frac{t_{n}}{\gamma_{n}}\ sgn(v_{n})\frac{(|u_{n}|+\varepsilon)^{\alpha_{1}}} {(|v_{n}|+\varepsilon)^{\beta_{1}}}+\frac{1-t_{n}}{\gamma_{n}}\ sgn((1-\theta)v_{n})\ (1+\theta \lambda_{1}u_{n}^{+}).\end{array} \tag{2.16}\]
By (2.14) and since \(1>\alpha_{1}>0\), one has
\[\begin{array}{l}\left|\frac{t_{n}}{\gamma_{n}}\ sgn(v_{n})\frac{(|u_{n}|+ \varepsilon)^{\alpha_{1}}}{(|v_{n}|+\varepsilon)^{\beta_{1}}}+\frac{1-t_{n}}{ \gamma_{n}}\ sgn((1-\theta)v_{n})\ (1+\theta\lambda_{1}u_{n}^{+})\right|\\ \\ \leq\frac{1}{\gamma_{n}}\frac{(|u_{n}|+\varepsilon)^{\alpha_{1}}}{ \varepsilon^{\beta_{1}}}+1+\lambda_{1}\tilde{u}_{n}^{+}=\gamma_{n}^{\alpha_{1} -1}\frac{(|\tilde{u}_{n}|+\frac{\varepsilon}{\gamma_{n}})^{\alpha_{1}}}{ \varepsilon^{\beta_{1}}}+1+\lambda_{1}\tilde{u}_{n}^{+}\\ \\ \leq\frac{(|\tilde{u}_{n}|+1)^{\alpha_{1}}}{\varepsilon^{\beta_{1}}}+1+ \lambda_{1}\tilde{u}_{n}^{+}\leq C_{\varepsilon}(1+\|\tilde{u}_{n}\|_{C^{1}( \overline{\Omega})}^{\alpha_{1}}+\|\tilde{u}_{n}\|_{C^{1}(\overline{\Omega})} )\ \text{in}\ \Omega,\end{array}\]
for some constant \(C_{\varepsilon}>0\) independent of \(n\). Then, thanks to the regularity up to the boundary in [18], we derive that \(\tilde{u}_{n}\) is bounded in \(\mathcal{C}^{1,\tau}(\overline{\Omega})\) for certain \(\tau\in(0,1)\). The compactness of the embedding \(\mathcal{C}^{1,\tau}(\overline{\Omega})\subset\mathcal{C}^{1}(\overline{\Omega})\) implies
\[\tilde{u}_{n}\to\tilde{u}\ \ \text{in}\ \mathcal{C}^{1}(\overline{\Omega}).\]
Taking \(\theta=0\) and passing to the limit in (2.16) as \(n\to\infty\), we obtain
\[-\Delta\tilde{u}+\tilde{u}=0\ \text{in}\ \Omega,\ \tilde{u}=0\ \text{on}\ \partial\Omega.\]
Therefore \(\tilde{u}=0\), which contradicts (2.15). If \(\theta=1\), passing to the limit results in
\[\left\{\begin{array}{ll}-\Delta\tilde{u}+\tilde{u}=(1-t)\lambda_{1}\tilde{u}^{+}&\text{in}\ \Omega,\\ \tilde{u}=0&\text{on}\ \partial\Omega,\end{array}\right. \tag{2.17}\]
which is absurd because \(\tilde{u}>0\) and \(t<1\), in view of the characterization (2.4) of \(\lambda_{1}\). Consequently, this shows that there exists a constant \(R_{\varepsilon}>0\) such that (2.11) holds true.
We proceed to show (2.12) and (2.13). Let \((u_{+},v_{+})\) be a positive solution of \((\mathrm{P}^{t}_{\varepsilon,0})\). Then \(sgn(u_{+}),sgn(v_{+})\equiv 1.\) On account of (1.1), (2.3) and (2.11), we have
\[\left\{\begin{array}{ll}C^{-1}d(x)^{\alpha_{i}-\beta_{i}}&\mbox{in }\Omega \backslash\overline{\Omega}_{\delta}\\ -C^{-1}&\mbox{in }\Omega_{\delta}\end{array}\right.\leq\left\{\begin{array}{ll}C^{-1} \delta^{\alpha_{i}-\beta_{i}}&\mbox{in }\Omega\backslash\overline{\Omega}_{\delta}\\ -C^{-1}&\mbox{in }\Omega_{\delta}\end{array}\right.\]
\[<\left\{\begin{array}{ll}t\frac{\varepsilon^{\alpha_{i}}}{(R_{\varepsilon}+ 1)^{\beta_{i}}}+(1-t)&\mbox{if }\alpha_{i}\geq 0\\ t(R_{\varepsilon}+1)^{\alpha_{i}-\beta_{i}}+(1-t)&\mbox{if }\alpha_{i}\leq 0 \end{array}\right.\]
\[\leq\left\{\begin{array}{ll}t\frac{\varepsilon^{\alpha_{i}}}{(|v_{+}|+1)^{ \beta_{i}}}+(1-t)&\mbox{if }\alpha_{i}\geq 0\\ t\frac{(|u_{+}|+1)^{\alpha_{i}}}{(|v_{+}|+1)^{\beta_{i}}}+(1-t)&\mbox{if }\alpha_{i}\leq 0 \end{array}\right.\leq\mathrm{F}^{t}_{i,0}(x,u_{+},v_{+})\ \mbox{ in }\Omega,\ i=1,2,\]
for all \(t\in[0,1]\) and all \(\varepsilon\in(0,1)\), provided \(C>1\) is large. Thus, for each compact set K \(\subset\subset\Omega\), there is a constant \(\sigma=\sigma(\mathrm{K})>0\) such that
\[\sigma+C^{-1}\left\{\begin{array}{ll}d(x)^{\alpha_{i}-\beta_{i}}&\mbox{in }\ \ \Omega \backslash\overline{\Omega}_{\delta},\\ -1&\mbox{in }\ \ \Omega_{\delta},\end{array}\right.<\mathrm{F}^{t}_{i,0}(x,u_{+},v_{+}) \mbox{ a.e. in }\Omega\cap\mathrm{K},i=1,2.\]
Then, by (2.2) and the strong comparison principle [2, Proposition 2.6], we infer that \(C^{-1}z_{1}(x)\ll u_{+}(x)\) and \(C^{-1}z_{2}(x)\ll v_{+}(x)\), for a.a. \(x\in\overline{\Omega}\). In the same manner we can show that \(-C^{-1}z_{1}(x)\gg u_{-}(x)\) and \(-C^{-1}z_{2}(x)\gg v_{-}(x)\), for a.a. \(x\in\overline{\Omega}\). This ends the proof.
On account of (2.10), \(\mathrm{F}^{t}_{i,\theta}(x,\cdot,\cdot)\) is continuous for a.e. \(x\in\Omega\), for \(i=1,2\). Thus, the homotopy \(\mathcal{H}_{\varepsilon,\theta}:[0,1]\times\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{\Omega})\to\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{\Omega})\) given by
\[\mathcal{H}_{\varepsilon,\theta}(t,u,v)=I(u,v)-\left(\begin{array}{cc}(- \Delta+I)^{-1}&0\\ 0&(-\Delta+I)^{-1}\end{array}\right)\left(\begin{array}{c}\mathrm{F}^{t}_{1,\theta}(x,u,v)\\ \mathrm{F}^{t}_{2,\theta}(x,u,v)\end{array}\right)\]
is well defined for all \(t\in[0,1]\), all \(\varepsilon\in(0,1)\), and for \(\theta=0,1\). Moreover, the compactness of the operator \((-\Delta+I)^{-1}:\mathcal{C}(\overline{\Omega})\to\mathcal{C}^{1}(\overline{ \Omega})\) implies that \(\mathcal{H}_{\varepsilon,\theta}\) is completely continuous.
The next result provides the values of the topological degree of \(\mathcal{H}_{\varepsilon,\theta}\) in certain specific sets for \(\theta=0\) and \(\theta=1\).
**Proposition 2**.: _Assume that (1.1) is satisfied. Then, the Leray-Schauder topological degrees \(\deg(\mathcal{H}_{\varepsilon,1}(1,\cdot,\cdot),\mathcal{B}_{R_{\varepsilon}},0)\) (with \(\theta=1\)) and \(\deg(\mathcal{H}_{\varepsilon,0}(1,\cdot,\cdot),\mathcal{B}_{R_{\varepsilon}} \setminus\overline{\mathcal{B}}_{z},0)\) (with \(\theta=0\)) are well defined for all \(t\in[0,1]\) and all \(\varepsilon\in(0,1)\). Moreover, it holds_
\[\deg(\mathcal{H}_{\varepsilon,1}(1,\cdot,\cdot),\mathcal{B}_{R_{\varepsilon}},0)=\deg(\mathcal{H}_{\varepsilon,1}(0,\cdot,\cdot),\mathcal{B}_{R_{ \varepsilon}},0)=0 \tag{2.18}\]
_and_
\[\deg(\mathcal{H}_{\varepsilon,0}(1,\cdot,\cdot),\mathcal{B}_{R_{\varepsilon}} \setminus\overline{\mathcal{B}}_{z},0)=\deg(\mathcal{H}_{\varepsilon,0}(0, \cdot,\cdot),\mathcal{B}_{R_{\varepsilon}}\setminus\overline{\mathcal{B}}_{z},0)\neq 0, \tag{2.19}\]
_where \(\overline{\mathcal{B}}_{z}\) is the closure of \(\mathcal{B}_{z}\) in \(\mathcal{C}^{1}(\overline{\Omega})\times\mathcal{C}^{1}(\overline{\Omega})\)._
Proof.: Proposition 1 expressly establishes that solutions of \((\mathrm{P}^{t}_{\varepsilon,\theta})\) must lie in \(\mathcal{B}_{R_{\varepsilon}}\) and, if \(\theta=0\), positive and negative solutions of \((\mathrm{P}^{t}_{\varepsilon,0})\) are located in \(\mathcal{B}_{R_{\varepsilon}}\backslash\overline{\mathcal{B}}_{z}\). Hence, the degrees \(\deg(\mathcal{H}_{\varepsilon,1}(1,\cdot,\cdot),\mathcal{B}_{R_{\varepsilon}},0)\) and \(\deg(\mathcal{H}_{\varepsilon,0}(1,\cdot,\cdot),\mathcal{B}_{R_{\varepsilon}} \backslash\overline{\mathcal{B}}_{z},0)\) are well defined for all \(\varepsilon\in(0,1)\). Moreover, the homotopy invariance property of the degree ensures that the first equality in (2.18) and (2.19) is fulfilled. On the other hand, according to Remark 1, problem \((\mathrm{P}^{0}_{\varepsilon,1})\) (\((\mathrm{P}^{t}_{\varepsilon,\theta})\) with \(t=0\) and \(\theta=1\)) has no solutions whereas \((\mathrm{P}^{1}_{\varepsilon,0})\) (\((\mathrm{P}^{t}_{\varepsilon,\theta})\) with \(t=1\) and \(\theta=0\)), that is reduced to the following decoupled torsion problems
\[\left\{\begin{array}{l}-\Delta u+u=1\mbox{ in }\Omega\\ -\Delta v+v=1\mbox{ in }\Omega\end{array}\right.,\ u,v=0\mbox{ on }\partial\Omega,\]
admits a unique solution. Thence
\[\deg\left(\mathcal{H}_{\varepsilon,1}(0,\cdot,\cdot),\mathcal{B}_{R_{ \varepsilon}},0\right)=0,\]
while
\[\deg(\mathcal{H}_{\varepsilon,0}(0,\cdot,\cdot),\mathcal{B}_{R_{\varepsilon}} \backslash\overline{\mathcal{B}}_{z},0)\neq 0,\]
for all \(\varepsilon\in(0,1).\) Consequently, we deduce that
\[\deg\left(\mathcal{H}_{\varepsilon,1}(1,\cdot,\cdot),\mathcal{B}_{R_{ \varepsilon}},0\right)=0\ \mbox{ and }\ \deg(\mathcal{H}_{\varepsilon,0}(1,\cdot,\cdot),\mathcal{B}_{R_{ \varepsilon}}\backslash\overline{\mathcal{B}}_{z},0)\neq 0,\]
for all \(\varepsilon\in(0,1).\) This completes the proof.
### Proof of Theorem 2
By the definition of the homotopy \(\mathcal{H}_{\varepsilon,\theta}\), observe that
\[\mathcal{H}_{\varepsilon,0}(1,\cdot,\cdot)=\mathcal{H}_{\varepsilon,1}(1, \cdot,\cdot),\mbox{ for all }\varepsilon\in(0,1).\]
Moreover, \((u,v)\) is a solution for \((\mathrm{P}_{\varepsilon})\) if, and only if,
\[(u,v)\in\mathcal{B}_{R_{\varepsilon}}(0)\ \mbox{ and }\ \mathcal{H}_{\varepsilon,1}(1,u,v)=0.\]
By virtue of the domain additivity property of Leray-Schauder degree we have
\[\deg(\mathcal{H}_{\varepsilon,1}(1,\cdot,\cdot),\mathcal{B}_{R_{ \varepsilon}},0)\] \[= \deg(\mathcal{H}_{\varepsilon,1}(1,\cdot,\cdot),\mathcal{B}_{R_{ \varepsilon}}\backslash\overline{\mathcal{B}}_{z},0)+\deg(\mathcal{H}_{ \varepsilon,1}(1,\cdot,\cdot),\mathcal{B}_{z},0).\]
Then, on the basis of Proposition 2, we infer that
\[\deg(\mathcal{H}_{\varepsilon,1}(1,\cdot,\cdot),\mathcal{B}_{z},0)\neq 0,\]
showing that problem \((\mathrm{P}_{\varepsilon})\) has a solution \((u_{\varepsilon},v_{\varepsilon})\) in \(\mathcal{B}_{z}\). Consequently, \((u_{\varepsilon},v_{\varepsilon})\in\mathcal{C}^{1}(\overline{\Omega})\times \mathcal{C}^{1}(\overline{\Omega})\) fulfills (2.5)-(2.6), while (2.10) forces \(u_{\varepsilon},v_{\varepsilon}\neq 0\) a.e. in \(\Omega\), for all \(\varepsilon\in(0,1)\). This proves the first part of the theorem.
We proceed to show the limit in (2.7). Set \(\varepsilon=\frac{1}{n}\) in \((\mathrm{P}_{\varepsilon})\) with any positive integer \(n\geq 1.\) From above, there exists \((u_{n},v_{n}):=(u_{\frac{1}{n}},v_{\frac{1}{n}})\in\mathcal{C}^{1}(\overline{ \Omega})\times\mathcal{C}^{1}(\overline{\Omega})\) solution of \((\mathrm{P}_{n})\) (\((\mathrm{P}_{\varepsilon})\) with \(\varepsilon=\frac{1}{n}\)) such that
\[(u_{n},v_{n})\in[-C^{-1}z_{1},C^{-1}z_{1}]\times[-C^{-1}z_{2},C^{-1}z_{2}] \tag{2.20}\]
and
\[\left\{\begin{array}{ll}\int_{\Omega}(\nabla u_{n}\nabla\varphi_{1}+u_{n}\, \varphi_{1})\ \mathrm{d}x=\int_{\Omega}sgn(v_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{1}}}{(|v _{n}|+\frac{1}{n})^{\beta_{1}}}\varphi_{1}\ \mathrm{d}x,\\ \int_{\Omega}(\nabla v_{n}\nabla\varphi_{2}+v_{n}\,\varphi_{2})\ \mathrm{d}x=\int_{ \Omega}sgn(u_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{2}}}{(|v_{n}|+\frac{1}{ n})^{\beta_{2}}}\varphi_{2}\ \mathrm{d}x,\end{array}\right. \tag{2.21}\]
for all \(\varphi_{1},\varphi_{2}\in\mathcal{H}^{1}_{0}(\Omega)\).
Taking \(t=1\) and \(\varepsilon=\frac{1}{n}\) in (2.8), by (2.10), we infer that
\[u_{n}(x),v_{n}(x)\neq 0\ \text{for a.e.}\ x\in\Omega,\ \text{for all}\ n. \tag{2.22}\]
Acting with \((\varphi_{1},\varphi_{2})=(u_{n},v_{n})\) in (2.21), by (1.1), (2.20) and (2.22), we get
\[\begin{array}{ll}\int_{\Omega}(|\nabla u_{n}|^{2}+|u_{n}|^{2})\ \mathrm{d}x=\int_{\Omega}sgn(v_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{1}}}{(|v _{n}|+\frac{1}{n})^{\beta_{1}}}u_{n}\ \mathrm{d}x\\ \leq\int_{\Omega}\left|sgn(v_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{1}}}{(|v _{n}|+\frac{1}{n})^{\beta_{1}}}u_{n}\right|\ \mathrm{d}x\\ \leq\left\{\begin{array}{ll}\int_{\Omega}\frac{|u_{n}|\alpha_{1}+1}{|v_{n}|^ {\beta_{1}}}\ \mathrm{d}x&\text{if}\ \alpha_{1}\leq 0\\ \int_{\Omega}\frac{(|u_{n}|+1)^{\alpha_{1}+1}}{|v_{n}|^{\beta_{1}}}\ \mathrm{d}x&\text{if}\ \alpha_{1}>0 \end{array}\right.\\ \leq\left\{\begin{array}{ll}\int_{\Omega}\frac{(C^{-1}z_{1})^{\alpha_{1}+1}}{ |v_{n}|^{\beta_{1}}}\ \mathrm{d}x&\text{if}\ \alpha_{1}\leq 0\\ \int_{\Omega}\frac{(C^{-1}z_{1}+1)^{\alpha_{1}+1}}{|v_{n}|^{\beta_{1}}}\ \mathrm{d}x&\text{if}\ \alpha_{1}>0 \end{array}\right.<\infty\end{array}\right.\end{array}\]
and
\[\begin{array}{ll}\int_{\Omega}(|\nabla v_{n}|^{2}+|v_{n}|^{2})\ \mathrm{d}x=\int_{\Omega}sgn(u_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{2}}}{(|v_{n}|+\frac{1}{n})^{\beta_{2}}}v_{n}\ \mathrm{d}x\\ \leq\int_{\Omega}\left|sgn(u_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{2}}}{(|v_{n}|+\frac{1}{n})^{\beta_{2}}}v_{n}\right|\ \mathrm{d}x\\ \leq\left\{\begin{array}{ll}\int_{\Omega}|u_{n}|^{\alpha_{2}}|v_{n}|^{1-\beta_{2}}\ \mathrm{d}x&\text{if}\ \alpha_{2}\leq 0\\ \int_{\Omega}(C^{-1}z_{1}+1)^{\alpha_{2}}|v_{n}|^{1-\beta_{2}}\ \mathrm{d}x&\text{if}\ \alpha_{2}>0\end{array}\right.<\infty,\end{array}\]
showing that \(\{u_{n}\}_{n}\) and \(\{v_{n}\}_{n}\) are bounded in \(\mathcal{H}^{1}_{0}(\Omega)\). We are thus allowed to extract a subsequence (still denoted by \(\{u_{n}\}_{n},\{v_{n}\}_{n}\)) such that
\[u_{n}\rightharpoonup u_{*}\ \text{ and }\ v_{n}\rightharpoonup v_{*}\ \text{ in }\mathcal{H}^{1}_{0}(\Omega). \tag{2.23}\]
Moreover, on account of (2.20) and (2.23), we have
\[-C^{-1}z_{1}\leq u_{*}\leq C^{-1}z_{1}\ \text{ and }\ -C^{-1}z_{2}\leq v_{*}\leq C^{-1}z_{2}\ \text{ in }\Omega.\]
Inserting \((\varphi_{1},\varphi_{2})=(u_{n}-u_{*},v_{n}-v_{*})\) in (2.21) yields
\[\begin{array}{ll}\int_{\Omega}(\nabla u_{n}\,\nabla(u_{n}-u_{*})+u_{n}(u_{n} -u_{*}))\ \mathrm{d}x=\int_{\Omega}sgn(v_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{1}}}{(|v _{n}|+\frac{1}{n})^{\beta_{1}}}(u_{n}-u_{*})\ \mathrm{d}x\\ \int_{\Omega}(\nabla v_{n}\,\nabla(v_{n}-v_{*})+v_{n}(v_{n}-v_{*}))\ \mathrm{d}x=\int_{\Omega}sgn(u_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{2}}}{(|v _{n}|+\frac{1}{n})^{\beta_{2}}}(v_{n}-v_{*})\ \mathrm{d}x\end{array}\]
By (1.1), (2.20) and for \(C>1\), we have
\[\left|sgn(v_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{1}}}{(|v_{n}|+ \frac{1}{n})^{\beta_{1}}}(u_{n}-u_{*})\right|\leq\left\{\begin{array}{ll}2C^{- 1}z_{1}\frac{|u_{n}|^{\alpha_{1}}}{|v_{n}|^{\beta_{1}}}&\mbox{if $\alpha_{1}<0$}\\ 2C^{-1}z_{1}\frac{(C^{-1}z_{1}+1)^{\alpha_{1}}}{|v_{n}|^{\beta_{1}}}&\mbox{if $ \alpha_{1}\geq 0$}\end{array}\right.\] \[\leq\left\{\begin{array}{ll}2\left\|z_{1}\right\|_{\infty}\frac {|u_{n}|^{\alpha_{1}}}{|v_{n}|^{\beta_{1}}}&\mbox{if $\alpha_{1}<0$}\\ 2\left\|z_{1}\right\|_{\infty}\frac{(\left\|z_{1}\right\|_{\infty}+1)^{\alpha _{1}}}{|v_{n}|^{\beta_{1}}}&\mbox{if $\alpha_{1}\geq 0$},\end{array}\right.\]
while, by (2.22), we infer that
\[sgn(v_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{1}}}{(|v_{n}|+ \frac{1}{n})^{\beta_{1}}}(u_{n}-u_{*})\in L^{1}(\Omega). \tag{2.24}\]
Then, using (2.23), (2.24) and applying Fatou's Lemma, we derive that
\[\lim_{n\to\infty}\int_{\Omega}(\nabla u_{n}\,\nabla(u_{n}-u_{*})+u_{n}(u_{n}- u_{*}))\ \mathrm{d}x\leq 0.\]
Therefore, the \(\mathcal{S}_{+}\)-property of \(-\Delta+I\) on \(\mathcal{H}^{1}_{0}(\Omega)\) (see, e.g., [20, Proposition 3.5]) guarantees that
\[u_{n}\to u_{*}\ \mbox{in $\mathcal{H}^{1}_{0}(\Omega)$}. \tag{2.25}\]
In the same manner, we show that
\[v_{n}\to v_{*}\ \mbox{in $\mathcal{H}^{1}_{0}(\Omega)$}. \tag{2.26}\]
On the other hand, by (1.1), (2.25), (2.20) and (2.26), it holds
\[|sgn(v_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{1}}}{(|v_{n}|+ \frac{1}{n})^{\beta_{1}}}\varphi_{1}| \leq \left\{\begin{array}{ll}\frac{|u_{n}|^{\alpha_{1}}}{|v_{n}|^{ \beta_{1}}}|\varphi_{1}|&\mbox{if $\alpha_{1}<0$}\\ \frac{(|u_{n}|+1)^{\alpha_{1}}}{|v_{n}|^{\beta_{1}}}|\varphi_{1}|&\mbox{if $ \alpha_{1}\geq 0$}\end{array}\right.\] \[\leq \left\{\begin{array}{ll}\frac{|u_{n}|^{\alpha_{1}}}{|v_{n}|^{ \beta_{1}}}|\varphi_{1}|&\mbox{if $\alpha_{1}<0$}\\ \frac{(\left\|z_{1}\right\|_{\infty}+1)^{\alpha_{1}}}{|v_{n}|^{\beta_{1}}}| \varphi_{1}|&\mbox{if $\alpha_{1}\geq 0$}\end{array}\right.\]
and
\[|sgn(u_{n})\frac{(|u_{n}|+\frac{1}{n})^{\alpha_{2}}}{(|v_{n}|+\frac{1}{n})^{\beta_{2}}}\varphi_{2}| \leq \left\{\begin{array}{ll}\frac{|u_{n}|^{\alpha_{2}}}{|v_{n}|^{\beta_{2}}}|\varphi_{2}|&\mbox{if $\alpha_{2}<0$}\\ \frac{(|u_{n}|+1)^{\alpha_{2}}}{|v_{n}|^{\beta_{2}}}|\varphi_{2}|&\mbox{if $\alpha_{2}\geq 0$}\end{array}\right.\] \[\leq \left\{\begin{array}{ll}\frac{|u_{n}|^{\alpha_{2}}}{|v_{n}|^{\beta_{2}}}|\varphi_{2}|&\mbox{if $\alpha_{2}<0$}\\ \frac{(\left\|z_{1}\right\|_{\infty}+1)^{\alpha_{2}}}{|v_{n}|^{\beta_{2}}}|\varphi_{2}|&\mbox{if $\alpha_{2}\geq 0$}\end{array}\right.,\]
for all \(\varphi_{1},\varphi_{2}\in\mathcal{H}^{1}_{0}(\Omega).\) Then, on the basis of (2.22), Lebesgue's dominated convergence theorem entails
\[\lim_{n\to\infty}\int_{\Omega}sgn(v_{n})\frac{(|u_{n}|+\frac{1}{ n})^{\alpha_{1}}}{(|v_{n}|+\frac{1}{n})^{\beta_{1}}}\varphi_{1}\ \mathrm{d}x = \int_{\Omega}sgn(v_{*})\frac{|u_{*}|^{\alpha_{1}}}{|v_{*}|^{\beta_{1}}} \varphi_{1}\ \mathrm{d}x,\] \[\lim_{n\to\infty}\int_{\Omega}sgn(u_{n})\frac{(|u_{n}|+\frac{1}{n })^{\alpha_{2}}}{(|v_{n}|+\frac{1}{n})^{\beta_{2}}}\varphi_{2}\ \mathrm{d}x = \int_{\Omega}sgn(u_{*})\frac{|u_{*}|^{\alpha_{2}}}{|v_{*}|^{\beta_{2}}} \varphi_{2}\ \mathrm{d}x,\]
for all \(\varphi_{1},\varphi_{2}\in\mathcal{H}_{0}^{1}(\Omega)\). Hence, we may pass to the limit in (2.21) to conclude that \((u_{*},v_{*})\) is a solution of problem (P) within \([-C^{-1}z_{1},C^{-1}z_{1}]\times[-C^{-1}z_{2},C^{-1}z_{2}]\).
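Before turning to the proof of the main result, the following short numerical sketch is included purely as an illustration of the approximation scheme \((\mathrm{P}_{\varepsilon})\to(\mathrm{P})\); it is not part of the degree-theoretic argument above. It discretizes the regularized system on the one-dimensional domain \(\Omega=(0,1)\) by finite differences and runs a simple Picard iteration that freezes the right-hand side at the previous iterate. The exponents, grid size and value of \(\varepsilon\) are placeholders, merely assumed to be compatible with (1.1).

```python
import numpy as np

# Illustrative 1-D discretization of the regularized system (P_eps) on Omega = (0, 1)
# with homogeneous Dirichlet data; all parameter values below are placeholders.
N, eps = 199, 1e-2
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
a1, b1, a2, b2 = 0.5, 0.5, 0.5, 0.5      # exponents, assumed compatible with (1.1)

# Finite-difference matrix for -d^2/dx^2 + I with zero boundary conditions.
A = (np.diag((2.0 / h**2 + 1.0) * np.ones(N))
     + np.diag((-1.0 / h**2) * np.ones(N - 1), 1)
     + np.diag((-1.0 / h**2) * np.ones(N - 1), -1))

def rhs(u, v):
    # Regularized, sign-carrying right-hand sides of (P_eps).
    f1 = np.sign(v) * (np.abs(u) + eps)**a1 / (np.abs(v) + eps)**b1
    f2 = np.sign(u) * (np.abs(u) + eps)**a2 / (np.abs(v) + eps)**b2
    return f1, f2

# Picard iteration: freeze the singular right-hand side, solve two linear problems.
u = np.sin(np.pi * x)
v = np.sin(np.pi * x)
for _ in range(200):
    f1, f2 = rhs(u, v)
    u, v = np.linalg.solve(A, f1), np.linalg.solve(A, f2)

print("min u =", u.min(), "max u =", u.max())   # the iterate stays positive here
```

For this symmetric choice of exponents the iteration settles on a positive pair, in the spirit of the constant-sign solutions discussed in the next section; of course, this heuristic check carries none of the rigor of the argument above.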
## 3. **Proof of** Theorem 1
This section is devoted to the proof of the main result, Theorem 1. It is carried out in two steps, distinguishing the study of constant-sign solutions from that of nodal solutions.
**Remark 2**.: _All solutions \((u,v)\in\mathcal{H}_{0}^{1}(\Omega)\times\mathcal{H}_{0}^{1}(\Omega)\) of_ (P) _satisfy \(u(x),v(x)\neq 0\) for a.e. \(x\in\Omega\). This is due to the singular character at the origin of the right-hand side of the equations in_ (P)_, together with the fact that equality "a.e. in \(\Omega\)" is an equivalence relation in \(L^{1}(\Omega)\)._
### Constant sign solutions
**Proposition 3**.: _Assume (1.1) is fulfilled with \(\alpha_{i}\leq 0,\)\(i=1,2\). Then, problem_ (P) _does not admit more than two opposite constant-sign solutions \((u_{+},v_{+})\) and \((u_{-},v_{-})\) in \((\mathcal{H}_{0}^{1}(\Omega)\cap L^{\infty}(\Omega))^{2}\)._
Proof.: We only show the uniqueness of the positive solution \((u_{+},v_{+})\) of problem (P), since the negative solution \((u_{-},v_{-})\) can be treated similarly. By contradiction, let \((u_{+},v_{+})\) and \((\hat{u}_{+},\hat{v}_{+})\) be two distinct positive solutions of (P) in \((\mathcal{H}_{0}^{1}(\Omega)\cap L^{\infty}(\Omega))^{2}\) satisfying
\[\left\{\begin{array}{ll}-\Delta u_{+}+u_{+}=\frac{|u_{+}|^{\alpha_{1}}}{|v_{ +}|^{\beta_{1}}}&\mbox{in $\Omega$}\\ -\Delta v_{+}+v_{+}=\frac{|u_{+}|^{\alpha_{2}}}{|v_{+}|^{\beta_{2}}}&\mbox{in $ \Omega$}\\ u_{+},v_{+}=0&\mbox{on $\partial\Omega$},\end{array}\right.\]
\[\left\{\begin{array}{ll}-\Delta\hat{u}_{+}+\hat{u}_{+}=\frac{|\hat{u}_{+}|^{ \alpha_{1}}}{|\hat{v}_{+}|^{\beta_{1}}}&\mbox{in $\Omega$}\\ -\Delta\hat{v}_{+}+\hat{v}_{+}=\frac{|\hat{u}_{+}|^{\alpha_{2}}}{|\hat{v}_{+}| ^{\beta_{2}}}&\mbox{in $\Omega$}\\ \hat{u}_{+},\hat{v}_{+}=0&\mbox{on $\partial\Omega$},\end{array}\right.\]
where \(sgn(u_{+}),sgn(v_{+}),sgn(\hat{u}_{+}),sgn(\hat{v}_{+})\equiv 1\). Thus
\[\left\{\begin{array}{ll}-\Delta u_{+}+\Delta\hat{u}_{+}+u_{+}-\hat{u}_{+}= \frac{|u_{+}|^{\alpha_{1}}}{|v_{+}|^{\beta_{1}}}-\frac{|\hat{u}_{+}|^{\alpha_ {1}}}{|\hat{v}_{+}|^{\beta_{1}}}\mbox{ in $\Omega$}\\ u_{+}-\hat{u}_{+}=0&\mbox{on $\partial\Omega$}.\end{array}\right. \tag{3.1}\]
Multiplying (3.1) by \((u_{+}-\hat{u}_{+})\) and integrating over \(\Omega\), the assumption \(\alpha_{i}\leq 0\) yields
\[\begin{array}{ll}0\leq\int_{\Omega}|\nabla(u_{+}-\hat{u}_{+})|^{2}\ dx+\int_{ \Omega}|u_{+}-\hat{u}_{+}|^{2}\ dx\\ =\int_{\Omega}\left(\frac{|u_{+}|^{\alpha_{1}}}{|v_{+}|^{\beta_{1}}}-\frac{| \hat{u}_{+}|^{\alpha_{1}}}{|\hat{v}_{+}|^{\beta_{1}}}\right)(u_{+}-\hat{u}_{+ })\ dx\leq 0,\end{array}\]
showing that \(u_{+}=\hat{u}_{+}\) in \(\Omega\). A similar argument produces \(v_{+}=\hat{v}_{+}\).
**Theorem 3**.: _Under assumption (1.1), problem_ (P) _admits at least two opposite constant-sign solutions \((u_{+},v_{+})\) and \((u_{-},v_{-})\) in \(\mathcal{C}^{1,\tau}(\overline{\Omega})\times\mathcal{C}^{1,\tau}(\overline{ \Omega}),\) for certain \(\tau\in(0,1).\) Moreover, for a constant \(C>0\) large, it holds_
\[C^{-1}z_{1}(x)\ll u_{+}(x)\ll Cy_{1}(x),\ \forall x\in\Omega, \tag{3.2}\]
\[C^{-1}z_{2}(x)\ll v_{+}(x)\ll Cy_{2}(x),\ \forall x\in\Omega, \tag{3.3}\]
\[-Cy_{1}(x)\ll u_{-}(x)\ll-C^{-1}z_{1}(x),\ \forall x\in\Omega \tag{3.4}\]
_and_
\[-Cy_{2}(x)\ll v_{-}(x)\ll-C^{-1}z_{2}(x),\ \forall x\in\Omega. \tag{3.5}\]
_If \(\alpha_{i}\leq 0,i=1,2,\) then the constant-sign solutions \((u_{+},v_{+})\) and \((u_{-},v_{-})\) are unique._
Proof.: By using the sub-supersolution method, we first prove the existence of a positive solution \((u_{+},v_{+})\) within \([C^{-1}z_{1},Cy_{1}]\times[C^{-1}z_{2},Cy_{2}]\). Given that \(z_{i},y_{i}\geq 0\) in \(\overline{\Omega}\), we have \(sgn(u_{+}),sgn(v_{+})\equiv 1\) in \([C^{-1}z_{1},Cy_{1}]\) and \([C^{-1}z_{2},Cy_{2}]\), respectively. Obviously, we have
\[-C^{-1}<0\leq\min\{\frac{(C^{-1}z_{1})^{\alpha_{i}}}{(Cy_{2})^{\beta_{i}}}, \frac{(Cy_{1})^{\alpha_{i}}}{(Cy_{2})^{\beta_{i}}}\},\ \mbox{for all}\ x\in\Omega_{\delta}.\]
Moreover, by (1.1), (2.3) and for \(C>1\), for \(\alpha_{i}>0\), we obtain
\[C^{-1}d(x)^{\alpha_{i}-\beta_{i}} \leq (Cc)^{-(\alpha_{i}+\beta_{i})}d(x)^{\alpha_{i}-\beta_{i}}\] \[\leq \frac{(C^{-1}c^{-1}d(x))^{\alpha_{i}}}{(Ccd(x))^{\beta_{i}}}\leq \frac{(C^{-1}z_{1})^{\alpha_{i}}}{(Cy_{2})^{\beta_{i}}},\ \ \mbox{for all}\ x\in\Omega\backslash\overline{\Omega}_{\delta},\]
\[Cd(x)^{\alpha_{i}-\beta_{i}} \geq c^{\alpha_{i}+\beta_{i}}(Cd(x))^{\alpha_{i}-\beta_{i}}\] \[\geq \frac{(Ccd(x))^{\alpha_{i}}}{(Cc^{-1}d(x))^{\beta_{i}}}\geq\frac {(Cy_{1})^{\alpha_{i}}}{(Cy_{2})^{\beta_{i}}},\ \ \mbox{for all}\ x\in\overline{\Omega},\]
while, if \(\alpha_{i}\leq 0\), it holds
\[C^{-1}d(x)^{\alpha_{i}-\beta_{i}}\leq(Ccd(x))^{\alpha_{i}-\beta_{i}}\leq\frac {(Cy_{1})^{\alpha_{i}}}{(Cy_{2})^{\beta_{i}}},\ \ \mbox{for all}\ x\in\Omega\backslash\overline{\Omega}_{\delta},\]
\[Cd(x)^{\alpha_{i}-\beta_{i}}\geq(C^{-1}c^{-1}d(x))^{\alpha_{i}-\beta_{i}}\geq \frac{(C^{-1}z_{1})^{\alpha_{i}}}{(C^{-1}z_{2})^{\beta_{i}}},\ \ \mbox{for all}\ x\in\overline{\Omega},\]
provided that \(C>1\) is sufficiently large. Then, in view of (2.1) and (2.2), it follows that
\[-\Delta(C^{-1}z_{i})+C^{-1}z_{i}=C^{-1}\left\{\begin{array}{ll}d(x)^{\alpha _{i}-\beta_{i}}&\mbox{in}\ \ \Omega\backslash\overline{\Omega}_{\delta},\\ -1&\mbox{in}\ \ \Omega_{\delta},\end{array}\right.\]
\[\leq\left\{\begin{array}{ll}\frac{(C^{-1}z_{1})^{\alpha_{i}}}{(Cy_{2})^{ \beta_{i}}}&\mbox{if}\ \alpha_{i}\geq 0\\ \frac{(Cy_{1})^{\alpha_{i}}}{(Cy_{2})^{\beta_{i}}}&\mbox{if}\ \alpha_{i}\leq 0 \end{array}\right.\leq\frac{|u|^{\alpha_{i}}}{|v|^{\beta_{i}}}\ \mbox{in}\ \Omega,\]
and
\[-\Delta(Cy_{i})+Cy_{i}=Cd(x)^{\alpha_{i}-\beta_{i}}\]
\[\geq\left\{\begin{array}{ll}\frac{(Cy_{1})^{\alpha_{i}}}{(Cy_{2})^{\beta_ {i}}}&\mbox{if}\ \alpha_{i}\geq 0\\ \frac{(C^{-1}z_{1})^{\alpha_{i}}}{(C^{-1}z_{2})^{\beta_{i}}}&\mbox{if}\ \alpha_{i}\leq 0 \end{array}\right.\geq\frac{|u|^{\alpha_{i}}}{|v|^{\beta_{i}}}\ \mbox{in}\ \Omega,\]
for all \((u_{1},u_{2})\in[C^{-1}z_{1},Cy_{1}]\times[C^{-1}z_{2},Cy_{2}].\) Then, [15, Theorem 2] ensures the existence of a solution \((u_{+},v_{+})\in\mathcal{C}^{1,\tau}(\overline{\Omega})\times\mathcal{C}^{1,\tau}(\overline{\Omega}),\) for some \(\tau\in(0,1),\) of problem (P) within \([C^{-1}z_{1},Cy_{1}]\times[C^{-1}z_{2},Cy_{2}]\). In view of Proposition 3, when \(\alpha_{i}\leq 0\), \(i=1,2\), \((u_{+},v_{+})\) is the unique positive solution of (P).
On the other hand, by (2.3) and (1.1), for each compact set \(\mathrm{K}\subset\Omega\), there is a constant \(\eta=\eta(\mathrm{K})>0\) such that
\[\begin{array}{l}\eta+\underline{\mathrm{X}}_{i}(x):=\eta+C^{-1}\left\{ \begin{array}{ll}d(x)^{\alpha_{i}-\beta_{i}}&\mbox{in}\ \ \Omega\backslash\overline{\Omega}_{\delta},\\ -1&\mbox{in}\ \ \Omega_{\delta},\end{array}\right.\\ \\ <\min\{(Cc)^{-(\alpha_{i}+\beta_{i})},(Cc)^{\alpha_{i}-\beta_{i}}\}\ d(x)^{ \alpha_{i}-\beta_{i}}\\ \\ \leq\left\{\begin{array}{ll}\frac{(C^{-1}c^{-1}d(x))^{\alpha_{i}}}{(Ccd(x))^ {\beta_{i}}}&\mbox{if}\ \alpha_{i}\geq 0\\ \frac{(Ccd(x))^{\alpha_{i}}}{(Ccd(x))^{\beta_{i}}}&\mbox{if}\ \alpha_{i}\leq 0 \end{array}\right.\leq\left\{\begin{array}{ll}\frac{(C^{-1}z_{1})^{\alpha_{i }}}{(Cy_{2})^{\beta_{i}}}&\mbox{if}\ \alpha_{i}\geq 0\\ \frac{(Cy_{1})^{\alpha_{i}}}{(Cy_{2})^{\beta_{i}}}&\mbox{if}\ \alpha_{i}\leq 0 \end{array}\right.\\ \\ \leq\frac{u_{+}^{\alpha_{i}}}{v_{+}^{\beta_{i}}}:=\mathrm{X}_{i}(x)\ \mbox{a.e. in}\ \Omega\cap\mathrm{K}\end{array}\]
and
\[\begin{array}{l}\overline{\mathrm{X}}_{i}(x):=Cd(x)^{\alpha_{i}-\beta_{i}}> \eta+\max\{(Cc)^{\alpha_{i}+\beta_{i}},(C^{-1}c^{-1})^{\alpha_{i}-\beta_{i}} \}\ d(x)^{\alpha_{i}-\beta_{i}}\\ \\ \geq\eta+\left\{\begin{array}{ll}\frac{(Ccd(x))^{\alpha_{i}}}{(C^{-1}c^{-1}d( x))^{\beta_{i}}}&\mbox{if}\ \alpha_{i}\geq 0\\ \frac{(C^{-1}z_{1})^{\alpha_{i}}}{(C^{-1}z_{2})^{\beta_{i}}}&\mbox{if}\ \alpha_{i}\leq 0 \end{array}\right.\geq\eta+\left\{\begin{array}{ll}\frac{(Cy_{1})^{\alpha_{i }}}{(C^{-1}z_{2})^{\beta_{i}}}&\mbox{if}\ \alpha_{i}\geq 0\\ \frac{(C^{-1}z_{1})^{\alpha_{i}}}{(C^{-1}z_{2})^{\beta_{i}}}&\mbox{if}\ \alpha_{i}\leq 0 \end{array}\right.\\ \\ \geq\eta+\frac{u_{+}^{\alpha_{i}}}{v_{+}^{\beta_{i}}}:=\eta+\mathrm{X}_{i}(x) \ \mbox{a.e. in}\ \Omega\cap\mathrm{K},\end{array}\]
with \(\underline{\mathrm{X}}_{i},\mathrm{X}_{i},\overline{\mathrm{X}}_{i}\in L^{ \infty}_{loc}(\Omega)\). By the strong comparison principle [2, Proposition 2.6], we infer that (3.2) holds true.
Following a quite similar argument, we obtain the existence of a solution \((u_{-},v_{-})\in\mathcal{C}^{1,\tau}(\overline{\Omega})\times\mathcal{C}^{1,\tau}(\overline{\Omega})\), for some \(\tau\in(0,1)\), of problem (P) satisfying (3.4) and (3.5), which, when \(\alpha_{i}\leq 0\), \(i=1,2\), is the unique negative solution of (P).
### Nodal solutions
**Theorem 4**.: _Under assumption (1.1), problem_ (P) _admits at least a solution \((u_{*},v_{*})\) in \(\mathcal{H}_{0}^{1}(\Omega)\times\mathcal{H}_{0}^{1}(\Omega)\) within \([-C^{-1}z_{1},C^{-1}z_{1}]\times[-C^{-1}z_{2},C^{-1}z_{2}]\). Moreover, if \(\alpha_{i}\leq 0\), then \((u_{*},v_{*})\) is nodal with synchronous sign components._
Proof.: According to Theorem 2 and Remark 2, system (P) admits a nontrivial solution \((u_{*},v_{*})\in\mathcal{H}_{0}^{1}(\Omega)\times\mathcal{H}_{0}^{1}(\Omega)\) located in \([-C^{-1}z_{1},C^{-1}z_{1}]\times[-C^{-1}z_{2},C^{-1}z_{2}]\). In view of (3.2)-(3.5), we infer that \((u_{*},v_{*})\) is a third solution of (P), while Theorem 3 together with Remark 2 forces \((u_{*},v_{*})\) to be nodal, in the sense that the components \(u_{*}\) and \(v_{*}\) are nontrivial and are not both of the same constant sign.
Assume that \(u_{*}<0<v_{*}\). Testing the first equation in (P) with \(-u_{*}^{-}\), we get
\[\int_{\Omega}(|\nabla u_{*}^{-}|^{2}+|u_{*}^{-}|^{2})\ \mathrm{d}x = -\int_{\Omega}sgn(v_{*})\frac{|u_{*}|^{\alpha_{1}}}{|v_{*}|^{\beta_ {1}}}u_{*}^{-}\ \mathrm{d}x\] \[= -\int_{\Omega}\frac{|u_{*}|^{\alpha_{1}}}{|v_{*}|^{\beta_{1}}}u_{ *}^{-}\ \mathrm{d}x<0,\]
which forces \(u_{*}^{-}=0\), a contradiction. The same conclusion can be drawn if we assume \(v_{*}<0<u_{*}\). Hence, \(u_{*}\) and \(v_{*}\) cannot be of opposite constant sign; therefore, at least one of \(u_{*}\) or \(v_{*}\) changes sign.
Assume that \(u_{*}\) changes sign and \(v_{*}>0\). Let \(\Omega_{*}\subset\Omega\) be a subset such that \(u_{*}<0\) in \(\Omega_{*}\). In view of [20, Proposition 1.61], \(u_{*}^{-}\mathbb{1}_{\Omega_{*}}\in\mathcal{H}_{0}^{1}(\Omega)\), where \(\mathbb{1}_{\Omega_{*}}\) denotes the characteristic function of \(\Omega_{*}\). Testing the first equation in (P) with \(-u_{*}^{-}\mathbb{1}_{\Omega_{*}}\in\mathcal{H}_{0}^{1}(\Omega)\), we get
\[\int_{\Omega_{*}}(|\nabla u_{*}^{-}|^{2}+|u_{*}^{-}|^{2})\ \mathrm{d}x = -\int_{\Omega_{*}}sgn(v_{*})\frac{|u_{*}|^{\alpha_{1}}}{|v_{*}|^{\beta_{1}}}u_{*}^{-}\ \mathrm{d}x\] \[= -\int_{\Omega_{*}}\frac{|u_{*}|^{\alpha_{1}}}{|v_{*}|^{\beta_{1}}}u_{*}^{-}\ \mathrm{d}x\leq 0,\]
which forces \(u_{*}^{-}\mathbb{1}_{\Omega_{*}}=0\), a contradiction. The same conclusion can be drawn if we assume that \(u_{*}\) changes sign and \(v_{*}<0\), or that \(v_{*}\) changes sign and \(u_{*}>0\) or \(u_{*}<0\). Hence, \(u_{*}\) and \(v_{*}\) are necessarily sign-changing with synchronous signs, satisfying (1.3). This completes the proof.
|
2306.16024
|
Retrospective: RAIDR: Retention-Aware Intelligent DRAM Refresh
|
Dynamic Random Access Memory (DRAM) is the prevalent memory technology used
to build main memory systems of almost all computers. A fundamental shortcoming
of DRAM is the need to refresh memory cells to keep stored data intact. DRAM
refresh consumes energy and degrades performance. It is also a technology
scaling challenge as its negative effects become worse as DRAM cell size
reduces and DRAM chip capacity increases.
Our ISCA 2012 paper, RAIDR, examines the DRAM refresh problem from a modern
computing systems perspective, demonstrating its projected impact on systems
with higher-capacity DRAM chips expected to be manufactured in the future. It
proposes and evaluates a simple and low-cost solution that greatly reduces the
performance & energy overheads of refresh by exploiting variation in data
retention times across DRAM rows. The key idea is to group the DRAM rows into
bins in terms of their minimum data retention times, store the bins in low-cost
Bloom filters, and refresh rows in different bins at different rates.
Evaluations in our paper (and later works) show that the idea greatly improves
performance & energy efficiency and its benefits increase with DRAM chip
capacity. The paper embodies an approach we have termed system-DRAM co-design.
This short retrospective provides a brief analysis of our RAIDR paper and its
impact. We briefly describe the mindset and circumstances that led to our focus
on the DRAM refresh problem and RAIDR's development, discuss later works that
provided improved analyses and solutions, and make some educated guesses on
what the future may bring on the DRAM refresh problem (and more generally in
DRAM technology scaling).
|
Onur Mutlu
|
2023-06-28T08:51:59Z
|
http://arxiv.org/abs/2306.16024v1
|
# _Retrospective:_ RAIDR: Retention-Aware Intelligent DRAM Refresh
###### Abstract
Dynamic Random Access Memory (DRAM) is the prevalent memory technology used to build main memory systems of almost all computers. A fundamental shortcoming of DRAM is the need to refresh memory cells to keep stored data intact. DRAM refresh consumes energy and degrades performance. It is also a technology scaling challenge as its negative effects become worse as DRAM cell size reduces and DRAM chip capacity increases.
Our ISCA 2012 paper, RAIDR [1], examines the DRAM refresh problem from a modern computing systems perspective, demonstrating its projected impact on systems with higher-capacity DRAM chips expected to be manufactured in the future. It proposes and evaluates a simple and low-cost solution that greatly reduces the performance & energy overheads of refresh by exploiting variation in data retention times across DRAM rows. The key idea is to group the DRAM rows into bins in terms of their minimum data retention times, store the bins in low-cost Bloom filters, and refresh rows in different bins at different rates. Evaluations in our paper (and later works) show that the idea greatly improves performance & energy efficiency and its benefits increase with DRAM chip capacity. The paper embodies an approach we have termed _system-DRAM co-design_.
This short retrospective provides a brief analysis of our RAIDR paper and its impact. We briefly describe the mindset and circumstances that led to our focus on the DRAM refresh problem and RAIDR's development, discuss later works that provided improved analyses and solutions, and make some educated guesses on what the future may bring on the DRAM refresh problem (and more generally in DRAM technology scaling).
## I Background, Approach & Mindset
At the time we began our focus on solving the DRAM refresh (i.e., data retention) challenge in late 2010, my research group, SAFARI, had already been working on memory controllers and memory technology scaling issues, motivated by many challenges memory systems, in particular the DRAM technology [2], have been facing (as described in, e.g., [3, 4, 5]). Our intense work on memory systems started during my tenure at Microsoft Research from 2006 and continued at CMU from 2009. For example, we had developed better memory schedulers for multi-core processors (e.g., [6, 7, 8, 9, 10]), developed platforms to perform voltage and frequency scaling to save DRAM energy (e.g., [11]) and architected emerging memory technologies to replace or augment DRAM (e.g., [12, 13, 14]). We were quite excited about the prospect of much more capable memory controllers in enabling better memory systems. As such, we were pursuing new memory-controller and system-level techniques to 1) overcome the challenging device- and circuit-level scaling issues of memory technologies and 2) better exploit underlying characteristics of memory technology; an approach we termed _system-DRAM co-design_[4, 5].
RAIDR is a product of this approach. Our focus on data retention issues and other low-level issues in DRAM especially increased via discussions with the Samsung DRAM Design Team, who visited us in April 2011 and encouraged the development of our system-level solutions to DRAM issues, enabling strong support both technically and funding-wise. In fact, much of our ensuing research in DRAM was supported by generous gift funding by and technical discussions with Samsung based on a proposal entitled _"New ideas to enhance DRAM scaling: Scaling-aware controller design and co-design of DRAM and controllers"_ (Intel provided similar gift funding and technical discussions).
## II Contributions and Impact of RAIDR
RAIDR is the first work to propose a low-cost memory controller technique that reduces refresh operations by exploiting variation in data retention times across DRAM rows. Its appeal comes from its simplicity and low cost, enabled by the careful use of Bloom filters [15]. Exploiting the DRAM data retention time distribution [16], RAIDR can eliminate a very large fraction (e.g., \(\sim\)75% or more) of refresh operations with very small hardware cost at the memory controller.
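As a purely illustrative aside (not the hardware design evaluated in the paper), the following Python sketch conveys the flavor of the mechanism: a few Bloom filters record which rows fall into the short-retention bins, and a lookup decides how often a given row must be refreshed. The bin boundaries, filter sizes and row addresses below are made-up placeholders.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions into an m-bit array."""
    def __init__(self, m_bits=8192, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(item))

# Hypothetical bins: rows needing refresh every 64 ms or 128 ms; the rest get 256 ms.
bin_64ms, bin_128ms = BloomFilter(), BloomFilter()
bin_64ms.add(0x1A2B)     # weak row discovered by retention profiling (made-up address)
bin_128ms.add(0x3C4D)

def refresh_interval_ms(row):
    if row in bin_64ms:      # a false positive only causes extra refreshes, never data loss
        return 64
    if row in bin_128ms:
        return 128
    return 256

print(refresh_interval_ms(0x1A2B), refresh_interval_ms(0x3C4D), refresh_interval_ms(0x9999))
```

Because a Bloom filter can only err on the side of reporting membership, a false positive merely refreshes a row more often than strictly needed, never less; this one-sided error is what makes such a compact approximate structure safe for this use.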
Apart from the new technique it introduced, we believe the RAIDR paper made two other major contributions that have enabled a large number of future works and new ideas. First, it provided an empirical scaling analysis that clearly demonstrated the importance of the DRAM refresh problem in modern systems: if nothing is done about it, DRAM refresh would waste almost half of the throughput and half of the energy of a high-capacity 64-Gb DRAM chip! This analytical prediction encouraged more works in the topic area. Second, it demonstrated a methodical way of exploiting cell-level heterogeneous data retention times at the system (e.g., memory controller) level: if data retention times of DRAM rows are accurately known, the system can use them to optimize DRAM refresh and get rid of most refresh operations. This demonstration enabled other works to develop 1) methods for accurately determining DRAM data retention times and 2) other system-level approaches to optimize DRAM behavior using data retention time information.
## III Building on RAIDR and Making It Work
We believe RAIDR enabled a refreshing approach to DRAM refresh. Its largest contribution could be the works it has inspired that rigorously examined the questions of 1) how to perform accurate DRAM data retention time profiling, 2) how to overcome potential hurdles that stand in the way of obtaining accurate minimum data retention times, 3) how to reliably get rid of unnecessary refresh operations.
We wanted to make RAIDR work in a real system setting. To this end, collaboratively with Intel, we developed an FPGA-based flexible DRAM testing infrastructure [17] that enabled us to rigorously test data retention times of cells in real DDR3 DRAM chips. Using this infrastructure, later open sourced as SoftMC [18, 19] and DRAM Bender [20, 21], we experimentally examined practical issues that affect the accuracy (and performance) of DRAM data retention time profiling. We analyzed two major issues that make such profiling very challenging: 1) data pattern dependence (DPD) of retention times [17, 22], and 2) the variable retention time (VRT) phenomenon [23, 24, 17]. Our follow-up work, which appeared at ISCA 2013 [17], provides a detailed experimental analysis of these challenges in cutting-edge DRAM chips, demonstrating that ideas like RAIDR that depend on accurate identification of retention times are not easy to exploit in practice. Later works (e.g., [25, 26, 27, 28, 29, 30, 31, 32]) developed new methods for making RAIDR-like techniques more practical by tackling especially the DPD and VRT problems and enhancing retention time profiling methods to work in the presence of DPD and VRT, usually by exploiting ECC techniques that have since become mainstream in DRAM chips (see [31, 32, 33]) to tolerate VRT [34].
The development of our flexible FPGA-based DRAM testing infrastructure also enabled experimental DRAM research in directions that are completely different from retention time profiling and refresh. These include studies that provided valuable experimental data on various DRAM characteristics, including RowHammer [20, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44], latency [45, 46, 47, 48], voltage-latency-reliability relationship [49], power consumption and modeling [50]. Using this infrastructure, later research also demonstrated the ability of real off-the-shelf DRAM chips to perform data copy/initialization and bulk bitwise operations [51, 52, 53, 54], implement physical unclonable functions [56], and generate true random numbers [57, 58]. We believe the investment we made to try to make RAIDR work using a real FPGA-based infrastructure helped us and the broader research community uncover many interesting characteristics of DRAM chips and propose new ideas to make DRAM-based systems more secure, reliable, efficient, and high performance.
Other later works provided refined models of DRAM refresh's impact on system performance (e.g., [60, 59]) and developed new
methods to reduce DRAM refresh's negative impact on performance & energy (e.g., [41, 59, 60, 61, 62, 63, 64, 65, 66, 67]). Our HPCA 2014 paper [59] developed a more refined projection of the effect of DRAM refresh as technology scales. AVATAR in DSN 2015 [26] and REAPER in ISCA 2017 [30] enabled more practical ways of exploiting heterogeneous retention times in the presence of VRT. Our recent work [66] shows that with a more flexible DRAM interface that gives some autonomy to DRAM chips, RAIDR can be more efficiently implemented inside the DRAM chip.
## IV Summary and Future Outlook
RAIDR is a nice example of how enthusiastic support from industry can foster new ideas that can open up many new analyses and other ideas. We were inspired by our deep technical discussions with especially Samsung and Intel, along with prior works that described DRAM technology scaling challenges (e.g., [3]) and that developed promising solutions (e.g., [68, 69]). Engineers from Samsung and Intel later wrote an insightful paper [34] on DRAM scaling challenges, which described refresh as a key problem and advocated a controller-DRAM co-design approach as we had been advocating [1, 4]. RAIDR was also a nice example of how teaching & research smoothly feed each other: much of the research was done as part of a group project in the Parallel Computer Architecture class I taught at CMU in Fall 2011.
Looking forward, DRAM technology scaling is getting worse and data retention will continue to be an important issue [34, 70]. The negative effects of DRAM refresh will be (and are being) exacerbated by other technology scaling issues like RowHammer [35] that require even more refreshes as a solution [41, 44, 47]. We believe there are a lot more new ideas and techniques to develop to minimize the impact of refresh on computing systems.
|
2301.05799
|
An Accelerated Lyapunov Function for Polyak's Heavy-Ball on Convex
Quadratics
|
In 1964, Polyak showed that the Heavy-ball method, the simplest momentum
technique, accelerates convergence of strongly-convex problems in the vicinity
of the solution. While Nesterov later developed a globally accelerated version,
Polyak's original algorithm remains simpler and more widely used in
applications such as deep learning. Despite this popularity, the question of
whether Heavy-ball is also globally accelerated or not has not been fully
answered yet, and no convincing counterexample has been provided. This is
largely due to the difficulty in finding an effective Lyapunov function:
indeed, most proofs of Heavy-ball acceleration in the strongly-convex quadratic
setting rely on eigenvalue arguments. Our study adopts a different approach:
studying momentum through the lens of quadratic invariants of simple harmonic
oscillators. By utilizing the modified Hamiltonian of Stormer-Verlet
integrators, we are able to construct a Lyapunov function that demonstrates an
O(1/k^2) rate for Heavy-ball in the case of convex quadratic problems. This is
a promising first step towards potentially proving the acceleration of Polyak's
momentum method and we hope it inspires further research in this field.
|
Antonio Orvieto
|
2023-01-14T01:16:40Z
|
http://arxiv.org/abs/2301.05799v1
|
# An Accelerated Lyapunov Function for Polyak's Heavy-Ball on Convex Quadratics
###### Abstract
In 1964, Polyak showed that the Heavy-ball method, the simplest momentum technique, accelerates convergence of strongly-convex problems in the vicinity of the solution. While Nesterov later developed a globally accelerated version, Polyak's original algorithm remains simpler and more widely used in applications such as deep learning. Despite this popularity, the question of whether Heavy-ball is also globally accelerated or not has not been fully answered yet, and no convincing counterexample has been provided. This is largely due to the difficulty in finding an effective Lyapunov function: indeed, most proofs of Heavy-ball acceleration in the strongly-convex quadratic setting rely on eigenvalue arguments. Our study adopts a different approach: studying momentum through the lens of quadratic invariants of simple harmonic oscillators. By utilizing the modified Hamiltonian of Stormer-Verlet integrators, we are able to construct a Lyapunov function that demonstrates an \(O(1/k^{2})\) rate for Heavy-ball in the case of convex quadratic problems. This is a promising first step towards potentially proving the acceleration of Polyak's momentum method and we hope it inspires further research in this field.
Footnote 1: A differentiable function \(f:\mathbb{R}^{d}\to\mathbb{R}\) is said to be \(L\)-smooth if it has \(L\)-Lipschitz gradients.
Footnote 2: This lower bound holds just for \(k<d\) hence it is only interesting in the high-dimensional setting.
## 1 Introduction
The problem of unconstrained continuous convex optimization consists in finding an element of the set \(\arg\min_{q\in\mathbb{R}^{d}}f(q)\), for some lower bounded convex \(f:\mathbb{R}^{d}\to\mathbb{R}\), generally assumed to be regular, e.g., twice continuously differentiable: \(f\in C^{2}(\mathbb{R}^{d},\mathbb{R})\).
### Acceleration in discrete- and continuous-time
In 1979 Nemirovsky and Yudin [17] showed that, if \(f\) is convex and \(L\)-smooth1, no gradient-based optimizer can converge to a solution faster than \(O(1/k^{2})\), where \(k\) is the number of gradient evaluations2. While Gradient Descent (GD) converges like \(O(1/k)\), the optimal rate \(O(1/k^{2})\) is achieved by the celebrated Accelerated Gradient Descent (AGD) method, proposed by Nesterov in
1982 [19]: starting from \(p_{0}=0\) and a random \(q_{0}\), the approximation \(q_{k}\) to a problem solution \(q^{*}\) is computed iteratively as3
Footnote 3: Many similar writings are possible. Here, we consider the particular version studied in [22] and a physicist notation, where \(p_{k}\) is a velocity variable. This makes the connection to continuous-time cleaner and consistent with recent work on the geometry of momentum methods [5].
\[\begin{cases}q_{k+1}=q_{k}+\beta_{k}hp_{k}-h^{2}\nabla f(q_{k}+\beta_{k}hp_{k} )\\ p_{k+1}=(q_{k+1}-q_{k})/h\end{cases}.\] (AGD)
where \(\beta_{k}=\frac{k-1}{k+2}\) and \(h^{2}\) is the step-size (we use the notation \(h^{2}\) instead of the standard \(\eta\) for a reason which will become apparent in the next sections). Interestingly, the different behaviour of GD and AGD is retained in the continuous-time limit (as the step-size vanishes), recently studied by Su, Boyd and Candes [22], but already present in the seminal works of Polyak [20] and Gavurin [6]:
\[\dot{q}+\nabla f(q)=0;\] (GD-ODE)
\[\ddot{q}+\frac{r}{t}\dot{q}+\nabla f(q)=0.\] (AGD-ODE)
Namely, we have that GD-ODE converges like \(O(1/t)\) and AGD-ODE (with \(r\geq 3\)) like \(O(1/t^{2})\), where \(t>0\) is the time variable. This result gave researchers a new tool to grasp the baffling essence (see discussion in [2, 22]) of accelerated optimizers, and led to the design of many novel fast interpretable algorithms [1, 14, 25, 24].
### Evaluating gradients at a shifted position
There are two modifications of GD that bring AGD about:
1. inclusion of the momentum term (i.e. using \(\beta_{k}\neq 0\));
2. change in gradient extrapolation point: \(\nabla f(q_{k})\to\nabla f(q_{k}+\beta_{k}hp_{k})\).
Questions arise immediately:
_Are both these modifications necessary for acceleration?_
_In particular, is evaluating the gradient at non-iterate points_
_crucial or even necessary for acceleration?_
To put these questions in the right historical context, one has to go back to Polyak's 1964 seminal paper [20], where the very first momentum method was proposed for \(C^{2}\) and \(\mu\)-strongly-convex problems4. Using an elegant functional-analytic argument on multistep methods, Polyak proved that momentum alone -- without shifted gradient evaluation (a.k.a. Heavy-ball (HB), see equation
below) -- is able to achieve acceleration5 in a neighborhood of the solution. This local argument becomes of course global in the quadratic case (for a simplified proof, see Proposition 1 in [15]).
Footnote 5: Here, acceleration is to be understood as a dependency of the rate on the square root of the condition number \(L/\mu\).
\[\begin{cases}q_{k+1}=q_{k}+\beta_{k}hp_{k}-h^{2}\nabla f(q_{k})\\ p_{k+1}=(q_{k+1}-q_{k})/h\end{cases}.\] (HB)
Despite the many attempts, nobody in the last 56 years has been able to show that HB has a global (i.e. for any initialization) accelerated rate -- neither in the strongly-convex case (using a fixed momentum) nor in the non-strongly-convex case (using an increasing \(\frac{k-1}{k+2}\) momentum). Beyond the technical difficulty, another plausible reason may also be a lack of interest after the introduction of Nesterov's globally accelerated method in 1982, which overshadowed the conceptually simpler method of Polyak.
However, many researchers in the last decade, supported by numerical evidence and by the success of Heavy-ball in deep learning [13], expressed their belief that HB is accelerated:
_[...] supported by the numerical simulations we envisage that the convergence factor could be strengthened even further. This is indeed left as a future work._
- Ghadimi et al. [7], 2015
_Despite the long history of this approach, there is still an open question whether the heavy ball method converges to the optimum globally with accelerated rate when the objective function is twice continuous differentiable._
- Gorbunov et al. [8], 2019
_Neither the evaluation of the gradient at a shifted position, nor a specifically engineered damping parameter, as for example proposed in Nesterov (2004, Sec. 2.2), seem6 necessary._
Footnote 6: After talking to the first author, we decided to replace “_are_” (as in the original preprint) with “_seem_”: indeed, the argument in [16] is asymptotic and therefore somewhat equivalent to the one of Polyak [20].
- Muehlebach and Jordan [16], 2020
Other researchers believe HB is not accelerated:
_If we can translate this argument to the discrete case we can understand why_ AGD _achieves acceleration globally for strongly-convex functions but the Heavy-ball method does not._
- Shi et al. [21], 2018
While on the theoretical side the opinion is mixed, on the experimental side no numerical simulation7 has been able to show that HB is not accelerated. In Figure 1, we provide two examples for the non-strongly-convex case (i.e. \(\mu\) very small, such that an increasing momentum is preferable, leading to \(1/k^{2}\) convergence as opposed to \((1-\sqrt{\mu/L})^{k}\)). In particular, we show that HB is comparable to AGD through the lens of the pathological lower-bounding quadratic example introduced by [17] and used to construct the \(O(1/k^{2})\) bound in convex optimization -- at least until the effect of non-trivial strong-convexity becomes dominant (at around \(f(q_{k})=10^{-6}\)).
Footnote 7: In [15], the authors show that there exists a strongly-convex smooth function such that Heavy-ball does not converge. However, as also pointed out by Ghadimi et al. [7], such a function is not \(C^{2}\), and a big step-size is used — which violates the convergence conditions of Thm. 4 in [7]. As such, this function does not constitute a proper counterexample.
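The following self-contained Python sketch is added here as an illustration only; it does not reproduce the exact experiments of Figure 1. It runs GD, HB and AGD, all with the increasing momentum \(\frac{k-1}{k+2}\), on an ill-conditioned convex quadratic in the spirit of the lower-bound example; the matrix, dimension and step-size are assumptions made purely for this sketch.

```python
import numpy as np

d = 500
# Worst-case-style quadratic (assumed setup): tridiagonal Laplacian A, linear term e_1.
A = 2 * np.eye(d) - np.eye(d, k=1) - np.eye(d, k=-1)
b = np.zeros(d); b[0] = 1.0
f = lambda q: 0.5 * q @ A @ q - b @ q
grad = lambda q: A @ q - b
q_star = np.linalg.solve(A, b)
f_star = f(q_star)

L = np.linalg.eigvalsh(A).max()
h2 = 1.0 / L                          # step-size h^2, chosen only for illustration

def run(method, iters=2000):
    q, q_prev, out = np.zeros(d), np.zeros(d), []
    for k in range(1, iters + 1):
        beta = (k - 1) / (k + 2)
        if method == "gd":
            q_next = q - h2 * grad(q)
        elif method == "hb":              # gradient evaluated at the iterate
            q_next = q + beta * (q - q_prev) - h2 * grad(q)
        else:                              # "agd": gradient at the shifted point
            y = q + beta * (q - q_prev)
            q_next = y - h2 * grad(y)
        q_prev, q = q, q_next
        out.append(f(q) - f_star)
    return np.array(out)

for m in ("gd", "hb", "agd"):
    print(m, "suboptimality at k=2000:", run(m)[-1])
# Here hb and agd should decay much faster than gd (roughly 1/k^2 vs 1/k).
```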
### Contributions
The purpose of the manuscript at hand is to study the effect of shifts in gradient extrapolation points on acceleration in convex optimization (i.e. to study the difference between Heavy-ball and Nesterov's method). In particular, the next pages are organized as follows:
1. We start from a continuous-time argument: inspired by a recent idea from Flammarion and Bach [4], in Section 2.1 we show how \(\mathtt{AGD-ODE}\) with damping \(2/t\) can be derived from the equation of a simple harmonic oscillator: \(\ddot{u}=-Au\). By using Lyapunov equations and a simple change of variables, we retrieve the Lyapunov function proposed by Su, Boyd and Candes [22] to prove a rate \(O(1/t^{2})\) for \(\mathtt{AGD-ODE}\). This procedure is principled and leads to many insights on Lyapunov function design.
2. In Section 2.2, we apply the same methodology in discrete time, and show that HB with momentum \(\frac{k-1}{k+1}\) can be derived from the Stormer-Verlet discretization of the simple harmonic oscillator. Solving again Lyapunov's equations, we are able to show an \(O(1/k^{2})\) rate for a Heavy-ball algorithm for convex quadratics. While this rate is already present in [4], our proof technique is different as it relies on a Lyapunov function as opposed to an eigenvalue analysis.
3. In Section 3, by generalizing the discrete-time Lyapunov function found in Section 2.2 we derive a modified Heavy-ball method \[q_{k+1}=q_{k}+\frac{k-1}{k+r-1}(q_{k}-q_{k-1})-h^{2}\frac{k+\frac{r-2}{2}}{k+r-1}\nabla f(q_{k}),\] with a rate of convergence \(O(1/k^{2})\) for any \(k\geq 2\) and \(r\geq 2\). Our result not only generalizes the theory in [4], but also provides an interesting connection between the continuous and the discrete -- as the used Lyapunov function converges, in the limit \(h\to 0\), to the one used in [22] for \(r\geq 2\).

Figure 1: For both examples HB with momentum \(\frac{k-1}{k+2}\) exhibits an accelerated \(1/k^{2}\) convergence rate, even though \(\mathtt{AGD}\) with momentum \(\frac{k-1}{k+2}\) is faster in a neighborhood of the optimizer due to strong-convexity. Instead, GD violates the Nesterov \(O(1/k^{2})\) upper bound. We recall that, while Nesterov’s upper bound holds for all \(k>0\), the \(O(1/k^{2})\) lower bound (originally discovered by Nemirovski and Yudin [17]) only holds at \(k=d/2\) (for more details, check the discussion in [18]).
Recent related works. Very recently, Wang et al. [23] proved that Heavy-ball is accelerated for a class of functions satisfying the Polyak-Lojasiewicz condition. Instead, here we provide a Lyapunov function for the non-strongly-convex setting, where the Polyak-Lojasiewicz constant vanishes. We remark that, for strongly-convex quadratic potentials, Heavy-ball is already known to achieve acceleration [15]. However, the eigenvalue argument used in [15] cannot be leveraged in the non-strongly-convex setting, where the minimum eigenvalue can be arbitrarily low. As such, our work provides insights on how to construct effective Lyapunov functions in the non-quadratic case, where Lyapunov arguments are often the go-to option.
## 2 From quadratic invariants of oscillators to accelerated rates
Our procedure in this section is inspired by a beautiful idea presented by Flammarion and Bach [4]: it is sometimes possible to translate a time-dependent convergence rate problem into a time-independent stability problem. Here we go one step further, and show how, with an additional step (computation of quadratic invariants), it is possible to derive Lyapunov functions and rates for the corresponding algorithms. We first illustrate the idea in continuous-time and then proceed with the discrete-time analysis.
Our starting point is the following ODE:
\[\ddot{q}+\frac{2}{t}\dot{q}+\nabla f(q)=0.\] (AGD-ODE2)
From the analysis in [22], we know that on a quadratic \(f(q)=f^{*}+\frac{1}{2}\langle(q-q^{*}),A(q-q^{*})\rangle\), with \(A\) positive semidefinite and \(f^{*}\in\mathbb{R}\), the solution converges to \(q^{*}\in\arg\min_{q\in\mathbb{R}^{d}}f(q)\) at the rate \(O(1/t^{2})\). To prove this rate, the authors in [22] use the following Lyapunov function:
\[V(q,t)=2t^{2}(f(q)-f^{*})+\|t\dot{q}+(q-q^{*})\|^{2}. \tag{1}\]
We show here a _constructive way_ to derive \(V\) (Section 2.1) and then (Section 2.2) we apply the same procedure to get a Lyapunov function for Heavy-ball (i.e., the discretization). For simplicity, we consider here \(f^{*}=0\) and \(q^{*}=0\).
### Lyapunov functions from continuous-time invariants
Consider a harmonic oscillator on the potential \(f(u)=\frac{1}{2}u^{\top}Au\), i.e. \(\ddot{u}=-Au\). From basic physics, we know that such a system is marginally stable (bounded dynamics). By choosing \(u=tq\) we get \(\dot{u}=q+t\dot{q}\) and \(\ddot{u}=\dot{q}+\dot{q}+t\ddot{q}\). This implies
\[\dot{q}+\dot{q}+t\ddot{q}=\ddot{u}=-Atq\quad\implies\quad\ddot{q}+\frac{2}{t} \dot{q}+Aq=0.\]
That is, \(\mathtt{AGD-ODE}2\) can be reconstructed from a simple linearized pendulum. By introducing the variable \(v=\dot{u}\), we can write the pendulum in phase space as a linear dynamical system
\[\begin{pmatrix}\dot{u}\\ \dot{v}\end{pmatrix}=\begin{pmatrix}0&I\\ -A&0\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix}.\]
Hence, the pendulum has the form \(\dot{y}=Fy\), where \(y=(u,v)\). We would now like to get a Lyapunov function for this system. To do this, we recall a fundamental proposition (check Thm. 4.6. in [12]).
**Proposition 1** (Continuous-time Lyapunov equations).: The linear system \(\dot{y}=Fy\) is Lyapunov stable if and only if for all positive semidefinite matrices \(Q\), there exists a symmetric matrix \(P\) such that
\[PF+F^{T}P=-Q. \tag{2}\]
Moreover, \(V(y)=y^{T}Py\) is a Lyapunov function and \(\dot{V}(y)=-y^{T}Qy\).
Since we know that a pendulum is only marginally stable (i.e., not asymptotically stable), we can limit ourselves to the choice of a null matrix \(Q\). Hence, we need to solve the Lyapunov equation \(PF=-F^{T}P\) for \(P\). A solution to this equation (many exist) is \(P=\begin{pmatrix}A&0\\ 0&I\end{pmatrix}\), which implies that
\[V(u)=\langle u,Au\rangle+\|v\|^{2} \tag{3}\]
is a quadratic invariant, i.e. \(\dot{V}(u)=0\). This is well known, since \(V\) is actually twice the total energy (Hamiltonian) of the pendulum. Finally, we can change variables and get that
\[V(q)=t^{2}\langle q,Aq\rangle+\left\|\frac{\mathrm{d}}{\mathrm{d}t}(tq)\right\| ^{2}=2t^{2}f(q)+\|t\dot{q}+(q-q^{*})\|^{2}, \tag{4}\]
is a Lyapunov function for \(\mathtt{AGD-ODE}2\), with \(\dot{V}(q)=0\). This is precisely equation 1.
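As a quick sanity check, added here for illustration only, one can integrate AGD-ODE2 numerically and observe that the quantity in equation 4 stays constant along the trajectory. The matrix \(A\), the initial condition and the time window below are arbitrary choices for this sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.diag([1.0, 10.0])                  # arbitrary PSD matrix for illustration

def agd_ode2(t, y):                       # y = (q, p) with p = dq/dt
    q, p = y[:2], y[2:]
    return np.concatenate([p, -(2.0 / t) * p - A @ q])

q0 = np.array([1.0, -2.0])
sol = solve_ivp(agd_ode2, (0.1, 50.0), np.concatenate([q0, np.zeros(2)]),
                rtol=1e-10, atol=1e-12, dense_output=True)

for t in (0.1, 1.0, 10.0, 50.0):
    q, p = sol.sol(t)[:2], sol.sol(t)[2:]
    V = t**2 * q @ A @ q + np.linalg.norm(t * p + q)**2   # equation 4 with q* = 0
    print(f"t = {t:5.1f}   V = {V:.8f}")                  # approximately constant
```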
From quadratic to convex. With a small modification (using a factor \(r-1\) instead of 1), it is possible to get a Lyapunov function that works for \(\mathtt{AGD-ODE}\) in the more general convex case.
**Proposition 2** (Theorem 3 from [22]).: For convex \(L\)-smooth objectives, \(\mathtt{AGD-ODE}\) converges at a rate \(O(1/t^{2})\). This follows from the fact that
\[V(q,t)=2t^{2}(f(q)-f^{*})+\|t\dot{q}+(r-1)(q-q^{*})\|^{2}. \tag{5}\]
is a Lyapunov function, for \(r\geq 3\).
### Discrete-time invariants
We apply the construction from the last subsection to the discrete case. Inspired by Flammarion and Bach [4], we consider at first a slightly modified HB:
\[q_{k+1}=q_{k}+\frac{k-1}{k+1}(q_{k}-q_{k-1})-h^{2}\frac{k}{k+1}\nabla f(q_{k}).\] (HB2)
This algorithm is the discrete-time equivalent of \(\ddot{q}+\frac{2}{t}\dot{q}+\nabla f(q)=0\). As for the continuous-time case, we start from \(f(q)=\frac{1}{2}\langle q,Aq\rangle\). In this case, HB2 can be written as
\[(k+1)q_{k+1}=2kq_{k}-(k-1)q_{k-1}-h^{2}A(kq_{k}).\]
That is, if we set \(u_{k}=kq_{k}\), we get
\[u_{k+1}-2u_{k}+u_{k-1}=-h^{2}Au_{k}. \tag{6}\]
With surprise, we recognize that this is the Stormer-Verlet method [11] on \(\ddot{u}=-Au\), with step-size \(h\) (this is why we used \(h^{2}\) from the very beginning).
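This identification is easy to check numerically; the short sketch below (illustrative, with an arbitrary diagonal \(A\), step-size and initialization) generates HB2 iterates and verifies that \(u_{k}=kq_{k}\) satisfies the recursion in equation 6 up to round-off.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([0.3, 2.0, 5.0])              # arbitrary PSD matrix for illustration
h = 0.4                                    # h^2 = 0.16 < 4 / lambda_max(A)
q = [rng.standard_normal(3)]
q.append(q[0].copy())                      # q_1 = q_0 (zero initial velocity)

# HB2 iterates on f(q) = 0.5 <q, Aq>.
for k in range(1, 50):
    q_next = (q[k] + (k - 1) / (k + 1) * (q[k] - q[k - 1])
              - h**2 * k / (k + 1) * (A @ q[k]))
    q.append(q_next)

# Check the Stormer-Verlet recursion (6) for u_k = k q_k.
u = [k * q[k] for k in range(len(q))]
res = max(np.linalg.norm(u[k + 1] - 2 * u[k] + u[k - 1] + h**2 * (A @ u[k]))
          for k in range(1, len(q) - 1))
print("max residual of recursion (6):", res)   # ~ machine precision
```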
It would be natural, as for the continuous-time case, to consider the total energy as a quadratic invariant to derive a Lyapunov function. However, it turns out that, interestingly, _the Stormer-Verlet method does not precisely conserve the total energy_: there are small oscillations (see Section 3 in [9])! Taking into account such small oscillations (Figure 2) is of fundamental importance -- since they lead to a crucial modification of the invariants we have to use.
**Proposition 3** (Discrete-time Lyapunov equations).: The system \(y_{k+1}=Fy_{k}\) is Lyapunov stable if and only if for all positive semidefinite matrices \(Q\), there exists a symmetric matrix \(P\) such that
\[F^{T}PF-P=-Q. \tag{7}\]
Moreover, \(V(y)=y^{T}Py\) is a Lyapunov function and \(V(y_{k+1})-V(y_{k})=-y_{k}^{T}Qy_{k}\) for all \(k\).
We apply the theorem above (for \(Q=0\)) to the linear system
\[\begin{pmatrix}u_{k+1}\\ v_{k+1}\end{pmatrix}=\begin{pmatrix}I-h^{2}A&hI\\ -hA&I\end{pmatrix}\begin{pmatrix}u_{k}\\ v_{k}\end{pmatrix}.\]
Figure 2: The Störmer–Verlet method on a one-dimensional quadratic potential (i.e., a simplified pendulum) does not conserve the total energy. Details on this phenomenon can be found in [10, 9].
Under the choice \(v_{k}=(u_{k}-u_{k-1})/h\), this system is equivalent to equation 6, i.e., the discretized pendulum we want to find a quadratic invariant for. Solving the discrete Lyapunov equation gives us
\[P=\begin{pmatrix}A&-hA/2\\ -hA/2&I\end{pmatrix},\]
and the associated modified total energy:
\[V(u_{k},v_{k})=\underbrace{\langle u_{k},Au_{k}\rangle+\|v_{k}\|^{2}}_{\text{ continuous-time invariant (energy)}}\quad-\underbrace{h\langle v_{k},Au_{k}\rangle}_{\text{ vanishing cross-term}}. \tag{8}\]
We make the following comments:
* As \(h\to 0\), the modified energy approaches the total energy in equation 3. The purpose of the additional cross-term is to eliminate the small energy oscillations we see in Figure 2.
* Assuming without loss of generality that \(A\) does not have zero eigenvalues, \(P\) is positive semidefinite (i.e., yields a valid Lyapunov function) if and only if the Schur complement of \(I\) (i.e., the block \(P_{22}\)) in \(P\) is positive semidefinite. That is, we need \[B:=A-\frac{h^{2}}{4}A^{2}=A\left(I-\frac{h^{2}}{4}A\right)\geq 0. \tag{9}\] Since \(A\) and \((I-h^{2}A/4)\) are co-diagonalizable, the product is positive semidefinite if and only if both \(A\geq 0\) and \((I-h^{2}A/4)\geq 0\). This requires \(0\leq A\leq\frac{4}{h^{2}}I\), which in turn implies an upper bound on the step-size \(h^{2}\): \[h^{2}\leq\frac{4}{\lambda_{\max}(A)}=\frac{4}{L}.\] The same condition (note that our step-size is \(h^{2}\), not \(h\)) can be deduced from the analysis of HB2 in [4].
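The algebra behind \(P\) can also be verified directly. The sketch below (with an arbitrary diagonal \(A\) and a step-size satisfying the bound above, both chosen only for illustration) checks that \(F^{T}PF=P\), i.e. the discrete Lyapunov equation with \(Q=0\), and that \(P\) is positive semidefinite.

```python
import numpy as np

A = np.diag([0.5, 2.0, 3.5])              # arbitrary PSD matrix for illustration
h = 0.6                                    # h^2 = 0.36 < 4 / lambda_max(A) ~ 1.14
I = np.eye(3)

# Transition matrix of the Stormer-Verlet system and the candidate P.
F = np.block([[I - h**2 * A, h * I],
              [-h * A,       I     ]])
P = np.block([[A,            -h * A / 2],
              [-h * A / 2,   I         ]])

print(np.allclose(F.T @ P @ F, P))                    # discrete Lyapunov equation, Q = 0
print(np.linalg.eigvalsh(P).min() >= -1e-12)          # P is positive semidefinite
```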
Now it's time to change variables back: \(u_{k}=kq_{k}\). If we set \(p_{k}:=(q_{k}-q_{k-1})/h\), as also done in the introduction, we get
\[hv_{k} =h(u_{k}-u_{k-1})/h\] \[=kq_{k}-(k-1)q_{k-1}+(k-1)q_{k}-(k-1)q_{k}\] \[=(k-1)(q_{k}-q_{k-1})+q_{k}\] \[=h(k-1)p_{k}+q_{k}.\]
By substituting these formulas in equation 8, we get the following final form for an effective Lyapunov function for HB2 -- for the quadratic case:
\[V_{k} =\langle u_{k},Au_{k}\rangle+\|v_{k}\|^{2}-\langle hv_{k},Au_{k}\rangle\] \[\implies V_{k} =k^{2}\langle q_{k},Aq_{k}\rangle+\frac{1}{h^{2}}\|h(k-1)p_{k}+q_ {k}\|^{2}-k\langle h(k-1)p_{k}+q_{k},Aq_{k}\rangle.\]
To better understand this Lyapunov function, we multiply everything by \(h^{2}\) and get
\[V_{k}=(kh)^{2}\langle q_{k},Aq_{k}\rangle+\|h(k-1)p_{k}+q_{k}\|^{2}-h^{2}k\langle h (k-1)p_{k}+q_{k},Aq_{k}\rangle.\]
Recalling that the "time" variable \(t\) is defined to be \(t_{k}=hk\), this cost becomes
\[V_{k}=t_{k}^{2}\langle q_{k},Aq_{k}\rangle+\|t_{k-1}p_{k}+q_{k}\|^{2}-ht_{k} \langle t_{k-1}p_{k}+q_{k},Aq_{k}\rangle.\]
This Lyapunov function can be easily generalized by noting that \(\langle q_{k},Aq_{k}\rangle=2(f(q_{k})-f^{*})\) and \(Aq_{k}=\nabla f(q_{k})\):
\[V_{k}=2t_{k}^{2}(f(q_{k})-f^{*})+\|t_{k-1}p_{k}+q_{k}\|^{2}-ht_{k}\langle\nabla f(q_{k}),t_{k-1}p_{k}+q_{k}\rangle. \tag{10}\]
Finally, note that
* From equation 10 as \(h\to 0\) we get equation 4: the continuous-time Lyapunov function of [22].
* The mixing term is necessary and makes the positive definiteness (see equation 9) of \(V_{k}\) non-trivial.
All in all, in this subsection, we proved the following result.
**Proposition 4**.: Let \(A\in\mathbb{R}^{d\times d}\) be positive semidefinite, \(f^{*}\in\mathbb{R}\) and \(f(q)=f^{*}+\frac{1}{2}(q-q^{*})^{T}A(q-q^{*})\). Let \((q_{k})_{k\geq 0}\) be the iterates of HB2 and \(p_{k}:=(q_{k}-q_{k-1})/h\). If the step-size \(h^{2}<\frac{4}{\lambda_{\max}(A)}\), then equation 10 is non-negative and such that \(V_{k+1}-V_{k}=0\) along the HB2 trajectory, for all \(k\). From this, one can deduce an accelerated rate of \(O(1/k^{2})\) in suboptimality.
Details are given in the proof of Theorem 1, which is more general.
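As an illustrative numerical companion to Proposition 4 (with an arbitrary diagonal \(A\), initial point and step-size chosen only for this sketch), the snippet below runs HB2 and confirms that the quantity in equation 10 stays constant up to round-off while the suboptimality decays.

```python
import numpy as np

A = np.diag([0.2, 1.0, 3.0])              # arbitrary PSD matrix, q* = 0, f* = 0
h = 0.5                                    # h^2 = 0.25 < 4 / lambda_max(A)
q = [np.array([1.0, -1.0, 2.0])]
q.append(q[0].copy())                      # q_1 = q_0

def V(k):
    g = (k - 1) * (q[k] - q[k - 1]) + q[k]         # = h(k-1) p_k + q_k
    return ((h * k)**2 * q[k] @ A @ q[k] + g @ g
            - h**2 * k * (A @ q[k]) @ g)

vals = []
for k in range(1, 200):
    vals.append(V(k))
    q.append(q[k] + (k - 1) / (k + 1) * (q[k] - q[k - 1])
             - h**2 * k / (k + 1) * (A @ q[k]))

print("spread of V_k:", max(vals) - min(vals))        # tiny (round-off level)
print("f at final iterate:", 0.5 * q[-1] @ A @ q[-1])  # small, consistent with O(1/k^2)
```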
## 3 Accelerated Heavy-ball methods for convex quadratics
In this section, we start to lift the discussion to the convex non-quadratic setting, by providing a generalization of HB2. Indeed, we know from the continuous-time analysis in [22] that \(\ddot{q}+\frac{2}{t}\dot{q}+\nabla f(q)=0\) may not have an accelerated rate for functions which are convex but not necessarily quadratic. In this case, a rate of \(O(1/t^{2})\) only holds 8 for
Footnote 8: The case \(0<r\leq 3\) was studied by Attouch et al. [3]: a convergence rate of \(O(1/t^{p})\) with \(p<2r/3\) is shown in this case. The same result also holds in discrete time.
\[\ddot{q}+\frac{r}{t}\dot{q}+\nabla f(q)=0,\]
with \(r\geq 3\). In the same way, we expect that HB2 (which is the discretization for \(r=2\)) may not have an accelerated rate in the convex non-quadratic setting and a _generalization corresponding to high friction is therefore necessary_.
Our objective in this section is to construct such a generalization of HB2, which we name HB\(r\).
### A generalized Heavy-ball with high friction and guarantees on quadratics
After a few weeks of intense calculations, we found that this algorithm gives the desired result (Thm. 1).
\[q_{k+1}=\underbrace{q_{k}+\frac{k-1}{k+r-1}(q_{k}-q_{k-1})}_{\text{iterate + momentum}}-h^{2}\underbrace{\frac{k+\frac{r-2}{2}}{k+r-1}\nabla f(q_{k})}_{\text{scaled gradient of iterate}}.\] (HB\(r\))
First, note that \(r=2\) recovers HB2 -- which we proved to be accelerated in the last subsection using a novel Lyapunov argument. The second, and perhaps the most crucial, thing to note is that HB\(r\) recalls the high friction generalization of AGD proposed by [22] (see Theorem 6 in their paper):
\[q_{k+1}=\underbrace{q_{k}+\frac{k-1}{k+r-1}(q_{k}-q_{k-1})}_{\text{iterate + momentum}}-h^{2}\underbrace{\nabla f\left(q_{k}+\frac{k-1}{k+r-1}(q_{k}-q_{k-1})\right)}_{\text{gradient of [iterate + momentum]}}.\] (AGD\(r\))
Between HB\(r\) and AGD\(r\) there are a few important differences:
* In AGD\(r\) the gradient is evaluated at \(q_{k}+\frac{k-1}{k+r-1}(q_{k}-q_{k-1})\), while in HB\(r\) it is evaluated at \(q_{k}\).
* in HB\(r\) the effective step-size (i.e. what multiplies the gradient) is iteration-dependent, and goes from \(h^{2}/2\) to \(h^{2}\) as \(k\to\infty\). We believe this has _not to be regarded as part of the acceleration mechanism_: it is just a small modification needed to make the analysis easier.
* Arguably HB\(r\) (neglecting the small correction) is conceptually simpler than AGD\(r\): compared to GD, only a momentum term is added at each iteration -- and this can be thought of as the source of acceleration.
We proceed to prove that HB\(r\) is accelerated in the quadratic case.
**Theorem 1**.: Let \(A\in\mathbb{R}^{d\times d}\) be positive semidefinite, \(f^{*}\in\mathbb{R}\) and \(f(q)=f^{*}+\frac{1}{2}(q-q^{*})^{T}A(q-q^{*})\). Let \((q_{k})_{k\geq 0}\) be the iterates of HB\(r\) (\(r\geq 2\)) and \(p_{k}:=(q_{k}-q_{k-1})/h\). If \(h^{2}\leq\frac{4}{\lambda_{\max}(A)}\), then
\[V_{k}=2(k+r-2)^{2}h^{2}(f(q_{k})-f^{*})+\|h(k-1)p_{k}+(r-1)(q_{k }-q^{*})\|^{2}\\ -h^{2}(k+r-2)\langle\nabla f(q_{k}),h(k-1)p_{k}+(r-1)(q_{k}-q^{* })\rangle \tag{11}\]
is non-negative and such that \(V_{k+1}-V_{k}\leq 0\) along the HB\(r\) trajectory, for all \(k\). Moreover, for any \(h^{2}<\frac{4}{\lambda_{\max}(A)}\), HB\(r\) is accelerated. In particular, if \(h^{2}=\frac{2}{\lambda_{\max}(A)}\), we have the rate
\[f(q_{k})-f^{*}\leq\frac{\lambda_{\max}(A)V_{0}}{2(k+r-2)^{2}}.\]
We note a couple of facts about the Lyapunov function \(V_{k}\) in equation 11.
* It reduces to equation 10 in the case \(r=2\). For \(r>2\), it is a graspable generalization of equation 10 -- which we instead derived in a systematic way using Lyapunov equations. The term \((r-1)\) is inspired by the continuous-time limit in equation 5.
* Consider the Lyapunov function above, but without cross term i.e. \[2(k+r-2)^{2}h^{2}(f(q_{k})-f^{*})+\|h(k-1)p_{k}+(r-1)(q_{k}-q^{*})\|^{2}.\] This function works for proving an \(O(1/k^{2})\) rate for AGDr (it's a Lyapunov function, see Thm. 6 from [22]). Therefore, higher complexity (i.e., an additional cross term) is needed to study the acceleration of Heavy-ball, when compared to Nesterov's method.
* As \(h\to 0\), the cross term vanishes and \(V_{k}\) converges to equation 5 -- its continuous-time equivalent. Indeed, both HBr and AGDr converge to AGD-ODE as \(h\to 0\).
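Before the proof, here is an illustrative numerical check of Theorem 1; all parameter values are arbitrary choices made only for this sketch. For \(r=3\), the quantity \(V_{k}\) of equation 11 is monitored along an HB\(r\) trajectory and is indeed non-increasing, while \((k+r-2)^{2}(f(q_{k})-f^{*})\) stays bounded, consistent with the \(O(1/k^{2})\) rate.

```python
import numpy as np

A = np.diag([0.01, 0.7, 2.5])             # arbitrary PSD matrix, q* = 0, f* = 0
r, h = 3, 0.8                              # h^2 = 0.64 < 4 / lambda_max(A) = 1.6
f = lambda q: 0.5 * q @ A @ q
q = [np.array([2.0, -1.0, 1.5])]
q.append(q[0].copy())                      # q_1 = q_0

def V(k):                                  # equation 11 specialized to this quadratic
    g = (k - 1) * (q[k] - q[k - 1]) + (r - 1) * q[k]
    return (2 * (k + r - 2)**2 * h**2 * f(q[k]) + g @ g
            - h**2 * (k + r - 2) * (A @ q[k]) @ g)

vals = []
for k in range(1, 300):
    vals.append(V(k))
    q.append(q[k] + (k - 1) / (k + r - 1) * (q[k] - q[k - 1])
             - h**2 * (k + (r - 2) / 2) / (k + r - 1) * (A @ q[k]))

print("V_k non-increasing:",
      all(vals[i + 1] <= vals[i] + 1e-10 for i in range(len(vals) - 1)))
print("(k+r-2)^2 * f(q_k) at k=299:", (299 + r - 2)**2 * f(q[299]))  # stays bounded
```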
### Proof of the theorem
It is useful to simplify equation 11 and to work with variables \(q_{k}\) and \(q_{k-1}\) -- a natural choice in the discrete setting. We split the Lyapunov function into two parts: \(V_{k}=V_{k}^{1}+V_{k}^{2}\).
\[V_{k}^{1} :=2(k+r-2)^{2}h^{2}(f(q_{k})-f^{*}) \tag{12}\] \[\qquad-h^{2}(k+r-2)\langle\nabla f(q_{k}),(k-1)(q_{k}-q_{k-1})+( r-1)(q_{k}-q^{*})\rangle.\] \[V_{k}^{2} :=\|(k-1)(q_{k}-q_{k-1})+(r-1)(q_{k}-q^{*})\|^{2}. \tag{13}\]
First, we are going to study \(V_{k}^{2}\) in the non-quadratic case, and then \(V_{k}^{1}\) in the quadratic case. Theorem 1 will follow from a combination of the two corresponding lemmata.
The first lemma shares many similarities with the proof of Theorem 1 in [7].
**Lemma 1**.: For any differentiable function \(f:\mathbb{R}^{d}\to\mathbb{R}\) (not necessarily convex or \(L\)-smooth) and any sequence of iterates \((q_{k})_{k\geq 0}\) returned by HBr, we have:
\[V_{k+1}^{2}-V_{k}^{2} = -h^{2}(r-1)(2k+r-2)\langle\nabla f(q_{k}),q_{k}-q^{*}\rangle\] \[-h^{2}(k-1)(2k+r-2)\langle\nabla f(q_{k}),q_{k}-q_{k-1}\rangle\] \[+\frac{h^{4}}{4}(2k+r-2)^{2}\|\nabla f(q_{k})\|^{2},\]
where \(V_{k}^{2}\) is defined in equation 13.
Proof.: Let \(g_{k}:=(k-1)(q_{k}-q_{k-1})+(r-1)(q_{k}-q^{*})\), then,
\[V_{k+1}^{2}-V_{k}^{2}=\|g_{k+1}\|^{2}-\|g_{k}\|^{2}=\langle g_{k+1}+g_{k},g_{k+ 1}-g_{k}\rangle.\]
We proceed by computing \(g_{k+1}-g_{k}\). The algorithm's symmetric structure here is fundamental:
\[g_{k+1}-g_{k} =k(q_{k+1}-q_{k})+(r-1)(q_{k+1}-q^{*})-(k-1)(q_{k}-q_{k-1})-(r-1)( q_{k}-q^{*})\] \[=(k+r-1)q_{k+1}-(k+r-1)q_{k}-(k-1)(q_{k}-q_{k-1})\] \[\overset{(\text{\ref{eq:HBr}})}{=}-h^{2}\left(k+\frac{r-2}{2} \right)\nabla f(q_{k}).\]
Instead, \(g_{k+1}+g_{k}\) is slightly more complex.
\[g_{k+1}+g_{k} =k(q_{k+1}-q_{k})+(r-1)(q_{k+1}-q^{*})+(k-1)(q_{k}-q_{k-1})+(r-1)(q_ {k}-q^{*})\] \[=(k+r-1)q_{k+1}+(-k+r-1)q_{k}+(k-1)(q_{k}-q_{k-1})-2(r-1)q^{*}\] \[\overset{(\text{HBr})}{=} (k+r-1)q_{k}+(k-1)(q_{k}-q_{k-1})-h^{2}\left(k+\frac{r-2}{2} \right)\nabla f(q_{k})\] \[+(-k+r-1)q_{k}+(k-1)(q_{k}-q_{k-1})-2(r-1)q^{*}\] \[=2(r-1)(q_{k}-q^{*})+2(k-1)(q_{k}-q_{k-1})-h^{2}\left(k+\frac{r-2} {2}\right)\nabla f(q_{k}).\]
The proof is concluded by taking the inner product.
We proceed by computing the difference \(V_{k+1}^{1}-V_{k}^{1}\). Our calculations will be very quick since, in the quadratic case, we can leverage a simplified expression for \(V_{k}^{1}\).
**Lemma 2**.: Let \(V_{k}^{1}\) be defined as equation 12. In the context of Theorem 1, we have
\[V_{k}^{1}=h^{2}(k+r-2)(k-1)\langle q_{k-1}-q^{*},A(q_{k}-q^{*})\rangle\]
and
\[V_{k+1}^{1}-V_{k}^{1} =h^{2}(2k+r-2)\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle\] \[+h^{2}(k-1)(2k+r-2)\langle q_{k}-q^{*},A(q_{k}-q_{k-1})\rangle\] \[-\frac{h^{4}}{2}(2k+r-2)k\|A(q_{k}-q^{*})\|^{2}.\]
Proof.: From equation 12, we get
\[V_{k}^{1} = (k+r-2)^{2}h^{2}\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle\] \[-h^{2}(k+r-2)\langle A(q_{k}-q^{*}),(k-1)(q_{k}-q_{k-1})+(r-1)(q_ {k}-q^{*})\rangle\] \[= (k+r-2)^{2}h^{2}\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle\] \[-h^{2}(k+r-2)\langle A(q_{k}-q^{*}),(k+r-2)(q_{k}-q^{*})-(k-1)(q_ {k-1}-q^{*})\rangle\] \[= h^{2}(k+r-2)(k-1)\langle q_{k-1}-q^{*},A(q_{k}-q^{*})\rangle.\]
We proceed computing \(V_{k+1}^{1}-V_{k}^{1}\) using this simplified form:
\[V_{k+1}^{1}-V_{k}^{1} = h^{2}(k+r-1)k\langle q_{k}-q^{*},A(q_{k+1}-q^{*})\rangle\] \[-h^{2}(k+r-2)(k-1)\langle q_{k-1}-q^{*},A(q_{k}-q^{*})\rangle\] \[=h^{2}\langle q_{k}-q^{*},A\Delta_{k}\rangle,\]
where
\[\Delta_{k} :=\ (k+r-1)k(q_{k+1}-q^{*})-(k+r-2)(k-1)(q_{k-1}-q^{*}).\]
Now, recall the definition of \(\operatorname{\textsc{HB}}r\):
\[(q_{k+1}-q^{*})=(q_{k}-q^{*})+\frac{k-1}{k+r-1}(q_{k}-q_{k-1})-h^{2}\frac{k+\frac{r-2}{2}}{k+r-1}A(q_{k}-q^{*}),\]
where we subtracted \(q^{*}\) from both sides. By plugging this into \(\Delta_{k}\), we get
\[\Delta_{k} =\ (k+r-1)k(q_{k}-q^{*})+(k-1)k(q_{k}-q_{k-1})-\frac{h^{2}}{2}(2k+r-2)kA(q_{k}-q^{*})\] \[\ \ \ \ -(k+r-2)(k-1)(q_{k-1}-q^{*})\] \[=\ (2k+r-2)k(q_{k}-q^{*})-(2k+r-2)(k-1)(q_{k-1}-q^{*})\] \[\ \ \ \ -\frac{h^{2}}{2}(2k+r-2)kA(q_{k}-q^{*})\] \[=\ (2k+r-2)(q_{k}-q^{*})+(2k+r-2)(k-1)(q_{k}-q_{k-1})-\frac{h^{2}}{2}(2k+r-2)kA(q_{k}-q^{*}).\]
The result follows after taking the inner product \(h^{2}\langle q_{k}-q^{*},A\Delta_{k}\rangle\).
We are finally ready to prove the result.
Proof of Theorem 1.: First, we compute \(V_{k+1}-V_{k}=(V^{1}_{k+1}-V^{1}_{k})+(V^{2}_{k+1}-V^{2}_{k})\) using Lemma 2 and Lemma 1 (written for quadratic \(f\)). Next, we show that a certain condition on the step-size implies positivity of \(V_{k}\) and a convergence rate.
\[(V^{1}_{k+1}-V^{1}_{k})+(V^{2}_{k+1}-V^{2}_{k}) =h^{2}(2k+r-2)\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle\] \[\qquad+h^{2}(k-1)(2k+r-2)\langle q_{k}-q^{*},A(q_{k}-q_{k-1})\rangle\] \[\qquad-\frac{h^{4}}{2}(2k+r-2)k\|A(q_{k}-q^{*})\|^{2}\] \[\qquad-h^{2}(r-1)(2k+r-2)\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle\] \[\qquad-h^{2}(k-1)(2k+r-2)\langle q_{k}-q^{*},A(q_{k}-q_{k-1})\rangle\] \[\qquad+\frac{h^{4}}{4}(2k+r-2)^{2}\|A(q_{k}-q^{*})\|^{2}.\]
Crucially, note that the terms including \(\langle q_{k}-q^{*},A(q_{k}-q_{k-1})\rangle\) cancel. This is necessary to make our proof (or, probably, any proof) work, since such inner product between the gradient and the momentum changes sign (infinitely) many times along the trajectory, and therefore cannot be easily compared to other quantities. For the same reason, in the corresponding continuous-time proof from [22], the terms including \(\langle\nabla f(q),p\rangle\) also perfectly cancel out.
All in all, by collecting some terms, we get
\[V_{k+1}-V_{k} =-h^{2}(r-2)(2k+r-2)\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle\] \[\qquad+\frac{h^{4}}{4}(2k+r-2)(r-2)\|A(q_{k}-q^{*})\|^{2}\] \[=-h^{2}(r-2)(2k+r-2)\langle q_{k}-q^{*},B(q_{k}-q^{*})\rangle,\]
where \(B:=A-\frac{h^{2}}{4}A^{2}\) is the matrix that we already studied in the context of Lyapunov equations (see equation 9). Since \(r\geq 2\), a sufficient condition for \(V_{k+1}-V_{k}\leq 0\) is \(B\geq 0\), which holds under \(h^{2}\leq\frac{4}{\lambda_{\max}(A)}\). As a sanity check, the reader can appreciate the fact that, if \(r=2\), then \(V_{k+1}=V_{k}\) -- as we already proved in Proposition 4 (follows from the fact that \(V_{k}\) solves the Lyapunov equations). Last, we have to translate the fact that \(V_{k}\) is non-increasing to a convergence rate. This is not trivial in our case, since \(V_{k}\) also contains a cross term which is not necessarily positive. Actually, we do not even know that \(V_{k}\geq 0\) yet! Hence, we have to come up with some tricks. We start from rewriting the (simplified) Lyapunov function:
\[V_{k}=h^{2}(k+r-2)(k-1)\langle q_{k-1}-q^{*},A(q_{k}-q^{*})\rangle+\|(k-1)(q_{ k}-q_{k-1})+(r-1)(q_{k}-q^{*})\|^{2}.\]
Now, let us add and subtract a term \(ch^{2}(k+r-2)^{2}\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle\), with \(c>0\). We have:
\[V_{k}=ch^{2}(k+r-2)^{2}\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle+\tilde{V}_{k},\]
with
\[\tilde{V}_{k}:= -ch^{2}(k+r-2)^{2}\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle\] \[+h^{2}(k+r-2)(k-1)\langle q_{k-1}-q^{*},A(q_{k}-q^{*})\rangle\] \[+\|(k-1)(q_{k}-q_{k-1})+(r-1)(q_{k}-q^{*})\|^{2}.\]
Now, if we show that \(\tilde{V}_{k}\) is always positive, then \(V_{k+1}\leq V_{k}\) for all \(k\) implies:
\[ch^{2}(k+r-2)^{2}\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle\leq ch^{2}(k+r-2)^{ 2}\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle+\tilde{V}_{k}=V_{k}\leq V_{0},\]
which gives the desired rate:
\[f(q_{k})-f^{*}=\frac{1}{2}\langle q_{k}-q^{*},A(q_{k}-q^{*})\rangle\leq\frac{ V_{0}}{2ch^{2}(k+r-2)^{2}}.\]
Therefore, we only need to show \(\tilde{V}_{k}\geq 0\). To do this, we introduce two new variables:
\[u_{k}:=(k+r-2)(q_{k}-q^{*}),\qquad w_{k}:=(k-1)(q_{k-1}-q^{*}),\]
and get a simplified form for \(\tilde{V}_{k}\)
\[\tilde{V}_{k} =-ch^{2}\langle u_{k},Au_{k}\rangle+h^{2}\langle u_{k},Aw_{k}\rangle+\|u_{k}-w_{k}\|^{2}\] \[=\langle u_{k},(I-ch^{2}A)u_{k}\rangle+\|w_{k}\|^{2}-2\langle u_{k},\left(I-\frac{h^{2}}{2}A\right)w_{k}\rangle.\]
Hence, we just need to show that
\[\tilde{P}=\begin{pmatrix}I-c\,h^{2}A&-\left(I-\frac{h^{2}}{2}A\right)\\ -\left(I-\frac{h^{2}}{2}A\right)&I\end{pmatrix}\]
is positive semidefinite, for some \(c\) and \(h^{2}\). Using the Schur characterization for positive semidefinite matrices, \(\tilde{P}\geq 0\) if and only if
\[0\leq\tilde{B}(c) :=I-c\,h^{2}A-\left(I-\frac{h^{2}}{2}A\right)^{2}\] \[=I-c\,h^{2}A-I-\frac{h^{4}}{4}A^{2}+h^{2}A\] \[=h^{2}A\left(1-c-\frac{h^{2}}{4}A\right).\]
It is clear that \(\tilde{B}(c)\) is positive semidefinite if and only if \(1-c-\frac{h^{2}}{4}\lambda_{\max}(A)\geq 0\). That is,
\[h^{2}\leq\frac{4(1-c)}{\lambda_{\max}(A)}.\]
Hence, for any \(c\in(0,1)\) we get an acceleration. In particular, in the theorem, we chose \(c=\frac{1}{2}\).
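As a quick numerical sanity check of this last step (not part of the original argument), the positive semidefiniteness of \(\tilde{P}\) can be verified directly. The sketch below, with a random positive semidefinite \(A\) whose dimension and spectrum are arbitrary placeholders, builds \(\tilde{P}\) for \(c=1/2\) and a step-size satisfying \(h^{2}\leq 4(1-c)/\lambda_{\max}(A)\).

```python
import numpy as np

# Minimal check of the Schur condition: P_tilde >= 0 for c = 1/2 and
# h^2 <= 4(1 - c)/lambda_max(A).  Dimension, seed and spectrum of A are placeholders.
rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T                                   # positive semidefinite "Hessian"
lam_max = np.linalg.eigvalsh(A).max()

c = 0.5
h2 = 1.9 / lam_max                            # satisfies h^2 <= 4(1 - c)/lambda_max(A) = 2/lambda_max

I = np.eye(d)
P_tilde = np.block([[I - c * h2 * A,      -(I - 0.5 * h2 * A)],
                    [-(I - 0.5 * h2 * A),  I                 ]])

print("min eigenvalue of P_tilde:", np.linalg.eigvalsh(P_tilde).min())  # >= 0 up to round-off
```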
### Numerical verification of our Lyapunov function
We verify numerically that the Lyapunov function for \(\mathtt{HBr}\) proposed in equation 11 works on quadratics. To show more clearly the effect of the inner-product correction term, which originates from the quadratic invariant of the Stormer-Verlet method, we use here a slightly different notation: \(V_{k}=V_{k}^{11}+V_{k}^{12}+V_{k}^{2}\), with \(V_{k}^{1}=V_{k}^{11}+V_{k}^{12}\).
\[V_{k}^{11} :=2(k+r-2)^{2}h^{2}(f(q_{k})-f^{*});\] \[V_{k}^{12} :=-h^{2}(k+r-2)\langle\nabla f(q_{k}),(k-1)(q_{k}-q_{k-1})+(r-1)( q_{k}-q^{*})\rangle;\] \[V_{k}^{2} :=\|(k-1)(q_{k}-q_{k-1})+(r-1)(q_{k}-q^{*})\|^{2}.\]
We recall that the term \(V_{k}^{12}\) (a.k.a. the _cross-term_) vanishes as \(h\to 0\), and is indeed not present in the continuous-time limit. We show that this term, which we derived using Lyapunov equations in Sec. 2, plays a fundamental role in ensuring \(V_{k+1}-V_{k}\leq 0\). In Figure 3 we verify Thm. 1 numerically. In Figure 4 we show the essential role of \(V^{12}\). Here we used \(h^{2}=1/L\), but \(\mathtt{HBr}\) can take larger steps (up to \(4/L\)), while the other algorithms become unstable (Figure 5).
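For readers who want to reproduce this check, a minimal self-contained sketch is given below; it runs \(\mathtt{HBr}\) on a random convex quadratic and tracks \(V_{k}\). The dimension, conditioning and number of iterations are placeholder choices, not the exact settings behind Figures 3-5.

```python
import numpy as np

# Sketch: run HBr on a convex quadratic f(q) = 0.5 <q - q*, A(q - q*)> and track the
# candidate Lyapunov function V_k = V_k^11 + V_k^12 + V_k^2 along the iterates.
rng = np.random.default_rng(1)
dim, r = 20, 3
M = rng.standard_normal((dim, dim))
A = M @ M.T                                   # convex quadratic Hessian (possibly ill-conditioned)
L = np.linalg.eigvalsh(A).max()
h2 = 1.0 / L                                  # step-size h^2 = 1/L, within the range of Theorem 1
q_star = rng.standard_normal(dim)
f = lambda q: 0.5 * (q - q_star) @ A @ (q - q_star)
grad = lambda q: A @ (q - q_star)

def V(k, q, q_prev):
    m = (k - 1) * (q - q_prev) + (r - 1) * (q - q_star)
    V11 = 2 * (k + r - 2) ** 2 * h2 * f(q)
    V12 = -h2 * (k + r - 2) * grad(q) @ m     # the cross term
    V2 = m @ m
    return V11 + V12 + V2

q_prev = rng.standard_normal(dim)
q = q_prev.copy()                             # initialize with q_1 = q_0
values = [V(1, q, q_prev)]
for k in range(1, 300):
    # HBr: q_{k+1} = q_k + (k-1)/(k+r-1) (q_k - q_{k-1}) - h^2 (k + (r-2)/2)/(k+r-1) grad f(q_k)
    q, q_prev = (q + (k - 1) / (k + r - 1) * (q - q_prev)
                 - h2 * (k + (r - 2) / 2) / (k + r - 1) * grad(q)), q
    values.append(V(k + 1, q, q_prev))

print("largest increase of V_k (should be <= 0 up to round-off):", np.diff(values).max())
print("f(q_K) - f* =", f(q))                  # decays like O(1/K^2)
```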
Figure 4: Same setting of the second example in Figure 3, but different candidate Lyapunov function (no cross term). This confirms the cross-term is necessary.
Figure 5: HBr also works for large step-sizes (see conditions in Theorem 1). Here, \(h^{2}=3.9/L\) is used.
Figure 3: Dynamics of the Lyapunov function for HB\(r\) on linear regression (ill-conditioned Hessian, with condition number \(\kappa\)). Shown is the behavior for \(r=2,3\) with step-size \(1/L\). For \(r=2\), \(V_{k}\) is constant, as predicted by Prop. 4. For \(r=3\), \(V_{k}\) is decreasing as predicted by Thm. 1.
## 4 Conclusion
In conclusion, the question of whether the Heavy-ball method is globally accelerated for non-strongly-convex quadratic problems has yet to be fully answered, and has attracted the attention of recent research [23]. Our study takes a novel approach by examining momentum through the lens of quadratic invariants of simple harmonic oscillators, and by utilizing the modified Hamiltonian of Stormer-Verlet integrators we were able to construct a Lyapunov function that demonstrates an \(O(1/k^{2})\) rate for Heavy-ball in the case of convex quadratic problems, where eigenvalues can vanish. This is a promising first step towards potentially proving the acceleration of Polyak's momentum method through Lyapunov function arguments.
## 5 Acknowledgements
I would like to extend my deepest gratitude to Prof. Boris Polyak, Prof. Christian Lubich, and Konstantin Mishchenko for the stimulating discussions. My appreciation goes to Prof. Aurelien Lucchi and Prof. Thomas Hofmann for their unwavering support and motivation, which helped me to develop the project idea in Spring 2020. Lastly, I cannot express enough my gratitude to Johannes Brahms for his Violinkonzert D-Dur op. 77, which provided the perfect soundtrack to my late-night calculations, igniting my passion and drive to push through the toughest moments.
|
2307.01671
|
Eigen Value Statistics of Long-Term Monthly Average Temperature of
Meghalaya, India
|
We use Random Matrix Theory (RMT) to describe the eigenvalue spacing of
Meghalaya's historical monthly average temperature ($T_{avg}$) in grids. For
that, the Nearest Neighbor Spacings ($S_i$) of the eigenvalues of the
correlation matrices were found out for 1428 consecutive eigenvalue pair
differences. It is found that the distribution of $S_i$ follows Brody
distribution at a correlation value of $\beta=0.045$. This value of
$\beta(0.045)$ indicates weak repulsion among the eigenvalues as it is closer
to Poisson fluctuations, meaning there is a weak correlation among the grids.
|
Raju Kalita, Atul Saxena
|
2023-07-04T12:04:58Z
|
http://arxiv.org/abs/2307.01671v1
|
# Eigen Value Statistics of Long-Term Monthly Average Temperature of Meghalaya, India
###### Abstract
We use Random Matrix Theory (RMT) to describe the eigenvalue spacing of Meghalaya's historical monthly average temperature (\(T_{avg}\)) in grids. For that, the Nearest Neighbor Spacings (\(S_{i}\)) of the eigenvalues of the correlation matrices were found out for 1428 consecutive eigenvalue pair differences. It is found that the distribution of \(S_{i}\) follows Brody distribution at a correlation value of \(\beta=0.045\). This value of \(\beta(0.045)\) indicates weak repulsion among the eigenvalues as it is closer to Poisson fluctuations, meaning there is a weak correlation among the grids.
## 1 Introduction
The theory of the Random Matrix is quite successful in understanding the amount of correlation in different time series. It was Eugene P. Wigner who first applied the technique of random matrix theory to model the nuclei of heavy atoms [1]. Since then, it has been used remarkably in many multivariate data sets like financial [2], human electroencephalographic [3], city transport [4], internet traffic [5], atmospheric data [6], sea surface temperature [7], etc. The statistical properties of random matrix ensembles such as Gaussian Orthogonal (GOE), Gaussian Unitary (GUE), and Gaussian Symplectic (GSE) have been studied extensively by pioneers like Wigner, Dyson, Mehta, etc. [8]. The main advantage of this theory is that it can correctly describe the spectral statistics of various complex, chaotic systems [9].
Moreover, the spectral properties of the correlation matrices arising from the random matrix can separate signals from noise. The short-range correlations are mainly observed by studying the Nearest Neighbour Spacing Distributions (NNSD) of eigenvalues arising from the correlation matrices [10]. Since the NNSD of eigenvalues of the correlation matrices gives the nature of correlation, using RMT, their different modes of randomness can be predicted.
This paper shows that the empirical correlation matrices arising from the half-degree latitude-longitude \(T_{avg}\) grids over Meghalaya can be modeled as random matrices chosen from an appropriate ensemble.
## 2 Study area and data used
The area under study covers almost the entire state of Meghalaya, located in the North-Eastern part of India (Fig. 1(a)). The hilly terrain of Meghalaya mainly comprises three regions: the Khasi Hills (central region), the Jaintia Hills (eastern part), and the Garo Hills (western part). It lies between \(25.00^{0}N\) and \(26.10^{0}N\) latitude and between \(89.45^{0}E\) and \(92.45^{0}E\) longitude, covering an area of 22,549 square km [11] (Fig. 1(b)).
The data set for monthly average temperature has been extracted from \(0.5^{0}\times 0.5^{0}\) latitude-longitude grid boxes of CRU TS 4.04 over Meghalaya [12] using the Google Earth interface. Grids are sorted from left top to right bottom in a logical sequence (Fig. 2). The data for 10 out of 11 grids from 1901 to 2019 were arranged in matrix form in such a way that the first matrix for January 1901 has five values (grid no 1 to 5) in one row (center latitude: \(25.75^{0}N\); center longitude: \(90.25^{0}E\), \(90.75^{0}E\), \(91.25^{0}E\), \(91.75^{0}E\), \(92.25^{0}E\)) and the remaining five values (grid no 6 to 10) in the second row (center latitude: \(25.25^{0}N\); center longitude: \(90.25^{0}E\), \(90.75^{0}E\), \(91.25^{0}E\), \(91.75^{0}E\), \(92.25^{0}E\)).
Figure 1: Location of study area (a) India and (b) Meghalaya in grids (\(0.5^{0}\times 0.5^{0}\)).
## 3 Construction and evaluation of random matrices
The RMT framework defines the grid system as an ensemble matrix \(W_{2\times 5}\) with random inputs. This random matrix \(W\) contains each month's data of the 10 time series \(X_{j}(k)\), where \(j=1,2,...,10\) (grid position) and \(k=1,2,...,1428\) (month index in ascending order). Since there are 1428 months from January 1901 to December 2019, each random matrix \(W\) corresponds to a particular month of a particular year. Then each correlation matrix \(C_{2\times 2}\) is constructed from the multivariate random matrix \(W\) of two rows and five columns as,
\[C_{ij}=\frac{1}{5}\sum_{k=1}^{5}x_{i}(k)x_{j}(k) \tag{1}\]
where \(x_{i}(k)\) corresponds to the transpose of matrix \(W\), and \(x_{j}(k)\) corresponds to matrix \(W\). With \(\lambda_{i}\) the eigenvalues and \(\vec{\nu}_{i}\) the eigenvectors, the correlation matrix satisfies,
\[C\vec{\nu}_{i}=\lambda_{i}\vec{\nu}_{i} \tag{2}\]
The largest eigenvalue of each correlation matrix is then extracted, and the resulting 1428 eigenvalues are sorted in increasing order as \(\lambda_{1}\leq\lambda_{2}\leq\lambda_{3}\leq\ldots\leq\lambda_{1428}\). Now the distribution of these eigenvalues is closely related
Figure 2: Google Earth image of CRU TS v4.04 in half-degree grids over Meghalaya numbered as 1 to 11 from left top to right bottom.
to the amount of correlation in the random inputs of the multivariate data set [13]. The Nearest Neighbor Spacings \(S_{i}\) were then computed as
\[S_{i}=\frac{\lambda_{i+1}-\lambda_{i}}{<\lambda_{i+1}-\lambda_{i}>} \tag{3}\]
where \(i=1,2,...,1427\) and \(<\lambda_{i+1}-\lambda_{i}>\) denotes the average value over 1428 consecutive eigenvalue pair differences. Studies have shown that the probability distribution is well described by the Brody distribution [14].
\[P(S_{i})=\left[\Gamma\left(\frac{2+\beta}{1+\beta}\right)\right]^{(1+\beta)}(1+ \beta)S_{i}^{\beta}e^{-\left[\Gamma\left(\frac{2+\beta}{1+\beta}\right)\right] ^{(1+\beta)}S_{i}^{(1+\beta)}} \tag{4}\]
where \(\Gamma(x)\) is the Gamma function. The parameter \(\beta\) in the above distribution classifies the correlation in the system with respect to its probability distribution. When there is no correlation, the level spacings are closely packed, \(\beta\to 0\), and the distribution reduces to the Poisson distribution given by,
\[P(S_{i})=e^{-S_{i}} \tag{5}\]
However, when correlation is present, the levels repel each other, \(\beta\to 1\), and this leads to GOE fluctuations given by,
\[P(S_{i})=\frac{\pi}{2}S_{i}e^{-\frac{\pi}{4}S_{i}^{2}} \tag{6}\]
This Poisson to GOE fluctuation gives the measure of correlation in the system of the multivariate data set [15].
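A compact sketch of this pipeline could look as follows; synthetic values stand in for the CRU TS 4.04 grid series (the real temperature data are not reproduced here), and the Brody parameter is estimated by a least-squares fit of equation (4) to the spacing histogram.

```python
import numpy as np
from scipy.special import gamma as Gamma
from scipy.optimize import curve_fit

# Sketch of the analysis pipeline: 2x5 monthly grid matrices -> correlation matrices ->
# largest eigenvalues -> nearest-neighbour spacings -> Brody-parameter fit.
# Synthetic data are used here as a stand-in for the CRU TS 4.04 grid series.
rng = np.random.default_rng(0)
n_months, n_grids = 1428, 10
data = 20.0 + 5.0 * rng.standard_normal((n_months, n_grids))   # placeholder T_avg values

largest_eigs = []
for month in range(n_months):
    W = data[month].reshape(2, 5)              # row 1: grids 1-5, row 2: grids 6-10
    C = (W @ W.T) / 5.0                        # 2x2 correlation matrix, Eq. (1)
    largest_eigs.append(np.linalg.eigvalsh(C).max())

eigs = np.sort(largest_eigs)
spacings = np.diff(eigs)
S = spacings / spacings.mean()                 # normalised nearest-neighbour spacings, Eq. (3)

def brody(s, beta):                            # Brody distribution, Eq. (4)
    b = Gamma((2 + beta) / (1 + beta)) ** (1 + beta)
    return b * (1 + beta) * s ** beta * np.exp(-b * s ** (1 + beta))

hist, edges = np.histogram(S, bins=40, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
(beta_hat,), _ = curve_fit(brody, centres, hist, p0=[0.1], bounds=(0, 1))
print("fitted Brody parameter beta =", beta_hat)
```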
## 4 Result and discussion
After extracting the eigenvalues from the random correlation matrices \(C_{ij}\), their distribution is plotted with a non-parametric fit (Fig. 3). It is observed that most of the eigenvalues lie on the higher side. This indicates uniformity between successive eigenvalues, as a result of which the eigenvalues are likely to reside close to each other.
To find the Nearest Neighbour Spacing Distribution (NNSD), we plot the non-parametric histogram fit of \(S_{i}\) (Fig. 4 [blue line]). The best fit is then obtained using equation (4) at the Brody parameter \(\beta=0.045\). This value of \(\beta\) indicates fluctuations close to the Poisson distribution. This means that the level-spacing repulsion is very small, indicating a very weak correlation among the half-degree temperature grids of Meghalaya.
The analysis of 119 years of CRU TS v4.04 \(T_{avg}\) data in the RMT framework reveals that the half-degree grids over Meghalaya are weakly correlated. The NNSD shows fluctuations
Figure 4: Nearest Neighbour Spacing Distribution (\(S_{i}\)) of eigenvalues (\(\lambda_{i}\)) of the correlation matrices.
Figure 3: Probability density of eigenvalues (\(\lambda_{i}\)) of the correlation matrices.
closer to Poisson than the GOE ensemble (Fig. 5). Thus, in the present work, we could replace the analytical spacing distribution with an ensemble of random matrices that follows Brody distribution at \(\beta=0.045\), which indicates a weak random fluctuation in the average temperature that existed over the Meghalaya throughout the period 1901 to 2019.
Figure 5: Comparison of fitted Nearest Neighbour Spacing Distribution (\(S_{i}\)) of eigenvalues (\(\lambda_{i}\)) with Poisson (light blue dots) and GOE (light green dots) fluctuations.
|
2305.10105
|
Conduction-radiation coupling between two distant solids interacting in
near-field regime
|
In the classical approach to deal with near-field radiative heat exchanges
between two closely spaced bodies no coupling between the different heat
carriers inside the materials and thermal photons is usually considered. Here
we make an overview of the current state of studies on this coupling between
solids of different sizes by paying a specific attention to the impact of the
conduction regime inside the solids on conduction-radiation coupling. We also
describe how the shape of solids affects this coupling. We show that this
coupling can be at the origin of a drastic change of temperature profiles
inside each body and of heat flux exchanged between them. These results could
have important implications in the fields of nanoscale thermal management,
near-field solid-state cooling and nanoscale energy conversion.
|
Marta Reina, Chams Gharib Ali Barura, Philippe Ben-Abdallah, Riccardo Messina
|
2023-05-17T10:12:17Z
|
http://arxiv.org/abs/2305.10105v1
|
# Conduction-radiation coupling between two distant solids interacting in near-field regime
###### Abstract
In the classical approach to deal with near-field radiative heat exchanges between two closely spaced bodies no coupling between the different heat carriers inside the materials and thermal photons is usually considered. Here we make an overview of the current state of studies on this coupling between solids of different sizes by paying a specific attention to the impact of the conduction regime inside the solids on conduction-radiation coupling. We also describe how the shape of solids affects this coupling. We show that this coupling can be at the origin of a drastic change of temperature profiles inside each body and of heat flux exchanged between them. These results could have important implications in the fields of nanoscale thermal management, near-field solid-state cooling and nanoscale energy conversion.
Introduction
Radiative heat transfer is the phenomenon through which two bodies at different temperatures can exchange energy even when separated by vacuum. A milestone in the study of this effect, dating back to the 19th century, is Stefan-Boltzmann's law, setting an upper bound for the flux two bodies at temperatures \(T_{1}\) and \(T_{2}\) can exchange: this upper limit, equal to \(\sigma(T_{1}^{4}-T_{2}^{4})\), \(\sigma\simeq 5.67\cdot 10^{-8}\,\mathrm{Wm}^{-2}\mathrm{K}^{-4}\) being the Stefan-Boltzmann constant, can be realized only in the ideal scenario of two blackbodies (i.e. bodies absorbing all the incoming radiation) exchanging heat. A second breakthrough in the study of radiative heat transfer was set much later, in the 1970s, by the development of fluctuational electrodynamics through the pioneering work of Rytov, Polder and van Hove [1; 2]. This theoretical framework describes each body as a collection of fluctuating dipoles whose statistical properties depend, by means of the fluctuation-dissipation theorem, on the temperature and optical properties of the body they belong to. This theory made it possible to shed light on the very first experimental results [3] demonstrating the possibility to beat the blackbody limit in near-field regime, i.e. when the separation distance \(d\) between the solids is small compared to the thermal wavelength \(\lambda_{\mathrm{th}}=\hbar c/k_{\mathrm{B}}T\), of the order of \(10\,\mu\mathrm{m}\) at ambient temperature. More specifically, this is prone to happen when the two bodies support resonant modes of the electromagnetic field such as phonon-polaritons (for polar materials) and plasmons (for metals) [4] or even a continuum of evanescent modes such as hyperbolic modes [5].
The unveiling of a near-field flux amplification paved the way to numerous experiments (see [6; 7; 8] for some review papers) in a variety of geometries (including e.g. plane-plane, sphere-plane and tip-plane) and for several different materials. Parallel to the experimental investigations, several ideas of applications have been put forward, ranging from energy-conversion devices [9; 10; 11; 12] to heat-assisted data recording [13; 14], infrared spectroscopy [15; 16], and thermotronics [17; 18], namely the conception of thermal equivalents of electrical circuit elements.
Although the vast majority of experiments have confirmed the theoretical predictions, some of them have observed deviations from them, both in the extreme near-field scenario [19] (nanometer and sub-nanometer range of distances) and at tens of nanometers [20; 21]. More specifically, some experiments [19] observed an amplification of the flux, whereas others [20; 21] highlighted a saturation effect. The explanation of such inconsistencies between theory and experiment, to date unsolved, has stimulated the theoretical investigations in several directions. First, it has been suggested that non-local effects must be taken into account in order to describe the energy exchange between metals at very short separation distances [22; 23]. Moreover, in the extreme near field the participation of other heat carriers (phonons and electrons) could affect significantly the exchanged flux [24; 25; 26; 27; 28; 29; 30; 31; 32], but these can only play a role below a few nanometers. Finally, some works have also explored in this sense the transition between conduction and radiation [33; 34].
A further effect which could be at the origin of a deviation with respect to the predictions of fluctuational electrodynamics is the coupling between conduction acting inside each body and near-field radiative heat transfer between them. In order to get an idea of the possible impact of this effect, we can visualize the typical theoretical system, as shown in Fig. 1. Two bodies are kept at temperatures \(T_{L}\) and \(T_{R}\) by two thermostats, locally connected to them. In almost all theoretical works on near-field radiative heat transfer, it is assumed that conduction inside each body is so efficient (compared to the energy exchange mediated by radiation) that the temperature can be assumed to be uniform in each body and equal to the one imposed by the thermostat. This allows to define properly the radiative heat transfer between two bodies at two given temperatures \(T_{L}\) and \(T_{R}\). Nevertheless, the strong dependence of near-field radiative heat transfer on the materials involved and, more importantly, on the separation distance suggests that the two effects could compete in some ranges of parameters. This would imply the existence of a temperature profile within each body (as depicted in Fig. 1) and then, in turn, a modification of the flux exchanged through radiation.
During the last years, we have performed a comprehensive study of the impact of this coupling [35; 36; 37; 38; 39; 40; 41], which is the topic of this review paper. More specifically, we have first studied this effect in the simple geometry of two parallel slabs and in the diffusive regime, as discussed in Sec. II. Following this first analysis, we have investigated the role played by the size of the two bodies. As a matter of fact, as shown pictorially in Fig. 1, this can have an impact on the conduction transport regime inside each body. These results are discussed in Sec. III. Finally, in order to account for the variety of geometries employed in experiments, we have also studied the same coupling effect in different geometries, as discussed in Sec. IV. Lastly, some conclusions are given in Sec. V.
## II Slab-Slab configuration in the diffusive conduction regime
The simplest geometry to study the effect of conduction-radiation coupling is indeed the one involving two parallel finite-thickness slabs separated by a vacuum gap of thickness \(d\), as represented in the inset of Fig. 2. To describe the action of two thermostats connected to the two bodies, we assume that the temperature in the first (second) body is fixed at \(T_{L}\) (\(T_{R}\)) except over a region of thickness \(t_{a}\) (\(t_{b}\)).
In order to further simplify the problem, we also assume that the thickness of the two slabs is large enough to safely
treat the conduction problem in the Fourier diffusive regime. In this case, the coupled equation to be solved reads
\[\frac{\partial}{\partial z}\left[\kappa(z)\frac{\partial}{\partial z}T(z)\right]+ \int\mathrm{d}z^{\prime}\,\varphi(z^{\prime},z)=0. \tag{1}\]
In this equation \(\kappa(z)\) is the bulk Fourier conductivity at point \(z\), whereas \(\varphi(z^{\prime},z)\) represents the radiative power per unit volume emitted at a point \(z^{\prime}\) and absorbed at a point \(z\). At this stage, the expression of the radiative term \(\varphi(z^{\prime},z)\) is needed. This energy exchange can be, for example, calculated by means of a framework introduced to calculate both Casimir forces and radiative heat transfer and based on the knowledge of the scattering operators of the bodies involved [42; 43; 44; 45]. In order to account for the temperature profiles, we assume that each body is divided into slabs of infinitesimal thickness, of which the scattering coefficients are known analytically, and apply the scattering approach to deduce the radiative heat transfer. Limiting ourselves to the contribution stemming from evanescent waves in transverse magnetic polarization (dominating in the near field between polar materials [4]) we write the flux
Figure 1: Configuration involving two arbitrarily-shaped bodies of finite size and kept at different temperatures \(T_{L}\) and \(T_{R}\) by two thermostats. The two bodies exchange heat radiatively, while conduction takes place inside each of them. The size \(\delta\) of each body compared to the phonon mean free path \(\Lambda\) dictates the conductive transport regime. For \(\delta\ll\Lambda\) (left) the heat transport is ballistic (no collision events during phonon trajectories), while for \(\delta\gg\Lambda\) (right) the heat transport is diffusive (many collision events). The coupling of conductive and radiative heat transfer includes, in general, two temperature profiles \(T_{1,2}(\mathbf{r})\) inside the two bodies. Reproduced from [35].
Figure 2: Geometry of two parallel slabs separated by a \(d\)-thick vaccuum gap. In the left (right) slab the temperature can vary with respect to \(T_{L}\) (\(T_{R}\)) over a thickness \(t_{a}\) (\(t_{b}\)). Temperature profile along the left slab for two silica slabs with \(t_{a}=t_{b}=100\,\mu\)m and \((T_{L},T_{R})=(600,300)\,\)K. The lines correspond to \(d=10\,\)nm (black), \(20\,\)nm (red) and \(50\,\)nm (blue). The right inset shows the position-dependent radiative flux \(\phi(z)\) for \(d=100\,\)nm. Reproduced from [36].
as the frequency and wavevector integral \(\varphi(z_{a},z_{b})=\int_{0}^{\infty}\mathrm{d}\omega\int_{\omega/c}^{\infty} \mathrm{d}\beta\ \varphi_{a}(\omega,\beta;z_{a},z_{b})\), where the spectral flux can be expressed as (see [36] for more details):
\[\varphi(\omega,\beta;z_{a},z_{b})=\frac{4\beta}{\pi^{2}}(r^{\prime\prime}k_{zm }^{\prime\prime})^{2}\frac{e^{-2k_{z}^{\prime\prime}d}\,e^{-2k_{zm}^{\prime \prime}(z_{b}-d/2)}}{|1-r^{2}e^{-2k_{z}^{\prime\prime}d}|^{2}}\Big{(}N[\omega,T(z_{a})]-N[\omega,T(z_{b})]\Big{)}, \tag{2}\]
where \(\beta\) is the parallel (\(x\)-\(y\)) component of the wavevector, while \(k_{z}=\sqrt{\omega^{2}/c^{2}-\beta^{2}}\) and \(k_{zm}=\sqrt{\varepsilon\omega^{2}/c^{2}-\beta^{2}}\) are the perpendicular components in vacuum and inside the slabs, respectively. We also introduced the Fresnel reflection coefficient of a slab, given by \(r=(\varepsilon k_{z}-k_{zm})/(\varepsilon k_{z}+k_{zm})\). Finally, in Eq. (2) \(a^{\prime\prime}\) represents the imaginary part of \(a\) and
\[N(\omega,T)=\bigg{[}\exp\!\left(\frac{\hbar\omega}{k_{B}T}\right)-1\bigg{]}^{ -1}. \tag{3}\]
The coupled heat equation (1), combined with the flux expression (2), can be solved numerically. Nevertheless, two further approximations can be performed and allow us to obtain an analytical expression for both the temperature gap \(T_{a}-T_{b}\) between the two slabs (across the vacuum gap, see Fig. 2) and the exchanged flux. We can first assume that the radiative energy exchange takes places over a tiny thickness close to the vacuum interface of each body, allowing us to treat it as a surface term, i.e. as a boundary condition. Moreover, inspired by the results of fluctuational electrodynamics, we can assume that the total flux exchanged radiatively can be expressed as \(\phi\simeq h_{0}(T_{a}-T_{b})/d^{2}\). In this simplified expression, the flux depends only on the two temperatures at the interfaces and shows the known \(d^{-2}\) divergence. These approximations, whose validity has been verified numerically, lead to the following analytical solutions
\[\frac{T_{a}-T_{b}}{T_{L}-T_{R}}=\left(1+\frac{2th_{0}}{\kappa d^{2}}\right)^{- 1},\quad\frac{\varphi}{T_{L}-T_{R}}=\frac{h_{0}}{d^{2}}\left(\frac{T_{a}-T_{b }}{T_{L}-T_{R}}\right). \tag{4}\]
The impact of conduction-radiation coupling is shown quantitatively in Figs. 2 and 3. Figure 2 concerns two silica slabs having \(t_{a}=t_{b}=100\,\mu\)m and \((T_{L},T_{R})=(600,300)\) K. The temperature profile is shown inside the left slab for three different distances: it shows that the temperature can decrease by more than \(100\) K in the left slab for the smallest distance considered. Moreover, the inset of Fig. 2, showing the numerically-calculated distribution of flux absorbed inside the slab, shows that it is highly peaked around the vacuum interface, as assumed.
Figure 3 shows the temperature difference across the gap \(T_{a}-T_{b}\) (normalized with respect to \(T_{L}-T_{R}\), inset) and the exchanged flux (main part) for different slab thicknesses (see caption of Fig. 3). We clearly observe that not only is a temperature profile induced by the coupling, with an effect growing with the slab thickness, but also that the flux is strongly modified with respect to the scenario of absence of coupling (orange dashed line in Fig. 3). The
Figure 3: Flux \(\varphi\) and temperature difference \(T_{a}-T_{b}\) across the vacuum gap (inset) as a function of \(d\) between two silica slabs with \(T_{L}=600\) K and \(T_{R}=300\) K. The solid lines correspond to different thicknesses: \(100\) nm (black), \(1\,\mu\)m (red), \(10\,\mu\)m (brown), \(100\,\mu\)m (blue) and \(500\,\mu\)m (green). The orange dashed line corresponds to the absence of temperature gradients. Reproduced from [36].
flux tends to saturate for \(d\) going to zero, and both the saturation value and the characteristic distance at which the distance-dependent flux deviates from the no-coupling scenario depend strongly on the thickness.
In order to make this more quantitative it is interesting to define a characteristic coupling distance \(\tilde{d}\), such that at this distance the temperature gradient across the vacuum gap equals half of \(T_{L}-T_{R}\). At this distance we have \(T_{a}-T_{b}=\frac{1}{2}(T_{L}-T_{R})\) and \(\varphi=\frac{1}{2}h_{0}(T_{L}-T_{R})/\tilde{d}^{2}\). This distance \(\tilde{d}=\sqrt{2th_{0}/\kappa}\) depends both on the thickness and on the material-dependent \(h_{0}/\kappa\) parameter, quantifying the competition between radiative exchange (through the conductance \(h_{0}\)) and conductive transport (through the conductivity \(\kappa\)). Figure 4 shows \(\tilde{d}\) as a function of this ratio for \(t=100\,\mu\)m. On top of this curve we highlight some examples of materials, showing that this characteristic distance can vary from a few to tens or hundreds of nanometers, making the experimental observation of conduction-radiation coupling in principle feasible for some materials and thicknesses. We conclude this section by mentioning that a previous study was performed in this sense [46], but applied only to a specific configuration and not exploring the strong dependence on the choice of materials and thicknesses.
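As an illustration, the closed-form results of Eq. (4) and the characteristic distance \(\tilde{d}\) can be evaluated directly. The sketch below uses placeholder values for \(h_{0}\) and \(\kappa\), chosen only to be of a plausible order of magnitude for a polar material, and not the actual material data behind Figs. 3 and 4.

```python
import numpy as np

# Sketch: evaluate Eq. (4) for the temperature drop across the gap and the exchanged flux,
# together with the characteristic coupling distance d_tilde = sqrt(2 t h0 / kappa).
# h0 and kappa are placeholder values (order-of-magnitude guesses for a polar material).
h0 = 3e-13        # prefactor such that phi ~ h0 (Ta - Tb)/d^2, with phi in W/m^2
kappa = 1.4       # W/(m K), bulk thermal conductivity
t = 100e-6        # m, thickness over which the temperature is allowed to vary
T_L, T_R = 600.0, 300.0

d_tilde = np.sqrt(2 * t * h0 / kappa)
print(f"characteristic coupling distance d_tilde = {d_tilde * 1e9:.1f} nm")

for d in np.array([1, 5, 10, 50, 100, 500]) * 1e-9:      # gap sizes in m
    ratio = 1.0 / (1.0 + 2 * t * h0 / (kappa * d**2))    # (Ta - Tb)/(T_L - T_R), Eq. (4)
    phi = h0 / d**2 * ratio * (T_L - T_R)                 # flux with coupling
    phi_nc = h0 / d**2 * (T_L - T_R)                      # conventional result (no profile)
    print(f"d = {d * 1e9:6.1f} nm   (Ta-Tb)/(TL-TR) = {ratio:.3f}   "
          f"phi = {phi:.2e}   no-coupling: {phi_nc:.2e} W/m^2")
```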
## III The impact of slab thickness: from the diffusive to the ballistic regime
The results presented in the previous section are based on the assumption that the thickness of the two bodies exchanging heat is large compared to the mean free path of phonons inside them, in the micron range for typical polar materials. Nevertheless, because of the peculiar transport regimes which can arise for conduction depending on the thickness, we have also extended our study to the more general configuration of arbitrary thickness. Boltzmann's transport equation is the mathematical tool allowing us to fully grasp the transition between the transport regimes, and more specifically between the two extreme ones, namely ballistic and diffusion regimes.
At a given frequency \(\omega\) (not explicitly shown) and in the relaxation time approximation, this equation reads
\[\frac{\partial f_{p}(t,\omega,\mathbf{r},\Omega)}{\partial t}+\mathbf{v}_{g,p} (\omega)\cdot\nabla f_{p}(t,\omega,\mathbf{r},\Omega)=-\frac{f_{p}(t,\omega, \mathbf{r},\Omega)-f_{0}(\omega)}{\tau_{p}(\omega,T(\mathbf{r}))}. \tag{5}\]
The unknown of this equation is the distribution function \(f\) associated to the heat carriers within the solid for each polarization \(p\), at time \(t\), frequency \(\omega\), solid angle \(\Omega\) and position \(\mathbf{r}\). Moreover, \(\mathbf{v}_{g,p}(\omega)=\nabla_{\mathbf{k}}\omega_{p}\) is the group velocity of carriers at polarization \(p\) and frequency \(\omega\), \(f_{0}\) is the equilibrium distribution (Fermi-Dirac for electrons and Bose-Einstein for phonons) and \(\tau_{p}\) the heat-carrier relaxation time.
In order to be solved, this equation has to be coupled to the one governing the time evolution of the internal energy density \(u\), which reads
\[\frac{\partial u(\mathbf{r},t)}{\partial t}=P_{\mathrm{rad}}(\mathbf{r},t)+P_ {\mathrm{cond}}(\mathbf{r},t), \tag{6}\]
Figure 4: Characteristic distance \(\tilde{d}\) of conduction–radiation coupling (see text) for \(t_{a}=t_{b}=100\,\mu\)m and different materials. AZO[1.2] and AZO[0.05] denote aluminum zinc oxides of conductivities \(\kappa=1.2\,\)W/m-K and \(\kappa=0.05\,\)W/m-K, respectively (see [36] for more details). The inset shows \(h_{0}\) as a function of \(\Delta T=T_{L}-300\,\)K, for AZO (red), silica (blue) and SiC (black). Reproduced from [36].
\(P_{\rm rad}\) denoting the radiative power locally dissipated per unit volume within a given body and coming from the other one, which can be calculated within a fluctuational-electrodynamics approach (see [35] for more details). On the other hand, \(P_{\rm cond}\) denotes the conductive power per unit volume at position \({\bf r}\), which is connected to the distribution function \(f\) through the relation
\[\varphi_{\rm cond}(t,{\bf r})=\sum_{p}\int_{4\pi}d\Omega\int d\omega\,\hbar \omega\,{\bf v}_{g,p}(\omega)f_{p}(t,\omega,{\bf r},\Omega)\frac{D_{p}(\omega) }{4\pi}, \tag{7}\]
where \(D_{p}(\omega)\) represents the density of states.
We have solved the two coupled equations in the simple geometry of two parallel SiC slabs: the left one, denoted by index 1, is connected to a thermostat at temperature \(T_{L}=400\,\)K, whereas slab 2 (on the right) is connected to one at \(T_{R}=300\,\)K. Concerning the boundary conditions, two different cases must be taken into account: for the edges in contact with vacuum, phonons hitting the surface are scattered specularly (specular reflection), whereas phonons colliding against the thermostat are scattered in all directions (diffuse reflection) [47]. Exploiting Boltzmann's equation, we are allowed to let the slab thickness vary in a wide range of values, from the ballistic to the diffusive regime.
The results at two separation distances (\(d=1\) and \(5\,\)nm) are shown in Fig. 5. The main part of the two plots shows the temperature profile inside slab 1, normalized to the temperature difference across it \(T_{1}(0)-T_{L}\). This allows us to highlight a signature of the conductive transport regime in the shape of the temperature profile. More specifically, while for the largest thickness considered \(T_{1}(z)\) becomes almost linear (as it should be in the strictly diffusive regime, according to Fourier law), when decreasing the thickness we observe a transition towards a significantly different behavior. In this ballistic-like scenario the temperature profile tends to a uniform distribution, excluding the region close to the thermostat (\(z\simeq-\delta\)), where \(T_{1}(z)\) is almost discontinuous (Casimir regime), and close to the vacuum gap, where \(T_{1}(z)\) shows a steep increase physically connected to the fact that most of the radiative flux is absorbed close to the boundary.
While this discussion highlights the impact of transport regime on the shape of the temperature profile, the information regarding the quantitative impact of distance and thickness is contained in the insets of Fig. 5. It is clear that an observable temperature profile (up to tens of degrees) can indeed arise, but mainly for large thicknesses (tens of microns) and small distances (below \(5\,\)nm).
In Fig. 6 we address the impact of the temperature profiles on the exchanged radiative flux. The exact result (solid black line) is compared to two approximate configurations: the Polder and van Hove result (i.e. conventional fluctuational electrodynamics ignoring the existence of a temperature profile, red dashed line) and to the _modified Polder and van Hove_ configuration (blue long-dashed line). The latter corresponds to a conventional fluctuational-electrodynamics configuration assuming that the temperature inside each body is uniform and equal to the temperature taken at the boundaries between it and vacuum.
These curves show that at the closest distance \(d=1\,\)nm the error in using standard fluctuational electrodynamics (ignoring conduction-radiation coupling) can be enormous, revealing once more the relevance of the coupling effect for large thicknesses and small distances. On the contrary, the modified approach reproduces relatively well the exact result. On one hand, this confirms that radiative heat transfer is mainly a surface effect, and thus almost entirely depends on the temperature at the boundaries between each body and vacuum. Nevertheless, despite its fundamental interest, this has no direct practical use, since the knowledge of these boundary temperatures stems from and thus needs the solution of the full problem in the presence of coupling. Apart from being relevant for theory-experiment
Figure 5: (a) Steady-state temperature (inset) and normalized temperature profile inside the left slab for different thicknesses and a separation distance \(d=1\,\)nm. (b) Same as (a) for \(d=5\,\)nm. Reproduced from [35].
comparison, these coupling effects could be relevant in some applications, involving e.g. the thermalization of two bodies, as discussed in detail in [40].
## IV The impact of geometry: the tip-plane configuration
After investigating the role played by the thickness of the two bodies, it is interesting to address the impact of the geometrical configuration, not only to unveil possible fundamental issues but also because experiments are only rarely performed in the plane-plane configuration, which raises the serious experimental challenge of ensuring parallelism.
In a first work [39] we have studied conduction-radiation coupling between two nanorods. In this configuration we have highlighted the appearance of a temperature profile, modifying in turn the exchanged radiative flux, along with a deviation from a linear temperature profile, even in the diffusive regime, due to the appearance of bulk polaritonic resonances, absent in the case of two parallel planes.
We focus here more in detail on a more recent work [41], where we have analyzed the effect of conduction-radiation coupling in the tip-plane scenario, frequently employed in experiments. To this aim we have considered the geometrical configuration sketched in Fig. 7 where two cylinders of radius \(R_{0}\) and \(fR_{0}\) are placed in front of each other and separated by a vacuum gap of thickness \(d\). The former (latter) has thickness \(\delta_{L}\) (\(\delta_{R}\)) and is connected to a thermostat at temperature \(T_{L}\) (\(T_{R}\)). The modulation of the factor \(f\) allows one to switch from the plane-plane configuration (\(f=1\)) to a tip-plane scenario, when \(f\) tends to zero.
Inspired by the results discussed in Sec. III, we have considered large values for the two thicknesses (\(\delta_{L}=\delta_{R}=100\,\mu\)m) such that we are in the diffusive regime. We have then solved the heat equation in cylindrical coordinates, by assuming that the two cylinders do not exchange any flux across their lateral surfaces. Moreover, we have also assumed that radiative flux can be treated as a surface term (i.e. as a boundary condition) and, in the spirit of the
Figure 6: Radiative heat flux exchanged between two SiC slabs with respect to their thickness for a separation distance of (a) \(d=1\,\)nm and (b) \(d=5\,\)nm. We show the exact result (black line), the Polder and van Hove one (red dashed line, uniform temperatures \(T_{L}=300\,\)K and \(T_{R}=400\,\)K) and the modified Polder and van Hove flux (blue long-dashed line, uniform temperatures equal to the temperatures at the boundaries with the vacuum gap). Insets: absolute value of the error with respect to the PvH and modified PvH approaches. Reproduced from [35].
Figure 7: Scheme of the two-cylinder setup employed to simulate the tip–plane configuration. Reproduced from [41], with the permission of AIP Publishing.
Derjaguin approximation [48] (also known as proximity-force approximation), that the two cylinders exchange heat radiatively only across the surface of the smaller cylinder. In other words, for cylinder 1 we have \(\partial T(r,z_{2})/\partial z=0\) for \(fR_{0}<r<R_{0}\) while \(\partial T(r,z_{i})/\partial z=-\varphi(r)/\kappa\) for \(i=2,3\), \(\kappa\) being the thermal conductivity and \(\varphi(r)\) the radiative flux locally exchanged at coordinate \(r\). The solution can be obtained analytically under the further approximation that the exchanged flux can be written as
\[\varphi\simeq\frac{\gamma[T(0,z_{2})-T(0,z_{3})]}{d^{2}}, \tag{8}\]
i.e. it is uniform, it depends only on the two temperatures at the center of the two cylinder surfaces and on the separation distance as \(d^{-2}\). This allows us to obtain analytical expressions for the temperature profiles in the cylinders
\[T(r,z)=\begin{cases}T_{L}-\frac{\gamma(T_{L}-T_{R})}{\xi}\Bigg{[}f^{2}(z-z_{1} )+2R_{0}f\sum_{k=1}^{\infty}\frac{J_{1}(f\alpha_{k})}{\alpha_{k}^{2}J_{0}^{2} (\alpha_{k})}\frac{\sinh\left[\frac{\alpha_{k}(z-z_{1})}{R_{0}}\right]}{\cosh \left[\frac{\alpha_{k}(z-z_{1})}{R_{0}}\right]}J_{0}\Big{(}\alpha_{k}\frac{r}{ R}\Big{)}\Bigg{]},&z_{1}<z<z_{2},\\ T_{R}+\frac{\gamma(T_{L}-T_{R})}{\xi}(z_{4}-z),&z_{3}<z<z_{4},\end{cases} \tag{9}\]
and for the exchanged radiative flux
\[\varphi(d,f,R_{0})=\frac{\frac{\gamma(T_{L}-T_{R})}{d^{2}}}{1+\frac{\gamma}{ \kappa d^{2}}\Big{[}f^{2}\delta_{L}+\delta_{R}+2R_{0}f\Gamma\Big{(}f,\frac{ \delta_{L}}{R_{0}}\Big{)}\Big{]}}, \tag{10}\]
\(\delta_{L}=z_{2}-z_{1}\) (\(\delta_{R}=z_{4}-z_{3}\)) being the height of the larger (smaller) cylinder. In these expressions we have defined
\[\begin{split}\xi&=\kappa d^{2}+\gamma(f^{2}\delta _{L}+\delta_{R})+2\gamma R_{0}f\Gamma\Big{(}f,\frac{\delta_{L}}{R_{0}}\Big{)}, \\ \Gamma(f,\beta)&=\sum_{k=1}^{\infty}J_{1}(f\alpha_{k })\tanh(\alpha_{k}\beta)/[\alpha_{k}^{2}J_{0}^{2}(\alpha_{k})].\end{split} \tag{11}\]
As expected, Eq. (10) allows us to recover the results for two parallel slabs [36] for \(f=1\). As discussed in more detail in [41], going beyond the approximation described in Eq. (8) yields numerical results in very good agreement with the analytical expressions, for the physical parameters taken into account below.
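A short numerical sketch of Eqs. (10)-(11) is given below. Two caveats: the roots \(\alpha_{k}\) are not restated in this excerpt and are assumed here to be the positive zeros of \(J_{1}\) (a choice consistent with \(\Gamma(1,\beta)=0\), so that the slab-slab limit is recovered at \(f=1\)); the values of \(\gamma\) and \(\kappa\) are placeholders, not the material data used for Fig. 8.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

# Sketch of the analytical tip-plane flux, Eq. (10).  alpha_k are assumed to be the positive
# zeros of J_1 (not restated in this excerpt); gamma_coef and kappa are placeholder values.
alpha = jn_zeros(1, 200)                        # first 200 positive zeros of J_1

def Gamma_series(f, beta):
    """Gamma(f, beta) of Eq. (11)."""
    return np.sum(j1(f * alpha) * np.tanh(alpha * beta) / (alpha**2 * j0(alpha)**2))

def flux(d, f, R0, delta_L=100e-6, delta_R=100e-6, gamma_coef=3e-13, kappa=1.4,
         T_L=400.0, T_R=300.0):
    """Exchanged radiative flux of Eq. (10), including conduction-radiation coupling."""
    correction = f**2 * delta_L + delta_R + 2 * R0 * f * Gamma_series(f, delta_L / R0)
    return (gamma_coef * (T_L - T_R) / d**2) / (1 + gamma_coef / (kappa * d**2) * correction)

R0, f = 10e-6, 1e-2                             # larger-cylinder radius and radii fraction
for d in np.array([1, 10, 100]) * 1e-9:
    phi = flux(d, f, R0)
    phi_nc = flux(d, f, R0, kappa=np.inf)       # kappa -> infinity: uniform temperatures
    print(f"d = {d * 1e9:5.0f} nm   phi = {phi:.3e}   no-coupling: {phi_nc:.3e}")
```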
The results for a configuration with \(\delta_{L}=\delta_{R}=100\,\mu\)m, \(R_{0}=10\,\mu\)m and \(f=10^{-2}\) are shown in Fig. 8(a), where they are compared to the configuration in the absence of coupling and to the slab-slab scenario (corresponding to \(f=1\)).
More specifically, the figure shows the results for silica (SiO\({}_{2}\)), silicon carbide (SiC) and gold bodies. In the latter scenario we observe that the curves in the absence and in the presence of coupling (both for slabs and cylinders) are indistinguishable. The reason behind this is that the radiative exchange in the case of gold is weak compared to the case of polar materials, mainly because the surface resonant modes (surface plasmons) supported by gold bodies lie in the ultraviolet region of the spectrum, and are thus not excited thermally around ambient temperature. As a result, no significant temperature profile and no impact on the radiative flux are expected.
On the contrary, based on what we already discussed in Sec. II, for two polar materials radiative heat flux in the nanometer range of distances is supposed to compete with conduction. The results of Fig. 8(a) show that the results for two cylinders qualitatively follow the ones for two slabs: the characteristic distance at which the curve in the presence of coupling deviates from the one in the absence of coupling is almost the same, while the value of the saturated (\(d\to 0\)) flux is only slightly higher than the one for two slabs. This allows to state that also in a tip-plane configuration the impact of conduction-radiation coupling is supposed to be observable, at least for polar materials, in the nanometer range of distances.
To conclude, we analyze in Fig. 8(b) the combined effect of radius \(R_{0}\) and radii fraction \(f\). We observe that the ratio between flux in the presence [\(\varphi(d,f,R_{0})\)] and in the absence [\(\Phi(d)\)] of coupling is of the order of \(10^{-3}\) in a wide range of both parameters. Finally, the black dashed line in Fig. 8(b) indicates that the flux correction is significant for all points defining a hypothetical tip radius of \(100\,\)nm, corresponding to an experimentally reasonable value.
## V Conclusions
We have reviewed our recent work on the coupling between conduction and radiation for two solids out of thermal equilibrium interacting in near-field regime. We have shown that, depending on the separation distance and the nature of the materials under scrutiny, this coupling can be at the origin of a non-negligible temperature profile inside each body (ignored in previous works), which can in turn induce a saturation of the radiative heat flux exchanged between the two bodies with respect to the predictions of conventional fluctuational electrodynamics. Because of the current possibility to explore experimentally distances in the nanometer range and below and of the ongoing miniaturization of a variety of technological devices, these results show that this coupling needs to be taken into account, both for the sake of theory-experiment comparison and in view of the design of innovative devices operating at the nanoscale.
|
2304.07750
|
GeoMultiTaskNet: remote sensing unsupervised domain adaptation using
geographical coordinates
|
Land cover maps are a pivotal element in a wide range of Earth Observation
(EO) applications. However, annotating large datasets to develop supervised
systems for remote sensing (RS) semantic segmentation is costly and
time-consuming. Unsupervised Domain Adaption (UDA) could tackle these issues by
adapting a model trained on a source domain, where labels are available, to a
target domain, without annotations. UDA, while gaining importance in computer
vision, is still under-investigated in RS. Thus, we propose a new lightweight
model, GeoMultiTaskNet, based on two contributions: a GeoMultiTask module
(GeoMT), which utilizes geographical coordinates to align the source and target
domains, and a Dynamic Class Sampling (DCS) strategy, to adapt the semantic
segmentation loss to the frequency of classes. This approach is the first to
use geographical metadata for UDA in semantic segmentation. It reaches
state-of-the-art performances (47,22% mIoU), reducing at the same time the
number of parameters (33M), on a subset of the FLAIR dataset, a recently
proposed dataset properly shaped for RS UDA, used for the first time ever for
research scopes here.
|
Valerio Marsocci, Nicolas Gonthier, Anatol Garioud, Simone Scardapane, Clément Mallet
|
2023-04-16T11:00:43Z
|
http://arxiv.org/abs/2304.07750v1
|
# GeoMultiTaskNet: remote sensing unsupervised domain adaptation using geographical coordinates
###### Abstract
Land cover maps are a pivotal element in a wide range of Earth Observation (EO) applications. However, annotating large datasets to develop supervised systems for remote sensing (RS) semantic segmentation is costly and time-consuming. Unsupervised Domain Adaption (UDA) could tackle these issues by adapting a model trained on a source domain, where labels are available, to a target domain, without annotations. UDA, while gaining importance in computer vision, is still under-investigated in RS. Thus, we propose a new lightweight model, GeoMultiTaskNet, based on two contributions: a GeoMultiTask module (GeoMT), which utilizes geographical coordinates to align the source and target domains, and a Dynamic Class Sampling (DCS) strategy, to adapt the semantic segmentation loss to the frequency of classes. This approach is the first to use geographical metadata for UDA in semantic segmentation. It reaches state-of-the-art performances (47,22% mIoU), reducing at the same time the number of parameters (33M), on a subset of the FLAIR dataset, a recently proposed dataset properly shaped for RS UDA, used for the first time ever for research scopes here.
## 1 Introduction
Accurate land cover information is crucial for a wide range of applications, including environmental monitoring and management [9, 21, 56], urban planning, and monitoring [6, 41]. In particular, semantic segmentation is a key task in the analysis of very high-resolution (VHR) remote sensing (RS) images, as it enables the automatic categorization of land cover [32]. However, annotating large datasets for supervised learning is costly and time-consuming, especially when not all data are acquired contemporaneously [31].
In this context, unsupervised domain adaptation (UDA) offers a promising solution for adapting a model trained on a source domain to a target domain, without the need for annotations [13, 20, 26], reducing domain shift. Although this task is gaining importance in computer vision (CV) [17, 18, 52], in RS it is still under-investigated. On one hand, often new RS UDA methods are applied on datasets not properly developed for this purpose [38] and, consequently, far from the real-world UDA scenario. On the other hand, general CV models are often applied to RS images, with little regard to the EO peculiarities. A clear example is the use of metadata, such as geographical coordinates, which are often discarded [39, 59].
For this reason, we experiment with a new lightweight Convolutional Neural Network (CNN), named GeoMultiTaskNet (GeoMTNet), on a new dataset (FLAIR, i.e., French Land cover from Aerospace ImageRy [15]), properly shaped for UDA (see for example the radiometric shifts in Fig. 1). This contribution is the first in which the FLAIR dataset is used for scientific purposes.
GeoMTNet is a novel algorithm for UDA in semantic segmentation of RS images leveraging geographical coordinates, to align the source and target domains, with two key novelties. First, we propose a simple GeoMultiTask module (GeoMT) that learns to predict the geographic position of the input image. Second, inspired by [24], we propose a Dynamic Class Sampling (DCS) module that adapts the semantic segmentation loss to the frequency of the classes.
To our knowledge, this is the first work to address UDA in semantic segmentation using geographical metadata. The proposed approach offers a promising solution for reducing the annotation cost in semantic segmentation of VHR RS images, with a simple and portable module. Our proposed method establishes on a subset of the FLAIR dataset new state-of-the-art performance (47.22% mIoU) with a limited number of parameters (33M), w.r.t. the transformer counterparts (85M).
## 2 Related Work
### Unsupervised Domain Adaptation
UDA approaches could be divided into three main branches: feature alignment, labeling adjustment, and discriminative methods. Feature alignment methods have the aim of aligning some characteristics (e.g., color histograms or features) of source and target domains. Some examples are DeepCORAL [43], KeepItSimple [2], CoVi [33], GtA [50]. Labeling adjustment makes use of pseudo-labeling to force the predictions of the target domain to be consistent. Several works followed these strategies, such as NoisyStudent [53], CBST [60]. Discriminative methods are based on loss terms that force the net to distinguish among source and target features, e.g., DANN [14], AdaptSegNet [46], ADVENT [48], DADA [49]. Moreover, some hybrid approaches are also developed. For example, we can recall methods based on a combination of the presented strategies, such as SePiCo [52], DISE [7] and DAFormer [17, 18]. Finally, hybrid UDA approaches such as self-supervised learning (SSL) [8, 34, 50] or continual learning [40] have been explored.
### UDA for Remote Sensing
Different methods, not all aiming properly for UDA, have been proposed. StandardGAN [45] works with multi-source domains, forcing the domains to have similar distributions. Seasonal Contrast (SeCo) [30] is based on two steps: gathering uncurated RS images, then, using SSL. Bidirectional sample-class alignment (BSCA) [19] addresses semi-supervised domain adaption for cross-domain scene classification. ConSecutive Pre-Training (CSPT) [57], similarly to [30], aims to leverage knowledge from unlabeled data through a self-supervision approach. MemoryAdaptNet [58] constructs an output space adversarial learning framework to tackle domain shift. UDAT [55] addresses UDA for nighttime aerial tracking, through a transformer. MATERIAL and TExture Representation Learning (MATTER) [3] aligns domains of different datasets, through a self-supervision task, on several tasks. UDA_for_RS [24], complementing [17], proposes a Gradual Class Weights (GCW) and a Local Dynamic Quality (LDQ) module.
### Using Geographical Metadata
The first attempts at using geoinformation, outside the UDA framework, were presented in [25, 44]. In [28], the authors provide a comprehensive review of location encoding. [27] proposes an efficient spatiotemporal prior, that estimates the probability that a given object category occurs at a specific location. GeoKR [23] uses metadata for an efficient pre-training strategy on a wide dataset. In [5], geographical coordinates are used for map translation. Geography-Aware SSL [4] proposes an SSL algorithm based on the geoinformation of the patches. In [29], the authors present Space2Vec to encode the absolute positions and location spatial relationships. PE-GNN [22] follows a similar approach, using graphs.
## 3 Methodology
As stated, the aim of GeoMTNet is to reduce domain shift using geographic coordinates, by designing a lightweight and easy-to-use architecture. In particular, given a set of source images \(\mathbf{X}_{S}\in\mathbb{R}^{H\times W\times B}\), where \(H\) is the height of the images, \(W\) is the width and \(B\) is the number of input bands, and a set of target images \(\mathbf{X}_{T}\in\mathbb{R}^{H\times W\times B}\), we want to predict the annotation maps of the target, \(\hat{\mathbf{Y}}_{T}\in\mathbb{R}^{H\times W}\), making use only of \(\mathbf{X}_{S}\), \(\mathbf{X}_{T}\) and the labels of the source images, \(\mathbf{Y}_{S}\in\mathbb{R}^{H\times W}\). The labels of the target images, \(\mathbf{Y}_{T}\in\mathbb{R}^{H\times W}\), can only be used for evaluation purposes. To achieve this, we adopt a classic U-Net [36] as the backbone model: it is easy to train, commonly used, has fewer parameters than its transformer counterparts, and is trained for semantic segmentation through a pixel-level classification loss. To tackle the domain shift, visible in Fig. 1, the GeoMultiTask (GeoMT) module is added, and the net is further trained with the Dynamic Class Sampling (DCS) strategy. Both of them have been shaped to be easily portable to different architectures. First, the GeoMT makes use of the geographical coordinates
Figure 1: Radiometric discrepancies of the aerial images between domains. The bands displayed are composite of near-infrared, red and green spectral information. Figure adapted from [15].
as a proxy to supervise the domain shift. Inspired by different self-supervised approaches [4, 30], we consider that an effective way to improve the performance on the target domains is to constrain the encoder, through a loss term, to understand where the target images are located. Considering that the source domain is also made of different sub-domains (i.e., departments), the GeoMT is employed to constrain the encoder to learn generalized representations of all the data. Second, as the distribution of the labels is skewed, we propose DCS, inspired by [24], to limit the errors on the under-represented classes. The whole architecture is shown in Fig. 2. In the next sections, the GeoMultiTask module (Section 3.1) and the Dynamic Class Sampling (Section 3.2) are presented in a formal and detailed way.
### GeoMultiTask Module
In other EO tasks, some approaches used geographical coordinates by feeding them through residual [5] or skip connections, or even by stacking them onto the input [54]. In our case, inspired by SSL [4, 30], we decided to use the coordinates to drive and constrain the encoder features. Specifically, both \(\mathbf{X}_{S}\) and \(\mathbf{X}_{T}\) images pass through the encoder \(\mathcal{E}\). This results in \(\mathbf{Z}_{S}\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times C}\) and \(\mathbf{Z}_{T}\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times C}\) feature maps, where \(H^{\prime},W^{\prime}\) and \(C\) are the height, width and number of channels of the feature maps. \(\mathbf{Z}_{S}\) passes through the decoder \(\mathcal{D}\) to obtain \(\hat{\mathbf{Y}}_{S}\). In parallel, both representations \(\mathbf{Z}_{S}\) and \(\mathbf{Z}_{T}\) enter the GeoMT to predict a vector \(\widehat{\mathbf{C}}\in\mathbb{R}^{D}\) containing localization information. Specifically, each patch is assigned a pair of coordinates \((C_{lon},C_{lat})\), referring to the centroid of the patch itself. These coordinates undergo the following transformations to be used as supervision for \(\widehat{\mathbf{C}}\):
1. centering them in the reference system EPSG:2154, w.r.t. which the coordinates are expressed. In particular, we subtract \(x=489353.59\ m\) from \(C_{lon}\) and \(y=6587552.20\ m\) from \(C_{lat}\), so that the median values equal \((0,0)\);
2. noise injection of \(30\ km\), so that the net captures large-scale patterns tied not to the specific patches in the batch but to wider areas of France, which may even cross the boundaries of individual departments;
3. positional encoding of the coordinates, for reasons similar to the noise injection. The strategy uses the following formula: \[\mathbf{C}=\left[\begin{array}{c}\sin\left(C_{lon}\omega_{1}\right)\\ \cos\left(C_{lon}\omega_{1}\right)\\ \vdots\\ \sin\left(C_{lat}\omega_{D/4}\right)\\ \cos\left(C_{lat}\omega_{D/4}\right)\end{array}\right]_{D}\text{ with }\omega_{i}=\frac{1}{f^{2i/D}},\] (1) where \(D=256\) and \(f=20{,}000\). In particular, for the same reasons as the noise injection, \(f\) is set to 20,000 and not 10,000 as in most applications [5, 47]. A sketch of this coordinate encoding is given right after this list.
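Read together, steps 1-3 amount to a small preprocessing routine. The following is a minimal NumPy sketch of how the supervision target could be built; the uniform \(\pm 30\) km noise, the ordering of the sin/cos terms and the function layout are our illustrative assumptions, not the authors' released code.

```python
import numpy as np

# Offsets, encoding size and frequency base come from the text above; the
# uniform +-30 km noise and the term ordering are illustrative assumptions.
X_REF, Y_REF = 489353.59, 6587552.20   # EPSG:2154 centering offsets (metres)
NOISE_M = 30_000.0                     # 30 km noise injection
D, F = 256, 20_000                     # encoding dimension and frequency base

def encode_coords(c_lon, c_lat, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # 1) centre the coordinates so the dataset median sits at (0, 0)
    lon, lat = c_lon - X_REF, c_lat - Y_REF
    # 2) inject large-scale noise so the target refers to an area, not a patch
    lon += rng.uniform(-NOISE_M, NOISE_M)
    lat += rng.uniform(-NOISE_M, NOISE_M)
    # 3) sinusoidal positional encoding with D/4 frequencies per coordinate
    i = np.arange(D // 4)
    omega = 1.0 / (F ** (2 * i / D))
    return np.concatenate([np.sin(lon * omega), np.cos(lon * omega),
                           np.sin(lat * omega), np.cos(lat * omega)])  # shape (D,)
```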
GeoMT consists, first, of a max-pooling layer, used to reduce dimensionality and select the most meaningful features. After this, 5 linear layers are stacked, 4 of which employ batch normalization and ReLUs. The detailed sizes are given in the right part of Fig. 2. This module produces \(\widehat{\mathbf{C}}\), from which we compute the self-supervised loss \(L_{coord}\), which has the form of a mean squared error:
\[L_{coord}=\frac{1}{n}\sum_{i=1}^{n}\Big{(}\widehat{\mathbf{C}}_{i}-\mathbf{C }_{i}\Big{)}^{2}, \tag{2}\]
where \(n\) is the number of samples.
Figure 2: On the left the overview of the proposed architecture, made of: an encoder (\(\mathcal{E}\)), a decoder (\(\mathcal{D}\)), the GeoMultiTask module, and the Dynamic Class Sampling module. On the right, the structure of the GeoMultiTask module with input and output sizes.
The final loss of the GeoMTNet is thus:
\[L=L_{seg}+L_{coord}^{S}+L_{coord}^{T}, \tag{3}\]
where \(L_{seg}\) is the segmentation loss, computed between \(\widehat{\mathbf{Y}}_{S}\) and \(\mathbf{Y}_{S}\), \(L_{coord}^{S}\) is the coordinate loss computed on the source domain images, and \(L_{coord}^{T}\) the one computed on the target images.
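As a concrete illustration of how the GeoMT head and the losses above fit together, here is a minimal PyTorch sketch. The hidden layer sizes are placeholders (the exact values appear only in Fig. 2), and the use of adaptive max-pooling is our assumption rather than a detail taken from the paper.

```python
import torch
import torch.nn as nn

class GeoMT(nn.Module):
    """Max-pool over the encoder feature map, then 5 stacked linear layers,
    4 of which use BatchNorm + ReLU. Hidden sizes are placeholders."""
    def __init__(self, in_channels=512, hidden=512, out_dim=256):
        super().__init__()
        self.pool = nn.AdaptiveMaxPool2d(1)            # (B, C, H', W') -> (B, C, 1, 1)
        dims = [in_channels, hidden, hidden, hidden, hidden]
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.BatchNorm1d(d_out), nn.ReLU()]
        layers.append(nn.Linear(hidden, out_dim))      # fifth linear layer, no BN/ReLU
        self.mlp = nn.Sequential(*layers)

    def forward(self, feats):                          # feats: (B, C, H', W')
        return self.mlp(self.pool(feats).flatten(1))   # predicted encoding C_hat, (B, 256)

# Eq. (2)-(3) sketch: L = L_seg + MSE(GeoMT(Z_S), C_S) + MSE(GeoMT(Z_T), C_T)
mse = nn.MSELoss()
```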
### Dynamic Class Sampling
Class imbalance is a common problem in deep learning that leads to poor model generalization, especially on rare classes. To address this issue, researchers have proposed various methods, such as assigning class weights inversely proportional to the frequency of the class in the dataset [60]. The class weight for class \(c\), computed for the \(n\)-th label, is calculated as follows:
\[w(n,c)=\frac{N_{c}\cdot\exp\left[\left(1-f_{c}\right)/t\right]}{\sum_{c^{ \prime}=1}^{C}\exp\left[\left(1-f_{c^{\prime}}\right)/t\right]}, \tag{4}\]
where \(f_{c}\) is the frequency of class \(c\) in the training dataset, \(N_{c}\) is the total number of classes, and \(t\) is a temperature parameter. The frequency \(f_{c}\) is calculated as:
\[f_{c}=\frac{1}{H\times W}\sum_{h=1}^{H}\sum_{w=1}^{W}\left(y_{S}^{(h,w)} \right)_{c}, \tag{5}\]
where \(y_{S}^{(h,w)}\) denotes the one-hot source label at location \((h,w)\) in the image, and \((\cdot)_{c}\) denotes the \(c\)-th scalar of a vector. Inspired by [24], which applies a similar mechanism to the pseudo-labels predicted by the student network, the class weight is updated iteratively for each image using an exponentially weighted average:
\[\mathrm{DCS}(n,c)=\alpha\cdot\mathrm{DCS}(n-1,c)+(1-\alpha)\cdot w(n,c). \tag{6}\]
\(\alpha\) is the decay rate of the exponential average; it helps to reduce volatility, especially in the early stages of training. Unlike other approaches [24], this weighting strategy does not act on the pseudo-labels but on the predicted labels directly. Because of sampling randomness, the class distribution of each image differs from that of the whole dataset, so the weights are updated iteratively for each image. It is also worth noting that, instead of being initialized directly to the distribution estimated from the first sample, the class weights are initialized to 1 and then updated iteratively by the exponentially weighted average. A higher \(t\) leads to a more uniform distribution, while a lower one makes the model pay more attention to the rare classes.
The final segmentation loss is:
\[L_{seg}=-\sum_{h=1}^{H}\sum_{w=1}^{W}\mathrm{DCS}(n,c)\cdot y_{S}^{(h,w)} \cdot\log\left(h_{\theta}\left(x_{S}^{(h,w)}\right)\right), \tag{7}\]
where \(h_{\theta}\) is the model with weights \(\theta\).
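A minimal PyTorch sketch of the DCS bookkeeping described by Eqs. (4)-(7) is given below; the variable names, the index assumed for the _other_ class, and the use of a weighted cross-entropy as Eq. (7) are our choices for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def class_frequencies(one_hot_labels):
    # one_hot_labels: (C, H, W) one-hot source label map -> per-class frequency f_c (Eq. 5)
    return one_hot_labels.float().mean(dim=(1, 2))

def class_weights(freqs, t=0.9):
    # Eq. (4): softmax over (1 - f_c) / t, rescaled by the number of classes
    return freqs.numel() * torch.softmax((1.0 - freqs) / t, dim=0)

def dcs_update(prev_weights, new_weights, alpha=0.7):
    # Eq. (6): exponentially weighted average across images; weights start at 1
    return alpha * prev_weights + (1.0 - alpha) * new_weights

def seg_loss(logits, labels, dcs_weights, ignore_index=12):
    # Eq. (7) as a class-weighted cross-entropy; the "other" index is an assumption
    return F.cross_entropy(logits, labels, weight=dcs_weights, ignore_index=ignore_index)
```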
## 4 Dataset
The French National Institute of Geographical and Forest Information (IGN) [1] is a French public state administrative establishment in charge of measuring large-scale changes on the French territory. It is constructing the French national reference land cover map _Occupation du sol a grande echelle_ (OCS-GE), also making use of AI-based data and techniques. To this purpose, IGN developed the FLAIR dataset1.
Footnote 1: downloadable at [https://ignf.github.io/FLAIR/](https://ignf.github.io/FLAIR/)
### FLAIR dataset
The French Land cover from Aerospace ImageRy (FLAIR) dataset [15] includes 50 spatial domains varying along the different landscapes and climates of metropolitan France. Each domain is a French department (Fig. 3).
The complete dataset is composed of 77,412 patches, covering approximately \(810\)\(km^{2}\). Each patch is \(512\times 512\) pixels, with a ground sample distance (GSD) of \(0.2m\). Each domain is composed of \(1725-1800\) patches. The domains were selected considering the major landscapes (e.g., urban, agricultural, etc.) and the per-class radiometries (see Fig. 1). Acquiring the images took more than three years, which led to a high intra- and inter-domain variance in the acquisitions (see Fig. 3 and 1). The images have 5 bands corresponding to blue, green, red, near-infrared and elevation channels. The first 4 channels are retrieved from VHR aerial images ORTHO HR(r) [12]. The fifth channel is obtained as the difference between the Digital Surface Model and the Digital Terrain Model (see [15] for more details). The corresponding ground truth labels describe the semantic class of each pixel. Nineteen classes are annotated. The _other_ class corresponds to pixels impossible to define with certainty. Finally, the dataset is split into 40 domains for training and 10 for testing, ensuring a comparable distribution of the labels in train and test. The domains are highlighted in Fig. 3. Each patch is enriched with metadata:
* domain and zone label. The zone label is made of two letters, allowing a macro-distinction of the two major types of land cover of the area. The letter U indicates urban, N natural area, A agricultural area, and F forest.
* date and hour at which the aerial image was acquired;
* the geographical coordinates of the centroid and the mean altitude of the patch;
* camera type used during aerial image acquisition [42].
To our knowledge, this is the first time that this dataset has been used for scientific research. Particularly, in our experiments, we used a subset of the whole dataset: D06, D08,
D13, D17, D23, D29, D33, D58, D67, D74 as source domains and D64, D68, D71 as target domains. We ended up with more than 16k images for training and more than 5k for testing.
## 5 Experimental Setup
As stated, for our experiments, we selected 10 departments as source domains and 3 as target domains. Adopting the same strategy as [15], we considered as _other_ all the classes labeled as \(>12\). These classes are strongly under-represented, being \(<0.2\%\) of all the labels. Thus, we ended up with 13 classes (i.e. _building_, _pervious surface_, _impervious surface_, _bare soil_, _water_, _coniferous_, _deciduous_, _brushwood_, _vine_, _grassland_, _crop_, _plowed land_, _other_). A single Tesla V100-SXM2 32 GB GPU was used for the training phase. Having limited computational power, but still wanting to preserve the high resolution of the dataset (GSD = \(0.2\)\(m\)), for the training we used random crops of \(256\times 256\). For the testing stage, we perform inference on four non-overlapping crops of \(256\times 256\) for each patch of size \(512\times 512\). For the U-Net, we use a ResNet18 [16], pretrained on ImageNet, as encoder and the softmax function as activation on the last layer. For all the experiments, we fix the batch size to 16, the number of epochs to 120 and the learning rate to 0.0001. We used early stopping with a patience of 30 epochs. The semantic segmentation loss is a cross-entropy, ignoring the _other_ class. We used Adam as optimizer and RandAugment [11] as the set of augmentations. The mean intersection over union (mIoU) on the first 12 classes is the selected evaluation metric. For the DCS module, we set the parameters to \(t=0.9\) and \(\alpha=0.7\). To assess the performance of our strategy, we selected different methods2 from the literature for an extensive comparison. We chose: AdaptSegNet [46], which employs an adversarial training approach; ADVENT [48], using an entropy minimization strategy; DAFormer [17], which adopts a transformer with a self-training strategy; and UDA_for_RS [24], which optimizes the DAFormer for RS tasks. In Section 6, we present different experimental results, addressing both comparisons and several ablation studies.
Footnote 2: We tested them through the code in their official GitHub repositories.
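For reference, the backbone and optimization settings described above could be instantiated as follows. The `segmentation_models_pytorch` library and the index used for the _other_ class are our assumptions for illustration; the paper does not state which implementation was used.

```python
import torch
import segmentation_models_pytorch as smp  # library choice is ours, not stated in the paper

model = smp.Unet(
    encoder_name="resnet18",       # ResNet18 encoder pretrained on ImageNet
    encoder_weights="imagenet",
    in_channels=5,                 # blue, green, red, near-infrared, elevation
    classes=13,                    # 12 evaluated classes + "other"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss(ignore_index=12)  # "other" index is an assumption
```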
## 6 Experimental Results
GeoMTNet reaches satisfying performance, as shown in Tables 1 and 2. As expected, the under-represented classes, such as _coniferous_ and _brushwood_, are the most difficult to predict correctly. This is due both to the small amount of data and to the radiometric similarity with some more frequent classes. For example, _coniferous_ can easily be confused with _deciduous_. At the same time, some errors are due to images sharing similar spatial patterns, as is the case for _vine_ and _crop_ pixels. Another frequent misclassification error concerns _bare soil_. Even though the performance is satisfying (55% IoU), we can see that the variance implicit in the definition of this class led to confusion with _herbaceous cover_ or _impervious surface_. In the next sections, comparisons (Section 6.1) and ablation studies (Sections 6.2, 6.3 and 6.4) are carried out.
### Comparison
Despite using a smaller number of parameters (33M), GeoMTNet reaches better results than all the other selected architectures (47.22% mIoU). In particular, we can see from Table 1 that there is a large gap w.r.t. AdaptSegNet [46] (24.97% mIoU) and ADVENT [48] (25.56% mIoU), which are older and were probably tailored to the synthetic-to-real benchmarks [35, 37]. On the other hand, DAFormer [17] and UDA_for_RS [24], based on a transformer, have performance comparable with GeoMTNet (respectively 45.61% and 47.02% mIoU). When using strategies properly shaped for RS tasks, such as in
Figure 3: On the left, the ORTHO HR® aerial image cover of France. On the center, the train and test split of the 50 domains, with the domains selected for our experiments highlighted. On the right, the acquisition time of each domain. Figure adapted from [15].
UDA_for_RS [24], optimal results are obtained. However, from Table 1, Table 2 and Fig. 4, we can see that GeoMTNet leads to better results also w.r.t. the aforementioned method with a reduced number of parameters (33M for GeoMTNet vs 85M for UDA_for_RS). Focusing on the detailed performance, reported in Table 2, we can observe that GeoMTNet achieves the best performance on almost all the classes, except for four of them (_pervious surface_, _bare soil_, _brushwood_, and _vine_). This is mainly explained by the fact that each architecture, when deciding between two similar classes, tends to overestimate one of them and underestimate the other. For example, _vine_ is often confused with _plowed land_ (and sometimes _crop_, too), due to their similar pattern. DAFormer, despite a gain of more than 10% IoU over GeoMTNet on _vine_, reaches poor results on both _plowed land_ (41.83% vs 54.79% IoU) and _crop_ (23.74% vs 35.02% IoU). This phenomenon can also be observed in Fig. 4, where some predictions of the three best models (namely DAFormer, UDA_for_RS, and GeoMTNet) are reported for a qualitative comparison. We observe that DAFormer performs worse overall, as it often predicts irrelevant classes, with poorer texture and shape of the predicted polygons. On the other hand, most of the time UDA_for_RS predicts a smaller number of classes with a wider predicted area for each of them w.r.t. the other methods. This is mainly due to the LDQ module of UDA_for_RS, which bases the pixel prediction on the predictions made on neighboring pixels. This can be seen both in positive cases, Fig. 4 b), where the land cover prediction of the traffic circle is more consistent than in GeoMTNet, and in negative cases, Fig. 4 d), where the low confidence in predicting _pervious surface_ and _building_ results in a uniform, incorrect prediction of _impervious surface_. On the other hand, we can appreciate the consistency of shape reconstructions and boundaries in GeoMTNet more than in the others (see, for example, the building edges in Fig. 4 c)). Moreover, we can see that shadows are an important problem for all the architectures (see for example in Fig. 4 a) how the shape of the _plowed land_ in the upper right part of the image is badly reconstructed by both UDA_for_RS and GeoMTNet). Another issue to consider is that test patches are of size \(512\times 512\) while the model is trained on \(256\times 256\) patches. Thus, the borders of the predicted tiles sometimes have contrasting predictions, as visible in the central part of Fig. 4 e).
### GeoMultiTask module
To understand the capabilities of GeoMTNet, various informative ablation studies have been conducted. To perform these experiments in an easy and rapid way, we used a simple U-Net [36] as the CNN, with less than 2M parameters. As mentioned before, the GeoMT takes as input the features provided by the encoder and tries to infer low-frequency encoded coordinates, with a random noise injection. As can be seen from Tables 3 and 4, both of these strategies improve the net performance.
First, we focused on how to use the coordinates, still feeding the GeoMT with the decoder features. As stated, the goal is to allow the net to capture large-scale patterns, not too specific to the single patch, but rather tied to areas of France that may even cross the boundaries of individual departments. We tried two strategies: positional encoding and noise injection. Positional encoding [47], used in EO approaches [5], allows the coordinates to be represented as a vector, making it easier to capture proximity relations between patches. Noise injection makes the net performance more generalizable, avoiding the association of a specific coordinate with a specific patch. In light of these considerations, we tried different configurations (Table 3). Notably, using a lower frequency (i.e., 1/20000) than the one used in the literature [5, 47] brings greater benefits. In fact, we are interested in large-scale effects, which are enhanced by a lower frequency. Concerning noise injection, we empirically observed that a noise that is large (\(30\ km\)) compared to the size of a patch (about \(100\ m\)) helps the generalization process. However, increasing it too much (about \(50\ km\)) leads to excessive network confusion and a consequent drop in performance.
Secondly, we needed to limit the number of parameters, especially w.r.t. the other models in the literature. To do this, we no longer fed the GeoMT with the output features of the decoder, but with those of the encoder. In fact, the encoder features should already provide the information necessary to perform a correct segmentation. This intuition was supported by the results, shown in Table 4, which show only a slight, negligible drop in performance. Notably, the shape of the GeoMT is also slightly different. When using the decoder features, the module consists of two convolutional layers (to reduce the dimensionality of the features with a limited number of parameters) followed by three linear layers.
\begin{table}
\begin{tabular}{c c c} Architecture & mIoU (\%) & params (M) \\ \hline AdaptSegNet [46] & 24.97 & 99 \\ ADVENT [48] & 25.56 & 99 \\ DAFormer [17] & 45.61 & 85 \\ UDA\_for\_RS [24] & 47.02 & 85 \\ GeoMultiTaskNet **(ours)** & **47.22** & **33** \\ \end{tabular}
\end{table}
Table 1: Our GeoMultiTaskNet outperforms all the other methods on the considered FLAIR target domains. In addition to the improved results in terms of mIoU, the size of the proposed model is also significantly smaller than that of the other selected algorithms.
### GeoTimeMultiTask experiments: using the temporal information
We tried to include temporal information as well, also inspired by other works [4, 30]. In particular, we inserted both month and time-of-day information, discarding the year. In fact, the month impacts the seasonality of some classes (e.g., the vegetative ones), and the hour impacts the acquisition conditions [10]. As before, we tried to inject some noise, so that the features could generalize better. Specifically, the time information was circle encoded (i.e., arranged equally spaced on a circle) and, when used, a random noise of \(\pm 1\) was added. Finally, these experiments were carried out using either the encoder or the decoder features. In both circumstances, the TimeMultiTask module (TimeMT) has been defined similarly to the GeoMT, but smaller in size. For example, in the case of using the encoder features, the TimeMT consists of one max-pooling layer and two linear layers. We refer to these experiments as GeoTimeMT, being characterized by both GeoMT and
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \multicolumn{1}{c}{} & \multicolumn{12}{c}{IoU (\%)} \\ \cline{2-13} \multicolumn{1}{c}{} & building & pervious surface & impervious surface & bare soil & water & coniferous & deciduous & brushwood & vine & grassland & crop & plowed land \\ \hline AdaptSegNet [46] & 39.98 & 20.75 & 40.23 & 20.36 & 15.25 & 4.93 & 35.37 & 10.99 & 34.51 & 42.69 & 11.06 & 23.47 \\ ADVENT [48] & 35.79 & 24.38 & 48.82 & 6.85 & 31.98 & 0.00 & 51.65 & 11.79 & 33.33 & 25.76 & 11.46 & 24.29 \\ DAFormer [17] & 67.09 & 45.56 & 61.99 & 55.35 & 65.12 & 8.91 & 54.39 & **20.31** & **64.39** & 38.79 & 23.74 & 41.83 \\ UDA\_for\_RS [24] & 66.3 & **48.05** & 62.36 & **59.28** & 61.24 & 9.22 & 60.02 & 16.52 & 57.74 & 40.12 & 30.32 & 54.17 \\ GeoMultiTaskNet (**ours**) & **67.53** & 40.86 & **63.89** & 55.31 & **67.02** & **13.85** & **60.97** & 14.08 & 53.09 & **40.33** & **35.02** & **54.79** \\ \end{tabular}
\end{table}
Table 2: Comparison in the IoU for each class of the considered FLAIR target domains.
Figure 4: Some examples of predictions for the best performing models. Particularly we can see in order: the input image, the ground truth, the prediction of DAFormer, the prediction of UDA_for_RS and the prediction of our GeoMTNet.
TimeMT modules. Also for this set of experiments, a simple U-Net was used as the backbone, without ResNet as the encoder. The results are shown in Table 5. Two behaviors can be observed immediately: using temporal metadata leads to limited improvements (+1.72% mIoU w.r.t. the baseline); GeoTimeMT, which combines geographical and temporal information, does not improve on the results obtained using only GeoMT (43.77% vs 44.70% mIoU). For these reasons, our GeoMTNet makes use only of geographical coordinates.
Analyzing the detailed results, we observe that using the features output by the encoder is more beneficial than using those output by the decoder, both from a performance (38.19% vs 42.72% mIoU) and a size point of view (405M vs 11.9M parameters). In fact, the benefits derived from temporal metadata are more related to the features directly encoded from the images, such as their radiometric information, than to the decoded representations of the patches, such as those connected to land cover. In addition, we observe again that using less precise information, i.e., with noise injection, leads to better results (42.72% vs 43.77% mIoU). Finally, we observe that the hour information is less relevant than the month information. In fact, the large variance of the dataset and the large amount of images make it more important and beneficial to have representations of the same classes from different seasons than under different light conditions: classes such as _brushwood_ or _crop_ vary their radiometric appearance considerably depending on the season.
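The circular encoding of the acquisition month mentioned above can be sketched as follows; whether the \(\pm 1\) noise is uniform or integer-valued is our assumption.

```python
import numpy as np

def encode_month(month, rng=None, noise=True):
    # months 1..12 placed equally spaced on a circle, with +-1 month of jitter
    rng = rng if rng is not None else np.random.default_rng()
    m = month + (rng.uniform(-1.0, 1.0) if noise else 0.0)
    angle = 2.0 * np.pi * m / 12.0
    return np.array([np.sin(angle), np.cos(angle)])
```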
### Comprehensive baselines
We considered important to evaluate the impact of each component of the GeoMTNet. The results of these experiments are shown in Table 6.
The component that brings the greatest improvement is the GeoMT, with a gain of about 4% in mIoU, while DCS does not go beyond one percentage point. This is because GeoMT is specifically shaped to exploit RS metadata and empower the architecture on which it is applied. In contrast, unlike other approaches such as [24], in our GeoMTNet the weighting module, namely DCS, does not act on the pseudo-labels but on the predicted labels directly. Therefore, its effectiveness on images from the target domains is driven directly by the source images.
## 7 Conclusions
In light of major technological innovations, more and more RS images are available. However, the annotation of these images is not progressing at the same rate, leading to a vast amount of unlabelled data. Most of the time, these images carry metadata, which are often simply discarded for CV tasks. In this work, we showed that the use of architectures specifically designed to exploit such metadata in an EO context can lead to excellent results. To this end, we proposed GeoMultiTaskNet, which outperforms other models in the literature on the FLAIR dataset, despite being a lightweight network. This real-world, scenario-oriented dataset presents a great variety of information and is well suited to this type of experiment. In this context, this work presents itself only as a first step in a line of research that is as important as it is still under-investigated: remote sensing unsupervised domain adaptation. Future steps include the extension of GeoMultiTaskNet to the entire FLAIR dataset. In addition, the intention is to probe this model on other datasets [51], where domain shift is more severe, and to find new ways to integrate geo-metadata into already well-performing models, such as transformers.
\begin{table}
\begin{tabular}{c c c} net & mIoU (\%) & params (M) \\ \hline baseline & 42.51 & 25 \\ +GeoMT & 46.68 & 32.7 \\ +DCS & 43.25 & 25 \\ GeoMTNet & **47.22** & 32.7 \\ \end{tabular}
\end{table}
Table 6: Ablation studies about the component of the GeoMultiTaskNet. As stated, both components lead to better results than the baseline, even though the GeoMultiTask module performs better.
\begin{table}
\begin{tabular}{c c c} input features & mIoU (\%) & params (M) \\ \hline - & 42.05 & 1.9 \\ output of the decoder & 44.83 & 270 \\ output of the encoder & 44.70 & 11.2 \\ \end{tabular}
\end{table}
Table 4: Ablation studies, showing the behavior of GeoMultiTaskModule when using different input features.
\begin{table}
\begin{tabular}{c c c c} noise (km) & 1/frequency (-) & mIoU (\%) & params (M) \\ \hline - & - & 42.05 & 1.9 \\ - & - & 41.69 & 270 \\ - & 10,000 & 43.57 & 270 \\ \(\pm\)30 & 10,000 & 43.69 & 270 \\ \(\pm\)30 & 20,000 & 44.83 & 270 \\ \(\pm\)50 & 20,000 & 42.68 & 270 \\ \end{tabular}
\end{table}
Table 3: Ablation studies, showing the behavior of GeoMultiTaskModule under different noise injections and encoding.
\begin{table}
\begin{tabular}{c c c c c} input features & time used & time noise & mIoU (\%) & params (M) \\ \hline - & - & - & 42.05 & 1.9 \\ decoder & both & no & 38.19 & 405 \\ encoder & both & no & 42.72 & 11.9 \\ encoder & month & yes & 43.77 & 11.9 \\ \end{tabular}
\end{table}
Table 5: Ablation studies, showing the behavior of GeoTimeMultiTaskModule under different conditions.
|
2303.03591
|
Approach to Learning Generalized Audio Representation Through Batch
Embedding Covariance Regularization and Constant-Q Transforms
|
General-purpose embedding is highly desirable for few-shot even zero-shot
learning in many application scenarios, including audio tasks. In order to
understand representations better, we conducted a thorough error analysis and
visualization of HEAR 2021 submission results. Inspired by the analysis, this
work experiments with different front-end audio preprocessing methods,
including Constant-Q Transform (CQT) and Short-time Fourier transform (STFT),
and proposes a Batch Embedding Covariance Regularization (BECR) term to uncover
a more holistic simulation of the frequency information received by the human
auditory system. We tested the models on the suite of HEAR 2021 tasks, which
encompass a broad category of tasks. Preliminary results show (1) the proposed
BECR can incur a more dispersed embedding on the test set, (2) BECR improves
the PaSST model without extra computation complexity, and (3) STFT
preprocessing outperforms CQT in all tasks we tested.
Github:https://github.com/ankitshah009/general_audio_embedding_hear_2021
|
Ankit Shah, Shuyi Chen, Kejun Zhou, Yue Chen, Bhiksha Raj
|
2023-03-07T01:54:24Z
|
http://arxiv.org/abs/2303.03591v1
|
Approach to Learning Generalized Audio Representation Through Batch Embedding Covariance Regularization and Constant-Q Transforms
###### Abstract
General-purpose embedding is highly desirable for few-shot even zero-shot learning in many application scenarios, including the audio tasks. In order to understand representations better, we conducted thorough error analysis and visualization of HEAR 2021 submission results. Inspired by the analysis, this work experiments with different front-end audio preprocessing methods, including Constant-Q Transform (CQT) and Short-time Fourier transform (STFT), and proposes a Batch Embedding Covariance Regularization (BECR) term to uncover a more holistic simulation of the frequency information received by the human auditory system. We tested the models on the suite of HEAR 2021 tasks, which encompass a broad category of tasks. Preliminary results show (1) the proposed BECR can incur a more dispersed embedding on the test set, (2) BECR improves the PaSST model without extra computation complexity, and (3) STFT preprocessing outperforms CQT in all tasks we tested.
**Github:** [https://github.com/ankitshah009/general_audio_embedding_hear_2021](https://github.com/ankitshah009/general_audio_embedding_hear_2021)
## 1 Introduction
General-purpose representation learning is still an open question for audio datasets. Therefore, the Holistic Evaluation of Audio Representations 2021 (HEAR 2021) challenge was proposed, aiming to provide longitudinal insights into different generalized audio representation models [1]. The challenge was to train one audio representation model flexible enough to represent unseen audio datasets. The representations were evaluated by training and testing a shallow network built on the embedding output of the models. The end-to-end process of HEAR 2021 is summarized in Fig. 1.
Inspired by the results of the challenge, this work first compares the Short-time Fourier transform (STFT) with the Constant-Q Transform (CQT), which was not used as the audio preprocessing method by any team in the challenge. Secondly, based on a thorough error analysis and visualization of the HEAR 2021 submission results, we propose Batch Embedding Covariance Regularization (BECR),
a regularization term that uses the Gini Index to measure the statistical dispersion of the eigenvalues of the covariance matrix of the embeddings on a training task. More specifically, it encourages the projections of the representations of a specific pre-training task onto all of its eigenvectors to be as evenly dispersed as possible. It therefore aims to learn a deep representation network that is more versatile in low-dimensional space even when trained with only one dataset of a specific domain. We also propose an optimized implementation algorithm to reduce the time complexity.
We tested the two proposals along with a baseline model on four HEAR 2021 tasks, which cover three audio domains: speech, music, and broad sounds. Results show that BECR improves the PaSST baseline in all tasks, while CQT-trained models are inferior to Mel STFT ones.
## 2 Related Work
### Audio Data Preprocessing Techniques
#### 2.1.1 Short-time Fourier transform (STFT) and Mel Spectrogram
An approach to better solve the general representation learning challenge is to apply different hand-crafted transformations, based on domain expertise, for different tasks, for example using the Short-time Fourier transform (STFT) [1]. The STFT is a powerful audio signal processing tool that can be applied to many tasks. It specifies complex amplitude versus time and frequency for every signal and defines a valuable class of time-frequency distributions. However, the STFT has its disadvantages, such as its limited time-frequency resolution: low frequencies can hardly be depicted with short windows, whereas short pulses can only poorly be localized in time with long windows [2]. Since humans do not perceive sound on a linear scale, the Mel scale was proposed so that equal distances in pitch sound equally distant to a human listener. The Mel spectrogram converts values in hertz to the Mel scale. This transformation simulates human hearing better than the plain STFT [3].
#### 2.1.2 Constant Q-transform (CQT)
Another way to preprocess music, human voice, and other time-varying sounds is the Constant-Q transform. In 1991, Brown proposed the CQT to simulate the human auditory system by using a transform with a fixed quality factor Q [4]. The Constant-Q transform differs from the STFT in several ways: it has logarithmically spaced frequency bins, while the STFT frequency axis is linear, and its bin widths scale with the octave rather than being constant in absolute terms. As the output of the transform is effectively amplitude/phase against log frequency, fewer frequency bins are required to cover a given range effectively [3]. Therefore, some argue that the Constant-Q transform better describes what is received by the human auditory system and is thus better suited to musical applications [5].
### Gini Index in Machine Learning
The Gini Index is a data purity measure: a small Gini Index indicates a high purity of the encodings or signals. It has been widely applied in machine learning. For example, Randall [6] proposed neural decision trees (NDT), based on decision trees and MLPs, as an early attempt to combine the Gini Index with neural networks. Park [7] also proposed a deep learning model using the Gini Index for feature extraction from datasets.
During the analysis of the HEAR 2021 results, we noticed the need for a summary statistic describing the overall geometric properties of the embedding matrix on an evaluation dataset, specifically, how spread out the embeddings are for different tasks. This led us to apply the Gini Index to the normalized eigenvalues of the embedding covariance matrix as a regularization term. We believe this work presents the first application of the Gini Index with such a definition to audio tasks.
## 3 Method
The end-to-end process of this work is summarized in Fig. 1. We first compare the effects of two preprocessing methods. Secondly, we experiment with a novel Gini Index-based regularization to improve the versatility of the model. The resulting models are used to generate embeddings on a
variety of unseen datasets from the HEAR 2021 suite, which are then used to train shallow MLP layers to obtain a final evaluation score.
### Preprocessing Methods
Short-time Fourier transform (STFT) first divides the long recording signal into short equal segments in the time domain. Then, it computes a Fourier transform on each segment and generate several frequency spectrums. Discrete STFT can be expressed as:
\[X(m,\omega)=\sum_{n=-\infty}^{\infty}x[n]\,w[n-m]\,e^{-j\omega n} \tag{1}\]
where \(w[\cdot]\) is the analysis window.
CQT transform mirrors the human auditory system, whereby at lower-frequencies spectral resolution is better [8]. Discrete CQT can be expressed as:
\[X[k]=\frac{1}{N[k]}\sum_{n=0}^{N[k]-1}W[k,n]\,x[n]\,e^{\frac{-j2\pi Qn}{N[k]}} \tag{2}\]
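For concreteness, both front-ends can be computed with standard library calls. In the sketch below, the file path, sample rate and all window/hop/bin settings are illustrative assumptions rather than the configuration used in the experiments reported later.

```python
import numpy as np
import librosa

# "clip.wav", the sample rate and the window/hop/bin settings are placeholders.
y, sr = librosa.load("clip.wav", sr=32000)

# Mel-scaled STFT spectrogram (log magnitude)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024, hop_length=512, n_mels=128)
log_mel = librosa.power_to_db(mel)

# Constant-Q transform: logarithmically spaced bins, constant Q per octave
cqt = np.abs(librosa.cqt(y, sr=sr, hop_length=512, n_bins=84, bins_per_octave=12))
log_cqt = librosa.amplitude_to_db(cqt)
```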
### Baseline Architecture
We choose the PaSST model as the baseline structure [9], which achieved overall top results on the 19 tasks (14 of them secret) of the HEAR 2021 challenge [1]. PaSST is a state-of-the-art transformer-based audio model that achieves SOTA results with less memory and time complexity than CNN-based models [9]. As shown in Fig. 2, the input of the model is an audio spectrogram generated by the preprocessing methods. In part 1, it undergoes patch extraction and linear projection. In part 2, frequency and time positional encodings are added, and a Patchout operation is then applied. The idea of Patchout is to encourage the transformer to classify with an incomplete sequence, similar to dropout. Finally, the sequence is flattened and passed through the self-attention layers, and a classifier MLP operates on the classification token to generate predictions [9].
### Batch Embedding Covariance Regularization (Beck)
#### 3.3.1 Analysis of Low-dimension Representation Projections
Through analysis of the embedding of top models in various tasks, we observed that given the same task, those models that perform better in the downstream task generally have higher embedding
Figure 1: The End-to-end Training and Evaluation Process for HEAR2021. In this work, we made two modifications of it. (1) We compared the effects of CQT and STFT, and (2) we designed a regularization term in the training time.
dispersion, for example with lower variance explained by the top principal components and a lower K-means F-test score (see Fig. 3 for a summary). Thus, we conjecture that a high dispersion of the embeddings produced by a model is helpful in downstream tasks.
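The dispersion statistics used in this analysis can be approximated with off-the-shelf tools. The sketch below uses scikit-learn; the Calinski-Harabasz score is our stand-in for the "K-means F-test score", whose exact definition is not spelled out here, so this is an illustration of the analysis rather than its exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def dispersion_metrics(embeddings, top_k=10, n_clusters=10):
    # embeddings: (N, D) array of test-set embeddings produced by one model
    pca = PCA(n_components=top_k).fit(embeddings)
    var_top_k = pca.explained_variance_ratio_.sum()   # lower -> more dispersed

    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    # Calinski-Harabasz is a between/within variance ratio, used here as a
    # stand-in for the "K-means F-test score" mentioned above.
    f_like = calinski_harabasz_score(embeddings, labels)
    return var_top_k, f_like
```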
#### 3.3.2 BECR Design
Since the training process is restricted to a single task dataset, the eigenvalues of the covariance matrix are a suitable proxy for the variance of the embeddings when projected onto different dimensions for different test tasks. The Gini Index of the normalized eigenvalues is therefore a summary statistic describing how evenly the embeddings are dispersed across the eigenspace. We thus define BECR as follows:
For an embedding layer with \(D\) outputs, \(\mathbf{K}\) is the \(D\)-dimensional covariance matrix of each batch embedding \(f_{\theta}(\mathbf{X}_{i})\),
Figure 3: Summary of Embedding Performance and Dispersion Metrics. Each data point is either a submitted or a replicated model of HEAR2021. Nsynth Pitch, FSD50K, and CREMA-D are music, broad, and speech tasks respectively.
Figure 2: The Patchout transformer (PaSST) architecture [9].
\[\mathbf{K}(f_{\theta}(\mathbf{X}_{i}))=(f_{\theta}(\mathbf{X}_{i})-\mathrm{E}[f_{ \theta}(\mathbf{X}_{i})])(f_{\theta}(\mathbf{X}_{i})-\mathrm{E}[f_{\theta}( \mathbf{X}_{i})])^{\mathrm{T}} \tag{3}\]
\(\mathbf{K}\) is always positive semi-definite with real nonnegative eigenvalues. \(\mathcal{G}\) applies the definition of Gini Index to the normalized eigenvalues of \(\mathbf{K}\),
\[\mathcal{G}(f_{\theta}(\mathbf{X}_{i}))=1-\sum_{i=1}^{D}(\frac{\lambda_{i}( \mathbf{K}(f_{\theta}(\mathbf{X}_{i})))}{\sum_{i=1}^{D}\lambda_{i}(\mathbf{K} (f_{\theta}(\mathbf{X}_{i})))})^{2} \tag{4}\]
\(\mathcal{R}\) is the proposed regularization term,
\[\mathcal{R}(\mathbf{X}_{i},\theta)=\max(0,\epsilon-\mathcal{G}(f_{\theta}( \mathbf{X}_{i})))^{2} \tag{5}\]
where \(\epsilon\) is a hyperparameter defining the threshold below which the Gini Index incurs a loss.
Finally, the total loss is defined by
\[\mathcal{L}^{\prime}(\mathbf{X}_{i},\theta)=(1-\lambda)\mathcal{L}(\mathbf{X }_{i},\theta)+\lambda\mathcal{R}(\mathbf{X}_{i},\theta) \tag{6}\]
In our case, the vanilla loss \(\mathcal{L}\) is the Binary Cross Entropy loss, whose range here is [0, 1]. Adding the regularization term encourages a large Gini Index, i.e., evenly distributed eigenvalues. Also, the regularization term is a convex function added to the original loss, which has desirable convergence properties.
#### 3.3.3 Implementation Details of BECR
Let the batch size be N and the embedding dimension be K. The eigenvalue decomposition algorithm has \(O(K^{3})\) complexity. Adding the covariance matrix calculation, the total additional complexity per batch is \(O(K^{3}+K^{2}*N)\), which is impractical in our case since N is around 10 and K is around 1000. In our experiments, training on FSD50K with 10-sample batches and embedding eigenvalue decomposition takes around 6 hours on an RTX 3090 machine, 18 times longer than without the loss. Therefore, we propose an optimized implementation without eigenvalue decomposition, with a complexity of \(O(K^{2}*N)\).
Recall \(tr(A)=\sum_{i}\lambda_{i}(A)\) and \(tr(A^{2})=\sum_{i}\lambda_{i}(A)^{2}\). Therefore, Eq. 4 can be simplified to
\[\mathcal{G}(f_{\theta}(\mathbf{X}_{i}))=1-\sum(\frac{\lambda_{i}}{\sum \lambda_{i}})^{2}=1-\frac{\sum\lambda_{i}^{2}}{(\sum\lambda_{i})^{2}}=1-\frac {tr(\mathbf{K}(f_{\theta}(\mathbf{X}_{i}))^{2})}{tr(\mathbf{K}(f_{\theta}( \mathbf{X}_{i})))^{2}} \tag{7}\]
In our experiments, the speed of this implementation is similar to the vanilla loss calculation: training with the simplified BECR takes 36 hours compared to 33 hours for the vanilla loss, as shown in Table 3. So we can apply BECR with little extra complexity.
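A minimal PyTorch sketch of the trace-based computation in Eq. (7), combined with the hinge of Eq. (5), is given below; it illustrates the idea under our own variable naming rather than reproducing the exact training code.

```python
import torch

def becr_loss(embeddings, eps=0.7):
    """embeddings: (N, D) batch of embeddings f_theta(X_i)."""
    z = embeddings - embeddings.mean(dim=0, keepdim=True)
    cov = z.T @ z / z.shape[0]              # (D, D); the 1/N factor does not affect the Gini
    tr = torch.diagonal(cov).sum()          # tr(K)   = sum of eigenvalues
    tr_sq = (cov * cov).sum()               # tr(K^2) = sum of squared eigenvalues (K symmetric)
    gini = 1.0 - tr_sq / (tr ** 2 + 1e-12)  # Eq. (7)
    return torch.clamp(eps - gini, min=0.0) ** 2   # Eq. (5)

# Eq. (6) sketch: total = (1 - lam) * bce + lam * becr_loss(embeddings)
```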
## 4 Experimental Evaluation
### Datasets and Evaluation Metric
We evaluate the performance of the models on three types of data sets: music, speech, and broad sounds. See Table 1 for dataset and evaluation metrics summary.
For music, we use NSynth Pitch, containing 305,979 musical notes, each with a unique pitch, timbre, and envelope. For 1,006 instruments from commercial sample libraries, the dataset was generated as four-second, monophonic 16 kHz audio snippets by ranging over every pitch of a standard MIDI piano (21-108) as well as five different velocities (25, 50, 75, 100, 127). The goal of this task is to classify instrumental sounds into one of 88 pitches [10]. Pitch accuracy is used for evaluation on the NSynth Pitch task.
We also use Beijing Opera Percussion for music task evaluation. Beijing Opera Percussion Instrument Dataset is based on recordings from The Beijing Opera [11]. It contains six main percussion instruments that can be classified into four main categories: Bangu, Naobo, Daluo, and Xiaoluo. There are 236 audio clips in total. Classification accuracy is used for evaluation on Beijing Opera Percussion task.
For speech, we use the CREMA-D for emotion recognition [12]. This dataset contains audio data of actors reciting sentences with one of six different emotions (anger, disgust, fear, happy, neutral, and sad). The goal of this task is to identify the type of emotion the actors are in when they say the sentences. Classification accuracy is used for evaluation on CREMA-D task.
For broad sounds, we use FSD50K; each audio clip in this dataset is labeled with one or more of 200 classes spanning environmental sounds, speech, and music [13]. This dataset contains over 51K audio clips, totaling over 100 hours of audio, and is extracted from the AudioSet Ontology. We also use FSD50K for training. mAP is used for multi-label evaluation on the FSD50K task.
### Hyperparameter Tuning
The proposed BECR involves two hyperparameters, \(\lambda\) and \(\epsilon\). We only experimented with \(\lambda\) values of 0.05 and 0.10, as in our experiments \(\lambda\leq 0.1\) ensures that BECR does not dominate the total loss in the first few epochs.
To determine the search space of \(\epsilon\), we observed that the Gini Index of the vanilla PaSST embedding on FSD50K is 0.92 after 100 epochs (see Table 4). Also, in the initial batches, the Gini Index is around 0.3-0.6 in our experiments. Setting \(\epsilon\) smaller than 0.6 would therefore make little difference in the final output (see Eq. 5), so we experimented with \(\epsilon\geq 0.7\). The tuning results in Table 2 show that \((\lambda,\epsilon)=(0.05,0.7)\) is the best combination.
## 5 Results
Results show that CQT preprocessing is a worse choice than STFT+Mel in all four tasks. Additionally, the computational cost of the CQT transformation is higher, taking more than twice as long as the original STFT+Mel (approximately 1 hour per epoch on FSD50K with a batch size of
\begin{table}
\begin{tabular}{l l l l} \hline \hline Task Name & Predictor Type & Split Mode & Evaluation Metric \\ \hline NSynth Pitch 5h & C & TVT & Pitch Acc. \\ Beijing Opera & C & 5-fold & Accuracy \\ CREMA-D & C & 5-fold & Accuracy \\ FSD50K & L & TVT & mAP \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the four evaluation tasks selected from HEAR 2021 [1]. For all four tasks, the embedding type is scene-based. The predictor type is either multiclass (C) or multilabel (L). The split method used during downstream evaluation is either train/validation/test (TVT) or K-fold.
\begin{table}
\begin{tabular}{l l l l} \hline \hline \(\epsilon\) & \(\lambda\) & \# Epochs & Test Set Performance \\ \hline
0.7 & 0.05 & 50 (19hr) & **30.7** \\
0.8 & 0.05 & 50 (18hr) & 24.4 \\
0.9 & 0.05 & 50 (18hr) & 24.9 \\
0.7 & 0.10 & 50 (19hr) & 25.6 \\
0.8 & 0.10 & 50 (18hr) & 25.7 \\
0.9 & 0.10 & 50 (18hr) & 29.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Hyperparameter Tuning Results on Training Set (FSD50K)
6). The proposed BECR combined with Mel+STFT outperforms the baseline model in all four tasks with similar training complexity.
### Discussion on CQT's Results
We tried to find an explanation for CQT's worse performance compared to STFT.
First, by comparing the embeddings of the STFT- and CQT-based PaSST models through the dimension reduction methods PCA and t-SNE, we found that the projections of the CQT embeddings are usually not linearly separable, while those of STFT appear more cluster-like. Considering that the evaluation process builds a 2-layer MLP on the embeddings (see Fig. 1), it is reasonable to assume that the poor separability of the CQT-based PaSST embeddings accounts for its bad performance on the downstream tasks.
Secondly, CQT-based PaSST performance and some of its metrics follow the pattern mentioned in Section 3.3.1. Compared with STFT-based PaSST models, CQT-based PaSST embedding results are less dispersed in unseen datasets. See Table 4.
Thirdly, the experimental results aside, we suspect that a relatively "good" preprocessing method partly depends on the choice of model. The original implementation of PaSST uses the STFT transformation [9], so it is natural that the PaSST model works best with STFT preprocessing rather than with others.
Figure 4: Projection of Embedding with PCA and T-SNE. The points are colored using the groundtruth labels. Dataset is CREMA-D. The models were trained on FSD-50K, not the CREMA-D dataset.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multirow{2}{*}{**\# Epochs**} & \multicolumn{4}{c}{**Downstream Evaluation Scores (\%)**} \\ & & Beijing Opera & Nsynth Pitch 5 & CREMA-D & FSD50K \\ \hline STFT Mel+PaSST (Baseline) & 100 (33hr) & 90.6 & 50.9 & 46.5 & 27.8 \\ STFT Mel+PaSST (BECR) & 100 (36hr) & **92.7** & **51.2** & **47.6** & **36.8** \\ CQT+PaSST & 50 (50hr) & 36.8 & 4.8 & 19.4 & 3.5 \\ \hline CP-JKU PaSST in HEAR2021 [1] & Unknown & 96.6 & 25.6 & 61.0 & 55.8 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Summary of Model Performances. \(\epsilon\) and \(\lambda\) of BECR are tuned on training set in section 4.2.
### Discussion on BECR's Results
To verify that BECR is responsible for the improvement over the baseline PaSST model, we show that the regularization term successfully converges to zero after the first few epochs (see Fig. 5). In particular, the share of the BECR term in the total loss steadily decreases from 15% to 0% and does not dominate the total loss even in the early stages. Therefore, compared with the baseline model, the descent of the validation loss is not significantly affected over the 100-epoch training process (Fig. 6).
Another piece of evidence is that, after adding BECR to the baseline PaSST, the variance explained by the top eigenvectors decreases, the F-test score decreases, and the Gini Index of the normalized eigenvalues increases, as BECR intends (see Table 4). These changes indicate that the embedding projection is more spread out than before, which leads to better performance according to our analysis in Section 3.3.1.
Figure 5: BECR descent and the total training loss descent comparison (\(\lambda=0.05\), \(\epsilon=0.7\))
Figure 6: Training loss descent of the models
## 6 Conclusion
In this paper, we verify that CQT is a less ideal audio preprocessing method than STFT+Mel when used with PaSST model, not only decreasing downstream task performances but also dramatically increasing the computational complexity when transforming the audio data into the frequency domain.
We also propose Batch Embedding Covariance Regularization (BECR), a Gini Index-based regularization term that encourages embeddings to spread out across their eigenspace, together with a fast implementation algorithm for it. We tested BECR with the SOTA audio model PaSST on a wide variety of audio domains (music, speech, and broad sounds) and achieve better performance compared with PaSST alone trained on the same dataset for a similar number of hours. The simple intuition and low implementation complexity of the method, together with the encouraging results on challenging unknown test tasks, demonstrate the promising potential of BECR for general-purpose audio representation learning.
We would like to note some limitations of this work. First, different models work well with different preprocessing methods, so the conclusion about CQT versus STFT is limited to our experimental setting, which uses a PaSST model. Second, we have not verified whether BECR is a generalizable technique that works beyond the PaSST model. It would be interesting to apply BECR to other common baselines in the future, for example the OpenL3 and wav2vec2 models, and see the difference it makes.
## Acknowledgement
Thanks to Chaoran Zhang, Yuxiang Zhang for their helpful comments on the work.
|
2304.08463
|
Learning to Render Novel Views from Wide-Baseline Stereo Pairs
|
We introduce a method for novel view synthesis given only a single
wide-baseline stereo image pair. In this challenging regime, 3D scene points
are regularly observed only once, requiring prior-based reconstruction of scene
geometry and appearance. We find that existing approaches to novel view
synthesis from sparse observations fail due to recovering incorrect 3D geometry
and due to the high cost of differentiable rendering that precludes their
scaling to large-scale training. We take a step towards resolving these
shortcomings by formulating a multi-view transformer encoder, proposing an
efficient, image-space epipolar line sampling scheme to assemble image features
for a target ray, and a lightweight cross-attention-based renderer. Our
contributions enable training of our method on a large-scale real-world dataset
of indoor and outdoor scenes. We demonstrate that our method learns powerful
multi-view geometry priors while reducing the rendering time. We conduct
extensive comparisons on held-out test scenes across two real-world datasets,
significantly outperforming prior work on novel view synthesis from sparse
image observations and achieving multi-view-consistent novel view synthesis.
|
Yilun Du, Cameron Smith, Ayush Tewari, Vincent Sitzmann
|
2023-04-17T17:40:52Z
|
http://arxiv.org/abs/2304.08463v1
|
# Learning to Render Novel Views from Wide-Baseline Stereo Pairs
###### Abstract
We introduce a method for novel view synthesis given only a single wide-baseline stereo image pair. In this challenging regime, 3D scene points are regularly observed only once, requiring prior-based reconstruction of scene geometry and appearance. We find that existing approaches to novel view synthesis from sparse observations fail due to recovering incorrect 3D geometry and due to the high cost of differentiable rendering that precludes their scaling to large-scale training. We take a step towards resolving these shortcomings by formulating a multi-view transformer encoder, proposing an efficient, image-space epipolar line sampling scheme to assemble image features for a target ray, and a lightweight cross-attention-based renderer. Our contributions enable training of our method on a large-scale real-world dataset of indoor and outdoor scenes. We demonstrate that our method learns powerful multi-view geometry priors while reducing the rendering time. We conduct extensive comparisons on held-out test scenes across two real-world datasets, significantly outperforming prior work on novel view synthesis from sparse image observations and achieving multi-view-consistent novel view synthesis.
+
Footnote †: Project website: [https://yilundu.github.io/wide_baseline/](https://yilundu.github.io/wide_baseline/)
## 1 Introduction
The goal of novel view synthesis is to render images of a scene from unseen camera viewpoints given a set of image observations. In recent years, the emergence of differentiable rendering [26, 28, 45, 51, 46] has led to a leap in quality and applicability of these approaches, enabling near photorealistic results for most real-world 3D scenes. However, methods that approach photorealism require hundreds or even thousands of images carefully exploring every part of the scene, where special care must be taken by the user to densely image all 3D points in the scene from multiple angles.
In contrast, we are interested in the regime of novel view synthesis from a sparse set of context views. Specifically, this paper explores whether it is possible to synthesize novel view images using an extremely sparse set of observations. In the most challenging case, this problem reduces to using input images such that every 3D point in the scene is only ob
served from a _single_ camera perspective. Towards this goal, we propose a system that uses only a single wide-baseline stereo image pair of the scene as input. This stereo image pair regularly has little overlap, such that many 3D points are indeed only observed in one of the images, see Fig. 1. Image observations themselves are thus insufficient information to compute 3D geometry and appearance via multi-view stereo, and we must instead _learn_ prior-based 3D reconstruction. Nevertheless, reasoning about multi-view consistency is critical, as prior-based reconstructions must agree across images to ensure multi-view-consistent reconstruction.
This is a novel problem setting: While some existing methods demonstrate novel view synthesis from very sparse observations [59, 46, 52], they are limited to object-level scenes. In contrast, we are interested in large real-world scenes that are composed of multiple objects with complex geometry and occlusions. Previous approaches for novel view synthesis of scenes focus on small baseline renderings using \(3-10\) images as input [59, 48, 18, 25, 8, 19, 7]. In this setting, most 3D points in the scene are observed in multiple input images, and multi-view feature correspondences can be used to regress 3D geometry and appearance. Thus, these methods in practice learn to amortize multi-view stereo. In our setting, we use a wide-baseline stereo image pair as input, where it is not sufficient to rely on multi-view feature correspondences due to many points only being observed in a single view. We show that in this challenging setting, existing approaches do not faithfully recover the 3D geometry of the scene. In addition, most existing methods rely on costly volume rendering for novel view synthesis, where the number of samples per ray required for high-quality rendering makes it difficult to train on complex real-world scenes.
In this paper, we propose a new method that addresses these limitations, and provides the first solution for high-quality novel view synthesis of a scene from a wide-baseline stereo image pair. To better reason about the 3D scene, we introduce a multi-view vision transformer that computes pixel-aligned features for each input image. In contrast to a monocular image encoder commonly used in previous approaches [52, 54, 59], the multi-view transformer uses the camera pose information as input to better reason about the scene geometry. We reduce the memory and computational costs for computing image features by combining this vision transformer at lower resolutions with a CNN at higher resolutions. A multi-view feature matching step further refines the geometry encoded in these feature maps for any 3D point that can be observed in both images.
We also introduce an efficient differentiable renderer that enables large-scale training. Existing approaches that use volume rendering sample points along camera rays in 3D and project these points onto the image planes to compute the corresponding features using bilinear interpolation. Since perspective projection is a non-linear operation, uniformly sampled 3D points are not uniformly distributed in 2D, leading to some pixels in the feature maps being sampled multiple times, and other pixels not being sampled at all. Thus, this sampling strategy does not use the information in the pixel-aligned feature maps optimally. We instead take an image-centric sampling approach where we first compute the epipolar lines of a target pixel in the input images, and sample points uniformly on these lines in 2D. This exploits the fact that the number of pixels along the epipolar lines is the maximum effective number of samples. In addition, we use lightweight cross-attention layers that directly aggregate the sampled features and compute the pixel color. In contrast to volume rendering where we need to sample very close to a surface in order to render its color, thus requiring a large number of samples, our learned renderer does not share this limitation and can compute the pixel color even with sparse samples. Our lightweight rendering and feature backbone components enable us to train on large-scale real-world datasets. We demonstrate through extensive experiments on two datasets that our method achieves state-of-the-art results, significantly outperforming existing approaches for novel view synthesis from sparse inputs.
## 2 Related Work
Image-based rendering.Image-based rendering (IBR) methods generate images from novel camera viewpoints by blending information from a set of input images. We provide a brief overview of some methods. Please refer to the review by Shum and Kang [42] for details. Some IBR approaches directly model the plenoptic function without using information about the scene geometry [31, 20]. Other approaches use a proxy scene geometry computed using multi-view stereo to guide the blending of information from the input images [3, 9, 16, 23]. While rendering without computing an explicit 3D geometry leads to higher-quality results, it requires a large number of input images. In contrast, methods that rely on 3D geometry can work with sparse image inputs. However, multi-view stereo from a sparse set of input views often leads to inaccurate geometry, especially for scenes with complex geometry, limiting the quality of rendered images. Methods have been proposed for higher-quality geometry computation [5, 15], optical flow-based refinement [11, 4, 10], and improved blending [14, 35, 38]. In contrast to these image-based rendering methods, we rely on priors learned from data that enable novel-view synthesis from just a wide-baseline stereo image. We do not create any explicit proxy geometry of the scene and are thus unaffected by inaccurate multi-view stereo.
Single-Scene Volumetric Approaches.Recent progress in neural rendering [51] and neural fields [57, 28, 43] has led to a drastic jump in the quality of novel-view synthesis from several input images of a scene. Here, a 3D scene representation is optimized via differentiable rendering to fit a set of image observations. Early approaches leveraged voxel grids and learned renderers [26, 32, 45]. More recent approaches rely on neural fields [2, 27, 28, 57] to parameterize the 3D scene and volumetric rendering [26, 28, 50] for image synthesis. This leads to photorealistic view synthesis but requires hundreds of input images that densely sample the 3D scene. Hand-crafted and learned priors may reduce the number of required images to the order of three to ten [33], but 3D points still need to be observed from at least two perspectives. A major challenge of these approaches is the cost of accurate differentiable rendering, regularly requiring hundreds of samples per ray. Recent approaches have achieved impressive speed-ups in 3D reconstruction leveraging high-performance data structures and sparsity [6, 13, 24, 29]. While promising, reconstruction can still take a few minutes per scene, and sparse data structures such as octrees and hash tables cannot easily be used with learned priors.
Our approach tackles a different setting than these methods, using only a single wide-baseline stereo image as input, where 3D points are regularly only observed in a _single_ view. Our approach does not require any per-scene optimization at test time. Instead, it reconstructs the scene in a single forward pass. Note that while our method does not achieve the quality of per-scene optimization methods that use hundreds of input images, it demonstrates a significant step up in novel view synthesis from very sparse image observations.
Prior-based 3D Reconstruction and View Synthesis.Instead of overfitting to a single scene, differentiable rendering can also be used to supervise prior-based inference methods. Some methods generalize image-based rendering techniques by computing feature maps on top of a proxy geometry [1, 19, 38, 56]. Volume rendering using multi-plane images has been used for small baseline novel view synthesis [47, 53, 61, 62]. Early neural fields-based approaches [34, 46] were conditioned on a single global latent code and rendered via sphere tracing. In contrast to a global latent code, several approaches use a feature backbone to compute pixel-aligned features that can be transformed using MLPs [52, 21, 59] or transformer layers [37, 54] to a radiance field. Ideas from multi-view stereo such as the construction of plane-swept cost volumes [7, 18, 25], or multi-view feature matching [8] have been used for higher-quality results.
Alternatively to these radiance field-based approaches, some methods use a light field rendering formulation where an oriented camera ray can directly be transformed to the pixel color as a function of the features computed from the input images [44, 49]. Scene Representation Transformers [39] use transformers with global attention to compute a set-latent representation that can be decoded to pixel colors when queried with a target camera ray. However, global attention layers on high-resolution input images are very compute and memory intensive. Developed concurrently with our work, Suhail [48] proposed to use a transformer to only compute features for image patches along the epipolar rays of the pixel being rendered. This is still very expensive due to global attention layer computations over multiple image patches for every rendered pixel. In addition, this method ignores the context information of the scene, since all computation is performed only for patches that lie on the epipolar lines.
All existing prior-based reconstruction methods either only support object-level scenes or very small baseline renderings, or rely on multiple image observations where most 3D points are observed in multiple input images. This is different from our setting where we only use a wide-baseline stereo image pair of scenes as input.
## 3 Method
Our goal is to render novel views of a 3D scene given a wide-baseline stereo image pair \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). We assume known camera intrinsics \(\mathbf{K}_{i}\in\mathbb{R}^{3\times 3}\) and extrinsics \(\mathbf{E}_{i}\in\mathbb{R}^{3\times 4}\) expressed relative to context camera 1. We use a multi-view encoder to compute pixel-aligned features, and a cross-attention-based renderer to transform the features into novel view renderings, see Figure 2 for an overview.
### Multiview Feature Encoding
An essential part of novel view synthesis given context images is an accurate reconstruction of scene geometry. Our method implicitly reconstructs 3D geometry and appearance of the scene in the form of pixel-aligned feature maps for each stereo image. In prior work, pixel-aligned features are obtained by separately encoding each image via a vision transformer or CNN [59, 21]. However, in our early experiments, we found this led to artifacts in renderings observing boundary regions between context images. We hypothesize that separate encoding of images leads to inconsistent geometry reconstruction across context images. We thus introduce our _multi-view encoder_, which obtains pixel-aligned features by _jointly_ processing the images and the relative pose between them. Encoding the pose information has also been shown to act as an effective inductive bias for 3D tasks [58].
We now describe this architecture in detail, which extends the dense vision transformer proposed by Ranftl et al. [36]. Please see Figure 2 for an overview. From each stereo image, we first independently extract convolutional features via a ResNet50 CNN. We then flatten both images, obtaining \(2\times 16\times 16\) features in total. To each feature, we add (1) a learned per-pixel positional embedding encoding its pixel coordinate and (2) a camera pose embedding, obtained via a linear transform of the relative camera pose between context images 1 and 2. These tokens are processed by a vision transformer, which critically performs self-attention across _all_ tokens across _both_ images. In-between self-attention layers, per-image features are re-assembled into a spatial grid,
up-sampled, and processed by a fusion CNN [36] to yield a per-image spatial feature map. Directly using these spatial feature maps for novel view synthesis leads to blurry reconstructions, due to the loss of high-frequency texture information. We thus concatenate these features with high-resolution image features obtained from a shallow CNN.
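As an illustration of this joint encoding, the following PyTorch-style sketch processes both context images with a single transformer whose tokens carry a learned positional embedding and a linear embedding of the relative camera pose. The layer sizes, module names, and the convolutional stem standing in for the ResNet50 features are illustrative assumptions, not the exact configuration used here.

```python
import torch
import torch.nn as nn

class MultiViewEncoderSketch(nn.Module):
    """Sketch of a multi-view encoder: self-attention runs over the tokens of
    BOTH context images, each conditioned on a camera pose embedding."""
    def __init__(self, dim=256, heads=8, depth=4, grid=16):
        super().__init__()
        # low-resolution convolutional stem (placeholder for ResNet50 features)
        self.stem = nn.Sequential(nn.Conv2d(3, dim, kernel_size=16, stride=16), nn.GELU())
        self.pos_emb = nn.Parameter(torch.randn(1, grid * grid, dim) * 0.02)
        self.pose_emb = nn.Linear(12, dim)  # linear embedding of a flattened 3x4 pose
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, img1, img2, rel_pose):
        # img1, img2: (B, 3, 256, 256); rel_pose: (B, 12), pose of camera 2 in camera 1
        identity = torch.eye(3, 4, device=rel_pose.device).reshape(1, 12).expand_as(rel_pose)
        tokens = []
        for img, pose in ((img1, identity), (img2, rel_pose)):
            t = self.stem(img).flatten(2).transpose(1, 2)              # (B, 16*16, dim)
            tokens.append(t + self.pos_emb + self.pose_emb(pose)[:, None, :])
        joint = self.transformer(torch.cat(tokens, dim=1))             # attention across both images
        return joint.chunk(2, dim=1)                                   # per-image token grids
```

In the full model these token grids would be re-assembled into spatial maps, up-sampled by the fusion CNN, and concatenated with the shallow high-resolution features described above.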
### Epipolar Line Sampling and Feature Matching
We aim to render an image of the scene encoded in the two pixel-aligned feature maps from a novel camera viewpoint. A common way to achieve this is volume rendering, where we cast a camera ray, compute density and color values at many depths along the ray, and integrate them to compute the color of the pixel. Sampling locations are determined in 3D. Coarse samples are either uniformly spaced in euclidean space or spaced with uniform disparity, and fine samples are distributed closer to the surface as computed by the coarse samples [2, 30, 28]. However, in our regime of generalizable novel view synthesis with pixel-aligned feature maps, this sampling scheme is suboptimal. In this case, sampling along the ray should be determined by the resolution of the context images: the number of pixels along the epipolar line is the maximum effective number of samples available for any method. More samples would not provide any extra information. We propose a sampling strategy to exploit this and demonstrate its effectiveness in an ablation study.
Consider a pixel coordinate \(\mathbf{u}_{t}=(u,v)\) in the target image \(\mathcal{I}_{t}\), with assumed known intrinsic \(\mathbf{K}_{t}\) and extrinsic \(\mathbf{T}_{t}=\left[\begin{smallmatrix}\mathbf{R}_{t}&\mathbf{t}_{t}\\ \mathbf{0}&1\end{smallmatrix}\right]\) camera parameters relative to the context camera \(\mathcal{I}_{1}\). Its epipolar lines \(\mathbf{l}_{\{1,2\}}\) in context cameras 1 and 2 are given as:
\[\mathbf{l}_{i}=\mathbf{F}_{i}\left[u,v,1\right]^{T}=\mathbf{K}_{i}^{-T}([ \mathbf{t}_{t}]_{\times}\mathbf{R}_{t})\mathbf{K}_{t}^{-1}\left[u,v,1\right]^ {T} \tag{1}\]
via the fundamental matrix \(\mathbf{F}_{i}\). We now uniformly sample \(N\) pixel coordinates along the line segment of the epipolar line within the image boundaries. To enable the renderer to reason about whether to use a certain pixel-aligned feature or not, a critical piece of information is the depth in the context coordinate frame at which we are sampling this feature. This depth value can be computed via triangulation, using a closed-form expression. Please refer to the supplement document for details. We now obtain \(N\) tuples \(\{(d,\mathbf{f})_{k}\}_{k=1}^{N}\) of depth \(d\) and image feature \(\mathbf{f}\) per context image for a total of \(2N\) samples which we call _primary_ samples.
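A numpy sketch of this sampling step is given below. It builds the fundamental matrix of Eq. (1) and returns pixel coordinates spaced uniformly along the epipolar line, keeping only in-bounds points; the clipping logic and function names are simplifications for illustration, and the per-sample depth via triangulation (see the supplement) is omitted.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix such that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K_ctx, K_tgt, R, t):
    """F mapping a target-image pixel to its epipolar line in a context image, Eq. (1)."""
    return np.linalg.inv(K_ctx).T @ skew(t) @ R @ np.linalg.inv(K_tgt)

def sample_epipolar_line(F, u, v, width, height, n=64):
    """Equally spaced pixel coordinates on the epipolar line of target pixel (u, v),
    restricted to the image rectangle (simplified clipping)."""
    a, b, c = F @ np.array([u, v, 1.0])          # line: a*x + b*y + c = 0
    if abs(b) > abs(a):                          # shallow line: parametrize by x
        xs = np.linspace(0.0, width - 1.0, n)
        ys = -(a * xs + c) / b
    else:                                        # steep line: parametrize by y
        ys = np.linspace(0.0, height - 1.0, n)
        xs = -(b * ys + c) / a
    keep = (xs >= 0) & (xs <= width - 1) & (ys >= 0) & (ys <= height - 1)
    return np.stack([xs[keep], ys[keep]], axis=1)
```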
We further propose a feature matching module to refine the geometry encoded in the primary epipolar line samples via correspondence matching. Consider a primary epipolar line sample obtained from context image \(i\), a tuple \((d,\mathbf{f})\) corresponding to a pixel coordinate \(\mathbf{u}_{t}\). We propose to augment this sample by a corresponding feature in the other context image. Specifically, we first solve for the corresponding 3D point, and then project this 3D point onto the _other_ context image to retrieve a corresponding feature \(\hat{\mathbf{f}}\), which we refer to as a _secondary_ feature. If the projected point lies outside the image boundary, we simply set the secondary feature to zeros. Intuitively, primary and secondary features _together_ allow a final stage of geometry refinement for 3D points that are observed in both images: if the features agree, this sample likely encodes a surface. We obtain the input to the renderer as the final set of features by concatenating each primary epipolar line feature with its corresponding secondary feature in the other context view, yielding a set \(\{(d,\mathbf{f},\hat{\mathbf{f}})_{k}\}_{k=1}^{2N}\).
Figure 2: **Method Overview.** (a) Given context images from different viewpoints, a multi-view encoder extracts pixel-aligned features, leveraging attention across the images and their corresponding camera pose embeddings. (b) Given a target ray, in each context view, we sample _primary_ features along the epipolar line equidistant in pixel space. We then project the corresponding 3D points onto the other views and sample corresponding _secondary_ epipolar line features, where out-of-bounds features are set to zero. (c) We render the target ray by performing cross-attention over the set of all primary and secondary epipolar line features from all views.
In practice, we sample \(N=64\) points on the epipolar lines for both images, leading to a total of \(2N=128\) tuples.
### Differentiable Rendering via Cross-Attention
To render the target ray, it remains to map the set of epipolar line samples \(\{(d,\mathbf{f},\hat{\mathbf{f}})_{k}\}_{k=1}^{2N}\) to a color value. As this operation has to be executed once per ray, a key consideration in the design of this function is computational cost. We propose to perform rendering via a lightweight cross-attention decoder.
For each point on the epipolar line, we embed the target ray origin \(\mathbf{o}_{t}\), target ray direction \(\mathbf{r}_{t}\), depth with respect to the target ray origin \(d_{t}\), and context camera ray direction \(\mathbf{r}_{c}\) for the epipolar point into a ray query token \(\mathbf{q}\) via a shallow MLP as \(\Phi([\mathbf{o}_{t},\mathbf{r}_{t},\mathbf{r}_{c},d_{t}])\). The \(2N\) ray feature values are independently transformed into key and value tokens using a 2-layer MLP. Our renderer now performs two rounds of cross-attention over this set of features to obtain a final feature embedding, which is then decoded into color via a small MLP.
The expectation of the Softmax distribution over the sampled features gives a rough idea of the scene depth as \(e=\sum_{k}d_{k}\alpha_{k}\), where \(d_{k}\) denotes the depth of the \(k\)-th epipolar ray sample along the target ray and \(\alpha_{k}\) is the corresponding Softmax weight as computed by the cross-attention operator. Note that \(e\) is not the actual depth but a measure of which epipolar samples the renderer uses to compute the pixel color. Unlike volume rendering, where we need to sample very close to a surface to render its color, our light field-based renderer can reason about the surface without exactly sampling on it. The learned cross-attention layers can use the target camera ray information, along with a sparse set of epipolar samples, to compute the pixel color. Thus, our method does not require explicit computation of accurate scene depth for rendering.
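The following PyTorch-style sketch illustrates this renderer: a query token built from the target ray cross-attends twice over the epipolar samples, and the attention weights of the final round give the expected depth \(e=\sum_{k}d_{k}\alpha_{k}\). The layer sizes, the use of a single query per ray, and the exact token parameterization are simplifying assumptions rather than the precise design.

```python
import torch
import torch.nn as nn

class CrossAttentionRendererSketch(nn.Module):
    """Sketch: decode a pixel color from 2N epipolar samples via cross-attention."""
    def __init__(self, feat_dim=256, dim=128, heads=4):
        super().__init__()
        # query from [target origin (3), target dir (3), context dir (3), depth (1)]
        self.to_query = nn.Sequential(nn.Linear(10, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.to_kv = nn.Sequential(nn.Linear(feat_dim + 1, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.attn1 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn2 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_rgb = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 3))

    def forward(self, ray, feats, depths):
        # ray: (B, 10); feats: (B, 2N, feat_dim) epipolar features; depths: (B, 2N)
        q = self.to_query(ray)[:, None, :]                              # (B, 1, dim)
        kv = self.to_kv(torch.cat([feats, depths[..., None]], dim=-1))  # (B, 2N, dim)
        q, _ = self.attn1(q, kv, kv)                                    # first round
        out, alpha = self.attn2(q, kv, kv)                              # second round
        expected_depth = (alpha.squeeze(1) * depths).sum(dim=-1)        # e = sum_k d_k * alpha_k
        return self.to_rgb(out.squeeze(1)), expected_depth
```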
### Training and Losses
We now have a rendered image from a novel camera viewpoint. Our loss function consists of two terms:
\[\mathcal{L}=\mathcal{L}_{\text{img}}+\lambda_{\text{reg}}\mathcal{L}_{\text{ reg}}\,. \tag{2}\]
The first term evaluates the difference between the rendered image from a novel camera viewpoint, \(R\) and the ground truth, \(G\) as:
\[\mathcal{L}_{\text{img}}=||R-G||_{1}+\lambda_{\text{LPIPS}}\mathcal{L}_{ \text{LPIPS}}(R,G)\,, \tag{3}\]
where \(\mathcal{L}_{\text{LPIPS}}\) is the LPIPS perceptual loss [60]. In practice, we render square patches with a length of \(32\) pixels and evaluate these image losses at the patch level.
We also use a regularization loss on the cross-attention weights of the renderer for better multi-view consistency:
\[\mathcal{L}_{\text{reg}}=\sum_{(u,v)}\sum_{(u^{\prime},v^{\prime})\in\mathcal{N}(u,v)}\left(e(u,v)-e(u^{\prime},v^{\prime})\right)^{2}\,. \tag{4}\]
Here, \(e(u,v)\) denotes the expected value of the depth of the epipolar samples at pixel \((u,v)\), and \(\mathcal{N}()\) defines the neighborhood around a pixel.
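A compact sketch of the full objective of Eqs. (2)-(4) is shown below. The LPIPS term is delegated to an external implementation (e.g. the `lpips` package), the loss weights are placeholder values, and the regularizer is written with horizontal and vertical neighbor differences as one instance of the neighborhood \(\mathcal{N}(u,v)\).

```python
def training_loss(rendered, target, expected_depth, perceptual_fn,
                  lambda_lpips=0.1, lambda_reg=0.01):
    """Sketch of Eqs. (2)-(4) on torch tensors. rendered/target: (B, 3, H, W) patches,
    expected_depth: (B, H, W); perceptual_fn is any LPIPS implementation."""
    l_img = (rendered - target).abs().mean() \
            + lambda_lpips * perceptual_fn(rendered, target).mean()
    # Eq. (4): penalize differences of the expected epipolar depth e(u, v)
    # between neighbouring pixels for better multi-view consistency.
    dx = expected_depth[:, :, 1:] - expected_depth[:, :, :-1]
    dy = expected_depth[:, 1:, :] - expected_depth[:, :-1, :]
    l_reg = (dx ** 2).mean() + (dy ** 2).mean()
    return l_img + lambda_reg * l_reg
```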
For better generalization, we further perform several geometrically-consistent data augmentations during the training procedure. We center crop and scale the input and target images, which leads to transformation in the intrinsics of the camera. We also flip the images which leads to transformation of the extrinsics.
## 4 Experiments
We quantitatively and qualitatively show that our approach can effectively render novel views from wide-baseline stereo pairs. We describe our underlying experimental setup in Section 4.1. Next, we evaluate our approach on challenging indoor scenes with substantial occlusions in Section 4.2. We further evaluate on outdoor scenes in Section 4.3. We analyze and ablate the underlying components in Section 4.4. Finally, we illustrate how our approach can render novel views of unposed images of scenes captured in the wild in Section 4.5.
### Experimental Setup
Datasets.We train and evaluate our approach on RealEstate10k [62], a large dataset of indoor and outdoor scenes, and ACID [22], a large dataset of outdoor scenes. We use 67477 scenes for training and 7289 scenes for testing for RealEstate10k, and 11075 scenes for training and 1972 scenes for testing for ACID, following default splits. We train our method on images at \(256\times 256\) resolution and evaluate methods on their ability to reconstruct intermediate views in test scenes (details in the supplement).
Baselines.We compare to several existing approaches for novel view synthesis from sparse image observations. We compare to pixelNeRF [59] and IBRNet [54] that use pixel-aligned features, which are decoded into 3D volumes rendered using volumetric rendering. We also compare to Generalizable Patch-based Rendering (GPNR) [48], which uses a vision transformer-based backbone to compute epipolar features, and a light field-based renderer to compute pixel colors. These baselines cover a wide range of design choices used in existing methods, such as pixel-aligned feature maps computed using CNNs [54, 59] and transformers [48], volumetric rendering by decoding features using MLPs [59] and transformers [54], and light field-based rendering [48]. We use publicly available codebases for all baselines and train them on the same datasets we use for fair evaluations. Please refer to the supplemental for comparisons to more baselines.
Evaluation Metrics.We use LPIPS [60], PSNR, SSIM [55], and MSE metrics to compare the image quality of rendered images with the ground truth.
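For reference, MSE and PSNR can be computed directly as below; SSIM and LPIPS are taken from standard implementations (e.g. scikit-image and the `lpips` package) and are not reproduced here.

```python
import numpy as np

def mse(pred, gt):
    """Mean squared error between images with values in [0, max_val]."""
    return float(np.mean((np.asarray(pred) - np.asarray(gt)) ** 2))

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB."""
    return float(10.0 * np.log10(max_val ** 2 / mse(pred, gt)))
```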
### Indoor Scene Neural Rendering
We first evaluate the ability of our approach and baselines to render novel views in complex indoor environments with substantial occlusions between objects.
Qualitative Results.In Figure 3, we provide qualitative results of novel view renderings of our approach, compared to each of our baselines. We provide additional novel view results of our method in Figure 4. Compared to the baselines, our approach reconstructs the 3D structure of the scene better, and also captures more high-frequency details.
Quantitative Results.We quantitatively evaluate our approach and baselines in Table 1. We find that our approach substantially outperforms each compared baseline in terms of all of our metrics.
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Method & LPIPS \(\downarrow\) & SSIM \(\uparrow\) & PSNR \(\uparrow\) & MSE \(\downarrow\) \\
\hline
pixelNeRF [59] & 0.591 & 0.460 & 13.91 & 0.0440 \\
IBRNet [54] & 0.532 & 0.484 & 15.99 & 0.0280 \\
GPNR [48] & 0.459 & 0.748 & 18.55 & 0.0165 \\
Ours & **0.262** & **0.839** & **21.38** & **0.0110** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: **Novel view rendering performance on RealEstate10K.** Our method outperforms all baselines on all metrics.
Figure 4: **Novel view renderings of our approach given a large baseline stereo pair.** Our approach can synthesize intermediate views that are substantially different from input images, even with very limited overlap between images.
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Method & LPIPS \(\downarrow\) & SSIM \(\uparrow\) & PSNR \(\uparrow\) & MSE \(\downarrow\) \\
\hline
pixelNeRF [59] & 0.628 & 0.464 & 16.48 & 0.0275 \\
IBRNet [54] & 0.385 & 0.513 & 19.24 & 0.0167 \\
GPNR [48] & 0.558 & 0.719 & 17.57 & 0.0218 \\
Ours & **0.364** & **0.781** & **23.63** & **0.0074** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: **Novel view rendering performance on ACID.** Our method outperforms all baselines on all metrics.
Figure 3: **Comparative Rendering Results on RealEstate10k.** Our approach can render novel views of indoor scenes with substantial occlusions with high fidelity using a wide-baseline input image pair, outperforming all baselines. Note that many points of the 3D scene are only observed in a single image in such inputs. Our method can correctly reason about the 3D structures from such sparse views.
### Outdoor Scene Neural Rendering
We further evaluate on outdoor scenes with potentially unbounded depth.
Qualitative Results.We illustrate qualitative results in Figure 5. In comparison to the baselines, our approach is able to more accurately reconstruct the geometry, and is able to synthesize multi-view consistent renderings from two large baseline views.
Quantitative Results.Similar to indoor scenes, our approach also outperforms all baselines in terms of all metrics on outdoor scenes, see Table 2.
### Ablations and Analysis
We next analyze and ablate individual components of our approach. We use the RealEstate10k dataset for these experiments.
Ablations.We evaluate the importance of different components of our method in Table 3. The "Base Model" corresponds to a vanilla architecture that does not include some of our proposed contributions. It samples points uniformly in 3D, instead of our proposed 2D epipolar line sampling. It uses a monocular encoder instead of our proposed multi-view encoder, and does not use correspondence matching across views for refining the geometry. It also does not use the regularization loss for multi-view consistency or any data augmentation during training. We find that all components of our approach are essential for high-quality performance. The results in Table 3 show that sampling in 3D sub-optimally uses the information in the feature maps, that our multi-view encoder and cross-image correspondence matching can compute features that better encode the 3D scene structure compared to monocular encoders, and that data augmentation helps with generalization. While incorporating the regularization loss led to a slight decrease in PSNR, it improved multi-view consistency in the rendered video results and also improved both the LPIPS and SSIM perceptual metrics.
Speed.Next, in Figure 6, we study the relationship between rendering quality and rendering speed for all approaches. Our lightweight approach achieves the best trade-off, significantly outperforming all methods in terms of rendering quality, while being at-par with the most efficient baseline. By reducing the number of sampled epipolar points from \(64\) to \(48\) samples per image, we can further speed up our approach, outperforming all baselines both in terms of rendering speed and image quality.
Epipolar Attention.Finally, we visualize the underlying epipolar attention weights learned by our approach in Figure 7.
Figure 5: **Comparative Results on ACID. Our approach is able to render novels views with higher quality than all baselines.**
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
**Models** & LPIPS\(\downarrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & MSE\(\downarrow\) \\
\hline
Base Model & 0.452 & 0.735 & 18.11 & 0.0201 \\
+ 2D Sampling & 0.428 & 0.762 & 19.02 & 0.0159 \\
+ Cross Correspondence & 0.415 & 0.766 & 19.52 & 0.0142 \\
+ Multiview Encoder & 0.361 & 0.794 & 20.43 & 0.0132 \\
+ Regularization Loss & 0.358 & 0.808 & 19.84 & 0.0139 \\
+ Data Aug & 0.262 & 0.839 & 21.38 & 0.0110 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: **Ablations. All components of our proposed method are essential for high-quality novel view synthesis.**
Figure 6: **FPS vs PSNR. Our approach strikes the best trade-off between rendering quality and rendering speed. We can further reduce the number of Epipolar samples (“Ours (Faster)”), which makes our method faster than all baselines, while still significantly outperforming them in terms of rendering quality.**
The expected value of the depths of the epipolar samples can be seen as a proxy depth and corresponds roughly to the underlying geometry of the scene. This enables us to analyze the learned computation of our renderer.
### Novel View Synthesis from Unposed Images
Our method uses a wide-baseline stereo image as input with known relative pose between them. We show that our method can perform novel view synthesis even without the knowledge of this relative pose information. In this case, we utilize SuperGlue [40] to compute reliable pixel correspondences between the input images. Since we do not know the camera intrinsics for in-the-wild images, we use the average intrinsics of the RealEstate10k dataset and compute the Essential matrix from the correspondences using RANSAC [12]. We then compute the pose information from the essential matrix [17] and use it as input for our method. Note that the recovered translation is only defined up to a scale. Figure 8 demonstrates results on some in-the-wild scenes using images from the internet. Even in this unposed setting, our method can reason about the geometry of the scene by aggregating information across the sparse input views. This is an extremely challenging setting, and existing approaches for novel view synthesis from sparse views do not demonstrate any results on unposed images.
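A sketch of this pose-recovery step with OpenCV is shown below; it assumes matched keypoints (e.g. from SuperGlue) are already available, and `K` stands for the dataset-average intrinsics guess described above.

```python
import cv2
import numpy as np

def relative_pose_from_matches(pts1, pts2, K):
    """Recover an up-to-scale relative pose from matched keypoints.
    pts1, pts2: (M, 2) pixel coordinates; K: assumed 3x3 intrinsics."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t  # the translation t is only defined up to scale
```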
## 5 Discussion
While we have presented the first approach for novel view synthesis of scenes from very sparse input views, our approach still has several limitations. Our rendering results are not at the same quality as those obtained by methods that optimize on single scenes using more images. Learning priors that enable novel view synthesis from sparse views is a significantly more challenging problem compared to using a large number of input images, where 3D points are regularly observed in many images. Our approach takes a step towards photorealistic renderings of scenes using only sparse views. As our approach relies on learned priors, it does not generalize well to new scenes with very different appearances compared to the training scenes. However, our efficient approach lends itself to large-scale training on diverse datasets, in turn enabling reconstruction of diverse scenes. Finally, while our method, in theory, can be extended to take more than two input views, we have only experimented with two views as a first step towards very sparse multi-view neural rendering.
## 6 Conclusion
We introduce a method for implicit 3D reconstruction and novel view synthesis from a single, wide-baseline stereo pair, trained using only self-supervision from posed color images. By leveraging a multi-view encoder, an image-space epipolar line feature sampling scheme, and a cross-attention based renderer, our method surpasses the quality of prior art on datasets of challenging scenes. Our method further strikes a compelling trade-off between rendering speed and quality, rendering novel views significantly faster than most prior methods. Meanwhile, leveraging epipolar line geometry offers a compelling balance between structured and generalist learning paradigms, enabling us to train our method on real-world datasets such as RealEstate10k. We believe that this work will inspire the community towards further exploring the regime of extreme few-shot and generalizable novel view synthesis.
**Acknowledgements.** This work was supported by the National Science Foundation under Grant No. 2211259, and by the Singapore DSTA under DST000ECI20300823 (New Representations for Vision). Yilun Du is supported by a NSF Graduate Research Fellowship.
Figure 8: **Novel View Synthesis from Unposed Images. Our approach can also render novel views using two unposed images captured in the wild. Note that parts of the scene only visible in one of the images can be correctly rendered from novel viewpoints.**
Figure 7: **Visualization of Epipolar Attention Weights. The expected value of the depths of the epipolar samples under the attention weights can be seen as a depth proxy. As our renderer is _not_ a volume renderer, these attention weights need not exactly correspond to the actual depth for correct renderings.**
|
2306.10455
|
Security of One-Way Entanglement Purification with Quantum Sampling
Against a Restricted Adversary
|
Entanglement purification protocols promise to play a critical role in the
future of quantum networks by distributing entanglement across noisy channels.
However, only the security of two-way purification protocols have been closely
studied. To address this, we propose a one-way entanglement purification
protocol which utilizes quantum sampling and prove its security against an
adversary restricted to single qubit Pauli gates. This is done through
leveraging the equivalence of one-way entanglement purification protocols with
error-correcting codes. To prove the security of this protocol, we first use
the quantum sampling framework introduced by Bouman and Fehr to estimate the
Hamming weight of the qubits which passed through the channel and then use the
estimated relative Hamming weight $\omega$ to determine the amount of
interference that Eve has subjected to the quantum channel. Since Eve is
restricted to single qubit Pauli gates, the number of applied gates can be
directly estimated using the Hamming weight. Estimating the number of
adversarial single qubit gates, allows us to perform error correction and
disentangle the logical qubit from Eve with probability
$1-\epsilon_{qu}^\delta$. Since this protocol allows communication only in one
direction, the distance of the code must be decided before transmission, and
therefore Bob will be forced to abort the protocol if he finds that Eve has
applied more gates than the code can correct. One-way protocols may find use
when communication is limited, or when we desire to decrease latency compared
to the multiple rounds of communication needed in two-way protocols. Further
research may investigate the security of this protocol against arbitrary single
or multi-qubit gates to obtain security guarantees against a more general
adversary.
|
Cameron Cianci
|
2023-06-18T02:29:21Z
|
http://arxiv.org/abs/2306.10455v1
|
# Security of One-Way Entanglement Purification with Quantum Sampling Against a Restricted Adversary
###### Abstract
Entanglement purification protocols promise to play a critical role in the future of quantum networks by distributing entanglement across noisy channels. However, only the security of two-way purification protocols have been closely studied. To address this, we propose a one-way entanglement purification protocol which utilizes quantum sampling and prove its security against an adversary restricted to single qubit Pauli gates. This is done through leveraging the equivalence of one-way entanglement purification protocols with error-correcting codes. To prove the security of this protocol, we first use the quantum sampling framework introduced by Bouman and Fehr to estimate the Hamming weight of the qubits which passed through the channel and then use the estimated relative Hamming weight \(\omega\) to determine the amount of interference that Eve has subjected to the quantum channel. Since Eve is restricted to single qubit Pauli gates, the number of applied gates can be directly estimated using the Hamming weight. Estimating the number of adversarial single qubit gates, allows us to perform error correction and disentangle the logical qubit from Eve with probability \(1-\epsilon_{qu}^{\delta}\). Since this protocol allows communication only in one direction, the distance of the code must be decided before transmission, and therefore Bob will be forced to abort the protocol if he finds that Eve has applied more gates than the code can correct. One-way protocols may find use when communication is limited, or when we desire to decrease latency compared to the multiple rounds of communication needed in two-way protocols. Further research may investigate the security of this protocol against arbitrary single or multi-qubit gates to obtain security guarantees against a more general adversary.
## 1 Introduction
Entanglement is a computational resource in quantum information science with no classical counterpart, and it has found numerous uses throughout the field. Two parties with entangled qubits can transmit perfectly secure quantum information through quantum teleportation [1, 2] or perfectly secure classical information through superdense coding [3]. Parties with access to entanglement can
use quantum correlations to succeed at games such as "magic squares" more than is classically possible [4]. Quantum entanglement has also found use in quantum processes such as distributed quantum computation [5] and quantum cookies [6]. Motivated by these uses, the topic of how to securely distribute quantum states has recently gained interest, including through investigation of quantum secure direct communication [7, 8, 9] and secure quantum dialogues [10, 11].
In contrast to the quantum secure direct communication and quantum dialogue protocols above, which attempt to distribute entanglement through Bell pairs, the protocol proposed in this paper will attempt to distribute entanglement through error-correcting codes. Due to this focus on distributing entanglement, we will allow Alice and Bob the classical resource of a shared secret key prior to the start of the protocol. In this protocol, we will also utilize entanglement purification protocols, which were historically introduced to distill high-fidelity entangled states from quantum states travelling through a noisy channel [12]. However, since an eavesdropper can be viewed as a source of noise in the quantum channel, entanglement purification protocols also work to remove the effect of an adversary on the transmitted message. Entanglement purification protocols have been investigated for use in high-fidelity quantum communication [13], and previously explored two-way protocols promise strong security for transferring quantum information [14].
### Entanglement Purification Protocols
There are two different classes of entanglement purification protocols, one-way and two-way protocols [6]. Two-way entanglement purification protocols allow for communication between Alice and Bob after qubits pass through the quantum channel. Since Alice and Bob can conditionally apply gates to their systems based on each other's measurement results, two-way entanglement purification protocols are in general stronger than one-way protocols. For example, two-way purification protocols allow Alice and Bob to purify their qubits from a 50% depolarizing channel, which one-way protocols cannot correct [6]. Two-way purification protocols have previously been proven secure by demonstrating that a two-way protocol can disentangle the purified system from an external system or eavesdropper [14]. This raises the question of whether one-way protocols can similarly be proven secure.
In comparison to two-way protocols, one-way entanglement purification protocols do not allow for communication between parties after the message is sent. Restriction to one-way communication incidentally makes these protocols equivalent to quantum error correction since Alice and Bob can be time-like separated [6]. Due to this, we can use quantum error-correcting codes to evaluate the security of one-way entanglement purification protocols [15]. The correspondence between one-way entanglement purification protocols and error-correcting codes has a prior basis in the literature, as it has previously been used by Shor and Preskill to prove the BB84 QKD protocol secure [16]. Investigating the security of one-way protocols may be useful in scenarios where communication between two parties is limited, or when we desire to decrease the latency of quantum communication compared to the multiple rounds of classical communication
needed between Alice and Bob in two-way protocols.
In one-way entanglement purification, Bob is forbidden from sending messages to Alice. This presents a problem for quantum sampling, as this restriction requires Bob to know the states in which the sampling qubits were prepared in order to perform sampling and determine the Hamming weight of the sampling qubits. If Alice naively announces the prepared sampling states over a classical channel, then Eve can simply intercept all the qubits and use this information to resend identical qubits to Bob, circumventing the quantum sampling procedure. Fundamentally, this problem occurs because in one-way protocols the actors Bob and Eve are symmetric [6]. To solve this problem, we will assume that Alice and Bob share a secret classical key, breaking this symmetry.
A shared classical key will allow Alice to send a secure classical message to Bob. This message will contain many critical pieces of information for the protocol, including the prepared sampling states, the permutation which Alice applied to the transmitted qubits, and the distance of the error-correcting code which Alice employed. At this point in the protocol, Bob will be able to perform quantum sampling and estimate the relative Hamming weight \(\omega\) of Eve's attack. Bob will then be able to use the relative Hamming weight to estimate the number of gates Eve has applied to the logical qubit, if we assume Eve is restricted to single qubit Pauli gates. Finally, Bob can calculate if the error-correcting code distance is large enough to disentangle Eve. If the code is sufficiently large, then Bob performs error correction and keeps the resulting logical qubit. Otherwise, the code distance is too small and Bob aborts.
## 2 Sampling
Before presenting the protocol we should first familiarize ourselves with quantum sampling. The quantum sampling framework used here was introduced by Bouman and Fehr [17], and we refer to their paper for a more rigorous introduction. This section is intended to be a broad overview to the framework put forth there.
Generically, sampling allows for an individual to learn information about a population through measuring a subset of that population. Classical sampling has been well studied [18]. However, due to entanglement in quantum systems it is not obvious how classical sampling strategies could be extended into quantum systems. Bouman and Fehr's quantum sampling framework addresses this by allowing for classical sampling strategies to be used in quantum systems. The sampling framework outputs an estimate of the Hamming weight of a quantum system. The definition of the Hamming weight is extended in this framework to entangled states and states in superposition [17].
From Bouman and Fehr's quantum sampling framework we will be interested in two quantities: the estimated relative Hamming weight \(\omega\) of the message qubits and the error bound of the quantum sampling process \(\epsilon_{qu}^{\delta}\). In Sections 5 and 6 we will use the relative Hamming weight \(\omega\) to estimate the number of gates applied by an eavesdropper, and we will use the error bound \(\epsilon_{qu}^{\delta}\) to estimate the probability that the sampling strategy has failed and Eve has gained access to the transmitted logical qubit without detection.
### Classical Sampling
As an introduction to sampling, let us consider a classical string \(q=(q_{1},q_{2},...q_{n})\in\{0,1\}^{n}\). The Hamming weight of this string is defined as \(wt(q)=|\{i|q_{i}\neq 0\}|\), or in simpler terms, the Hamming weight is the number of nonzero characters in this string. The role of sampling is to use a substring \(q_{t}\) to estimate the Hamming weight of the remaining string \(q_{\bar{t}}\). The sampling strategy employed will select a completely random subset of \(q\) to determine \(q_{t}\). With the sampled substring \(q_{t}\), the relative Hamming weight \(\omega(q_{t})=\frac{wt(q_{t})}{N}\) will be used as an estimate of the relative Hamming weight of the remaining string \(\omega(q_{\bar{t}})=\frac{wt(q_{\overline{t}})}{M}\), where \(N\) is the number of sampling qubits and \(M\) is the number of message qubits.
To find the failure probability of the sampling procedure, we will start by considering the set of all substrings that would output a relative Hamming weight which is \(\delta\)-close to the true relative Hamming weight.
\[B_{t}^{\delta}=\{\mathbf{q}\in\mathcal{A}^{n}:|\omega(q_{\overline{t}})- \omega(q_{t})|<\delta\} \tag{1}\]
This is the set of all strings for which the sampling procedure will succeed. From this set, it is clear that the probability that the sampling procedure fails, i.e. produces an estimate that differs from the true relative Hamming weight by more than \(\delta\), is equal to the probability that the string is not in the set \(B_{t}^{\delta}\).
\[\epsilon_{cl}^{\delta}=\max_{q\in\mathcal{A}^{n}}Pr[q\notin B_{t}^{\delta}] \tag{2}\]
Assuming we are randomly sampling \(k\) entries [17], we find,
\[\epsilon_{cl}^{\delta}<4\exp\biggl{\{}-\frac{1}{3}\delta^{2}k\biggr{\}} \tag{3}\]
Therefore, with probability \(1-\epsilon_{cl}^{\delta}\), classical sampling yields an estimate \(\omega(q_{t})\) that is \(\delta\)-close to the true relative Hamming weight \(\omega(q_{\bar{t}})\) of the remaining string. From this point on we will simply refer to the estimated relative Hamming weight as \(\omega\).
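As a concrete illustration (the function names below are illustrative), the estimate and the bound of Eq. (3) can be written as:

```python
import numpy as np

def estimated_relative_hamming_weight(measured, prepared):
    """omega(q_t): fraction of sampled positions where the measured outcome
    differs from the prepared state (both given as equal-length 0/1 arrays)."""
    measured, prepared = np.asarray(measured), np.asarray(prepared)
    return float(np.mean(measured != prepared))

def epsilon_classical(delta, k):
    """Eq. (3): bound on the probability that the estimate is off by more
    than delta when k positions are sampled uniformly at random."""
    return 4.0 * np.exp(-delta ** 2 * k / 3.0)
```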
### Quantum Sampling
The quantum version of sampling naturally extends from classical sampling in Bouman and Fehr's framework, except that sampling is performed in both the \(X\) and \(Z\) bases. Classical sampling along these two non-orthogonal bases allows for us to estimate the Hamming weight of the quantum system while still using well-studied classical sampling methods. The main result of Bouman and Fehr's paper shows that the error bound of a quantum sampling protocol is simply the square root of the error bound of the underlying classical sampling protocol utilized [17].
\[\epsilon_{qu}^{\delta}=\sqrt{\epsilon_{cl}^{\delta}} \tag{4}\]
Using this, we find the quantum error bound for a randomly sampled substring to be,
\[\epsilon_{qu}^{\delta}<2\exp\biggl{\{}-\frac{1}{6}\delta^{2}k\biggr{\}} \tag{5}\]
Through this equation, Bouman and Fehr's framework allows for classical sampling methods to be applied to quantum systems. This finding has allowed this framework to aid in proving the security of Quantum Key Distribution [17] and Quantum Random Number Generators [19, 20], as well as in deriving lower bounds on the quantum conditional min entropy of high dimensional systems [21].
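Equation (5) translates to the following helper (the name is illustrative), which will be used to bound the failure probability of the protocol below:

```python
import numpy as np

def epsilon_quantum(delta, k):
    """Eq. (5): quantum sampling error bound,
    sqrt(4*exp(-delta**2*k/3)) = 2*exp(-delta**2*k/6)."""
    return 2.0 * np.exp(-delta ** 2 * k / 6.0)
```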
To illustrate quantum sampling with an example, consider the situation in Figure 1. Focusing on the first qubit, Alice prepared a sampling qubit in the \(|1\rangle\) state and sent it to Bob over a quantum channel. However, Eve tampered with the qubit in the channel, and when Bob measured this qubit he found it in the \(|0\rangle\) state. In this case, Bob can deduce that an error has occurred. Now considering all the sampling qubits sent, Bob can obtain a more accurate estimate of the influence of the quantum channel. With this information, when Alice sends additional message qubits along with these sampling qubits, Bob can use the quantum sampling to estimate the total error of this quantum message in the form of the Hamming weight, \(M\omega\).
When analyzing the proposed protocol in Section 5, \(\omega\) will be used to determine the number of gates Eve has applied, and \(\epsilon_{qu}^{\delta}\) will be used to determine the failure probability of the protocol. The value of \(\delta\) will be determined in Sections 5 and 6 by the Hamming weight, code distance, and gate set given to Eve.
## 3 Estimating Eve's Interference
We will now use the Hamming weight, along with restrictions on Eve's available gate set, to estimate the number of gates Eve has applied. Let us first consider the example of Eve
Figure 1: **Sampling and message qubits sent over a quantum channel. Eve applies Pauli gates randomly to the message and sampling qubits. Bob can estimate the number of Pauli gates applied to the message qubits through the sampling procedure.**
tampering with the quantum sampling procedure in Figure 1. Alice began by preparing sampling qubits in the states \(\left|0\right\rangle\), \(\left|1\right\rangle\), \(\left|+\right\rangle\), and \(\left|-\right\rangle\) and sent these sampling qubits along with some message qubits to Bob. After Bob received these qubits, he measured the sampling qubits in the same basis Alice prepared and estimated the relative Hamming weight \(\omega\) of the remaining qubits. Bob can use this relative Hamming weight \(\omega\) to gain insight into the possible attacks Eve could have performed as follows.
The Hamming distance is defined as the number of qubits that Bob would measure in a state orthogonal to the one which Alice prepared. This change of state is caused by Eve's interference in the quantum channel, represented below by the operator \(E\). This implies a definition of the relative Hamming weight as follows,
\[\omega=\frac{1}{4}|\langle 1|\,E\left|0\right\rangle|^{2}+\frac{1}{4}|\langle 0 |\,E\left|1\right\rangle|^{2}+\frac{1}{4}|\langle+|\,E\left|-\right\rangle|^{2 }+\frac{1}{4}|\langle-|\,E\left|+\right\rangle|^{2} \tag{6}\]
Extending this equation to multiple qubits gives the Hamming weight for the sampling qubits as,
\[N\omega=\sum_{i}^{N}\frac{1}{4}|\langle 1|\,E_{i}\left|0\right\rangle|^{2}+ \frac{1}{4}|\langle 0|\,E_{i}\left|1\right\rangle|^{2}+\frac{1}{4}|\langle-|\,E_{i} \left|+\right\rangle|^{2}+\frac{1}{4}|\langle+|\,E_{i}\left|-\right\rangle|^{2} \tag{7}\]
By multiplying the relative Hamming weight by the number of message qubits \(M\), we can determine the estimated Hamming weight of the message qubits,
\[M\omega=\frac{M}{N}\sum_{i}^{N}\frac{1}{4}|\langle 1|\,E_{i}\left|0\right\rangle |^{2}+\frac{1}{4}|\langle 0|\,E_{i}\left|1\right\rangle|^{2}+\frac{1}{4}| \langle-|\,E_{i}\left|+\right\rangle|^{2}+\frac{1}{4}|\langle+|\,E_{i}\left|- \right\rangle|^{2} \tag{8}\]
Given \(\omega\), we will use this equation to gain insight into the operators \(E_{i}\) in Sections 5 and 6. In these sections we will find that knowledge of the gate set and the Hamming weight of the message qubits \(M\omega\) can be used to estimate the total number of gates applied.
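The per-qubit contribution of Eq. (6) can be checked numerically; the short script below (with illustrative names) reproduces the relative Hamming weights derived in Section 5, namely \(1/2\) for Pauli \(X\) and \(Z\) and \(1\) for Pauli \(Y\).

```python
import numpy as np

# sampling states |0>, |1>, |+>, |->
s0, s1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
sp, sm = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)

def omega_single_qubit(E):
    """Relative Hamming weight contributed by a single-qubit operation E, per Eq. (6)."""
    return 0.25 * (abs(s1.conj() @ E @ s0) ** 2 + abs(s0.conj() @ E @ s1) ** 2
                   + abs(sm.conj() @ E @ sp) ** 2 + abs(sp.conj() @ E @ sm) ** 2)

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
for name, gate in (("X", X), ("Y", Y), ("Z", Z)):
    print(name, omega_single_qubit(gate))   # X: 0.5, Y: 1.0, Z: 0.5
```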
## 4 The Protocol
With quantum sampling and the Hamming weight of quantum systems better understood, we are now in a position to state the proposed one-way entanglement purification protocol, which is as follows,
1. Alice chooses a code distance \(d\) and encodes a logical qubit with this distance. She entangles this logical qubit with a local qubit, which she will keep.
2. Alice then concatenates many sampling qubits to this logical qubit and permutes her quantum registers. She sends all these qubits through the quantum channel.
3. Alice sends a classical message to Bob using a previously distributed shared secret key, informing him of the permutation, sampling states, and code distance.
4. Bob receives both the classical message and qubits from their respective channels.
5. Bob uses the classical information given by Alice to locate the sampling qubits and perform quantum sampling, obtaining an estimate of the relative Hamming weight \(\omega\).
6. Using the estimated Hamming weight \(M\omega\), Bob can additionally estimate the number of operations applied to the logical qubit. If the number of operations is less than that allowed by the code distance, then Bob performs error correction and keeps the logical qubit. Otherwise, Bob aborts.
Bob's prediction is correct with probability \(1-\epsilon_{qu}^{\delta}\). Bob can determine the value of \(\delta\) as is shown in the next section.
## 5 Security Against Pauli Gates
We will now prove the security of this protocol when Eve's gate set is restricted to single qubit Pauli gates. Starting from equation 8, with Eve's attacks restricted to single qubit Pauli gates \(E_{i}\in\{X_{i},Y_{i},Z_{i}\}\),
\[M\omega=\frac{M}{N}\sum_{i}^{N}\frac{1}{4}{\left|\left\langle 1\right|E_{i} \left|0\right\rangle\right|}^{2}+\frac{1}{4}{\left|\left\langle 0\right|E_{i} \left|1\right\rangle\right|}^{2}+\frac{1}{4}{\left|\left\langle-\right|E_{i} \left|+\right\rangle\right|}^{2}+\frac{1}{4}{\left|\left\langle+\right|E_{i} \left|-\right\rangle\right|}^{2} \tag{9}\]
From this equation, we find that the estimated Hamming weight of the sampled substring is at least equal to half of the number of applied gates. For example, applying Pauli \(X\) gates to all qubits gives,
\[M\omega=\frac{M}{N}\sum_{i}^{N}\frac{1}{4}{\left|\left\langle 1\right|X \left|0\right\rangle\right|}^{2}+\frac{1}{4}{\left|\left\langle 0\right|X \left|1\right\rangle\right|}^{2}+\frac{1}{4}{\left|\left\langle-\right|X\left|+ \right\rangle\right|}^{2}+\frac{1}{4}{\left|\left\langle+\right|X\left|- \right\rangle\right|}^{2} \tag{10}\]
From this we find that the relative Hamming weight of these applied \(X\)-gates is,
\[\omega=\frac{1}{2} \tag{11}\]
We obtain the same results from applying \(Z\)-gates as well. \(Y\)-gates can be detected from every sampling qubit and give a larger relative Hamming weight of \(\omega=1\). Therefore, given the adversary is restricted to Pauli gates, we can estimate the number of gates applied to the message qubits as twice the estimated Hamming weight, \(2M\omega\). Given that the error-correcting code can correct up to \(\frac{d-1}{2}\) errors, the code can remove Eve's influence if,
\[2M\omega\leq\frac{d-1}{2} \tag{12}\]
For a specific example, suppose that Alice prepared the logical qubit in the distance 5 surface code as is depicted in Figure 2. She sends this logical qubit through a quantum channel along with some sampling qubits. After passing through the quantum channel, Bob is able to use the sampling qubits to estimate the Hamming weight of the logical qubit as \(M\omega=1\). In this scenario, we can use this Hamming weight to estimate that Eve has applied 2 Pauli gates to the logical qubit. Through this, Bob can determine that the distance 5 surface code prepared by Alice will be able to remove these two gates applied by Eve.
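Bob's decision rule from Eq. (12) amounts to the following check (function names are illustrative):

```python
def estimated_pauli_gates(M, omega):
    """Worst-case estimate of the number of single-qubit Pauli gates applied to
    the M message qubits: each X or Z gate contributes only 1/2 to omega."""
    return 2 * M * omega

def bob_keeps_logical_qubit(M, omega, d):
    """Eq. (12): keep the logical qubit only if a distance-d code can correct
    the estimated number of errors; otherwise Bob aborts."""
    return estimated_pauli_gates(M, omega) <= (d - 1) / 2
```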
### Using the Error Bound \(\epsilon_{qu}^{\delta}\)
We must now address the probability that the sampling procedure has failed and Eve has by chance avoided the sampling qubits. This failure would allow for the true relative Hamming weight to be greater than \(\omega+\delta\) with probability \(\epsilon_{qu}^{\delta}\). This would in turn imply that Eve has applied greater than \(2M(\omega+\delta)\) Pauli gates to the logical qubit. To calculate this failure probability we must first determine the value of \(\delta\) which would cause the protocol to fail. As we are removing Eve's influence through an error-correcting code, the greatest value of \(\delta\) we can allow is the value which saturates the error-correcting code. In this way, \(\delta\) is set such that,
\[2M(\omega+\delta)=\frac{d-1}{2} \tag{13}\]
The true relative Hamming weight is less than this estimate of \(\omega+\delta\) with probability \(1-\epsilon_{qu}^{\delta}\). Recall from Section 2 that \(\epsilon_{qu}^{\delta}=2\exp\bigl{\{}-\frac{1}{6}\delta^{2}k\bigr{\}}\), where \(k\) is the number of sampling qubits, giving,
\[1-\epsilon_{qu}^{\delta}=1-2\exp\biggl{\{}-\frac{1}{6M^{2}}(\frac{d-1}{4}-M \omega)^{2}k\biggr{\}} \tag{14}\]
Figure 2: **The proposed protocol using the distance 5 surface code and some sampling qubits. In this depiction, Eve applies four X-gates in the quantum channel. Bob uses the sampling qubits to estimate the number of gates applied to the logical qubit and uses the surface code to remove Eve’s influence.**
Let us illustrate this with a numerical example. Alice sends the distance 5 surface code, along with 20000 sampling qubits to Bob. Bob then performs quantum sampling and estimates the Hamming weight of the message qubits as \(M\omega=\frac{1}{2}\). Since the employed code can correct up to \(\frac{5-1}{2}=2\) errors, Bob can now perform error correction, keep the logical qubit, and state that the protocol has succeeded with probability,
\[1-\epsilon_{qu}^{\delta}=1-2\exp\biggl{\{}-\frac{1}{12(25)^{2}}\cdot 20000\biggr{\}}=86.1\% \tag{15}\]
As can be seen through this example, this one-way protocol requires significantly more resources to distribute a single entangled qubit than two-way purification protocols. However, only one round of communication is necessary, decreasing the latency of the protocol at the cost of requiring more qubits.
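The threshold of Eq. (13) and the success probability of Eq. (14) can be packaged as a single helper (illustrative name); it returns zero when the estimated Hamming weight already saturates the code distance, i.e. when Bob must abort.

```python
import numpy as np

def success_probability_pauli(d, M, omega, k):
    """Eqs. (13)-(14): probability that sampling has not underestimated a
    Pauli-restricted adversary, for a distance-d code, M message qubits,
    estimated relative Hamming weight omega, and k sampling qubits."""
    delta = (d - 1) / (4 * M) - omega   # saturates 2*M*(omega + delta) = (d - 1)/2
    if delta <= 0:
        return 0.0                      # code distance too small: Bob aborts
    return 1.0 - 2.0 * np.exp(-delta ** 2 * k / 6.0)
```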
## 6 Security Against Qubit Measurements
Now that the security of the proposed protocol has been established against single qubit Pauli gates, we can explore the security of the protocol if we additionally allow Eve the ability to perform qubit measurements.
For example, consider a measurement resulting in finding the qubit in the \(\left|0\right\rangle\) state,
\[E_{i}=M_{\left|0\right\rangle}=\left|0\right\rangle\left\langle 0\right| \tag{16}\]
We obtain from Equation 7,
\[N\omega=\sum_{i}^{N}\frac{1}{4}\Bigl{(}|\langle 1|0\rangle\langle 0|0\rangle|^{2}+|\langle 0|0\rangle\langle 0|1\rangle|^{2}+|\langle-|0\rangle\langle 0|+\rangle|^{2}+|\langle+|0\rangle\langle 0|-\rangle|^{2}\Bigr{)} \tag{17}\]
This leads to the Hamming weight,
\[\omega=\frac{1}{4} \tag{18}\]
This similarly follows for measurements \(M_{\left|1\right\rangle}\) which result in \(\left|1\right\rangle\). Qubit measurements therefore increase the Hamming weight by \(\frac{1}{4}\). This makes measurements more difficult to detect with quantum sampling compared to a Pauli gate.
Equation 17 indicates that if we restrict Eve to measurements, she can now affect twice as many qubits while retaining the same Hamming weight as compared to Section 5. For a code with distance \(d\), we must ensure that the distance is large enough to correct double the number of tampered qubits. This changes the requirement in step 6 of the protocol from \(2M\omega\leq\frac{d-1}{2}\) to \(4M\omega\leq\frac{d-1}{2}\) and Bob's calculation of \(\delta\) to,
\[4M(\omega+\delta)=\frac{d-1}{2} \tag{19}\]
This changes the probability of success to,
\[1-\epsilon_{qu}^{\delta}=1-2\exp\biggl{\{}-\frac{1}{6M^{2}}(\frac{d-1}{8}-M\omega)^{2}k\biggr{\}} \tag{20}\]
By changing this constraint in the protocol, Alice and Bob will now be able to additionally guarantee security against qubit measurements.
## 7 Conclusion and Future Directions
In this paper, we have proposed a one-way entanglement purification protocol with quantum sampling and proved its security against a restricted adversary with access to Pauli gates and measurements. The security of this protocol follows straightforwardly from the properties of quantum error-correcting codes and the sampling framework utilized.
This protocol breaks the symmetry between Bob and Eve which is typically present in one-way entanglement purification protocols by utilizing a shared secret key. This secret key allows Bob to securely conduct quantum sampling without requiring him to send messages back to Alice. This protocol may allow for lower latency as compared to two-way error-correcting protocols, as less communication is needed between the two parties. However, more qubits are necessary in one-way protocols to achieve similar performance to two-way protocols.
The protocol proposed in this paper has so far only been proven secure against a significantly restricted adversary. Further research directions include examining the security of this protocol with respect to an adversary with access to arbitrary single- or multi-qubit gates. At first glance, this approach may seem futile when Eve is given access to infinitesimally small gates, as Eve could apply infinitesimally small rotation gates to each qubit while maintaining approximately zero Hamming weight. However, two facts help to mitigate the effectiveness of this approach for Eve. First, error-correcting codes discretize errors, and therefore would discretize the small gates Eve has applied, suppressing them from affecting the logical qubit. Second, the Eastin-Knill theorem states that an operator made of infinitesimally small transversal gates cannot be a fault-tolerant logical operator. Due to this, any single-qubit gates Eve employs must be of finite size to manipulate the logical qubit sent by Alice [22]. This indicates that there may be a lower limit to the size of the single-qubit gates which Eve can apply to improve her likelihood of eavesdropping.
However, as the Eastin-Knill theorem does not apply to multi-qubit non-transversal gates, it may be more difficult to prove the security of one-way entanglement purification against arbitrary multi-qubit attacks. This would require research into the lowest-Hamming-weight multi-qubit operator that can perform logical rotations on an error-corrected qubit. There has been relatively little research into constructing continuous logical operators on error-corrected qubits [22, 23], since fault-tolerant quantum computation can be achieved with finite gate sets such as Clifford+T [24]. Therefore, further examination of how to construct continuous logical operators may have applications in the security of this protocol against arbitrary multi-qubit attacks.
|
2307.08133
|
Testing whether gravity acts as a quantum entity when measured
|
A defining signature of classical systems is "in principle measurability"
without disturbance: a feature manifestly violated by quantum systems. We
describe a multi-interferometer experimental setup that can, in principle,
reveal the nonclassicality of a spatial superposition-sourced gravitational
field if an irreducible disturbance is caused by a measurement of gravity.
While one interferometer sources the field, the others are used to measure the
gravitational field created by the superposition. This requires neither any
specific form of nonclassical gravity, nor the generation of entanglement
between any relevant degrees of freedom at any stage, thus distinguishing it
from the experiments proposed so far. This test, when added to the recent
entanglement-witness based proposals, enlarges the domain of quantum postulates
being tested for gravity. Moreover, the proposed test yields a signature of
quantum measurement induced disturbance for any finite rate of decoherence, and
is device independent.
|
Farhan Hanif, Debarshi Das, Jonathan Halliwell, Dipankar Home, Anupam Mazumdar, Hendrik Ulbricht, Sougato Bose
|
2023-07-16T19:10:25Z
|
http://arxiv.org/abs/2307.08133v4
|
# Testing whether gravity acts as a quantum entity when measured
###### Abstract
A defining signature of classical systems is their in principle measurability without disturbance: a feature manifestly violated by quantum systems. We show that this can be used to test the non-classicality of the gravitational field generated by a source in quantum superposition. To this end, we describe a multi-interferometer experimental setup that can, in principle, reveal the non-classicality of a superposition-sourced gravitational field by showing that it is necessarily disturbed by a measurement of gravity. While one interferometer sources the field, the others are used to measure the gravitational field created by the superposition. The resulting measurement induced quantum update of the state (disturbance) is evidenced through spin measurement statistics. This test, when added to the recently proposed entanglement-witness based tests, enlarge the domain of quantum mechanical postulates being tested for gravity. Moreover, the proposed test yields a signature of quantum measurement induced disturbance for any rate of decoherence, and is device independent.
_Introduction:_ As far as contemporary experimental evidence is concerned, fundamental physics has been found to be described accurately as a hybrid of quantum field theories (all matter and three of the forces) and a classical theory of gravity (general relativity). However, matter sources gravity, and thus an open, fundamental, and age-old question is whether the gravitational field of a mass in a quantum superposition of distinct states is quantum or classical [1; 2; 3; 4; 5]. Creating large enough masses in such quantum superpositions of states is certainly a low-energy enterprise, although it remains very challenging [6; 7; 8; 9; 10; 11; 12; 13]. Thus there is an expectation that "ruling out" gravity as a classical field/curvature should be a somewhat easier, more compact and less resource-intensive experiment than much more ambitious endeavours to detect quantum corrections to gravitational interactions [14] or quanta (on-shell graviton clicks) [15; 16]. In this respect, major progress has been made recently, with the proposal to entangle two masses in quantum superpositions through their gravitational interaction [17; 18; 19]. Although the interaction between the masses is, to any degree of near-term testability, purely Newtonian, it can be easily argued that the generation of this entanglement between the masses necessitates a quantum superposition of geometries (as if this is disallowed, the generation of entanglement does not take place) [20]. Alternatively, in the weak-field limit (where these experiments reside), we can write gravity as a tensor field in Minkowski space-time which interacts locally with matter. Thus if it were entirely classical, we would only have Local quantum Operations and Classical Communications (LOCC) between two masses and would never be able to entangle them [21; 18]. Reasoning in yet another way, the gravitational curvature sourced by a mass needs to be operator-valued, rather than number-valued as in a classical theory, if it is to result in a direct coherent interaction between the masses [22]. Several other persuasive arguments have been put forward linking this class of experiments with the nonclassical nature of gravity more generally [23; 24; 25; 26; 27] and several variants of this class of experiments have been proposed [28; 29; 30; 31; 32; 33].
While the above tests, if found to yield entanglement between masses, will compel gravity to have a nonclassical description in the sense of obeying the quantum superposition principle, the whole of quantum mechanics is more than just that. Importantly, there is the _measurement postulate_. Measurement on a system (matter/field) means acquiring information about measurement outcomes by inspecting the state of a probe, which necessarily requires system-probe interaction. This enforces an instantaneous update of the state of the measured system in quantum mechanics. On the contrary, an ideal measurement on a classical system should not, _in principle_, alter the state of the system or its subsequent dynamics. This aspect of non-disturbing measurability can be viewed as a necessary ingredient for a large class of definitions of classicality [34]. This leads to the testable "Non-Disturbance Condition" (NDC) for classicality [35] (which goes by several names, e.g., no-signalling-in-time condition [36], or quantum witness [37]): The act of performing an intermediate measurement should not influence the statistics of outcomes of a subsequent measurement. Observing a discrepancy between intermediately measured and intermediately unmeasured statistics would thus be a signature of non-classicality. Nonetheless, in practice, any measurement performed on a classical system can disturb the system (classical disturbance). Crucially, this disturbance is not an inherent part of classical physics - one can arbitrarily reduce such classical disturbance by modelling the measurement appropriately. On the other hand, quantum measurement-induced disturbance is an intrinsic part of quantum theory, which fundamentally _cannot_ be eliminated by any means. This feature is central to our proposed test for the quantum measurement-induced disturbance in gravity, where the effects of possible classical disturbances must be decisively cancelled by design.
In this work, we extend the demonstration of nonclassicality of gravity in a direction _complementary_ to that of the demonstration of quantum superposition of geometries explored in earlier proposals [17; 18; 19]. Namely, we seek to show the applicability of the quantum measurement postulate to gravity or, put more sharply, quantum measurement-induced disturbance. To this end, we use the violation of the NDC, and will essentially present a way to obtain a detectable disturbance to a source mass by measuring only its gravitational field, at a distance, with a probe mass. We will exploit the fact that the masses are not in contact, so that the probe mass can only measure the gravitational field of the source mass. If a measurement of the gravitational field of the source mass cannot be performed, even in principle, without causing an update to the state of the source mass, it will imply that gravity cannot be measured without any disturbance. We will use large quantum superpositions as the measuring apparatus in order for the measurement to be informative enough. Adding this test to the entanglement-witness based tests [18; 19] will take us towards a more _complete_ demonstration of gravity as a quantum entity. Moreover, while in the entanglement-witness based tests [18; 19] a non-zero signature requires the rate of decoherence to be kept below a certain threshold [38; 39; 40], the nonclassicality observed here persists for any decoherence rate, requiring only an increasing number of experimental repeats as the decoherence rate increases.
_Schematics:_ We first present the general idea at a schematic level before proceeding to _how_ one would accomplish it. Consider the Mach-Zehnder interferometers presented in Figs.1(a) and (b). A mass described by quantum mechanics, but large enough to produce a detectable gravitational field at a proximal detector, is made to undergo interferometry with equal amplitudes in the arms (labelled here by quantum states \(|L\rangle\) and \(|R\rangle\)). We call this the source mass. The outputs at the end of the interferometry (which could be direct electromagnetic detection of the source mass) are labelled \(+\) and \(-\), while the relative phase \(\Delta\lambda\) between the arms is ensured to be \(0\). Note that nothing can be said about gravity from this setting of the experiment (Fig.1(a)) as no measurement of gravity takes place. Although we have assumed that the mass gravitates, nothing about the _form_ of its gravitational field is tested
Figure 1: A mass is prepared in a superposition of states \(|L\rangle\) and \(|R\rangle\) by passing it through an ideal Mach-Zehnder interferometer, while ensuring no interferometric phase difference between the arms (\(\Delta\lambda=0\)). (a) Given that no intermediate measurements are performed, the final detector outcome is certain to be \(+\): \(P_{+}=1\). The gravitational field sourced by the mass is not shown as it is not measured, and there is no way to infer the nature of the gravitational field sourced by the mass in this part of the experiment. (b) An intermediate measurement of the gravitational field sourced by the mass is performed by a detector with delocalised wavefunction over a support given by the coordinate \(x_{2}\). This intermediate measurement has two outcomes, each with equal probability. These outcomes, \(+\) or \(-\), are equivalent to measurement of gravitational potentials \(V_{R}(x_{2})\) or \(V_{L}(x_{2})\), each corresponding to the source mass being in the interferometric arms \(|R\rangle\) or \(|L\rangle\) respectively. When the \(+\) outcome is obtained (the measured gravitational potential for this case is shown by concentric circles in the figure), the probabilities of both the \(+\) and \(-\) in the final detector are \(1/2\). Hence, the joint probability of the \(+\) outcome in the intermediate detector followed by the \(+\) outcome in the final detector is \(P_{+,+}=1/4\).
in Fig.1(a) alone as it is an experiment on an isolated mass. However, the case of Fig.1(a) is then compared with another setting of the experiment, namely the case of Fig.1(b), where a gravitational field detector is placed proximal to the mass during the interferometry. In practice, as we will clarify, the most sensitive such detector will be another similar mass/masses undergoing interferometry/interferometries, and let us assume that the "probe" mass/masses have a delocalized wavefunction over a support given by the coordinate \(x_{2}\) as shown in Fig.1(b). This detector performs an intermediate measurement (midway during the interferometry) _of the gravitational potential_ of the source mass undergoing interferometry. It is crucial to ensure that the detector measures the gravitational potential rather than position of the interferometric mass itself by other means (i.e., via electromagnetic channels, interactions or scattered photons). Subsequently, a detection of the source mass is also made in the \(+\) and \(-\) outputs of the interferometer. If the system is described by a 'hybrid model' with quantum matter, but classical gravity, then, by definition, the measurement of gravity by the intermediate detector cannot cause any change in the final probabilities, i.e.,
\[P_{+}(\text{no intermediate meas})-P_{+}(\text{after intermediate meas})=0. \tag{1}\]
This is the NDC to be satisfied by any classical entity.
Now, let us consider the case where _everything_ (the mass along with its associated gravitational field) is quantum mechanical. In the scenario of Fig.1(a), assuming no measurements are made during the interferometry, and negligible decoherence from other sources, we should have the probability of detecting the \(+\) output \(P_{+}=1\). Next, we move to the scenario of Fig.1(b). By the formulation of the setup, the intermediate detector is able to distinguish between two possible values of the gravitational potential: Either \(V_{L}(x_{2})\) (the source mass being in the interferometric arm \(|L\rangle\), which we label as the \(-\) outcome) or \(V_{R}(x_{2})\) (the source mass being in the interferometric arm \(|R\rangle\), which we label as the \(+\) outcome), with the latter scenario being depicted in Fig.1(b). For the \(+\) outcome (occurring with probability \(1/2\)), we should follow the quantum measurement postulate and reset the state of the mass to the state \(|R\rangle\). This reset makes the probabilities of both the \(+\) and \(-\) outcomes in the final detector (after the interferometer) become \(1/2\). Thus, the probability of \(+\) in the intermediate detector followed by \(+\) in the final detector becomes \(P_{+,+}=1/4\). Similarly, the probability of \(-\) in the intermediate detector followed by \(+\) in the final detector becomes \(P_{-,+}=1/4\). Thus, a discrepancy will appear between the probabilities of the final detector's \(+\) outcome according to whether the intermediate detection was made or not:
\[P_{+}-P_{+,+}-P_{-,+}=\frac{1}{2}. \tag{2}\]
Eq.(2) is the _violation_ of Eq.(1), which indicates that gravity is not classical. Any violation seen even when other types of measurements are done on the gravitational field (e.g., a measurement projecting onto the \(|L\rangle\pm|R\rangle\) outcomes, as will be the case in our study) will still indicate its nonclassicality. However, we have to ensure that \(\Delta\lambda=0\) is still maintained while going from the case of Fig.1(a) to Fig.1(b) even though an extra intermediate detector is coupled, as otherwise the probability \(P_{+}\) can simply change due to an interferometric phase difference rather than due to the measurement. Later in the paper, when we introduce workable formulations of the interferometer, the intermediate and the final detectors, we will ensure that the intermediate detector methodology is such that \(\Delta\lambda=0\) is maintained.
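The bookkeeping behind Eqs.(1) and (2) can be reproduced with a few lines of linear algebra, treating the which-path degree of freedom as a two-level system; the sketch below is only an illustration of the probability counting above, not a model of the actual experiment, and the variable names are ours.

```python
import numpy as np

L, R = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (L + R) / np.sqrt(2)      # state in the interferometer with Delta lambda = 0

# Fig.1(a): no intermediate measurement; the "+" output port projects onto |+>.
P_plus = abs(plus @ plus) ** 2   # = 1

# Fig.1(b): an intermediate measurement distinguishing V_L from V_R collapses
# the state to |L> or |R>, each with probability 1/2.
P_joint = {}
for outcome, collapsed in (("-", L), ("+", R)):
    p_outcome = abs(collapsed @ plus) ** 2       # 1/2
    p_plus_final = abs(plus @ collapsed) ** 2    # 1/2
    P_joint[outcome] = p_outcome * p_plus_final  # 1/4 each

print(P_plus - sum(P_joint.values()))            # 0.5, the violation in Eq.(2)
```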
_Assumptions:_ In order to conclude the nonclassicality of gravity from the violation of Eq.(1), two assumptions have to be made:
1. The gravitational field (here we speak of the Newtonian field) of the source mass is _entirely_ determined by it. However, we are making _no assumption_ about _how_ the state of the mass determines its gravitational field. For example, it could be a hybrid model, with the classical value of the gravitational potential being determined by an expectation value of the mass density in the quantum state of the mass [41; 42] or a model where a quantum source produces a probabilistic classical gravity [43; 44; 4].
2. The probe mass is _at a distance_ from the source mass: it does not share the same support as the source mass and can _only_ directly measure its gravitational field (curvature); in other words, it measures the gravitational potential \(V(x_{2})\) created by the source mass at its position \(x_{2}\). With this assumption, the probe measures purely the gravitational field of the source mass, rather than the source mass directly (i.e., by other means, such as photon scattering or other electromagnetic interactions).
_NDC satisfaction examples in hybrid models:_ Any NDC violation in our experiment will rule out _all_ hybrid models (classical gravitational field sourced by quantum matter) as a classical gravitational field can, in principle, be measured without disturbance, which follows from the usual definition of classicality. We will exemplify this with two extreme instances of hybrid models:
(i) A Møller-Rosenfeld [41; 42] mean-field model in which the Newtonian gravitational potential is produced by the expectation value of the mass distribution \(\langle T_{00}\rangle\), which we denote as \(V_{0}(x_{2})\) at the position of the probe. In this case, for the setup of Fig.1(b) the probe will measure a gravitational potential \(V_{0}(x_{2})\), and thus not change the gravitational potential to either of the values \(V_{L}(x_{2})\) (sourced by the mass in the state \(|L\rangle\)) or \(V_{R}(x_{2})\) (sourced by the mass in \(|R\rangle\)). Thus \(P_{+}\) remains the same in both the measured and unmeasured cases, and there is no violation of the NDC condition (as there will be only one outcome \(V_{0}(x_{2})\) of the intermediate measurement, it can be arbitrarily set to \(+\) or \(-\)).
(ii) Any hybrid model in which gravitational field has a definite state with some probability [43; 44; 4]. For example, one such model involves spontaneous collapse of the matter wave function (even in the absence of any measurement) [2], implying the associated gravitational field acquiring different definite values with different probabilities. Thus, again, the \(P_{+}\) probability becomes equal in the setups of Figs.1(a) and (b) and no violation of the NDC condition is obtained.
Here we also emphasize the importance of both parts, Fig.1(a) and Fig.1(b), of the experiment. Fig.1(a) alone reveals nothing about the form of gravity during the interferometry as no gravitational field is measured at any stage. On the other hand, Fig.1(b) alone does not tell whether the source mass-gravity combination was _already_ in either \(|L\rangle,V_{L}(x_{2})\) or \(|R\rangle,V_{R}(x_{2})\) before the measurement, which simply read out the potential. Note that proposals of Fig.1(b) alone, without involving 1(a), have previously appeared in the literature [45].
In what follows, we will consider a specific implementation of the protocol in which two probes are utilised, both of which are masses with embedded spins (spin degrees of freedom are in one-to-one correspondence with the spatial degrees of freedom of the masses) undergoing interferometry. The spin degrees of freedom are introduced for a convenient realisation of the measurements of gravity [46; 47].
_Interferometric setup:_ We consider a specific arrangement in which a mass \(M\) (which is our source mass), with an embedded spin degree of freedom, is made to undergo a spin-dependent spatial interferometry (also called a Stern-Gerlach interferometry [11]). This replaces the Mach-Zehnder interferometer depicted in the schematic of Fig.1. The unmeasured case (corresponding to Fig.1(a)) of the experiment is performed only with this mass. The intermediate detector for measuring the gravitational field of the source mass (corresponding to Fig.1(b)) is realised by two successive massive-object probe-interferometers, each with mass \(m\), arranged in a geometrically parallel configuration with respect to the source-interferometer at some distance \(d\) away (where the use of two probe interferometers is to be justified shortly). The spatial superposition of the source mass is then closed and a projective measurement is performed on its embedded spin (this spin measurement replaces the final detector at the end of the Mach-Zehnder interferometry of Fig.1). A diagrammatic description of the protocol is given in Fig.2. We finally compare the statistics of the final spin measurement both with and without the intermediate gravitational field measurements in order to test the NDC.
All masses are prepared, held in spatial superposition, and recombined for completing interferometry through specific means, such as spin motion coupling. Here we refrain from committing to any particular mechanism while simply pointing out the extensive body of studies on how to create such superpositions [6; 7; 8; 9; 10; 11; 12; 13]. Note, the spins of each probe-interferometer are automatically decoupled from the source as soon as the spin of the probe is measured. Thus the same mass with its embedded spin can be re-used to realise the second probe-interferometry. In what follows, for clarity, we will label the masses in each probe-interferometry separately. Thus we let \(M_{i}\) and \(S_{i}\) denote the mass and embedded spin degrees of freedom of a given mass indexed by \(i\) according to whether the source system (\(i=1\)) or one of the two probe systems (\(i=2,3\) in sequence) is referenced.
First, we consider an idealised scenario in the absence of environmental decoherence effects (with full calculation details to be found in Appendix A). The initial state of mass and embedded spin degrees of freedom of the source at \(t=0\) is given by,
\[\left|\psi(t=0)\right\rangle=\left|C\right\rangle_{M_{1}}\otimes\frac{1}{ \sqrt{2}}(\left|\uparrow\right\rangle_{S_{1}}+\left|\downarrow\right\rangle_ {S_{1}}),\]
where \(\left|C\right\rangle_{M_{1}}\) is the initial localised state of the source mass \(M\) at the center of the axis of the source-interferometer. Over a time \(T\), the source mass is prepared in spatial superposition via the unitary evolution:
\[\left|C\right\rangle_{M_{1}}\otimes\left|\uparrow\right\rangle_{S _{1}}\rightarrow\left|L\uparrow\right\rangle_{1},\] \[\left|C\right\rangle_{M_{1}}\otimes\left|\downarrow\right\rangle_ {S_{1}}\rightarrow\left|R\downarrow\right\rangle_{1}. \tag{3}\]
In the above, the states \(\left|L\uparrow\right\rangle\) and \(\left|R\downarrow\right\rangle\) are separated by a distance \(\Delta x(t)\), which grows from the value \(0\) at \(t=0\) to attain a maximum at \(t=T\) with \(\Delta x(T)=\Delta x\). The first probe mass \(M_{2}\) (of mass \(m\)) with embedded spin \(S_{2}\) is then introduced and subjected to the same evolution as Eq.(3) with the subscript '1' being replaced by '2'.
With both superpositions fully prepared, the source and the probe now interact exclusively through gravity in a static geometrical arrangement for a time \(\tau\) before the spatial probe superposition is closed over a time \(T\)[11; 18]. Thus the total interaction time interval is \(2T+\tau\).
Figure 2: The gravitational field generated by the interferometric source mass (depicted in red) is measured sequentially by a pair of massive interferometric probes (depicted in blue), where the gravitational interactions are indicated by wavy lines. Finally, the source mass superposition is closed and a final measurement is performed on the embedded spin degree of freedom of the source mass. Statistics are compared between the two cases where the blue elements of the diagram are either introduced or not in order to quantify the disturbance due to the effective gravitational field measurements of the probes
When the source mass is in the state \(|L\uparrow\rangle_{1}\), the probe mass is affected by the gravitational potential \(V_{L}(x_{2})\) of the source mass and, for any time interval \(\delta t\) evolves as
\[e^{\frac{-imV_{L}(x_{2}=x_{L}(t))\delta t}{\hbar}}|L\uparrow\rangle_{2}+e^{ \frac{-imV_{L}(x_{2}=x_{R}(t))\delta t}{\hbar}}|R\downarrow\rangle_{2}, \tag{4}\]
where \(x_{2}=0\) is chosen at the centre of \(|L\uparrow\rangle_{1}\); \(x_{i}(t)\) is the distance of probe mass in the state \(|i\rangle\) from the source mass at any instant \(t\). Similar argument holds for the source in the state \(|R\downarrow\rangle_{1}\). Note that the probe mass is not affected by contact (or otherwise electromagnetically) with the source mass, but only being affected at a distance by the source's gravity (i.e., through the metric \(g_{00}=1+\frac{2V_{L}(x_{2})}{c^{2}}\), which is completely determined by the source mass). After closure of the interferometry of the probe, its spin state is decoupled from its spatial state which enables accessing the information about the relative phases accumulated between \(|L\uparrow\rangle_{2}\) and \(|R\downarrow\rangle_{2}\) during the above evolution from only the measurement of the spin. Accordingly, a projective measurement of the probe spin is now performed on this spin state in the \(|\pm\rangle_{S_{2}}=(|\uparrow\rangle_{S_{2}}\pm|\downarrow\rangle_{S_{2}})/\sqrt {2}\) basis. This projection results in a POVM on the source system (mass and its associated field). Since only the gravitational field of the source is in contact with the probe, we can say that it is essentially a measurement of gravity which has resulted in this POVM.
The first probe (system 2) is then discarded, and a new probe (system 3) is introduced. As before, the new probe now interacts with the source system via the gravitational field for a further time \(2T+\tau\) in an identical fashion before a projective measurement in the \(|\pm\rangle_{S_{3}}=(|\uparrow\rangle_{S_{3}}\pm|\downarrow\rangle_{S_{3}})/ \sqrt{2}\) basis is performed on the internal spin degree of freedom of system 3 at \(t=t_{1}=5T+2\tau\). The aforementioned argument implies that this measurement is also a measurement of gravity of the source. The second probe is then also discarded. Over a time \(T\), the spatial superposition of the source interferometer is now closed via the reversal of the unitary evolution in Eq.(3).
A final projective measurement of the embedded spin state of the source is then performed in the \(|\pm\rangle_{S_{1}}=(|\uparrow\rangle_{S_{1}}\pm|\downarrow\rangle_{S_{1}})/ \sqrt{2}\) basis at time \(t=t_{2}\) where \(t_{2}-t_{1}=T\). Specifically, this final measurement on the spin of the source is performed to infer the disturbance caused by the intermediate measurements on the gravitational field of the source. This measurement is not a measurement of gravity as no other mass is now present in the vicinity of the source. This measurement yields the following unnormalised states of the source conditioned on the outcomes of the three measurements:
\[|\psi_{a,b,c}(t_{2})\rangle=\frac{1}{8}\bigg{[}\bigg{(}1+ae^{i \Delta\phi}\bigg{)}\bigg{(}1+be^{i\Delta\phi}\bigg{)}\\ +ce^{2i\Delta\phi}\bigg{(}1+ae^{-i\Delta\phi}\bigg{)}\bigg{(}1+ be^{-i\Delta\phi}\bigg{)}\bigg{]}\left|C\right\rangle_{M_{1}}\left|c\right\rangle_{S_{1}},\]
where \(a,b,c\in\{+,-\}\) denote the outcomes (\(+\) or \(-\)) of the first and second probe measurements followed by the final measurement on the source spin respectively. From the above, the joint probabilities \(P_{a,b,c}\) are readily obtained from the norms of these states. Here, \(\Delta\phi=\Delta\phi_{\tau}+2\Delta\phi_{T}<0\) is the relative phase accumulated between the diagonally opposite (\(LR\) and \(RL\)) and adjacent (\(LL\) and \(RR\)) arms of the source and each of the probe interferometers over their total interaction time duration \(2T+\tau\). Of its constituent parts, \(\Delta\phi_{T}\) represents the relative phase accumulated during the opening or the closing of the spatial probe superposition, with its expression being somewhat elaborate (given in Appendix A), while \(\Delta\phi_{\tau}\) is associated with the relative phase development for the duration \(\tau\) when the spatial superpositions of source and probe are held in a static geometrical arrangement and is given by,
\[\Delta\phi_{\tau}=\frac{GMm\tau}{\hbar\sqrt{d^{2}+(\Delta x)^{2}}}-\frac{GMm \tau}{\hbar d}. \tag{5}\]
Let us now consider the same scenario as described above, except that the probes are not introduced, and thus no intermediate measurement takes place prior to the final measurement on the source spin at \(t=t_{2}\). In this case, the probabilities of the final measurement outcomes are given by,
\[P_{+}=1,\hskip 14.226378ptP_{-}=0.\]
Thus the violation of the NDC is given by,
\[V(\pm)=P_{\pm}-\sum_{a,b\in\{\pm\}}P_{a,b,\pm}=\pm\frac{1}{2}\sin^{2}\Delta\phi. \tag{6}\]
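As a numerical consistency check, the joint probabilities obtained from the unnormalised amplitudes above reproduce Eq.(6); the short sketch below evaluates both sides for an arbitrary sample value of \(\Delta\phi\) (the helper and variable names are ours).

```python
import numpy as np

dphi = -0.73          # arbitrary sample value of Delta phi
signs = (+1, -1)

def amplitude(a, b, c, phi):
    """Unnormalised amplitude of |psi_{a,b,c}>, transcribed from the expression above."""
    return ((1 + a * np.exp(1j * phi)) * (1 + b * np.exp(1j * phi))
            + c * np.exp(2j * phi) * (1 + a * np.exp(-1j * phi)) * (1 + b * np.exp(-1j * phi))) / 8

P = {(a, b, c): abs(amplitude(a, b, c, dphi)) ** 2
     for a in signs for b in signs for c in signs}

# Without the probes P_+ = 1; with them, sum the "+" final outcome over probe results.
violation_plus = 1 - sum(P[(a, b, +1)] for a in signs for b in signs)
print(violation_plus, 0.5 * np.sin(dphi) ** 2)   # the two numbers agree, cf. Eq.(6)
```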
While classical disturbances on the source caused by the electromagnetic and other interactions due to the presence of the probe are to be eliminated through screening and other means, the gravitational interaction due to the presence of the probe itself can also be a source of classical disturbance. The use of two probes is chosen solely to eliminate this classical disturbance. Note that, as a comparison of measured and unmeasured cases is carried out in the computation of the NDC, decoherence effects that are common to both instances (i.e., effects not due solely to the presence of the probe) are effectively cancelled by taking the difference of the measurement statistics in the two cases.
_Why two probes:_ Quantum measurements, accompanied by an averaging over the outcomes, essentially cause a _dephasing_ of the source mass. This is mathematically equivalent to a probabilistic phase flip (relative phase going from \(0\) to \(\pi\)), with the probability of phase flip growing from \(0\) initially to \(1/2\) at infinite time (complete dephasing with all coherence lost). This is indeed the effect we want to see during intermediate measurements, and is at the core of violating the NDC condition. However, we should take care to rule out any additional deterministic phase (equivalent to \(\Delta\lambda\neq 0\)) as this can manifestly cause a quantitative change in measurement statistics of the source, even in the absence of the quantum measurement-induced collapse that we wish to observe. This phase can be interpreted as a classical Colella-Overhauser-Werner (COW) phase shift of the source mass [48] due to a common gravitational acceleration experienced by both \(|L\rangle\) and \(|R\rangle\) parts of its superposition. In this particular proposal, two separate probe measurements are employed to eliminate this classical contribution to the developed phase.
In order to close the loophole of observing a violation of the NDC in the absence of measurement-induced disturbance, we should only permit the following stochastic rotation (a consequence of decoherence solely due to the intermediate measurements) on the state of the source mass - \([(1+e^{-\beta t})/2\ \mathbb{I}+(1-e^{-\beta t})/2\ \sigma_{z}]\), instead of the following - \(\mathcal{R}_{z}(\theta)[(1+e^{-\beta t})/2\ \mathbb{I}+(1-e^{-\beta t})/2\ \sigma_{z}]\) (here \(t\) denotes the total time-scale of gravitational field measurement by the two probes and \(\beta\) denotes the rate of quantum measurement-induced decoherence of the source), which is a stochastic rotation with an additional deterministic rotation about the \(z\)-axis by some angle \(\theta\). This deterministic rotation comes into play due to the interferometric phase difference solely because of the presence of the probe, and the action of this deterministic rotation is independent of whether any quantum measurement has been performed. In Appendix B, we show that the above criterion is satisfied in the two-probe setup, as opposed to what is seen in the case of a single probe. We remark that there may be other techniques to eliminate the effect of such classical disturbances, while here we have used this double-probe setup as one simple feasible solution.
_Interpretation of NDC Violation as Nonclassicality of Gravity:_ Under our assumptions, the masses do not directly interact with each other; they interact only indirectly through the gravitational field. We remark that, while this calculation was carried out under the application of an instantaneous, manifestly non-local Newtonian field, this is merely to be considered a _calculational tool_ that yields outcomes consistent with the more complete description of gravitational phase accumulation within the framework of the effective field theory of linearised gravity [21, 22, 27]. For a _classical_ gravitational field, the probe measurements of gravity at a distance from the source mass should not alter the measurement statistics of the source mass. This logic holds for all classical descriptions of gravity. Thus, we conclude that the observation of source-state disturbance through its internal spin measurement statistics demonstrates a description of gravity that necessarily includes the state-update aspect of the quantum measurement postulate. That is, the probe measurements of the gravitational field change the state of the gravitational field. As the gravitational field is fully determined by the source (by assumption 1), the state of the source also changes due to the change in the state of its gravitational field. This is responsible for the violation of the NDC.
_Decoherence effects:_ In any full realisation of this protocol, as with all table-top experiments in this class, the effects of decoherence on the various superposition states due to environmental interactions must be carefully accounted for. Here, we take the natural position that, beyond laboratory environmental conditions, the rate of decoherence \(\Gamma[\Delta x(t)]\) of any superposition state at any instant depends exclusively on the superposition size \(\Delta x(t)\) at that time. The full calculation can be found in Appendix C. With these effects considered, the violation of the NDC is given by,
\[V(\pm)=\pm\frac{1}{2}\eta\sin^{2}\Delta\phi, \tag{7}\]
where,
\[\eta=\exp\biggl{(}-2\int_{0}^{T}\Gamma[\Delta x(t)]dt-\Gamma_{\text{Max}}(4T+ 2\tau)\biggr{)}.\]
As mentioned earlier, \(\Delta x(t)\) is taken to be maximised at \(t=T\) as it first achieves the desired final superposition size \(\Delta x\) at that time. Assuming reasonably that \(\Gamma\) is monotonically increasing as a function of a monotonically increasing \(\Delta x(t)\), it therefore achieves its maximum at \(t=T\) as well. Hence, using the maximal bound, the integral in the exponent above is crudely bounded above by \(\Gamma_{\text{Max}}T\), where \(\Gamma_{\text{Max}}=\Gamma[\Delta x(t=T)]\). Thus we have
\[\frac{1}{2}e^{-\Gamma_{\text{Max}}t_{2}}\sin^{2}\Delta\phi\leq|V(\pm)|\leq \frac{1}{2}\sin^{2}\Delta\phi, \tag{8}\]
where the upper bound corresponds to the decoherence-free case; and the lower bound reflects the case of maximal overestimation of decoherence effects during preparing and closing the superpositions by assuming that \(\Gamma[\Delta x(t)]=\Gamma_{\text{Max}}\) for all \(t\), whereas in reality \(\Gamma[\Delta x(t)]\leq\Gamma_{\text{Max}}\) for all \(t\).
We observe that the violation of the NDC persists for any rate of decoherence, although it becomes exponentially damped by the total duration of the experiment and the rate of decoherence. Notably, when the decoherence rate exceeds a critical value defined by the rate of relative phase accumulation, the gravity induced entanglement witness protocol [17, 18, 19] is no longer effective [38, 39, 40]. However, a quantum violation of the NDC persists in the present protocol for any decoherence rate. This is a consequence of the fact that the joint state of the source-probes-environment remains entangled for any decoherence rate (implying the possibility of disturbance of the gravitational field due to measurements by the probes), whereas the reduced state of the source-probes (after tracing out the environment) becomes separable.
_Parameter Regimes:_ To exemplify, let us consider the parameter regime with \(M,m\sim 10^{-14}\) kg, and closest approach of the masses \(d\sim 157\mu\)m (which is within the regime in which gravity is significantly stronger than the electromagnetic interactions between neutral masses, namely the Casimir-Polder interaction) [39]. Pressures, temperatures and inertial noises required to keep the effective decoherence well within limits for various superposition sizes have been discussed elsewhere [18, 39, 49]. In the limit of negligible decoherence, a large NDC violation \(\gtrsim 0.4\) can be obtained for equal interaction and preparation times \(\tau,T\) in the range \(1.9\) s- \(3.2\) s by taking massive superpositions with widths \(\Delta x\sim 479\mu\)m (with the requirement reducing as far as \(\Delta x\gtrsim 215\mu\)m for the larger interaction times in this range) under the assumption of a linear-in-time preparation and closing of probe superpositions (see Appendix D for further details, including effects due to decoherence). Screening and trapping will typically allow one to reduce \(M,m,d\) and/or \(\Delta x\) by a few orders of magnitude [50], so that similar violations may be obtained with less demands. Moreover, in a practical sense it may be easier for experiments to reduce \(M,m\) and \(\Delta x\) at the price of increasing the number of runs. As observing an NDC violation effectively amounts to measuring probabilities, we can
measure a violation of \(0.01\) by attaining a precision finer than \(0.01\), which requires averaging the results of about \(10^{4}\) experimental runs.
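To make the scale of these numbers concrete, the static-arrangement phase of Eq.(5) can be evaluated directly; the sketch below is illustrative only, uses the masses, separation and superposition size quoted above, and neglects the opening/closing contribution \(2\Delta\phi_{T}\) (whose full expression is given in Appendix A), so it underestimates the total phase and hence the achievable violation.

```python
from math import sin, sqrt

G, hbar = 6.674e-11, 1.055e-34   # SI units
M = m = 1e-14                    # kg, source and probe masses quoted in the text
d = 157e-6                       # m, closest approach
dx = 479e-6                      # m, superposition size
tau = 3.2                        # s, static interaction time

# Eq.(5): phase accumulated while the superpositions are held statically.
dphi_tau = (G * M * m * tau / hbar) * (1 / sqrt(d**2 + dx**2) - 1 / d)

# Decoherence-free violation of Eq.(6), keeping only the static contribution.
print(dphi_tau, 0.5 * sin(dphi_tau) ** 2)   # roughly -0.89 rad and 0.30
```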
One may wonder whether the intermediate detector/s of mass \(m\) could have been some other gravitational sensor. For atomic interferometers, each \(\sim 10\) atomic mass unit atom gives a \(\Delta\phi\) about \(10^{-12}\) times smaller, so that there is no particular advantage in using them. In effect, by using an entire crystal, we are using a sensor where the motion of \(10^{12}\) such atoms is entangled (all moving one way or the other), and a similar correlated state of atoms would have to be generated - these are called NOON states. For much larger localized masses as probes, it is their acceleration difference due to the \(|L\rangle\) or \(|R\rangle\) states of the source which should be observed; this difference is exceptionally low, \(\sim 10^{-17}\) ms\({}^{-2}\), although perhaps not impossible to detect in view of recent advances [51].
_Conclusions:_ Quantum mechanics consists of the superposition principle (which, along with completeness of bases, results in its Hilbert space description), the unitarity of evolution, as well as the update of the state of a system when measured with probabilities of the measurement outcomes given by the Born rule. There is an existing proposal for testing the validity of the quantum superposition principle for gravity via witnessing the gravitationally generated entanglement between two masses [17; 18; 19] (that conclusion is justified under a minimal set of assumptions [20; 21; 22; 27]). Here, we have instead suggested a scheme which will _complement_ that test by showing that when gravity is measured, there is a necessary collapse/update of the state of a quantum system. As we are summing over the measurement-outcomes for the NDC witness, the measurement is equivalent to decoherence, but a decoherence which is controllably triggered only by the very act of measurement [52]. The coherent part of the experiment (Fig.1(a)) would be consistent with gravity being both quantum and classical (say, sourced as a mean field from the expectation value of a quantum source) as gravity is never measured, while the decoherent part of the experiment (Fig.1(b)) collapses it to one of the two classical alternatives. Thus, we cannot claim that the experiment described here witnesses the coherent superposition of geometries, and therefore the entanglement based tests are still necessary for testing that aspect of quantum gravity. On the other hand, here a violation of classicality by gravity is being tested, which is necessarily linked to quantum measurement. We should point out that conceptually our present work is different from [53] where the violation/non-violation of Leggett-Garg inequalities (a class of inequalities not violated by classical physics) is used to infer gravitational entanglement; in particular, the quantum disturbance due to measurement of gravity is not sought to be tested. This violation of NDC by gravity can in principle be seen even in the presence of arbitrarily large decoherence, with the tacit price of more experimental runs to detect smaller violation. This might make the measurement induced disturbance aspect of gravity more readily testable than verification of its ability to exist in quantum superpositions [18; 20].
Finally, note that the present protocol for testing non-classicality of gravity relies on the fact that one can, in principle, measure gravitational field without any disturbance in any hybrid model where the matter is described quantum mechanically, and the gravitational field sourced by the matter behaves classically. This naturally follows from the usual definition of classicality as measurement induced state update/disturbance is not an intrinsic feature of classical physics.
_Acknowledgements:-_ FH acknowledges support from the Engineering and Physical Sciences Research Council [grant number EP/L015242/1]. DD acknowledges the fruitful discussions with Lorenzo Braccini and the Royal Society (United Kingdom) for the support through the Newton International Fellowship (NIF\(\backslash\)R1\(\backslash\)212007). JH acknowledges useful conversations with Clement Mawby. DH acknowledges support from NASI Senior Scientist Fellowship and QuEST-DST Project Q-98 of the Government of India. HU would like to acknowledge support EPSRC through grants EP/W007444/1, EP/V035975/1 and EP/V000624/1, the Leverhulme Trust (RPG-2022-57), the EU Horizon 2020 FET-Open project TeQ (766900), and the EU EIC Pathfinder project QuCoM (10032223). SB would like to acknowledge EPSRC grant EP/X009467/1 and STFC grant ST/W006227/1.
|
2301.05941
|
Improving Confidentiality for NFT Referenced Data Stores
|
A non-fungible token (NFT) references a data store location, typically, using
a URL or another unique identifier. At the minimum, a NFT is expected to
guarantee ownership and control over the tokenised asset. However, information
stored on a third party data store may be copied and stolen. We propose a
solution to give control back to the information owner by storing encrypted
content on the data store and providing additional security against hacks and
zero day exploits. The content on our data store is never decrypted or returned
to its owner for decryption during rekeying. Also, the key size in our protocol
does not increase with each rekeying. With this, we reduce the synchronisation
steps and maintain a bounded key size.
|
Sarad Venugopalan, Heiko Aydt
|
2023-01-14T16:04:21Z
|
http://arxiv.org/abs/2301.05941v1
|
# Improving Confidentiality for NFT Referenced Data Stores
###### Abstract
A non-fungible token (NFT) references a data store location, typically, using a URL or another unique identifier. At the minimum, a NFT is expected to guarantee ownership and control over the tokenised asset. However, information stored on a third party data store may be copied and stolen. We propose a solution to give control back to the information owner by storing encrypted content on the data store and providing additional security against hacks and zero day exploits. The content on our data store is never decrypted or returned to its owner for decryption during rekeying. Also, the key size in our protocol does not increase with each rekeying. With this, we reduce the synchronisation steps and maintain a bounded key size.
NFT, Data Store, Confidentiality, Blockchain.
## I Introduction
Protecting the information on NFT referenced data stores is a pertinent problem. This is because information on a third party data store is easily copied, and we are unable to protect it [1]. This may result in theft, for example by issuing a fake NFT that points to a copy [2]. There is a class of NFT applications that requires the information owner to retain both ownership and control (over its information stored on a data store). To achieve varying degrees of control, we make the distinction between licensing and ownership sale. In the licensing business model, consumers must pay the information owner by sending monies/crypto coins to the NFT smart contract, to allow retrieval of information (and a licence with its terms of use) from a data store. Paying a NFT supplies its consumer with a licence to use the information but not sell it. The ownership is retained by its owner. For example, any user may freely view a low resolution art image on a data store but is required to pay the NFT to view its licenced and watermarked high resolution image. This gives serious buyers enough information to decide whether to buy the watermark-free high resolution image. The high resolution images (watermarked and non-watermarked) are stored encrypted on the data store. Ownership sale involves transferring the underlying digital asset token on the blockchain and supplying both high resolution image decryption keys to the new owner. This way the non-watermarked high resolution art image is never made public. An escrow account may be set up on a smart contract to ensure that this transaction is paid for and the decryption keys are received.
Another application is in the building and construction industry [3]. It may be useful for city planners and analytics companies, to have knowledge of recyclable and reusable material in a building [4]. This information may be collected and digitised by a building owner, and converted into a tokenised data asset [5]. A consumer of this information must pay the NFT to retrieve the requested information and licence from a data store. Other use cases may include tokenising the power (utility) bills of an apartment in a building. The monthly power bills may be retrieved from a data store by paying the corresponding NFT. This may be useful in giving insights such as -- "do not rent west sun facing high floor apartments to reduce power consumption". In another application, a NFT may point to the electric wiring diagram or the plumbing diagram for an apartment. For repairs, the corresponding NFTs may be paid to retrieve the required wiring diagrams. This would save the contractors time and effort attempting to deduce its location behind plastered walls. Effectively, monetisation incentives the asset owner to digitise information and tokenise the asset, allowing valuable but hard to find information to be licensed or sold for profit. To licence or sell a tokenised asset, the information must not find its way into the public domain. Also, we recognise plain-text information might be illegally sold by a past owner or a licensed consumer. We rely on the sales and licensing terms to discourage uncontrolled plain-text information dissemination.
An information owner may decide to host the data store herself on the internet, but this solution suffers from high costs. It is individually expensive to buy server infrastructure and manage network downtime. Since it may not be cost-efficient for the owner to be always-online, she may decide to delegate this functionality to an online third party data store. However, the hosting data store is able to view all the information on its storage. To resolve this, an owner may encrypt and deposit the information on the third party data store, and keep the decryption keys separately on another online key store. Again, an external adversary might compromise the key store using a zero day exploit and steal its decryption keys. To prevent this, we need to do more than just encrypt the plain-text information (hereafter referred to as record). One of the options is to use a hardware security module (HSM) [6]. A HSM may be efficiently used when the device is trusted. However, where trust is not fully explicit, a more benign solution is required. The rest of the paper is organised as follows. We describe the solution outline in section II. The system architecture is discussed in section III. Section IV explains the confidentiality protocol and section V discusses its security and speed optimisations.
## II Solution Outline
We present a solution that involves no trusted third parties and uses ephemeral keys to encrypt and decrypt the records. Ephemeral keys improve protection against zero day exploits that allow an external attacker to break in and steal keys. With ephemeral keys, any previously used (stolen) keys cannot be used to decrypt the latest encrypted record on our data store. Our solution does not require the record owner to be always-online to supply decryption keys. The records on the data store are never decrypted and key updates do not require decryption. Our solution employs 2 key stores (see Fig. 1): one is under the control of storage X (which also hosts the data store) and the other is storage Y, rented by the information owner. We assume that X and the owner rented storage Y are collusion free. X and Y are third party services that the owner is able to access via supported API calls. For example, X and Y may be online cloud hosting services. X is unable to generate, on its own, the decryption keys for the records it stores. There are two secret master keys (MKs), one held by X and the other by Y. The content keys (CKs) are ephemeral keys used to encrypt the records. They change each time a record is served to a consumer. Each of X and Y holds partial content keys on its key store. These must be combined in order to encrypt/decrypt a record. Our solution partitions the storage of the partial keys, which are held secret. We assume all communication channels are encrypted by default. I.e., each communicating party has access to the public key of the counterparty and uses public key cryptography. Hence, any information sent to a consumer will be encrypted with her public key. Only the intended recipient is able to recover the plain-text record.
## III System Architecture
### _Stakeholders & Threat Model_
The stakeholders are record owner, data consumer, third party storage X and Y. A record owner has ownership of the information. A data consumer requests this information to gain insights (or carry out analytics) by paying the required NFT smart contract. X hosts a NFT referenced data store and a key store, whereas Y only hosts a key store. Both X and Y are assumed to be mutually non-trusting. For example, they are different hosting companies. An adversary may eavesdrop on information passing through the communication channels. Both X and Y are expected to carry out operations honestly
Fig. 1: All communication channels between a sender and receiver are encrypted using public key cryptography. The process involves bootstrapping the system (steps 1 & 2), followed by adding the encrypted record to storage X’s data store (step 3a.) and state information to Y (step 3b.). Re-encryption sequence for the encrypted record on storage X’s data store is shown in step 4. An optional acknowledgement is seen in step 5. A consumer requesting and retrieving the encrypted record from the data store on storage X, and the keys required to decrypt it are shown in steps 6 to 8. Note the partial decryption keys are provided to the consumer only after storage X and Y verifies the consumer paid the NFT smart contract for record use (not shown in figure). This is achieved by querying for payments to the NFT smart contract. All transactions on a blockchain are digitally signed and straightforward to verify.
but X may leak any plain-text data on its storage. X or Y (but not both) may be compromised by an external adversary.
### _Components & Interactions_
Both X & Y are access controlled. Only authorised users are able to view & modify information on the data store and key stores. A record owner encrypts her plain-text records offline. The bootstrap process is as follows (see Fig. 1). The record owner, after setting up system parameters 1.) sends a secret master key (\(mk_{x}\)) to storage X's key store. Further, the record owner 2.) sends another secret master key (\(mk_{y}\)) to storage Y's key store. This completes the bootstrap process and storage X's data store is ready to receive encrypted records. 3a.) Record owner sends to storage X, \(Enc(R_{i}),i,j,g_{i}\). I.e., an encryption of plain-text record \(R\), uniquely identified by NFT index \(i\). The value of \(j\) corresponds to the number of times a record \(R_{i}\) was encrypted. For its first encryption, the value of \(j\) is 1. The value of \(g_{i}\) corresponds to the initial value of a pseudo random number generator (PRNG). 3b.) Record owner sends storage Y, \(i,j,g_{i}\). Next, the encrypted record on the data store is updated (re-encrypted) as follows. 4a.) Storage Y sends X, a new partial ephemeral encryption key (content key) identified by index \(i\). It also sends the updated counter \(j+1\). 4b.) Storage X creates a new partial ephemeral encryption key for the updated \(j+1\) counter. 4c.) Further, X uses the partial encryption key sent by storage Y along with its own newly generated partial encryption key, to update the encrypted record on its data store. 5.) An optional step is to acknowledge the updated counter \(j+1\) for the record identified by \(i\), to sync with storage Y. Next, a consumer pays the required NFT smart contract for a record (not shown in Fig. 1). Further, the consumer 6a.) requests for an encrypted record identified by NFT index \(i\) from the NFT data store on X and 6b.) sends to storage Y, the NFT identifier of the record requested. Both X & Y queries the NFT smart contract to verify if the necessary payments were made for the record requested (not shown in Fig. 1). Next, 7a.) Storage X returns the encrypted record and a partial ephemeral decryption key held by it. 7b.) Storage Y returns its partial ephemeral decryption key. 8.) Consumer combines the partial ephemeral decryption keys to recover plain-text record \(R_{i}\). To ready the next consumer request for this record, step 4 of Fig. 1 is called to re-encrypt the record on the data store with a new pair of ephemeral keys.
## IV Confidentiality Protocol
Phases 1-3 are for bootstrapping the protocol and encrypting a record offline (by its record owner). Phases 4-6 corresponds to their online interactions. Phase 7 is the offline decryption of the record by its consumer. Phase 8 updates the record on the data store.
_Phase 1 (Setup Parameters):_ A secret master key called \(mk_{x}\) is generated and shared by the record owner directly with storage X (see step 1, Fig. 1). It is a shared secret known only to the owner and storage X. Another secret \(mk_{y}\) is generated by the owner and shared with storage Y (see step 2, Fig. 1). It is known only to her and storage Y. The parameters for bootstrapping the confidentiality protocol are as follows:
Let \(R=\{R_{1},R_{2},\ldots,R_{n}\}\) be the set of plain-text records. The first step is to set up a different generator for each \(R_{i}\in R|i\in\{1,\ldots,n\}\) such that \(g_{i}\in\mathbb{F}_{p}^{*}\), a prime field. This is to initialise a PRNG with a large period. Map each of the records onto an element of \(\mathbb{F}_{p}^{*}\) using an invertible map. The value of \(p\) is chosen to be a safe prime, i.e., \(p=2\cdot q+1\), where \(q\) is a prime. Once chosen, \(p\) remains unchanged throughout the protocol. A safe prime is chosen to ensure that the multiplicative group of order \(p-1=2\cdot q\) has no small subgroups that are non-trivial to detect. Due to Fermat's little theorem [7], to test if any \(a\in\mathbb{F}_{p}^{*}\) is a generator, it is sufficient to verify if \(a^{(p-1)/2}\equiv-1\mod p\). 1 Alternatively, to make sure that \(a\in\mathbb{F}_{p}^{*}\) generates a large subgroup, it is sufficient to ensure \(a^{2}\neq 1\mod p\). Since our data store may have millions of records, and a suitable \(a_{i}\) (of large order/period) is required for each \(R_{i}\), this is useful for quickly finding an \(a\in\mathbb{F}_{p}^{*}\) such that order\((a)=q\) or \(2\cdot q\). Each elimination (of small subgroups) by testing requires only a single squaring operation modulo \(p\). When \(q\) is chosen to be a sufficiently large prime, our generators \(g_{i}\) may be substituted with \(a_{i}\), since each of these elements generates a subgroup at least half the size of \(p-1\).
Footnote 1: Modular exponentiation by repeated squaring is used to compute \(g^{x}\mod p\). It has a time complexity of \(\mathcal{O}((log~{}x)\cdot(log^{2}~{}p))\)[7]. The increase in time complexity w.r.t. the exponent \(x\) is logarithmic.
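As an illustration, a minimal sketch of the generator screening described above is given below, using Python's built-in `pow` as the repeated-squaring modular exponentiation of the footnote; the prime and the candidate element are illustrative toy values, not protocol parameters.

```python
# Sketch of the Phase 1 generator screening; p and the candidate a are toy values.
import secrets

def generates_large_subgroup(a: int, p: int) -> bool:
    # A single squaring rules out the order-1 and order-2 subgroups; for a safe
    # prime p = 2q + 1 this leaves only elements of order q or 2q.
    return pow(a, 2, p) != 1

def is_generator(a: int, p: int) -> bool:
    # Test quoted in the text: a^((p-1)/2) = -1 (mod p) for a generator of F_p^*.
    return pow(a, (p - 1) // 2, p) == p - 1

p = 23                                   # toy safe prime (23 = 2*11 + 1); the text assumes ~1024 bits
a = secrets.randbelow(p - 3) + 2         # random candidate in [2, p-2]
print(a, generates_large_subgroup(a, p), is_generator(a, p))
```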
_Phase 2 (Generate ephemeral encryption keys):_ The record owner is required to encrypt her records before it is added to X's data store. Let \(||\) be the concatenation operator and \(g_{i}\) be the generator corresponding to record \(R_{i}\). Owner carries out the following two sets of key generations for each \(R_{i}\) using \(j=1\). For the record \(R_{i}\), the owner computes \(ck_{i,j}^{x}\) and \(ck_{i,j}^{y}\).
\[ck_{i,j}^{x}=HMAC(mk_{x},g_{i}^{1+2\cdot(j-1)}\mod p)||\\ HMAC(mk_{x},g_{i}^{2+2\cdot(j-1)}\mod p) \tag{1}\]
\[ck_{i,j}^{y}=HMAC(mk_{y},g_{i}^{1+2\cdot(j-1)}\mod p)||\\ HMAC(mk_{y},g_{i}^{2+2\cdot(j-1)}\mod p) \tag{2}\]
We use HMAC-SHA3-512 for hashing. It generates a 512-bit output. Each of \(ck_{i,j}^{x}\) and \(ck_{i,j}^{y}\) (content keys) is a concatenation of 2 HMAC(.) outputs. Hence, \(ck_{i,j}^{x}\) and \(ck_{i,j}^{y}\) are each, typically, \(1024\) bits long. The master keys \(mk_{x}\) and \(mk_{y}\) are each chosen to be 512 bits long, i.e., the same as the length of the HMAC output. We assume the safe prime \(p\) chosen is of length 1024 bits. The pair of ephemeral encryption keys for record \(R_{i}\) are \(ck_{i,j}^{x}\mod p\) and \(ck_{i,j}^{y}\mod p\).
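A minimal sketch of this derivation (Equations 1 and 2) is given below; the big-endian serialisation of the generator powers and the helper name are illustrative choices that the text does not fix.

```python
# Sketch of Phase 2: ck_{i,j} = HMAC(mk, g_i^(1+2(j-1)) mod p) || HMAC(mk, g_i^(2+2(j-1)) mod p), reduced mod p.
import hmac
import hashlib

def content_key(mk: bytes, g_i: int, j: int, p: int) -> int:
    e1, e2 = 1 + 2 * (j - 1), 2 + 2 * (j - 1)
    width = (p.bit_length() + 7) // 8                       # serialise field elements to bytes
    m1 = pow(g_i, e1, p).to_bytes(width, "big")
    m2 = pow(g_i, e2, p).to_bytes(width, "big")
    d1 = hmac.new(mk, m1, hashlib.sha3_512).digest()        # 512-bit HMAC output
    d2 = hmac.new(mk, m2, hashlib.sha3_512).digest()        # 512-bit HMAC output
    return int.from_bytes(d1 + d2, "big") % p               # 1024-bit concatenation, reduced mod p
```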
_Phase 3 (Encrypt a Record):_ Record owner (on her offline computer) encrypts a plain-text record \(R_{i}\). The offline record encryption uses \(j=1\), for its first encryption. The arithmetic operations are in \(\mathbb{F}_{p}^{*}\).
\[S_{i,1} = E(R_{i}) = ck_{i,1}^{x}\ \cdot\ ck_{i,1}^{y}\ \cdot\ R_{i} \tag{3}\]
_Phase 4 (Add an encrypted record to data store):_ For the plain-text record \(R_{i}\), record owner sends to storage X the values of \(S_{i,1},i,j=1,g_{i}\) (see step 3a, Fig. 1). The owner rented storage Y is sent the values of \(i,j=1,g_{i}\) (see step 3b, Fig. 1).
_Phase 5 (Re-encrypt record on data store):_ Owner rented storage Y computes a new partial ephemeral key \(ck_{i,j}^{y}\) by running Equation. 2 with \(j\gets j+1\) and sends it to storage X (see step 4a, Fig. 1 ). Similarly, storage X computes \(ck_{i,j}^{x}\) by running Equation. 1 with \(j\gets j+1\) (see step 4b, Fig. 1). Next, the data store on X re-encrypts its record \(S_{i,j}\) (see Equation. 4). All arithmetic operations are in \(\mathbb{F}_{p}^{*}\). This is step 4c, in Fig. 1.
\[S_{i,j}=ck_{i,j}^{x}\cdot ck_{i,j}^{y}\cdot S_{i,j-1}=\prod_{k=1}^{j}ck_{i,k}^{ x}\cdot\prod_{k=1}^{j}ck_{i,k}^{y}\cdot R_{i} \tag{4}\]
_Phase 6 (Supply consumer with encrypted record and ephemeral decryption keys)_: Consumer requests encrypted record and partial decryption keys (see step 6a and 6b, Fig. 1). Storage X looks up its data store to retrieve the latest \(S_{i,j}\). The partial ephemeral encryption key \(\prod_{k=1}^{j}ck_{i,k}^{x}\) is constructed using Equation. 1. The partial ephemeral decryption key is trivially determined as its multiplicative inverse, namely, \(\left(\prod_{k=1}^{j}ck_{i,k}^{x}\right)^{-1}\ \bmod p\). The latest encrypted record \(S_{i,j}\) in the data store and its partial ephemeral decryption key is sent to the consumer (see step 7a, Fig. 1 ). Storage Y carries out a similar set of operations to construct its ephemeral decryption key \(\left(\prod_{k=1}^{j}ck_{i,k}^{y}\right)^{-1}\ \bmod p\) for record \(S_{i,j}\), using Equation 2. This ephemeral decryption key is sent to the consumer (see step 7b, Fig. 1).
_Phase 7 (Record decryption by the consumer):_ The consumer carries out the following computation to retrieve the plain-text record \(R_{i}\) using its partial decryption keys (see Equation 5). Arithmetic operations are in \(\mathbb{F}_{p}^{*}\). This is step 8, in Fig. 1.
\[R_{i} = D(S_{i,j}) = \left(\prod_{k=1}^{j}ck_{i,k}^{x}\right)^{-1}\ \cdot\ \left(\prod_{k=1}^{j}ck_{i,k}^{y}\right)^{-1}\ \cdot\ S_{i,j} \tag{5}\]
_Phase 8 (Update the record on the data store):_ The encrypted record \(S_{i,j}\) on storage X is updated by re-encryption with a new pair of ephemeral keys. This is carried out by repeating phase 5 with an incremented value of \(j\) for the record.
Consider the example shown in Fig. 2. Each record is first encrypted by its owner before it is added to X's data store. Plain-text records are \(R_{1},R_{2}\) and \(R_{3}\) and the initial encrypted records on the data store are \(S_{1,1},S_{2,1}\) and \(S_{3,1}\), respectively. Here, \(j=1\) corresponds to the initial encryption for the record. On the data store, each encrypted record with \(j=1\) is re-encrypted. This gives us \(S_{1,2},S_{2,2}\) and \(S_{3,2}\). In our example, an encrypted record for \(R_{3}\) is requested by a consumer. The record served from the data store is \(S_{3,2}\). The decryption of \(S_{3,2}\) is \(D(S_{3,2})=(ck_{3,1}^{x}\cdot ck_{3,2}^{x})\cdot(ck_{3,1}^{x}\cdot ck_{3,2}^{x })^{-1}\cdot(ck_{3,1}^{y}\cdot ck_{3,2}^{y})\cdot(ck_{3,1}^{y}\cdot ck_{3,2}^{y })^{-1}\cdot R_{3}=R_{3}\). Once record \(S_{3,2}\) is served to the consumer, the data store re-encrypts \(S_{3,2}\) to give \(S_{3,3}\).
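The flow of this example can be mirrored by a toy round trip over a small safe prime; the content-key values below are illustrative stand-ins for the HMAC-derived keys of Phase 2, and the modular inverse uses Python 3.8+ `pow(x, -1, p)`.

```python
# Toy round trip for record R_3: encrypt (j = 1), re-encrypt (j = 2), decrypt; all arithmetic mod p.
p = 23                          # stand-in for the 1024-bit safe prime
R3 = 7                          # plain-text record mapped into F_p^*
ck_x = {1: 5, 2: 9}             # storage X's content keys for j = 1, 2
ck_y = {1: 11, 2: 4}            # storage Y's content keys for j = 1, 2

S = (ck_x[1] * ck_y[1] * R3) % p        # Equation 3: S_{3,1}
S = (ck_x[2] * ck_y[2] * S) % p         # Equation 4: S_{3,2}

dec_x = pow((ck_x[1] * ck_x[2]) % p, -1, p)   # partial decryption key from storage X
dec_y = pow((ck_y[1] * ck_y[2]) % p, -1, p)   # partial decryption key from storage Y

assert (dec_x * dec_y * S) % p == R3          # Equation 5: consumer recovers R_3
```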
## V Discussion and Practical Considerations
Generator \(g_{i}\) is used as a PRNG to increase the Hamming distance between subsequent variable inputs to the HMAC (as opposed to an incremental counter). The powers of the generator are the variable input to the HMAC (see Equation. 1 and 2). The output of the HMAC is used as a cryptographically secure PRNG. The security of the protocol relies on the difficulty of recovering the secret master keys, \(mk_{x}\) and \(mk_{y}\), from their corresponding HMAC outputs. Further, the security of the HMAC used depends on the underlying hash algorithm, output size, and the key size [8]. Since we employ HMAC-SHA3-512 to compute \(ck_{i,j}^{x}\) and \(ck_{i,j}^{y}\), retrieving \(mk_{y}\) from \(ck_{i,j}^{y}\) is expected to be at least as hard for an adversary as launching a first preimage attack on the SHA3-512 hash. SHA3 uses Keccak [9] as its underlying algorithm and has so far shown excellent preimage attack resistance [10, 11]. Another possible attack is for storage X to attempt to infer the first ephemeral key \(ck_{i,1}^{y}\) used in the encryption of record \(S_{i,1}\) (see Equation. 3). However, \(ck_{i,1}^{y}\) is never sent to storage X by Y as part of the protocol (see Fig. 1). Storage Y sends the consumer the inverse of its partial product of ephemeral keys, \(\left(\prod ck_{i,k}^{y}\right)^{-1}\), for the decryption of the record. At this point, storage X and the consumer may collude to deduce the first ephemeral key (\(ck_{i,1}^{y}\)), but this serves no useful purpose. Since all keys required for record decryption were received, the consumer may as well supply the plain-text record to storage X. We do not attempt to prevent the dissemination of record information by the consumer, once it is decrypted. We rely on the data licence terms for the record usage to discourage the consumer from uncontrolled sharing of information. With respect to computational speed, it is not necessary to regenerate past ephemeral keys and multiply them every time a record decryption is required. Computing the product (in Phase 6) requires iterating over all values of \(j\). This may be sped up by storing the partial products modulo \(p\) on the storage. Further, all keys and their products are computed modulo \(p\). Hence, the encryption and decryption keys are bounded by the size of prime \(p\).
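A minimal sketch of the partial-product caching mentioned above is shown below; the class and method names are illustrative and not part of the protocol.

```python
# Each storage keeps a running product of its content keys modulo p, so re-encryption
# and decryption-key generation cost one multiplication per step instead of a loop over j.
class KeyProductCache:
    def __init__(self, p: int):
        self.p = p
        self.product = 1                       # product of ck_{i,k} for k <= j, modulo p

    def advance(self, new_ck: int) -> int:
        # Fold in the content key for the next counter value j and return the product.
        self.product = (self.product * new_ck) % self.p
        return self.product

    def decryption_key(self) -> int:
        # Multiplicative inverse of the cached product, i.e. the partial decryption key.
        return pow(self.product, -1, self.p)
```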
Fig. 2: Encrypted records on a data store. All operations are modulo \(p\). A record is served to a consumer only after its first re-encryption.
## VI Conclusions
We presented a protocol to improve the confidentiality of information stored on a third-party data store. By using two key stores, one alongside the data store on storage X and the other on owner-controlled storage Y, a high level of information confidentiality was achieved. The ephemeral, per-request keys make a compromise of any single key less damaging. The protocol may serve as a valuable tool for business owners to control and selectively disseminate their content stored on a third-party data store.
## Acknowledgment
This research is supported by the National Research Foundation, under its Campus for Research Excellence and Technological Enterprise (CREATE) Programme.
|
2306.00646
|
Stability, quasinormal modes in a charged black hole in perfect fluid
dark matter
|
In this work, we study time-like and null geodesics in a charged black hole
background immersed in perfect fluid dark matter (PFDM). Using the condition
for circular geodesics, we evaluate the energy ($E$) and angular momentum ($L$)
in terms of the radius ($r_c$) of the circular orbits. The existence and
finiteness of $E$ and $L$ constrain the possible range of PFDM parameter
($\chi$) and the radius of the circular orbit ($r_c$). We then use the Lyapunov
exponent ($\lambda$) to study the stability of the geodesics. Then we analyze
the critical exponent ($\gamma$) useful for determining the possibility of
detection of gravitational wave signals. After that, we study the perturbation
due to a massless scalar field in such a background and calculate the
quasinormal mode (QNM) frequencies and their dependence on PFDM parameter $\chi$
and black hole charge $Q$. Also, we compare the obtained QNM frequencies both
in the exact case and in the eikonal limit. We also calculate the quality
factor of the oscillating system and study its dependence on $\chi$ and $Q$.
Finally, we evaluate the black hole shadow radius $R_s$ and graphically observe
the effect of $\chi$ and $Q$ on it.
|
Anish Das, Anirban Roy Chowdhury, Sunandan Gangopadhyay
|
2023-06-01T13:12:30Z
|
http://arxiv.org/abs/2306.00646v1
|
# Stability, quasinormal modes in a charged black hole in perfect fluid dark matter
###### Abstract
In this work, we study time-like and null geodesics in a charged black hole background immersed in perfect fluid dark matter (PFDM). Using the condition for circular geodesics, we evaluate the energy (\(E\)) and angular momentum (\(L\)) in terms of the radius (\(r_{c}\)) of the circular orbits. The existence and finiteness of \(E\) and \(L\) constrain the possible range of PFDM parameter (\(\chi\)) and the radius of the circular orbit (\(r_{c}\)). We then use the Lyapunov exponent (\(\lambda\)) to study the stability of the geodesics. Then we analyze the critical exponent (\(\gamma\)) useful for determining the possibility of detection of gravitational wave signals. After that, we study the perturbation due to a massless scalar field in such a background and calculate the quasinormal mode (QNM) frequencies and their dependence on PFDM parameter \(\chi\) and black hole charge \(Q\). Also, we compare the obtained QNM frequencies both in the exact case and in the eikonal limit. We also calculate the quality factor of the oscillating system and study its dependence on \(\chi\) and \(Q\). Finally, we evaluate the black hole shadow radius \(R_{s}\) and graphically observe the effect of \(\chi\) and \(Q\) on it.
## 1 Introduction
The theory of general relativity proposed by Einstein has led to the predictions of compact objects such as black holes whose existence has been verified recently [1]-[3]. Also, Einstein's theory predicts that spacetime is dynamic and gets affected by the presence of any massive object. The presence of compact objects like black holes drastically alters the spacetime fabric in their vicinity. This effect can be observed in the trajectories of particles moving in the black hole background. Any object in an arbitrary spacetime in absence of additional forces follows the geodesics. The knowledge of geodesics helps us gain information about the spacetime background.
Again, the geodesics around black holes comprise different closed and open orbits depending on their position with respect to the black holes. The orbits can be both stable and unstable. The stability/instability can be understood by understanding the potential (\(V\)) that the particles encounter. Another way to study the stability of geodesics is by using the Lyapunov exponents (\(\lambda\)) [4],[5]. The Lyapunov exponents are very powerful mathematical objects useful for studying a system having chaotic dynamics generally encountered in non-linear systems. We can also use them to study the stability of particle trajectories. Lyapunov exponents \(\lambda\) are mathematically defined as the average rate at which two nearby geodesics or geodesic congruences can converge or diverge [6]. The principal Lyapunov exponent can be expressed in terms of the second derivative of the potential evaluated at the extremum point of the potential.
In general, black holes are surrounded by matter (accretion disks) which can result in perturbing the spacetime structure. This perturbation influences the geodesics in the black hole spacetime and the information of the perturbation is carried by the geodesics. Again, whether such perturbations will grow or not decides the stability of the black holes against such perturbation. Due to such perturbation, the black hole
spacetime starts to oscillate with a particular frequency termed 'quasi-normal frequency', coined by Press [7]. The term quasi-normal is used because the oscillations can either grow or decay depending on the black hole's stability. The detection of these frequencies helps us get information on the space-time parameters and is independent of the type of perturbation. The study of quasinormal modes is also important in the context of the AdS/CFT correspondence [8]. We can study the equilibrium and non-equilibrium properties of strongly coupled thermal gauge theories by computing the QNMs of their gravity duals [9]. The spectrum of the quasinormal modes of the dual gravitational background gives us the poles of the retarded correlators on the field theory side [10, 11]. Investigations of quasinormal modes are also important in the context of astrophysical black holes and gravitational wave astronomy [12]. Black hole quasinormal modes can be used to predict the mass, angular momentum and other important properties of astrophysical black holes [12]-[15]. A lot of studies have been done on black hole QNMs [16]-[55] along with review works [56]-[59].
Again, the universe is composed mostly of dark matter and dark energy. Dark matter dominates the matter content of the universe. Many different models for dark matter have been proposed, like cold dark matter, axions, neutralinos, etc. [60]. But these models have failed to explain certain aspects like the too-big-to-fail problem [61], [62], the missing satellite problem [63], etc. The failures of all those models led to the interest in newer models like the perfect fluid dark matter (PFDM) model proposed by Kiselev [64], [65]. Further works in this PFDM model have been done in [66]-[72]. Also, quasinormal mode frequencies in the presence of PFDM have been computed previously in [41],[42]. The consideration of dark matter around black holes is realistic since dark matter is expected to pervade the regions around galaxies. The model of our choice, namely perfect fluid dark matter (PFDM), has gained popularity in recent years. Also, the consideration of charge \(Q\) is for completeness, and we refrain from adding spin \(a\) to the black hole for simplicity. Theoretically, the effect of dark matter must be present in all observable features of a black hole and hence considering it to study different aspects of a black hole is worthwhile.
In this work, we are interested in studying the stability of geodesics around black holes by using the concept of Lyapunov exponents \(\lambda\). The stability of geodesics in general for any black hole spacetime is determined by the potential and the corresponding conditions imposed onto it. We will find that the Lyapunov exponent is related to the second derivative of the potential \(V^{\prime\prime}\). Also, we use different times (proper and coordinate time) to determine \(\lambda\) to check whether they are coordinate-dependent or not. Then we calculate the critical exponent \(\gamma\)[73], [74] which helps us get an idea of the timescale at which gravitational waves are possible for detection.
Again, we are interested in studying the scalar perturbations which result in the production of quasinormal frequencies. The quasinormal frequencies are independent of the type of perturbation and depend only on the black hole parameters. Here, we assume a black hole immersed in a dark matter background to analyze the dependence of black hole QNMs on PFDM parameter \(\chi\). Also in the eikonal limit [75], [76] we determine the QNMs and compare them with the exact ones. Then we determine the quality factor \((QF)\)[77], [40] and how it depends on \(\chi\) and \(Q\). Then we study the shadow of charged black hole [78], [79] in PFDM and its dependence on \(\chi\) and \(Q\).
The paper is organized as follows. In section 1 we provide the background and brief literature on PFDM, QNMs, and black hole stability. In section 2, we provide the mathematical formulation for studying black hole stability along with the idea of Lyapunov exponents \(\lambda\) and critical exponent \(\gamma\). After that in section 3 we overview the system of charged black holes immersed in PFDM and study the geodesics along with the relevant physical quantities quantifying the stability of geodesics in black hole spacetime. Then in section 4 we study the quasinormal modes due to massless scalar perturbation on the background of interest. We study and compare the exact and eikonal approximated results. Later in section 5 we study the black hole shadow and how it gets affected by PFDM parameter \(\chi\) and black hole charge \(Q\). Finally, we summarize in section 6. We have worked in units of \(c=G=1\). The signature of our metric is (-,+,+,+).
## 2 Equatorial geodesics in general static spherically symmetric spacetime
In this work, we are interested in studying the equatorial geodesics of a general static spherically symmetric spacetime. The knowledge of these geodesics will help us determine the stability of the spacetime against external perturbations. To carry out our analysis, we consider a general static spherically symmetric metric
in (3+1)-dimensions. We assume the metric to be
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^ {2} \tag{1}\]
where \(f(r)\) corresponds to the lapse function and is a function of \(r\) only. We derive the expressions of geodesics in terms of \(f(r)\).
Since our system is spherically symmetric, all planes are identical and hence, for simplicity, we calculate the geodesics in the equatorial plane. The condition for the equatorial plane is \(\theta=\frac{\pi}{2}\), which results in \(\dot{\theta}=0\). The metric is independent of the \(t\) and \(\phi\) coordinates, so the corresponding generalized momenta are conserved. This can be checked by using a Lagrangian of the form \(\mathcal{L}=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}\) and Lagrange's equation of motion, which takes the form
\[\frac{d}{d\zeta}\Big{(}\frac{\partial\mathcal{L}}{\partial\dot{x}^{\mu}}\Big{)} =\frac{\partial\mathcal{L}}{\partial x^{\mu}}. \tag{2}\]
The generalised momentum is defined as \(p_{\mu}=\Big{(}\frac{\partial\mathcal{L}}{\partial\dot{x}^{\mu}}\Big{)}\), which for \(t\) and \(\phi\) gives \(p_{t}=\) constant \(=-E\) and \(p_{\phi}=\) constant \(=L\) (since the metric is independent of \(t\) and \(\phi\) as mentioned above, and \(x^{\mu}\) and \(\dot{x}^{\mu}\) are independent of each other). The geodesic equations for \(t\) and \(\phi\) take the form
\[\frac{dt}{d\zeta}=\frac{E}{f(r)}\ \ ;\ \ \frac{d\phi}{d\zeta}=\frac{L}{r^{2}} \tag{3}\]
where \(\zeta\) parametrizes the geodesic trajectory. The radial geodesic equation can be obtained using the normalization condition
\[g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}=\beta \tag{4}\]
with, \(\beta=-1,0,1\) for time-like, null, and space-like geodesics respectively. Using the metric coefficients (eq.(1)) and the geodesic equations (eq.(3)) in eq.(4), we obtain the radial geodesic equation as
\[\dot{r}^{2}=E^{2}-f(r)\Big{(}\frac{L^{2}}{r^{2}}-\beta\Big{)}=V_{r}. \tag{5}\]
Here, \(V_{r}\) corresponds to the effective radial potential. We are interested in studying circular geodesics which are subject to the condition
\[\dot{r}^{2}\Big{|}_{r=r_{c}}=\Big{(}\dot{r}^{2}\Big{)}^{{}^{\prime}}\Big{|}_{ r=r_{c}}=0. \tag{6}\]
Here, prime (\({}^{\prime}\)) corresponds to the derivative with respect to \(r\) and \(r_{c}\) corresponds to the radius of circular geodesics. Since only time-like and null geodesics have a physical existence, so we study them case-wise.
### Time-like geodesics
Time-like geodesics correspond to the trajectory of massive particles in any spacetime background. In this case, we have constant quantities \(E\) and \(L\) corresponding to momentum conservation along \(t\) and \(\phi\) directions. Here, \(E\) and \(L\) correspond to the energy and angular momentum per unit mass of the particles moving along time-like geodesics. Since \(\beta=-1\) for time-like geodesics, we obtain the radial geodesic equation as
\[\dot{r}^{2}=E^{2}-f(r)\Big{(}\frac{L^{2}}{r^{2}}+1\Big{)}=\overline{V}_{r}. \tag{7}\]
Here, \(\overline{V}_{r}\) corresponds to the effective potential encountered by the massive particles. The condition for circular geodesics are
\[\dot{r}^{2}\Big{|}_{r=r_{0}}=\Big{(}\dot{r}^{2}\Big{)}^{{}^{\prime}}\Big{|}_{r =r_{0}}=0 \tag{8}\]
with \(r_{0}\) being the radius of circular time-like geodesics. As we can see, here we have two conditions (constraints) whereas three undetermined quantities \(E,L\) and \(r_{0}\). So we need to express any two of the physical
quantities in terms of the other. We wish to express \(E\) and \(L\) in terms of \(r_{0}\). The conditions result in the following equations
\[E_{0}^{2}=f(r_{0})\Big{(}\frac{L_{0}^{2}}{r_{0}^{2}}+1\Big{)} \tag{9}\]
\[2L_{0}^{2}\frac{f(r_{0})}{r_{0}^{3}}=f^{\prime}(r_{0})\Big{(}\frac{L_{0}^{2}}{r _{0}^{2}}+1\Big{)}. \tag{10}\]
Solving eq.(10), we obtain \(L_{0}^{2}\). Using the value of \(L_{0}^{2}\) in eq.(9), we get the expression for \(E_{0}^{2}\). The expressions for \(E_{0}^{2}\) and \(L_{0}^{2}\) take the form [6]
\[E_{0}^{2}=\frac{2f^{2}(r_{0})}{2f(r_{0})-r_{0}f^{\prime}(r_{0})} \tag{11}\]
\[L_{0}^{2}=\frac{r_{0}^{3}f^{\prime}(r_{0})}{2f(r_{0})-r_{0}f^{\prime}(r_{0})}. \tag{12}\]
Here, \(E_{0}\) and \(L_{0}\) give the energy and momentum per unit mass of particles moving along circular geodesics of radius \(r_{0}\). The ratio of angular momentum \(L_{0}\) and energy \(E_{0}\) takes the form
\[\Big{(}\frac{L_{0}}{E_{0}}\Big{)}^{2}=\frac{r_{0}^{3}f^{\prime}(r_{0})}{2f^{2 }(r_{0})}. \tag{13}\]
In eq.(11), we find the expressions for the square of the quantities \(E_{0}\) and \(L_{0}\). For \(E_{0}\) to be real and finite, we must have the denominator of eq.(11) to be positive, that is
\[\Big{(}2f(r_{0})-r_{0}f^{\prime}(r_{0})\Big{)}>0 \tag{14}\]
and for \(L_{0}\) to be real and finite, we must simultaneously have both the numerator and denominator of eq.(12) to be positive, that is
\[r_{0}^{3}f^{\prime}(r_{0})>0\ ;\ \Big{(}2f(r_{0})-r_{0}f^{\prime}(r_{0})\Big{)}>0. \tag{15}\]
### Null geodesics
Null geodesics correspond to trajectories of massless (null) particles in any space-time background. In the case of null geodesics, we have \(\beta=0\) which results in the radial geodesic equation in the equatorial plane as
\[\dot{r}^{2}=E^{2}-f(r)\frac{L^{2}}{r^{2}}=\widetilde{V}_{r}. \tag{16}\]
with \(\widetilde{V}_{r}\) giving the effective potential. The condition for circular geodesics results in the determination of radius \(r_{p}\) of null geodesics through the equation
\[2f(r_{p})-r_{p}f^{\prime}(r_{p})=0. \tag{17}\]
Also, it helps us determine the ratio of angular momentum \(L_{p}\) and energy \(E_{p}\) of the massless particles in terms of \(r_{p}\)
\[\Big{(}\frac{L_{p}}{E_{p}}\Big{)}^{2}=\frac{r_{p}^{3}f^{\prime}(r_{p})}{2f^{2} (r_{p})}=\frac{r_{p}^{2}}{f(r_{p})}. \tag{18}\]
Here, \(E_{p}\) and \(L_{p}\) give the energy and momentum of null particles moving along circular geodesics. From eq.(s)(13) and (18) we find that the ratio of \(L\) to \(E\) takes the same form for both massive and massless particles.
### Brief review on Lyapunov exponents
Lyapunov exponents measure the rate of divergence or convergence of nearby trajectories in a dynamical system. In a dynamical system, the state of the system at a given time is described by a set of variables that can be represented as a point in a multi-dimensional space. Trajectories in this space represent the evolution of the system over time, and nearby trajectories may either converge or diverge depending on the nature of the system.
If the Lyapunov exponent is positive, the system is said to be chaotic, meaning that small perturbations grow exponentially and the behaviour of the system becomes unpredictable over time. If the Lyapunov exponent is negative, the system is said to be stable, meaning that small perturbations converge to a fixed point or periodic orbit. The vanishing of the Lyapunov exponent means that the rate of divergence or convergence of nearby trajectories approaches zero, indicating that the system is marginally stable. This means that small perturbations in the system will not grow or decay over time.
The equation of a dynamical system takes the form [80]
\[\frac{dx}{dt}=F(x) \tag{19}\]
where, \(x(t)\) denotes the trajectory of the system, and the evolution of the system is dictated by the function \(F(x)\). We perturb the system slightly which takes \(x(t)\to x(t)+\delta x(t)\). Using this in eq.(19), we have (upto linear order)
\[\frac{d(\delta x)}{dt}=\frac{\partial F}{\partial x}\Bigg{|}_{x}\delta x. \tag{20}\]
Generalizing this to higher dimensions, we obtain
\[\frac{d(\delta X_{i}(t))}{dt}=\frac{\partial F_{i}(X_{j})}{\partial X_{j}} \Bigg{|}_{X_{i}}\delta X_{j}(t)=K_{ij}(t)\delta X_{j}(t)\ \ ;\ \ i,j=1,....,N \tag{21}\]
with \(K_{ij}(t)\) being the _linear stability matrix_[75]. From the knowledge of the one-dimensional solution, the solution can be extended to higher dimensions as
\[\delta X_{i}(t)=L_{ij}(t)\delta X_{j}(0) \tag{22}\]
where \(L_{ij}(t)\) is the evolution matrix [75] determining the evolution of the perturbation on the system, with the property \(L_{ij}(0)=\delta_{ij}\). The eigenvalues of the linear stability matrix correspond to the Lyapunov exponents (\(\lambda\)), which are responsible for determining the stability of the system. The solution of eq.(21) determines \(\lambda\) as
\[\lambda=\lim_{t\rightarrow\infty}\frac{1}{t}ln\Bigg{(}\frac{\delta X_{i}(t)}{ \delta X_{i}(0)}\Bigg{)}=\lim_{t\rightarrow\infty}\frac{1}{t}ln\Bigg{(}\frac {L_{ii}(t)}{L_{ii}(0)}\Bigg{)}. \tag{23}\]
We wish to calculate the Lyapunov exponent (\(\lambda\)) in order to study the stability of the system. To do so, we perturb the geodesic equations to obtain equations in linearised form as eq.(20). The eigenvalues of the matrix \(K_{ij}\) give the Lyapunov exponents.
As mentioned above, we are interested in circular orbits that lie in the equatorial plane. Thus we are concerned with the radial geodesic equation. For this, we will work in the phase space \((p_{r},r)\). The analysis can be carried out both for proper time \(\tau\) and coordinate time \(t\).
We start with the Euler-Lagrange equation which takes the form
\[\frac{dp_{r}}{d\tau}=\frac{\partial\mathcal{L}}{\partial r} \tag{24}\]
representing the evolution equation of \(p_{r}\). The equation for \(r\) takes the form
\[\frac{dr}{d\tau}=\frac{p_{r}}{g_{rr}} \tag{25}\]
using Lagrange's equation of motion. The system represented by the combined eq.(s) (24), (25) are perturbed with \(r\to r+\delta r\) and \(p_{r}\to p_{r}+\delta p_{r}\). The perturbed equation takes the form
\[\frac{d(\delta p_{r})}{d\tau}=\frac{d}{dr}\Big{(}\frac{\partial{\cal L}}{ \partial r}\Big{)}\delta r\ \ ;\ \ \frac{d(\delta r)}{d\tau}=\frac{1}{g_{rr}}\delta p_{r}. \tag{26}\]
The reason for the above results is that in eq.(24) the Lagrangian \({\cal L}\) is independent of \(p_{r}\), while in eq.(25) we have \(\dot{r}=\frac{p_{r}}{g_{rr}}\), whose variation with \(r\) does not contribute on the circular orbit where \(p_{r}=0\). The above equations can be written in matrix form as
\[\frac{d}{d\tau}\begin{pmatrix}\delta p_{r}\\ \delta r\end{pmatrix}=\begin{pmatrix}0&\frac{d}{dr}\Big{(}\frac{\partial{\cal L }}{\partial r}\Big{)}\\ \frac{1}{g_{rr}}&0\end{pmatrix}\begin{pmatrix}\delta p_{r}\\ \delta r\end{pmatrix}. \tag{27}\]
The eigenvalues of the matrix
\[K_{ij}=\begin{pmatrix}0&\frac{d}{dr}\Big{(}\frac{\partial{\cal L}}{\partial r }\Big{)}\\ \frac{1}{g_{rr}}&0\end{pmatrix} \tag{28}\]
evaluated at the circular orbit \(r_{c}\) subject to the condition \(\dot{r}^{2}\Big{|}_{r=r_{c}}=\Big{(}\dot{r}^{2}\Big{)}^{{}^{\prime}}\Big{|}_ {r=r_{c}}=0\) gives the Lyapunov exponent
\[\lambda^{2}=\frac{1}{g_{rr}}\frac{d}{dr}\Big{(}\frac{\partial{\cal L}}{ \partial r}\Big{)}\Bigg{|}_{r=r_{c}}. \tag{29}\]
Using the condition of circular geodesics and after some simplification, the proper time Lyapunov exponent reads [75]
\[\lambda_{p}=\pm\sqrt{\frac{V_{r}^{\prime\prime}}{2}}\Bigg{|}_{r=r_{c}}. \tag{30}\]
On the other hand, the coordinate time Lyapunov exponent takes the form [75]
\[\lambda_{c}=\pm\sqrt{\frac{V_{r}^{\prime\prime}}{2\dot{t}^{2}}}\Bigg{|}_{r=r_{c}}. \tag{31}\]
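As a numerical illustration of eq.(s)(30) and (31), the sketch below builds the radial potential of eq.(7) for a circular time-like orbit, differentiates it numerically, and forms the two Lyapunov exponents; the Schwarzschild lapse and orbit radius in the example are placeholders, not the background studied later.

```python
import math

def lyapunov(f, r0, h=1e-4):
    df = lambda r: (f(r + h) - f(r - h)) / (2 * h)       # numerical f'(r)
    denom = 2 * f(r0) - r0 * df(r0)
    E2 = 2 * f(r0) ** 2 / denom                          # eq.(11)
    L2 = r0 ** 3 * df(r0) / denom                        # eq.(12)
    V = lambda r: E2 - f(r) * (L2 / r ** 2 + 1)          # eq.(7)
    Vpp = (V(r0 + h) - 2 * V(r0) + V(r0 - h)) / h ** 2   # V_r'' at r = r0
    tdot = math.sqrt(E2) / f(r0)                         # dt/dtau from eq.(3)
    lam_p = math.sqrt(abs(Vpp) / 2)                      # eq.(30); Vpp < 0 means imaginary lambda (stable orbit)
    lam_c = lam_p / tdot                                 # eq.(31)
    return lam_p, lam_c, Vpp

# Example: Schwarzschild lapse f = 1 - 2/r (M = 1); r0 = 5 lies inside the ISCO at r = 6, so Vpp > 0 (unstable).
print(lyapunov(lambda r: 1 - 2 / r, 5.0))
```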
### Critical exponent
Another quantity of interest is the critical exponent \(\gamma\) defined as the ratio of the Lyapunov timescale or instability timescale \(T_{\lambda}\) to the orbital timescale \(T_{\Omega}\) given as [73]-[75]
\[\gamma = \frac{Lyapunov\ \ timescale}{Orbital\ \ timescale} \tag{32}\] \[= \frac{T_{\lambda}}{T_{\Omega}}\] \[= \frac{\Omega}{2\pi\lambda}\]
with \(T_{\lambda}=\frac{1}{\lambda}\) and \(T_{\Omega}=\frac{2\pi}{\Omega}\), with the orbital velocity \(\Omega\) given as \(\Omega=\frac{d\phi}{dt}\). The critical exponents corresponding to proper and coordinate time take the form
\[\gamma_{p}=\frac{\Omega}{2\pi\lambda_{p}}\ \ ;\ \ \gamma_{c}=\frac{\Omega}{2\pi \lambda_{c}}\ . \tag{33}\]
The critical exponent \(\gamma\) gives an idea of the detectability of gravitational wave signals. Particles moving in circular orbits around the black hole produce gravitational waveforms. This waveform reaches the observer once every time interval \(T_{\Omega}\). If the particle in the orbit is perturbed, then the gravitational waveform due to the perturbation reaches the observer after a time interval of \(T_{\lambda}\), dictated by the Lyapunov exponent \(\lambda\). Only if the perturbed signal reaches the observer within the time interval \(T_{\Omega}\) can the perturbation, and thereby the corresponding gravitational signal, be detected by the observer. If the signal produced due to the perturbation reaches the observer after a time interval longer than \(T_{\Omega}\), then the observer cannot distinguish which signal is due to circular motion and which one is due to the perturbation. Thus the requirement for the detection of gravitational signals produced due to perturbation is \(T_{\lambda}<T_{\Omega}\)[73]-[75].
## 3 Charged black hole in perfect fluid dark matter
The analysis carried out above is valid for any arbitrary static spherically symmetric metric of the form eq.(1). Here we want to explicitly study a system where a _charged black hole is immersed in a background of perfect fluid dark matter (PFDM)_[70]-[72]. The action and the corresponding equation of motion for the system have the form [70]-[72] (with \(c=G=1\))
\[S=\int d^{4}x\sqrt{-g}\Big{(}\frac{R}{16\pi}-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+ \mathcal{L}_{DM}\Big{)} \tag{34}\]
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi\Big{(}T^{M}_{\mu\nu}-T^{DM}_{\mu\nu} \Big{)}. \tag{35}\]
Here \(R\) and \(R_{\mu\nu}\) are the Ricci scalar and Ricci tensor respectively. \(F_{\mu\nu}\) is the electromagnetic field strength tensor related to 4-vector potential \(A_{\mu}\), \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\). Also, \(\mathcal{L}_{DM}\) gives the Lagrangian density of the PFDM. \(T^{M}_{\mu\nu}\) and \(T^{DM}_{\mu\nu}\) give the energy-momentum tensor for the electromagnetic field and the dark matter respectively. The energy-momentum tensors take the form
\[\Big{(}T^{\mu}_{\ \nu}\Big{)}^{M}=\frac{Q^{2}}{8\pi r^{4}}diag\left(-1,-1,1,1 \right);\Big{(}T^{\mu}_{\ \nu}\Big{)}^{DM}=diag\Big{(}-\rho,P_{r},P_{\theta},P_{\phi}\Big{)} \tag{36}\]
with \(P_{r}=-\rho\), \(P_{\theta}=P_{\phi}=P\). The equation of state for PFDM is \(\frac{P}{\rho}=\frac{1}{2}\)[71].
To obtain the desired metric, we need to solve the Einstein field equations. The first step is to assume an ansatz metric of the form
\[ds^{2}=-e^{\mu}dt^{2}+e^{\xi}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2} ). \tag{37}\]
Here \(\mu\) and \(\xi\) are assumed to be functions of \(r\) only. Using the ansatz metric, the Einstein equations take the form [71]
\[e^{-\xi}\Big{(}\frac{1}{r^{2}}-\frac{\xi^{\prime}}{r}\Big{)}-\frac{1}{r^{2}}=8\pi\Big{(}-\rho-\frac{Q^{2}}{8\pi r^{4}}\Big{)}\] \[e^{-\xi}\Big{(}\frac{1}{r^{2}}+\frac{\mu^{\prime}}{r}\Big{)}-\frac{1}{r^{2}}=8\pi\Big{(}-\rho-\frac{Q^{2}}{8\pi r^{4}}\Big{)}\] \[\frac{e^{-\xi}}{2}\Big{(}\mu^{\prime\prime}+\frac{\mu^{\prime 2}}{2}+\frac{\mu^{\prime}-\xi^{\prime}}{r}-\frac{\mu^{\prime}\xi^{\prime}}{2}\Big{)}=8\pi\Big{(}P+\frac{Q^{2}}{8\pi r^{4}}\Big{)}\] \[\frac{e^{-\xi}}{2}\Big{(}\mu^{\prime\prime}+\frac{\mu^{\prime 2}}{2}+\frac{\mu^{\prime}-\xi^{\prime}}{r}-\frac{\mu^{\prime}\xi^{\prime}}{2}\Big{)}=8\pi\Big{(}P+\frac{Q^{2}}{8\pi r^{4}}\Big{)}. \tag{38}\]
Here prime (\({}^{\prime}\)) and double prime (\({}^{\prime\prime}\)) denote the first and second derivatives with respect to \(r\). Upon rearrangement, the first and third equations respectively can be recast as
\[e^{-\xi}\Big{(}\frac{1}{r^{2}}-\frac{\xi^{{}^{\prime}}}{r}\Big{)}-\frac{1}{r^{ 2}}+\frac{Q^{2}}{r^{4}}=-8\pi\rho \tag{39}\]
\[\frac{e^{-\xi}}{2}\Big{(}\mu^{\prime\prime}+\frac{\mu^{\prime 2}}{2}+\frac{\mu^{\prime}-\xi^{\prime}}{r}-\frac{\mu^{\prime}\xi^{\prime}}{2}\Big{)}-\frac{Q^{2}}{r^{4}}=8\pi P. \tag{40}\]
Taking the ratio of eq.(40) and eq.(39), we obtain
\[\frac{e^{-\xi}}{2}\Big{(}\mu^{\prime\prime}+\frac{\mu^{\prime 2}}{2}+\frac{\mu^{\prime}-\xi^{\prime}}{r}-\frac{\mu^{\prime}\xi^{\prime}}{2}\Big{)}-\frac{Q^{2}}{r^{4}}=-\frac{1}{2}\Bigg{[}e^{-\xi}\Big{(}\frac{1}{r^{2}}-\frac{\xi^{\prime}}{r}\Big{)}-\frac{1}{r^{2}}+\frac{Q^{2}}{r^{4}}\Bigg{]}. \tag{41}\]
Assuming \(\mu=-\xi=\ln(1-K)\), where \(K\equiv K(r)\), the above equation simplifies to
\[r^{2}K^{{}^{\prime\prime}}+3rK^{{}^{\prime}}+K+\frac{Q^{2}}{r^{2}}=0. \tag{42}\]
The solution of the above equation is
\[K(r)=\frac{r_{g}}{r}-\frac{Q^{2}}{r^{2}}-\frac{\chi}{r}ln\Big{(}\frac{r}{|\chi|} \Big{)} \tag{43}\]
where \(r_{g}\) and \(\chi\) are integration constants. To obtain the value of \(r_{g}\), we set \(Q=0\) and \(\chi=0\). In this limit, we can use the weak field approximation to obtain \(r_{g}=2M\). Therefore the lapse function takes the form
\[f(r)=e^{\mu}=e^{-\xi}=e^{ln(1-K)}=1-K=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}+\frac{ \chi}{r}ln\Big{(}\frac{r}{|\chi|}\Big{)}. \tag{44}\]
In the lapse function, \(M\) is the black hole mass, \(Q\) is the charge due to the electromagnetic field and \(\chi\) is the dark matter parameter related to the energy density \(\rho\) of PFDM as \(\rho=\frac{\chi}{8\pi r^{3}}\)[71]. The expression of \(\rho\) can be obtained by replacing eq.(44) in the first equation of eq.(38). \(\chi\) gives the mass of PFDM enclosed within radius \(r\).
For our calculational purposes, we rescale all parameters by the black hole mass \(M\) which is equivalent to setting \(M\) to unity. So in all discussions and results, we omit \(M\) but keep the mass \(M\) in the plot labels. The _event horizon_ of the black hole can be obtained by using the condition
\[f(r)\Big{|}_{r=r_{+}}=0\ \ \Rightarrow\ \ r_{+}^{2}-2r_{+}+Q^{2}+\chi r_{+}ln \Big{(}\frac{r_{+}}{|\chi|}\Big{)}=0. \tag{45}\]
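Since eq.(45) is transcendental, \(r_{+}\) has to be found numerically; a simple root-finding sketch is given below (units \(M=1\); the values of \(Q\) and \(\chi\) are illustrative).

```python
import math

def f(r, Q=0.5, chi=0.3):
    # Lapse function of eq.(44) in units M = 1
    return 1.0 - 2.0 / r + (Q / r) ** 2 + (chi / r) * math.log(r / abs(chi))

def outer_horizon(Q=0.5, chi=0.3, r_max=10.0, n=10000, tol=1e-12):
    # Locate the outermost sign change of f on (0, r_max] and refine it by bisection.
    rs = [r_max * (k + 1) / n for k in range(n)]
    bracket = None
    for a, b in zip(rs, rs[1:]):
        if f(a, Q, chi) * f(b, Q, chi) <= 0:
            bracket = (a, b)                  # keep the last (outermost) sign change
    if bracket is None:
        raise ValueError("no horizon found in the scanned range")
    lo, hi = bracket
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo, Q, chi) * f(mid, Q, chi) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(outer_horizon())                        # r_+ for Q = 0.5, chi = 0.3
```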
The concerned system is quite unique. The black hole size (event horizon) is dictated by the PFDM parameter \(\chi\). Analyzing the lapse function, we found that \(r_{+}\) initially decreases with an increase in \(\chi\), reaches a minimum, and then starts to increase. This nature can be observed in the left plot in Fig. 1.
The reason for such an observation can be explained by supposing the system to be composed of two masses, one coming from the black hole, \(M\), and the other coming from PFDM, \(M_{0}\)[67]. Below and up to \(\chi_{c}\) (\(\chi\leq\chi_{c}\)), \(M_{0}\) opposes the effect of the black hole mass \(M\), and thereby the event horizon radius \(r_{+}\) decreases up to \(\chi_{c}\). But beyond \(\chi_{c}\) (\(\chi>\chi_{c}\)), the total mass of the system (BH + PFDM) is dictated by \(M_{0}\) and hence the event horizon of the black hole-dark matter system increases.
The minimum of the event horizon can be obtained using the condition
\[\left.\frac{\partial r_{+}}{\partial\chi}\right|_{\chi_{c}}=0\ \ ;\ \ \ \left.\frac{\partial^{2}r_{+}}{\partial\chi^{2}}\right|_{\chi_{c}}>0 \tag{46}\]
where \(\chi_{c}\) is the value of \(\chi\) corresponding to the minimum of the event horizon radius \((r_{+})_{c}\). Using eq.(s) (45), (46), we obtain
\[\chi_{c}=\frac{1}{1+e}\Bigg{(}1+\sqrt{1-Q^{2}\Big{(}1-\frac{1}{e}\Big{)}} \Bigg{)}\ ;\ (r_{+})_{c}=\frac{e}{1+e}\Bigg{(}1+\sqrt{1-Q^{2}\Big{(}1-\frac{1}{e} \Big{)}}\Bigg{)}. \tag{47}\]
Figure 1: Plots showing variation of the event horizon (\(r_{+}\)) and temperature \(T_{h}\) of the black hole with variation in PFDM parameter \(\chi\). The plots are shown for \(\frac{Q}{M}=0.0\) (black) and \(\frac{Q}{M}=0.5\) (dotted black).
In the limit of \(Q\to 0\), we get \(\chi_{c}=\frac{2}{1+e}\)[41], [67]. In order to derive eq.(47), we differentiate eq.(45) with respect to \(\chi\). Then we use eq.(46) which gives the relation
\[(r_{+})_{c}=\chi_{c}e. \tag{48}\]
Using the above relation in eq.(45) and eliminating \((r_{+})_{c}\), we obtain \(\chi_{c}\). Then using \(\chi_{c}\) in eq.(48) we get \((r_{+})_{c}\).
The expression of the temperature of the black hole takes the form
\[T_{h} = \frac{f^{\prime}(r_{+})}{4\pi} \tag{49}\] \[= \frac{1}{4\pi}\Bigg{[}\frac{2+\chi}{r_{+}^{2}}-\frac{2Q^{2}}{r_{+ }^{3}}-\frac{\chi}{r_{+}^{2}}ln\Big{(}\frac{r_{+}}{|\chi|}\Big{)}\Bigg{]}\] \[= \frac{1}{4\pi r_{+}^{3}}\Big{[}r_{+}^{2}+\chi r_{+}-Q^{2}\Big{]}\.\]
Setting \(\chi=0\), we get back the results of the Reissner-Nordstrom black hole [32]
\[T_{h}=\frac{1}{4\pi}\Bigg{[}\frac{2}{r_{+}^{2}}-\frac{2Q^{2}}{r_{+}^{3}} \Bigg{]}. \tag{50}\]
In Fig. 1, we have shown the variation of the black hole event horizon (\(r_{+}\)) (left) and black hole temperature (\(T_{h}\)) (right) with increase in the PFDM parameter \(\chi\). The plots are shown for \(Q=0.0\) (black dotted) and \(Q=0.5\) (black).
The nature of variation of the left plot in Fig. 1 has been explained above. Also, we found that the event horizon has a lower value in the presence of a black hole charge \(Q\). The reason is that the presence of charge \(Q\) reduces the effective mass of the black hole system, which gets reflected in the corresponding reduction in the radius of the event horizon (\(r_{+}\)).
The right plot in Fig. 1 shows the variation of the black hole temperature (\(T_{h}\)) with increase in PFDM parameter \(\chi\). We find that \(T_{h}\) initially increases with an increment in \(\chi\), reaches a maximum, and then starts to decrease. The reason for such an observation is that the black hole temperature gets reduced with an increase in mass. In our system, the effective mass of the black hole system (BH + PFDM) initially decreases with \(\chi\), and thus the temperature increases. But after a critical value, the effective mass of the system increases, resulting in a decrease of the black hole temperature. Also, we find that in the presence of charge \(Q\), the black hole temperature increases. The reason is that the effective mass of the black hole system is reduced by the presence of charge \(Q\), which results in an increase in the black hole temperature.
### Time-like geodesics
The radius of the circular time-like geodesics is designated as \(r_{0}\) and the corresponding energy and angular momentum per unit mass as \(E_{0}\) and \(L_{0}\). The lapse function and its derivative takes the form
\[f(r)=1-\frac{2}{r}+\Big{(}\frac{Q}{r}\Big{)}^{2}+\frac{\chi}{r}ln\Big{(} \frac{r}{|\chi|}\Big{)}\ \ ;\ \ f^{\prime}(r)=\frac{2+\chi}{r^{2}}-\frac{2Q^{2}}{r^{3}}-\frac{\chi}{r^{2}}ln \Big{(}\frac{r}{|\chi|}\Big{)}. \tag{51}\]
Using the expressions of \(f(r)\) and \(f^{\prime}(r)\) in eq.(11), we get \(E_{0}\) and \(L_{0}\) to be
\[E_{0}^{2}=\frac{2\Bigg{[}r_{0}^{2}-2r_{0}+Q^{2}+\chi r_{0}ln\Big{(}\frac{r_{0} }{|\chi|}\Big{)}\Bigg{]}^{2}}{r_{0}^{2}\Bigg{[}2r_{0}^{2}-(\chi+6)r_{0}+4Q^{2} +3\chi r_{0}ln\Big{(}\frac{r_{0}}{|\chi|}\Big{)}\Bigg{]}}\ \ ;\ \ L_{0}^{2}=\frac{r_{0}^{2}\Bigg{[} \Big{(}\chi+2\Big{)}r_{0}-2Q^{2}-\chi r_{0}ln\Big{(}\frac{r_{0}}{|\chi|}\Big{)} \Bigg{]}}{\Bigg{[}2r_{0}^{2}-(\chi+6)r_{0}+4Q^{2}+3\chi r_{0}ln\Big{(}\frac{ r_{0}}{|\chi|}\Big{)}\Bigg{]}}. \tag{52}\]
For the energy \(E_{0}\) and angular momentum \(L_{0}\) to be real and finite, we must simultaneously have
\[\Bigg{[}\Big{(}\chi+2\Big{)}r_{0}-2Q^{2}-\chi r_{0}ln\Big{(}\frac{r_{0}}{| \chi|}\Big{)}\Bigg{]}\equiv A>0 \tag{53}\]
\[\left[2r_{0}^{2}-(\chi+6)r_{0}+4Q^{2}+3\chi r_{0}ln\Big{(}\frac{r_{0}}{|\chi|} \Big{)}\right]\equiv B>0. \tag{54}\]
Also we define
\[\left[r_{0}^{2}-2r_{0}+Q^{2}+\chi r_{0}ln\Big{(}\frac{r_{0}}{|\chi|}\Big{)} \right]\equiv C \tag{55}\]
which gives
\[E_{0}=\sqrt{\frac{2}{B}}\frac{C}{r_{0}}\ \ ;\ \ L_{0}=r_{0}\sqrt{\frac{A}{B}}\ \ \Rightarrow\frac{L_{0}}{E_{0}}=\frac{r_{0}^{2}}{C}\sqrt{\frac{A}{2}}. \tag{56}\]
Eq.(s) (53), (54) are the conditions for the existence of particles with real and finite values of energy \(E_{0}\) and angular momentum \(L_{0}\).
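The admissible \((\chi,r_{0})\) region defined by these two inequalities can be scanned numerically; a rough sketch of such a scan (units \(M=1\), with illustrative grid ranges mimicking Fig. 2) is shown below.

```python
import math

def A(r0, chi, Q):
    return (chi + 2) * r0 - 2 * Q ** 2 - chi * r0 * math.log(r0 / abs(chi))                      # eq.(53)

def B(r0, chi, Q):
    return 2 * r0 ** 2 - (chi + 6) * r0 + 4 * Q ** 2 + 3 * chi * r0 * math.log(r0 / abs(chi))    # eq.(54)

Q = 0.5
admissible = [(chi, r0)
              for chi in [0.1 * k for k in range(1, 21)]        # chi in (0, 2]
              for r0 in [0.5 * k for k in range(2, 41)]         # r0 in [1, 20]
              if A(r0, chi, Q) > 0 and B(r0, chi, Q) > 0]
print(len(admissible), "admissible (chi, r0) grid points for Q =", Q)
```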
The plot in Fig.2 gives us the region of existence of real and finite \(E_{0}\) and \(L_{0}\) compatible with the possible values of \(\chi\) and \(r_{0}\). The compatible range of \(\chi\) and \(r_{0}\) are shown for \(Q=0.0\) (blue), \(0.5\) (red), and \(1.0\) (magenta). We find that with an increase in charge \(Q\) the possible region of existence of circular orbits having finite values of \(E\) and \(L\) increases. The domain for \(Q=0.5\) covers the domain for \(Q=0\) along with some extra domain and the compatible region of \(\chi\) and \(r_{0}\) for \(Q=1.0\) covers the region of both \(Q=0.0\) and \(Q=0.5\) along with some additional region. We find that with an increase in charge \(Q\), the possibility of the existence of orbits closer to the black hole increases. Also, orbits with finite values of \(E_{0}\) and \(L_{0}\) are more probable to be found for lower values of PFDM parameter \(\chi\) with increasing charge \(Q\).
Next, we try to find the conditions of _stability/instability_ of the circular geodesics and possible conditions for the observability of those instabilities. In order to analyse the stability we compute the Lyapunov exponent (\(\lambda\)). As mentioned earlier, there are two Lyapunov exponents in the case of time-like geodesics, one corresponding to co-ordinate time (\(t\)), \(\lambda_{0c}\) and the other to the proper time (\(\tau\)), \(\lambda_{0p}\).
In order to compute the co-ordinate time Lyapunov exponent \(\lambda_{0c}\), we use the expression in eq.(31). Here we replace the potential \(V_{r}\) from eq.(7) with the lapse function \(f(r)\) given by eq.(44). Also, we use the geodesic equation for \(t\) as obtained in eq.(3). Since the Lyapunov exponent is evaluated at the radius \(r_{0}\) for the circular orbits, so we replace \(r\) by \(r_{0}\) and the corresponding values for \(E\) and \(L\) by \(E_{0}\) and \(L_{0}\) respectively.
Figure 2: Parametric plot in \((\frac{\chi}{M}-\frac{r_{0}}{M})\) plane for the real and finite \(E_{0}\) and \(L_{0}\). The plots are shown for black hole charge \(\frac{Q}{M}\)=0 (blue), 0.5 (red), and 1.0 (magenta).
Finally, we use the expressions of \(E_{0}\) and \(L_{0}\) as obtained in eq.(52) to obtain the expression of \(\lambda_{0c}\) as
\[\lambda_{0c} = \frac{1}{\sqrt{2r_{0}^{6}}}\Bigg{[}r_{0}^{3}\Big{(}\chi ln(\frac{r _{0}}{|\chi|})-2\Big{)}+r_{0}^{2}\Bigg{(}2\chi^{2}+8\chi+12-12\chi ln\left( \frac{r_{0}}{|\chi|}\right) \tag{57}\] \[- 4\chi^{2}(\frac{r_{0}}{|\chi|})+3\chi^{2}\Big{(}ln(\frac{r_{0}} {|\chi|})\Big{)}^{2}\Bigg{)}+r_{0}\Big{(}9\chi Q^{2}ln\left(\frac{r_{0}}{|\chi |}\right)-18Q^{2}-8\chi Q^{2}\Big{)}+8Q^{4}\Bigg{]}^{\frac{1}{2}}\] \[= \sqrt{\frac{\Delta}{2r_{0}^{6}}}\.\]
In the case of proper time Lyapunov exponent \(\lambda_{0p}\) we use the expression in eq.(30). Then we replace the potential \(V_{r}\) using eq.(7). We use the lapse function \(f(r)\) as given in eq.(44). Since the expression for \(\lambda_{0p}\) is evaluated at the circular radius \(r_{0}\), so we replace the \(E\) and \(L\) by \(E_{0}\) and \(L_{0}\) respectively and put their values as calculated in eq.(52) to obtain
\[\lambda_{0p} = \frac{1}{\sqrt{r_{0}^{4}B}}\Bigg{[}r_{0}^{3}\Big{(}\chi ln(\frac {r_{0}}{|\chi|})-2\Big{)}+r_{0}^{2}\Bigg{(}2\chi^{2}+8\chi+12-12\chi ln\left( \frac{r_{0}}{|\chi|}\right) \tag{58}\] \[- 4\chi^{2}\left(\frac{r_{0}}{|\chi|}\right)+3\chi^{2}\Big{(}ln \left(\frac{r_{0}}{|\chi|}\right)\Big{)}^{2}\Bigg{)}+r_{0}\Big{(}9\chi Q^{2} ln\left(\frac{r_{0}}{|\chi|}\right)-18Q^{2}-8\chi Q^{2}\Big{)}+8Q^{4}\Bigg{]}^{ \frac{1}{2}}\] \[= \sqrt{\frac{\Delta}{r_{0}^{4}B}}\.\]
Previously we mentioned that \(B>0\) is a required condition for the existence of particles with finite values of \(E\) and \(L\). Together with those conditions, we also require \(\Delta<0\), which results in imaginary \(\lambda\) and corresponds to the stability of the geodesic [37]. On the other hand for \(\Delta>0\), we have a real \(\lambda\) which corresponds to unstable geodesics. And the condition of \(\Delta=0\) gives the marginal circular geodesic or the innermost stable circular geodesic.
The plot in Fig.3 shows the region of existence of stable orbits of radius \(r_{0}\) compatible with the values of \(\chi\). The region increases with an increase in \(Q\), thereby increasing the possible compatible range of \(\chi\) and \(r_{0}\). The plots are shown for \(Q=0.0\) (blue), \(0.5\) (red), and \(1.0\) (magenta). The region for \(Q=0.5\) (red)
Figure 3: Parametric plot in \((\frac{\chi}{M}-\frac{r_{0}}{M})\) plane for the existence of stable orbit. The plots are shown for black hole charge \(\frac{Q}{M}\)=0 (blue), \(0.5\) (red), and \(1.0\) (magenta).
covers the region accessible in case of \(Q=0.0\) (blue) and the domain in case of \(Q=1.0\) (magenta) covers the domain for both \(Q=0.0\) (blue) and \(Q=0.5\) (red). We observe an increase in the range of accessible stable orbits of radius \(r_{0}\). The stable orbits get closer to the black hole with an increase in charge \(Q\). Due to the increase in charge \(Q\), the event horizon and thereby the size of the black hole decreases. Hence the possibility of the existence of orbits closer to the black hole increases. These effectively led to the existence of stable orbits in the region closer to the black hole.
In order to analyse the observability of gravitational signals, we need to determine the critical exponent \(\gamma\). As defined in eq.(33) \(\gamma\) depends on the angular velocity \(\Omega\) and Lyapunov exponent \(\lambda\). We first calculate the angular velocity \(\Omega\) which takes the form
\[\Omega_{0}=\frac{d\phi}{dt}\Big{|}_{r=r_{0}}=\sqrt{\frac{f^{\prime}(r_{0})}{2 r_{0}}}=\sqrt{\frac{\Big{(}\chi+2\Big{)}r_{0}-2Q^{2}-\chi r_{0}ln\Big{(}\frac{r_{0}}{| \chi|}\Big{)}}{2r_{0}^{4}}}=\sqrt{\frac{A}{2r_{0}^{4}}}. \tag{59}\]
Again the critical exponent \(\gamma\) is related to two different time periods whose values give us an idea of the observability of gravitational wave signals [74]. The critical exponent \(\gamma_{0c}\) (for co-ordinate time) and \(\gamma_{0p}\) (for proper time) takes the form
\[\gamma_{0c}=\frac{T_{\lambda_{0c}}}{T_{\Omega}}=\frac{1}{2\pi}\frac{\Omega_{0 }}{\lambda_{0c}}=\frac{1}{2\pi}\sqrt{\frac{r_{0}^{2}A}{\Delta}}\ \ ;\ \ \gamma_{0p}=\frac{T_{\lambda_{0p}}}{T_{\Omega}}=\frac{1}{2\pi}\frac{\Omega_{0 }}{\lambda_{0p}}=\frac{1}{2\pi}\sqrt{\frac{AB}{2\Delta}}. \tag{60}\]
From all our previous results we have \(A,B,C>0\). Additionally, \(\Delta>0\) results in positive values of the Lyapunov exponents and thereby the existence of unstable circular geodesics. Also in order to have observational relevance, \(\gamma\) must be less than 1, leading to \(T_{\lambda}<T_{\Omega}\)[81]. We can therefore find the parameter space of \((r_{0},Q,\chi)\) which corresponds to such a condition.
The plots in Fig.4 show the parameter space corresponding to the detection of gravitational waves. The plots shown, are for charge \(Q=0.0\) (blue), \(0.5\) (red), and \(1.0\) (magenta). The plots are for coordinate time (left) and proper time (right). We find that the region and thereby the possibility of detection decreases with an increase in black hole charge \(Q\). We find that in the case of coordinate time, the domain for \(Q=0.5\) (red) almost covers the domain for \(Q=0.0\) (blue) with some regions left out. On the other hand in the case of proper time, we find that \(Q=0.5\) (red) almost covers the region covering the entire domain of \(Q=0.0\) (blue) along with some extra region for smaller \(r_{0}\) and leaving out some regions of higher values of \(r_{0}\). Also, we find that for \(Q=1.0\) (magenta) the region covered is a portion of the domain covered for \(Q=0.0\) (blue),
Figure 4: Parametric plot in \((\frac{\chi}{M}-\frac{r_{0}}{M})\) plane for possibility of detection of gravitational waves. The coordinate time critical exponent is on the left whereas the proper time exponent is on the right. The plots are shown for black hole charge \(\frac{Q}{M}\)=0 (blue), 0.5 (red), and 1.0 (magenta).
\(Q=0.5\) (red) along with some additional regions and some left-out regions as well. So, effectively we find that with an increase in charge \(Q\), the smaller values of \(r_{0}\) become more accessible. Also, we find that the detectability of signals and thereby the parameter space of \(\chi-r_{0}\) is larger in the case of coordinate time than that in proper time.
The coordinate time is more relevant, since it is the time measured by an observer, whereas the proper time is the time as measured in the particle's frame. So, in the case of a coordinate time critical exponent, we find that the entire range of \(\chi\) is accessible, that is the detection of gravitational signals does not put any constraint on \(\chi\). On the other hand, we find that unstable orbits exist up to a certain region. So, the possibility of detecting gravitational signals constrains \(r_{0}\). Also, we find that the possible values of \(r_{0}\) decrease with \(Q\). The reason being with an increase in black hole charge \(Q\), the event horizon and thereby the size of the black hole decreases. Thus, the possibility of the existence of all kinds of orbits closer to the black hole increases. With the increase in charge \(Q\), the region of existence of unstable orbits \(r_{0}\) decreases.
### Null geodesics
Here we are interested in analyzing the geodesics of null particles (photons) for which \(\beta=0\). The radial geodesic equation is of the form
\[\dot{r}^{2}=E^{2}-f(r)\frac{L^{2}}{r^{2}}=E^{2}-\left(1-\frac{2M}{r}+\Big{(} \frac{Q}{r}\Big{)}^{2}+\frac{\chi}{r}ln\Big{(}\frac{r}{|\chi|}\Big{)}\right) \frac{L^{2}}{r^{2}}=\widetilde{V}_{r}. \tag{61}\]
Using the condition for null circular geodesics, that is
\[\dot{r}^{2}\Bigg{|}_{r=r_{p}}=(\dot{r}^{2})^{\prime}\Bigg{|}_{r=r_{p}}=0 \tag{62}\]
we get two constraint equations.
The second condition in eq.(62) gives
\[r_{p}f^{\prime}(r_{p})-2f(r_{p})=0 \Rightarrow 2r_{p}^{2}-(\chi+6)r_{p}+4Q^{2}+3\chi r_{p}ln\Big{(}\frac{r_{p}}{ |\chi|}\Big{)}=0. \tag{63}\]
This equation can be solved to determine the photon sphere radius \(r_{p}\). But since the above equation is a transcendental equation, it cannot be solved analytically and has to be solved numerically to get \(r_{p}=r_{p}(Q,\chi)\). The first condition in eq.(62) results in a constraint on \(L\) and \(E\) in terms of \(r_{p}\) as
\[\frac{E_{p}}{L_{p}}=\sqrt{\frac{f(r_{p})}{r_{p}^{2}}}=\sqrt{\frac{r_{p}^{2}-2r_{p}+Q^{2}+\chi r_{p}ln\Big{(}\frac{r_{p}}{|\chi|}\Big{)}}{r_{p}^{4}}}=\sqrt{\frac{r_{p}^{2}+\chi r_{p}-Q^{2}}{3r_{p}^{4}}}. \tag{64}\]
The last expression in eq.(64) is obtained using eq.(63) and replacing the logarithmic term. Also, \(E_{p}\) and \(L_{p}\) are energy and angular momentum of null particles moving in circular orbits of radius \(r_{p}\).
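A simple numerical solution of the transcendental condition eq.(63) is sketched below (units \(M=1\)); the bracket and the values of \(Q\) and \(\chi\) are illustrative.

```python
import math

def photon_sphere_eq(r, Q=0.5, chi=0.3):
    # Left-hand side of eq.(63)
    return 2 * r ** 2 - (chi + 6) * r + 4 * Q ** 2 + 3 * chi * r * math.log(r / abs(chi))

def photon_sphere_radius(Q=0.5, chi=0.3, lo=1.0, hi=5.0, tol=1e-12):
    # Bisection on a bracket chosen outside the event horizon.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if photon_sphere_eq(lo, Q, chi) * photon_sphere_eq(mid, Q, chi) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(photon_sphere_radius())   # r_p for Q = 0.5, chi = 0.3
```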
In Fig. 5, we have shown the variation of photon orbit radius \(r_{p}\) with \(\chi\). The plots are shown for \(Q=0.0\) (black dashed) and \(0.5\) (black). We find that for fixed values of charge \(Q\), the photon sphere radius \(r_{p}\) initially decreases with \(\chi\), reaches a minimum, and then again starts to increase. Also, we find that for a fixed value of \(\chi\), \(r_{p}\) decreases with an increase in charge \(Q\).
The reason for the above observation can be understood from the location and process of formation of the photon sphere. The photon sphere is the sphere composed of unstable photons from which we receive light and is the closest anything can reach a black hole and escape it. The size of the photon sphere is dictated by the size of the event horizon of the black hole which for Schwarzschild black hole is \(r_{p}=3M=\frac{3}{2}r_{+}=1.5r_{+}\) which can be obtained from eq.(63) by setting \(\chi=Q=0\). So we find that \(r_{p}\) indirectly gives the size of \(r_{+}\) (the event horizon) and thereby is expected to have a similar dependence on \(\chi\) as that of \(r_{+}\) (discussed earlier) which is reflected in the above Figure.
The angular velocity \(\Omega_{p}\) for null circular geodesics takes the form
\[\Omega_{p}=\frac{d\phi}{dt}\bigg{|}_{r=r_{p}}=\sqrt{\frac{f(r_{p})}{r_{p}^{2}} }=\frac{E_{p}}{L_{p}}=\sqrt{\frac{r_{p}^{2}+\chi r_{p}-Q^{2}}{3r_{p}^{4}}}. \tag{65}\]
The above expression can be obtained using the geodesics of \(\phi\) and \(t\) (eq.(3)) and then using eq.(64). The expression \(\frac{L_{p}}{E_{p}}=D\) is defined as the impact parameter which is related to the angular velocity of photons moving in circular null geodesics as shown in eq.(65).
In the case of null geodesics, only the co-ordinate time Lyapunov exponent \(\lambda_{p}\) can be defined which takes the form
\[\lambda_{p} = \sqrt{\frac{\widetilde{V}^{\prime\prime}_{r}}{2t^{2}}}\bigg{|}_{ r=r_{p}} \tag{66}\] \[= \frac{1}{\sqrt{4r_{p}^{6}}}\Bigg{[}\Bigg{(}(\chi+2)r_{p}-2Q^{2}- \chi r_{p}ln\Big{(}\frac{r_{p}}{|\chi|}\Big{)}\Bigg{)}\Bigg{(}(6+4\chi)r_{p}- 8Q^{2}-3\chi r_{p}ln\Big{(}\frac{r_{p}}{|\chi|}\Big{)}\Bigg{)}\Bigg{]}^{\frac {1}{2}}\] \[= \sqrt{\frac{PR}{4r_{p}^{6}}}\]
with \(P\) and \(R\) taking the form
\[P=(\chi+2)r_{p}-2Q^{2}-\chi r_{p}ln\Big{(}\frac{r_{p}}{|\chi|}\Big{)}\ \ ;\ \ R=(6+4\chi)r_{p}-8Q^{2}-3\chi r_{p}ln\Big{(}\frac{r_{p}}{|\chi|}\Big{)}. \tag{67}\]
The above expression for \(\lambda_{p}\) can be obtained using the potential \(\widetilde{V}_{r}\) as in eq.(61) and \(t\) geodesic (eq.(3)). Then we use eq. (s)(63), (64) to obtain eq.(66).
For stable geodesics, we need \(\lambda_{p}\) to be imaginary. Hence we must have either \(P>0\) and \(R<0\) or \(P<0\) and \(R>0\). Again for unstable geodesics, we must have real \(\lambda_{p}\) which corresponds to \(P>0\) and \(R>0\) or \(P<0\) and \(R<0\).
The critical exponent \(\gamma\) takes the form
\[\gamma=\frac{1}{2\pi}\frac{\Omega_{p}}{\lambda_{p}}=\frac{1}{2\pi}\sqrt{\frac{4r_ {p}^{2}\Big{(}r_{p}^{2}+\chi r_{p}-Q^{2}\Big{)}}{3PR}} \tag{68}\]
which can be obtained by using the expressions for \(\Omega_{p}\) and \(\lambda_{p}\) from eq.(s)(65), (66) respectively. Here the suffix p does not correspond to proper time but corresponds to photon sphere radius \(r_{p}\) at which all quantities are evaluated. The condition \(\gamma<1\) leads to the possibility of observation of gravitational signals [81].
In Fig. 6, we have shown the parametric plot of \(\chi\) and \(r_{p}\) compatible with the detection of gravitational waves. We observe that the presence of charge \(Q\) affects the compatible range of \(\chi\) and \(r_{p}\). The plots are shown for \(Q=0\) (blue) and \(Q=0.5\) (red). We find that the possible domain for \(Q=0.5\) (red) leaves out a small portion of the domain available for \(Q=0.0\) (blue). With the increase in black hole charge \(Q\), the possible values of \(r_{p}\) decrease. This is because the black hole charge reduces the size of the event horizon \(r_{+}\), which in effect reduces the possible values of unstable null geodesics. Also, we observe that with an increase in \(\chi\), the range of possible values of \(r_{p}\) decreases, reaching a minimum, and then again starts to increase. This can be explained by the behaviour of \(r_{+}\), which is reflected in the nature of \(r_{p}\).
## 4 Perturbation by a scalar field on the black hole background
In this section, we are interested in studying the perturbation of a black hole spacetime due to the presence of a massless scalar field \(\Phi\). The equation of motion of the perturbing field \(\Phi\) in the black hole background takes the form
\[\boxed{\nabla_{\mu}\nabla^{\mu}\Phi=\frac{1}{\sqrt{-g}}\partial_{\mu}\Bigg{(}\sqrt{-g}g^{\mu\nu}\partial_{\nu}\Phi\Bigg{)}=0}\,. \tag{69}\]
Since we are interested in perturbations around a spherically symmetric metric, the ansatz solution \(\Phi\) can be expressed in terms of spherical harmonics \(Y_{lm}(\theta,\phi)\)[82]. The solution can be assumed to have the form [32], [41]
\[\Phi=\sum_{l,m}e^{-i\omega t}Y_{lm}(\theta,\phi)\frac{R(r)}{r}. \tag{70}\]
Figure 6: Parametric plot in \((\frac{\chi}{M}-\frac{r_{p}}{M})\) plane for possible detection of gravitational waves. The plots are shown for black hole charge \(\frac{Q}{M}\)=0 (blue) and 0.5 (red).
Replacing the above ansatz in eq.(69) and using the equation of the _associated Legendre polynomial_[82], we get a radial equation of the form
\[\frac{1}{r^{2}}\frac{d}{dr}\Bigg{[}r^{2}f(r)\frac{d}{dr}\Bigg{(}\frac{R(r)}{r} \Bigg{)}\Bigg{]}+\Bigg{(}\frac{\omega^{2}}{f(r)}-\frac{l(l+1)}{r^{2}}\Bigg{)} \frac{R(r)}{r}=0. \tag{71}\]
Using tortoise co-ordinate \(r_{*}\) which is related to \(r\) as \(\frac{dr}{dr_{*}}=f(r)\), we get the radial equation as
\[\frac{d^{2}R(r_{*})}{dr_{*}^{2}}+\Big{(}\omega^{2}-V(r_{*})\Big{)}R(r_{*})=0 \tag{72}\]
with \(V(r_{*})\) giving the potential as
\[V(r_{*})=f(r)\Bigg{(}\frac{l(l+1)}{r^{2}}+\frac{f^{\prime}(r)}{r}\Bigg{)}\ \ ;\ \ \frac{dr}{dr_{*}}=f(r)\Rightarrow r_{*}=\int\frac{dr}{f(r)}. \tag{73}\]
The above equation can be rewritten in the form
\[\frac{d^{2}\psi(x)}{dx^{2}}+Q(x)\psi(x)=0 \tag{74}\]
with \(r_{*}\to x\), \(\Big{(}\omega^{2}-V(r_{*})\Big{)}\to Q(x)\) and \(R(r_{*})\rightarrow\psi(x)\). The above equation is Schrödinger-like and can be easily solved for \(Q(x)=\)constant. In the present case, however, \(Q(x)\) is a function of \(x\), and the equation can be solved only for some specific forms of \(Q(x)\) (and thereby of \(V(r_{*})\)). The formulation has been detailed by Schutz and Will [83], who used the semi-analytic WKB approximation to solve the problem.
In general, the WKB method is valid in the case of slowly varying potentials [84]. So our first criterion is that the potential, or rather \(Q(x)\), must be nearly constant. Moreover, in typical problems using the \(WKB\) approximation one has incident, reflected, and transmitted amplitudes, with the incident and reflected ones being comparable.
In the case of black hole perturbations, however, there is no such incident amplitude, and it is instead the reflected and transmitted amplitudes that are comparable. In such cases, the WKB approximation can be made applicable only if
\[Q(x)\Bigg{|}_{x=x_{0}}=\frac{dQ(x)}{dx}\Bigg{|}_{x=x_{0}}=0. \tag{75}\]
The above condition is similar to the condition for circular geodesics. However, the two turning points then lie too close together for the standard \(WKB\) matching to be applied at each of them separately. The only possibility is to match the solutions across the two boundaries simultaneously. Also, the potential is assumed to be parabolic in nature1.
Footnote 1: In general, most potentials are parabolic close to the maxima.
The solutions are matched across the different regions by using Taylor expansion of the potential [84].
We expand \(Q(x)\) about the maxima (\(x_{0}\)) and find a solution in the region between the two turning points. Since the solution is continuous, we can find the approximate solution in the two regions which are constant at far-off regions, since \(Q(x)\rightarrow\omega(=\) constant) at far regions. This results in a relation termed as _quasi-normal mode condition_ or QNM condition in literature since it leads to the existence of quasi-normal modes.
The condition takes the form [83]
\[\frac{Q(x_{0})}{\sqrt{2Q^{{}^{\prime\prime}}(x_{0})}}=-i\Big{(}n+\frac{1}{2} \Big{)}. \tag{76}\]
A short derivation of this result is presented in the Appendix.
Using the above condition in eq.(72), we get
\[\frac{\omega^{2}-V(\tilde{r}_{0})}{\sqrt{-2V^{{}^{\prime\prime}}(\tilde{r}_{0 })}}=-i\Big{(}n+\frac{1}{2}\Big{)}\ \ \Rightarrow\ \ \omega^{2}=V(\tilde{r}_{0})-i\Big{(}n+\frac{1}{2}\Big{)}\sqrt{-2V^{{}^{ \prime\prime}}(\tilde{r}_{0})}=A-iB \tag{77}\]
where \(\tilde{r}_{0}\) corresponds to the extrema of the potential with \(A=V(\tilde{r}_{0})\) and \(B=\Big{(}n+\frac{1}{2}\Big{)}\sqrt{-2V^{\prime\prime}(\tilde{r}_{0})}\). The _exact_ expression of the frequency takes the form
\[\omega=\omega_{R}-i\omega_{I}\ \ ;\ \ \omega_{R}=\sqrt{\frac{A+\sqrt{A^{2}+B^{2}}}{ 2}}\ ;\ \omega_{I}=\frac{B}{\sqrt{2\Big{(}A+\sqrt{A^{2}+B^{2}}\Big{)}}}. \tag{78}\]
The condition for determining the extremum point \(\tilde{r}_{0}\) of the potential takes the form
\[\frac{l(l+1)}{\tilde{r}_{0}^{3}}\Big{(}\tilde{r}_{0}f^{\prime}(\tilde{r}_{0})- 2f(\tilde{r}_{0})\Big{)}+\Big{(}\frac{(f^{\prime}(\tilde{r}_{0}))^{2}}{\tilde{ r}_{0}}+\frac{f^{\prime\prime}(\tilde{r}_{0})f(\tilde{r}_{0})}{\tilde{r}_{0}}- \frac{f^{\prime}(\tilde{r}_{0})f(\tilde{r}_{0})}{\tilde{r}_{0}^{2}}\Big{)}=0. \tag{79}\]
The above eq.(79) in case of the Schwarzschild black hole takes the form
\[\tilde{r}_{0S}^{2}-\Big{(}3-\frac{4}{l(l+1)}\Big{)}\tilde{r}_{0S}-\frac{10}{l (l+1)}=0. \tag{80}\]
\(\tilde{r}_{0S}\) is the point of maximum of the potential in the Schwarzschild background. Since the above equation is quadratic, one can obtain an analytical solution of the form
\[\tilde{r}_{0S}=\frac{1}{2}\Bigg{(}\Big{(}3-\frac{4}{l(l+1)}\Big{)}+\sqrt{ \Big{(}3-\frac{4}{l(l+1)}\Big{)}^{2}+\frac{40}{l(l+1)}}\Bigg{)}. \tag{81}\]
The above solution in the limit of large \(l\), that is \(l\to\infty\), results in \(\tilde{r}_{0S}=3\). For Reissner-Nordstrom black hole, eq.(79) takes the form
\[\tilde{r}_{0RN}^{4}-\Big{(}3-\frac{4}{l(l+1)}\Big{)}\tilde{r}_{0RN}^{3}+\Big{(} 2Q^{2}-\frac{10(1+Q^{2})}{l(l+1)}\Big{)}\tilde{r}_{0RN}^{2}+\frac{18Q^{2}}{l(l +1)}\tilde{r}_{0RN}-\frac{7Q^{4}}{l(l+1)}=0 \tag{82}\]
\(\tilde{r}_{0RN}\) is the point of maximum of the potential in the Reissner-Nordstrom background. The above equation is quartic and needs to be solved numerically. In the limit of large \(l\), that is \(l\to\infty\), we have the equation
\[\tilde{r}_{0RN}^{2}\Big{(}\tilde{r}_{0RN}^{2}-3\tilde{r}_{0RN}+2Q^{2}\Big{)}= 0. \tag{83}\]
The solution of the above equation gives
\[\tilde{r}_{0RN}=\frac{3}{2}\Bigg{(}1+\sqrt{1-\frac{8}{9}Q^{2}}\Bigg{)}. \tag{84}\]
The equation for determining \(\tilde{r}_{0}\) in the case of the charged black hole immersed in PFDM is complicated (not shown) and can be solved numerically, both for general \(l\) and for \(l>>1\). We would also like to point out that the above results are obtained within the first-order WKB approximation, which is followed throughout our analysis for simplicity. The higher order WKB approximations and the corresponding expressions for the quasinormal frequency \(\omega\) are obtained in [85], [86], [87].
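As a rough illustration of this numerical procedure, the sketch below locates the maximum of the potential of eq.(73) and evaluates the first-order WKB frequency of eqs.(77)-(78). It again assumes the lapse function \(f(r)=1-2M/r+Q^{2}/r^{2}+(\chi/r)\ln(r/|\chi|)\) with \(M=1\); the finite-difference step sizes and the search interval (chosen to lie outside the event horizon for the quoted parameters) are illustrative, and the second derivative is taken with respect to the tortoise coordinate via \(d/dr_{*}=f(r)\,d/dr\).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(r, chi, Q):
    # assumed lapse function (units M = 1)
    return 1.0 - 2.0/r + Q**2/r**2 + (chi/r)*np.log(r/abs(chi))

def V(r, l, chi, Q, h=1e-6):
    fp = (f(r + h, chi, Q) - f(r - h, chi, Q))/(2*h)     # f'(r) by central difference
    return f(r, chi, Q)*(l*(l + 1)/r**2 + fp/r)          # potential of eq.(73)

def d2V_drstar2(r, l, chi, Q, h=1e-4):
    # d/dr_* = f(r) d/dr, so d^2V/dr_*^2 = f d/dr ( f dV/dr )
    g = lambda x: f(x, chi, Q)*(V(x + h, l, chi, Q) - V(x - h, l, chi, Q))/(2*h)
    return f(r, chi, Q)*(g(r + h) - g(r - h))/(2*h)

def wkb_qnm(l, n, chi, Q):
    # maximum of V; the lower bound must lie outside the event horizon
    res = minimize_scalar(lambda r: -V(r, l, chi, Q), bounds=(2.0, 15.0), method='bounded')
    r0 = res.x
    A = V(r0, l, chi, Q)                                     # eq.(77)
    B = (n + 0.5)*np.sqrt(-2.0*d2V_drstar2(r0, l, chi, Q))
    wR = np.sqrt((A + np.sqrt(A**2 + B**2))/2.0)             # eq.(78)
    wI = B/np.sqrt(2.0*(A + np.sqrt(A**2 + B**2)))
    return r0, wR, wI

print(wkb_qnm(l=1, n=0, chi=0.3, Q=0.5))
```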
### Eikonal approximation
The eikonal approximation is used to approximate the solutions of certain partial differential equations (PDEs), particularly those which arise in the study of wave propagation problems. In the eikonal approximation, the main idea is to neglect the wave nature of a propagating wave and consider only its geometrical properties. This approximation assumes that the wavelength of the wave is much smaller compared to the characteristic length scale of the system under consideration. This approximation simplifies calculations and provides insight into the propagation of waves. In optics, the eikonal approximation is often used to study the behavior of light rays in refractive media or near optical interfaces [88]. The eikonal approximation can also be used to determine the path of light rays [89], [90], and calculate the bending of light at boundaries
[89], [91], to name a few. This approximation is also very useful in the study of scattering problems in quantum mechanics [84], [92].
The eikonal approximation is used to study the quasinormal modes of a black hole [75]. These modes are important for understanding the behaviour of black holes and testing various theories of gravity. In the eikonal approximation, the black hole is treated as a geometric object with a specific shape and mass distribution. The quasinormal modes can then be calculated by solving the wave equation for small perturbations around the black hole geometry. The eikonal approximation assumes that the wavelength of the perturbation is much smaller than the black hole horizon radius. The eikonal approximation leads to an approximate formula for the quasinormal frequencies of the black hole, which depends on the properties of the black hole, such as its mass, spin, and charge. It provides an estimate of the quasinormal decay rate, which describes how quickly the oscillations of the black hole decay over time. The decay rate is related to the imaginary part of the quasinormal frequency. It is important to note that the eikonal approximation may break down if the perturbations are too large, or if the black hole is spinning rapidly. However, it remains a useful tool for studying the quasinormal modes of black holes and understanding the behaviour of gravity in the strong gravitational fields near a black hole.
In order to incorporate the eikonal approximation in our analysis, we impose the condition \(l>>1\)[75]. This results in an approximated potential \(V(r_{*})\) of the form
\[V(r_{*})=f(r)\frac{l(l+1)}{r^{2}}. \tag{85}\]
The condition for the maxima of the potential (\(V^{\prime}(r_{*})|_{r=r_{0}}\)\(=0\)) gives
\[\frac{l(l+1)}{\tilde{r}_{0}^{3}}\Big{(}\tilde{r}_{0}f^{\prime}(\tilde{r}_{0}) -2f(\tilde{r}_{0})\Big{)}=0. \tag{86}\]
We find that both the eq.(s) (17), (86) correspond to the same condition for any arbitrary lapse function \(f(r)\). _Thus in the eikonal limit, the extremum point (\(\tilde{r}_{0}\)) of the function \(\Big{(}\omega^{2}-V(r_{*})\Big{)}\) corresponds to unstable circular null geodesics (\(r_{p}\))_. Hence, we can get an idea of the quasi-normal frequencies from the knowledge of unstable circular null geodesics.
The eikonal approximation results in the quasi-normal frequencies
\[\omega^{2}=V(r_{p})-i(n+\frac{1}{2})\sqrt{-2V^{\prime\prime}(r_{p})}=l(l+1) \Omega_{p}^{2}-i2\Big{(}n+\frac{1}{2}\Big{)}\sqrt{l(l+1)}\lambda\Omega_{p}=C -iD \tag{87}\]
where the quasinormal frequencies can be represented as \(\omega=\tilde{\omega}_{R}-i\tilde{\omega}_{I}\) and which in terms of \(C\) and \(D\) take the form
\[\widetilde{\omega}_{R}=\sqrt{\frac{C+\sqrt{C^{2}+D^{2}}}{2}}\ ;\ \widetilde{ \omega}_{I}=\frac{D}{\sqrt{2\Big{(}C+\sqrt{C^{2}+D^{2}}\Big{)}}}. \tag{88}\]
The final form of eq.(87) can be obtained by using eqs.(64), (65) and (66).
We wish to calculate the values of \(\omega_{R}\) and \(\omega_{I}\) using the eikonal approximation, that is, using \(r_{p}\), and compare them with the same quantities evaluated using \(\tilde{r}_{0}\). As mentioned above, we obtain our results using the first-order \(WKB\) approximation; higher-order corrections will be considered in future work. Previous work in the PFDM background has used the sixth-order WKB approximation [41], but no such comparisons are shown there. Moreover, our analysis is carried out for the charged black hole.
In the eikonal limit, a further approximation is possible in eq.(87). We can assume that \(l>>n\) and since \(l\) is large, thus \(l(l+1)\sim l^{2}\). Hence, eq. (87) simplifies to [75]
\[\omega=l\Omega_{p}-i(n+\frac{1}{2})|\lambda|. \tag{89}\]
We find in the next section that the black hole shadow radius \(R_{s}\) takes the form \(R_{s}=\frac{r_{p}}{\sqrt{f(r_{p})}}=\Omega_{p}^{-1}\). Thus the quasinormal frequency \(\omega\) can be written as [41]
\[\omega=\frac{l}{R_{s}}-i(n+\frac{1}{2})|\lambda|. \tag{90}\]
The Lyapunov exponent \(\lambda\) can also be related to the shadow radius \(R_{s}\) along with some other parameters, as shown in [41]. Thus the shadow radius and the QNM frequency are interrelated, and one can be obtained from knowledge of the other. Moreover, since both are measurable quantities, the system of interest can be cross-checked using both results.
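A hedged illustration of eqs.(89)-(90): once \(\Omega_{p}\) and \(\lambda_{p}\) are known (for instance from the null-geodesic sketch given earlier), the eikonal frequency follows from a single line. The Schwarzschild values \(\Omega_{p}=\lambda_{p}=1/(3\sqrt{3})\) (for \(M=1\)) are used below purely as a sanity check.

```python
import numpy as np

# Eikonal estimate of eqs.(89)-(90): omega ~ l*Omega_p - i*(n + 1/2)*|lambda_p|,
# with the shadow radius entering through R_s = 1/Omega_p.
def eikonal_qnm(l, n, Omega_p, lam_p):
    return l*Omega_p - 1j*(n + 0.5)*abs(lam_p)

# Schwarzschild check (M = 1): Omega_p = lambda_p = 1/(3*sqrt(3))
print(eikonal_qnm(l=100, n=0, Omega_p=1/(3*np.sqrt(3)), lam_p=1/(3*np.sqrt(3))))
# ~ (19.245 - 0.0962j)
```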
The plots in Fig.7 show the variation of the real (left) and imaginary (right) parts of the frequency of quasinormal modes with increment in the PFDM parameter \(\chi\). The plots are shown for the expressions of \(\omega\) obtained from eqs.(77) and (78). We find that both \(\omega_{R}\) and \(\omega_{I}\) increase with \(\chi\), reach a maximum, and then decrease. The plots are shown for two different values of black hole charge \(Q=\)0.0 (black dashed) and 0.5 (black). We find that both frequencies are higher in the presence of charge. The nature of variation of the frequencies with \(\chi\) can be explained by the fact that \(\omega\sim\frac{1}{Mass^{b}}\) where \(b\) is a positive integer or fraction. The nature of the variation of mass of the black hole system is dictated by the PFDM parameter \(\chi\) as discussed previously. That in turn dictates the nature of variation of the QNM frequencies with \(\chi\). The same is true for the charge \(Q\) dependence of the QNM frequencies.
The plots in Fig. 8 show the variation of the real (left) and imaginary (right) parts of the \(QNM\) frequencies obtained from eq.(77) (Exact), eq.(87) (Partial) and eq.(89) (Total). We compare the different expressions for \(l=1,n=0\) and find that there are certain differences in the results obtained. The results are obtained for the black hole charge \(Q=0.5\).
The values of \(\omega_{R}\) obtained using the eikonal approximation \(l>>1\) (Partial) are much smaller than the exact ones. The values obtained using both \(l>>1\) and \(l>>n\) (Total) are smaller as well. Thus these approximations are valid only for large \(l\), with values around \(l\sim 100\).
Also, we find that in the case of \(\omega_{I}\), the results are almost the same (Exact and Total) even for \(l=1\), whereas the Partial one lies below them and can be matched only for higher values of \(l\). Figure 9 shows the variation of the quality factor (\(QF\)) of the perturbed black hole system.
Figure 8: Variation of the real (\(M\omega_{R}\)) and imaginary (\(M\omega_{I}\)) parts of the QNM frequencies with change in PFDM parameter \(\frac{\chi}{M}\). The plots are for \(l=1\) and \(n=0\) with charge \(\frac{Q}{M}=0.5\).
Figure 7: Variation of the real (\(M\omega_{R}\)) and imaginary (\(M\omega_{I}\)) parts of the QNM frequencies with change in PFDM parameter \(\frac{\chi}{M}\). The plots are for \(l=1\) and \(n=0\).
In general, the quality factor, as the name suggests, characterizes the quality of the resonating system. If \(QF\) is large, then the system is said to be underdamped, whereas if the \(QF\) is small, then the system is said to be overdamped. Mathematically, it is defined as [35], [77]
\[Quality\ \ Factor=\frac{\omega_{R}}{2|\omega_{I}|}. \tag{91}\]
The plot in Figure 9 shows that with an increase in the PFDM parameter \(\chi\), the quality factor of the system reduces, implying that dark matter over-damps the system. We have shown the plots for charges \(Q=0.0\) (dotted black) and \(Q=0.5\) (black). We find that an increase in the charge \(Q\) increases the \(QF\), i.e., the presence of charge \(Q\) under-damps the system. Hence the increase in charge \(Q\) turns the system into a better oscillator.
## 5 Shadow of charged black hole immersed in perfect fluid dark matter
Black hole shadow is formed by the null geodesics which are subject to the condition (eq.(63))
\[2f(r_{p})-rf^{\prime}(r_{p})=0\ \ \Rightarrow\ \ 2r_{p}^{2}-(6+\chi)r_{p}+4Q^{2}+3 \chi r_{p}ln\Big{(}\frac{r_{p}}{|\chi|}\Big{)}=0. \tag{92}\]
Solving the above equation we get the value of the photon sphere radius \(r_{p}\) in terms of \(\chi,Q\), that is \(r_{p}=r_{p}\Big{(}\chi,Q\Big{)}\). \(r_{p}\) has a minimum corresponding to some value of \(\chi\), which can be obtained using \(\frac{\partial r_{p}}{\partial\chi}=0\). Using this condition, we get
\[(\chi_{p})_{m}=\frac{1+\sqrt{1-\frac{4}{9}Q^{2}\Big{(}2+3e^{-\frac{4}{3}} \Big{)}}}{1+\frac{2}{3}e^{\frac{4}{3}}}\ \ ;\ \ (r_{p})_{m}=(\chi_{p})_{m}e^{\frac{4}{3}}= \frac{1+\sqrt{1-\frac{4}{9}Q^{2}\Big{(}2+3e^{-\frac{4}{3}}\Big{)}}}{\frac{2}{ 3}+e^{-\frac{4}{3}}}. \tag{93}\]
\((\chi_{p})_{m}\) is the value of \(\chi\) corresponding to the minimum of the photon sphere radius \((r_{p})_{m}\). In the limit of \(Q\to 0\), we get
\[(\chi_{p})_{m}=\frac{2}{1+\frac{2}{3}e^{\frac{4}{3}}}\ \ ;\ \ (r_{p})_{m}= \frac{2}{\frac{2}{3}+e^{-\frac{4}{3}}}. \tag{94}\]
Figure 9: Variation of quality factor with change in \(PFDM\) parameter \(\frac{\chi}{M}\).
To obtain eq.(93), we first take the derivative of eq.(92) with respect to \(\chi\). Then we use the condition \(\frac{\partial r_{p}}{\partial\chi}=0\) to obtain
\[(r_{p})_{m}=(\chi_{p})_{m}e^{\frac{4}{3}}. \tag{95}\]
Replacing \((r_{p})_{m}\) in eq.(92), we get \((\chi_{p})_{m}\) and thereby \((r_{p})_{m}\) as in eq.(93).
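A quick numerical check of eq.(93) can be done in a few lines; the sketch below simply evaluates the closed-form expressions (units \(M=1\)), with no new input.

```python
import numpy as np

# Location and value of the minimum photon sphere radius, eq.(93) (units M = 1).
def chi_p_min(Q):
    return (1 + np.sqrt(1 - (4/9)*Q**2*(2 + 3*np.exp(-4/3))))/(1 + (2/3)*np.exp(4/3))

def r_p_min(Q):
    return chi_p_min(Q)*np.exp(4/3)          # eq.(95)

for Q in (0.0, 0.5):
    print(Q, chi_p_min(Q), r_p_min(Q))       # Q -> 0 reproduces eq.(94)
```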
The shadow radius is given as \(R_{s}=\frac{r_{p}(\chi,Q)}{\sqrt{f(r_{p}(\chi,Q))}}\). This expression can be obtained using the null geodesics in the equatorial plane. The radial equation takes the form of eq.(61) and the condition for unstable null circular geodesics gives eq.(63) and eq.(64).
Black hole shadow is formed by strong gravitational lensing of light rays in the vicinity of a black hole. Light from any background source comes near a black hole and travels in various trajectories. The light rays which move along the unstable circular geodesics can either fall into the black hole or travel to infinity upon encountering the slightest perturbation. For an observer placed in the equatorial plane \(\theta_{0}=\frac{\pi}{2}\) and at a distance \(\bar{r}_{0}\) from the black hole, the angular shadow size can be calculated using \(\tan\delta=\lim_{\Delta x\to 0}\frac{\Delta y}{\Delta x}\)[93]. The expression can be rewritten in terms of geodesics. In the appropriate limit, we obtain \(\tan\delta\) and thereby \(\sin\delta\) as [93]
\[\tan\delta=\sqrt{\frac{f(r)}{r^{2}\frac{E^{2}}{L^{2}}-f(r)}}\Bigg{|}_{r=\bar {r}_{o}}\ \ ;\ \ \sin\delta=\sqrt{\frac{f(r)}{r^{2}}}\frac{L_{p}}{E_{p}}\Bigg{|}_{r=\bar{r}_{o }}=\sqrt{\frac{f(\bar{r}_{0})}{\bar{r}_{0}^{2}}}\sqrt{\frac{r_{p}^{2}}{f(r_{p })}}. \tag{96}\]
For an observer positioned at a large distance from the black hole, the shadow radius takes the form \(R_{s}=\bar{r}_{0}\tan\delta\approx\bar{r}_{0}\sin\delta\). In the limit of \(\bar{r}_{0}\rightarrow\infty\), we have \(f(r)\to 1\) and the shadow radius \(R_{s}\) takes the form \(R_{s}=\frac{r_{p}}{\sqrt{f(r_{p})}}\).
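The dependence \(R_{s}(\chi,Q)\) can be traced numerically with a few lines; the sketch below again assumes the lapse function \(f(r)=1-2M/r+Q^{2}/r^{2}+(\chi/r)\ln(r/|\chi|)\) with \(M=1\) and uses the photon-sphere condition of eq.(92), with an illustrative bracketing interval.

```python
import numpy as np
from scipy.optimize import brentq

def f(r, chi, Q):
    # assumed lapse function (units M = 1)
    return 1.0 - 2.0/r + Q**2/r**2 + (chi/r)*np.log(r/abs(chi))

def shadow_radius(chi, Q):
    # photon sphere from eq.(92), then R_s = r_p / sqrt(f(r_p)) for a distant observer
    rp = brentq(lambda r: 2*r**2 - (6 + chi)*r + 4*Q**2
                + 3*chi*r*np.log(r/abs(chi)), 1.0 + 1e-6, 50.0)
    return rp/np.sqrt(f(rp, chi, Q))

for chi in (0.2, 0.5, 1.0, 2.0):
    print(chi, shadow_radius(chi, Q=0.5))    # non-monotonic in chi, cf. Fig. 11 (left)
```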
The minimum value of \(R_{s}\), corresponding to some value of \(\chi\), can be found using the condition \(\frac{\partial R_{s}}{\partial\chi}=0\). Using this condition, we get the values
\[\chi_{rm}=\frac{3\Big{(}1+\sqrt{1-\frac{8}{9}Q^{2}(1+e^{-1})}\Big{)}}{2(1+e) }\ \ ;\ \ (R_{s})_{rm}=\frac{\chi_{rm}e}{\sqrt{f(\chi_{rm}e)}}. \tag{97}\]
In the limit \(Q\to 0\), we get [41], [67]
\[\chi_{rm}(Q=0)=\frac{3}{1+e}. \tag{98}\]
In Fig.10, we have shown the contour plots of the black hole shadow for different values of PFDM parameter \(\chi\). The plots are shown for black hole charge \(Q=0.5\). The left plots are for \(\chi<\chi_{c}\) and the right ones are
for \(\chi>\chi_{c}\). We find that the shadow size reduces with an increase in \(\chi\) for \(\chi<\chi_{c}\) and increases with \(\chi\) for \(\chi>\chi_{c}\). Also, we find that the effects are more pronounced for \(\chi<\chi_{c}\). The observation can be assigned to the fact that the shadow is formed by the light coming from the photon sphere. Thereby the shadow radius \(R_{s}\) is dictated by the size of the photon sphere \(r_{p}\) which in turn is dictated by the event horizon radius \(r_{+}\) as mentioned previously. The variation of the event horizon with \(\chi\) gets reflected on \(r_{p}\) and thereby on \(R_{s}\) and hence we observe such dependence of \(R_{s}\) on PFDM parameter \(\chi\).
The left plot in Fig.11 shows the variation of the shadow radius \(R_{s}\) with the PFDM parameter \(\chi\). We found that the shadow size initially decreases with \(\chi\) reaching a minimum and then starts to increase. This observation can be explained by the dependence of \(R_{s}\) on \(r_{+}\) which in turn depends on \(\chi\) as discussed above. On the other hand, the right plot shows the variation of shadow radius \(R_{s}\) with the black hole charge \(Q\). We find that the shadow radius decreases with an increase in black hole charge \(Q\). This can also be explained by the dependence of \(R_{s}\) on \(r_{+}\) which decreases with an increase in charge \(Q\) and that gets reflected in the shadow radius \(R_{s}\).
## 6 Summary and Conclusion
We summarize our findings now. We considered a charged black hole immersed in perfect fluid dark matter (PFDM). We studied the spacetime and found that the event horizon size \(r_{+}\) depends on both the PFDM parameter \(\chi\) and the charge \(Q\). We find that the event horizon radius initially decreases with \(\chi\), reaching a minimum at \(\chi=\chi_{c}\), and then starts to increase. The expressions for the critical value \(\chi_{c}\) and the corresponding _minimum value_ of \(r_{+}\) are calculated and found to depend on the charge \(Q\), which is obtained for the first time in this work. We also find that the presence of the PFDM parameter \(\chi\) does not increase the number of horizons. The nature of variation of the event horizon radius \(r_{+}\), and thereby the black hole size, with \(\chi\) can be explained by the fact that the system is composed of two masses, one due to the black hole (\(M\)) and the other due to PFDM (\(M_{0}\)). Below the critical value \(\chi_{c}\), the PFDM mass \(M_{0}\) hinders the black hole mass \(M\), and hence the effective mass of the system, and thereby the size of the event horizon \(r_{+}\), decreases. But after \(\chi_{c}\), the total mass of the system is dictated by the mass \(M_{0}\) of the PFDM, and thus \(r_{+}\) increases. Also, we find that the size of the event horizon decreases with an increase in the black hole charge \(Q\).
Then we studied the variation of the black hole temperature \(T_{h}\) with the PFDM parameter \(\chi\). We find that the temperature of the black hole initially increases with \(\chi\), reaching a maximum, and then starts to decrease. The reason is that the temperature of the black hole depends inversely on the black hole's mass. Thus for \(\chi<\chi_{c}\), the temperature of the system increases as the black hole mass decreases. On the other hand, for \(\chi>\chi_{c}\), the temperature falls with the increase in the effective mass of the black hole system. We also found that the black hole temperature \(T_{h}\) increases with an increase in the black hole charge \(Q\). Then we studied the timelike geodesics. The existence and finiteness of the energy \(E\) and angular momentum \(L\) per unit mass of the massive particle constrain the values of the circular orbit radius \(r_{0}\) and the PFDM parameter \(\chi\), which can be represented as a parametric plot in the \(\chi-r_{0}\) plane. We find that the range of values of \(r_{0}\) and the \(PFDM\) parameter \(\chi\) increases with an increase in charge \(Q\). Then by using the Lyapunov exponent (\(\lambda\)),
Figure 11: Plots showing variation of shadow radius \(R_{s}\) with \(\chi\) (left) and \(Q\) (right).
we studied the existence of stable circular geodesics orbiting a black hole. The existence of stable geodesics puts a constraint on the possible values of \(r_{0}\) and \(\chi\) which we represent via a parametric plot in \((\chi-r_{0})\) plane. We find that the region of compatible values \(\chi\) and \(r_{0}\) increases with black hole charge \(Q\). After that, we studied the critical exponent \(\gamma\) and showed the parametric plot for \(\chi-r_{0}\) compatible with the detection of gravitational wave signals. We find that the compatible range for \(r_{0}\) and \(\chi\) increases with charge \(Q\).
Then we studied the null geodesics where we find the dependence of photon sphere radius \(r_{p}\) on \(\chi\). The photon sphere radius \(r_{p}\) initially decreases with \(\chi\) reaching a minimum and then again starts to increase. We obtained the expression of the minimum value of \(\chi\) and the corresponding one of \(r_{p}\). The nature of variation of \(r_{p}\) can be explained by the fact that the photon sphere radius indirectly gives the size of the event horizon radius \(r_{+}\). Thus the dependence of \(r_{+}\) on \(\chi\) gets reflected in the nature of \(r_{p}\). We also observed that the possibility of detection of gravitational waves in the case of null particles increases with an increase in charge \(Q\).
Then we studied the effect of scalar field perturbation on the black hole background. Using the first-order WKB approximation, we calculated the real \((\omega_{R})\) and imaginary \((\omega_{I})\) parts of the quasinormal frequencies and their variation with the PFDM parameter \(\chi\). We find that both \(\omega_{R}\) and \(\omega_{I}\) initially increase with \(\chi\) reaching a maximum and then decrease. Also, we find that the QNM frequencies increase with the increase in charge \(Q\). The observed variation of the QNM frequency with change in \(\chi\) can be explained by considering \(\omega\sim\frac{1}{(Mass)^{b}}\) (which is true for any massive oscillating system) where \(b\) is a positive integer or fraction. The change in mass of the total system (black hole + PFDM) is dictated by the PFDM parameter \(\chi\) as discussed previously which in turn describes the nature of variation of QNM frequencies with \(\chi\). Besides, we have also done a comparative study of the different expressions of QNM frequency. We find that the results for \(\omega_{R}\) in the different cases are significantly varying for \(l=1\) whereas the values of \(\omega_{I}\) are quite close even for \(l=1\). We also studied the quality factor \((QF)\) of the oscillating black hole system and found that \(QF\) gets reduced with an increase in PFDM parameter \(\chi\) whereas it increases with the black hole charge \(Q\). Thus increase in \(\chi\) makes the system over-damped whereas the increment in charge turns the system into a better oscillator.
Finally, we studied the black hole shadow and found the dependence of the shadow size, and thereby the shadow radius \(R_{s}\), on the PFDM parameter \(\chi\). The shadow size reduces with an increase in \(\chi\), reaches a minimum, and then starts to increase again. On the other hand, the shadow size \(R_{s}\) reduces with an increase in charge \(Q\). This observation can be explained by the fact that the shadow is formed by photons coming from the photon sphere, whose size is dictated by the black hole size, i.e., the size of the event horizon \(r_{+}\). The dependence of \(r_{+}\) on \(\chi\) and \(Q\) gets reflected in \(r_{p}\) and thereby in \(R_{s}\), and thus we obtain the above observation. Besides, we have also obtained expressions for the minimum value of the photon sphere radius \(r_{p}\) and the shadow radius \(R_{s}\), corresponding to some value of the PFDM parameter \(\chi\) in the presence of charge \(Q\), which was not done previously.
## 7 Acknowledgements
AD would like to acknowledge the support by SNBNCBS for the Senior Research Fellowship. ARC would like to acknowledge SNBNCBS for Senior Research Fellowship.
## Appendix
The equation whose solution we wish to obtain is
\[\frac{d^{2}\psi(x)}{dx^{2}}+Q(x)\psi(x)=0. \tag{99}\]
The solution of the above equation in the first order (where \(Q(x)>0\)) is given as [84]
\[\psi(x)=\frac{1}{Q(x)^{\frac{1}{4}}}exp\Big{(}\pm i\int_{x}^{x_{1}}\sqrt{Q(x^ {\prime})}dx^{\prime}\Big{)},\ \ \ \ \ \ \ Region\ I \tag{100}\]
\[\psi(x)=\frac{1}{Q(x)^{\frac{1}{4}}}exp\Big{(}\pm i\int_{x_{2}}^{x}\sqrt{Q(x^{ \prime})}dx^{\prime}\Big{)},\hskip 28.452756ptRegion\ III \tag{101}\]
where \(x_{1}\) and \(x_{2}\) are two turning points with \(x_{1}<x_{2}\). We want to obtain the solution for \(\psi(x)\) in the region where \(Q(x)<0\) for completeness. In the region \(Q(x)<0\), \(Q(x)\) can be approximated by a parabola. The approximation is valid if we assume the two turning points (where \(Q(x)=0\)) to be lying close to one another implying the region of \(Q(x)<0\) is very small. In this region, \(Q(x)\) can be expanded by using Taylor's expansion about the point of extrema of the potential. The point of extremum can be obtained by using \(\frac{dQ(x)}{dx}\Big{|}_{x=x_{0}}=0\). In order to have continuous solutions, we obtain the asymptotic solutions of the region \(Q(x)<0\) and match them to the ones in \(Q(x)>0\).
The Taylor expansion gives \(Q(x)\) as
\[Q(x)=Q(x_{0})+\frac{1}{2}Q^{\prime\prime}(x)\Big{|}_{x=x_{0}}(x-x_{0})^{2}+O(x -x_{0})^{3},\hskip 14.226378ptRegion\ II. \tag{102}\]
To find the solution in the region \(Q(x)<0\), we replace \(Q(x)\) from eq.(102) in eq.(99) upto \((x-x_{0})^{2}\) which gives
\[\frac{d^{2}\psi(x)}{dx^{2}}+\Big{(}Q(x_{0})+\frac{1}{2}Q^{\prime\prime}(x_{0}) (x-x_{0})^{2}\Big{)}\psi(x)=0 \tag{103}\]
In order to obtain the solution, we recast the equation in a form that admits an exact solution. The exact solution can be represented in terms of parabolic cylinder functions [94]. We need to use
\[\kappa\equiv\frac{1}{2}Q^{\prime\prime}(x_{0})\ \ ;\ \ \eta\equiv(4\kappa)^{ \frac{1}{4}}e^{i\frac{\pi}{4}}(x-x_{0})\ \ ;\ \ \mu+\frac{1}{2}\equiv\frac{-iQ(x_{0})}{\sqrt{2Q^{\prime\prime}(x_{0})}}. \tag{104}\]
which gives the modified equation in the form
\[\frac{d^{2}\psi(\eta)}{d\eta^{2}}+\Big{(}\mu+\frac{1}{2}-\frac{1}{4}\eta^{2} \Big{)}\psi(\eta)=0. \tag{105}\]
The solution of the equation can be obtained in terms of the parabolic cylinder function \(D_{\mu}(\eta)\). The functions \(D_{\mu}(-\eta)\) and \(D_{-\mu-1}(-i\eta)\) are also solutions of the above equation. Since the equation is of second order, these solutions must be linearly dependent, so \(D_{\mu}(\eta)\) can be written as a linear combination of \(D_{\mu}(-\eta)\) and \(D_{-\mu-1}(-i\eta)\). Thus,
\[\psi(\eta)=D_{\mu}(\eta)=A_{1}D_{\mu}(-\eta)+A_{2}D_{-\mu-1}(-i\eta) \tag{106}\]
where \(A_{1}\) and \(A_{2}\) are coefficients. Using \(D_{\mu}(0)\) and \(D_{\mu}^{\prime}(0)\), the coefficients \(A_{1}\) and \(A_{2}\) can be determined and the final solution takes the form [94]
\[\psi(\eta)=e^{i\mu\pi}D_{\mu}(-\eta)+\frac{\sqrt{2\pi}}{\Gamma(-\mu)}D_{-\mu-1}(-i\eta). \tag{107}\]
Figure 12: The function \(Q(x)\) in different regions.
In the asymptotic limit, the solution \(\psi(\eta)\) can be explicitly expressed as
\[\psi(\eta)\sim\eta^{\mu}e^{-\frac{\eta^{2}}{4}}\Bigg{(}1-\frac{\mu(\mu-1)}{2\eta^{2}}+\frac{\mu(\mu-1)(\mu-2)(\mu-3)}{2\cdot 4\,\eta^{4}}....\Bigg{)} \tag{108}\]
\[-\frac{\sqrt{2\pi}}{\Gamma(-\mu)}e^{i\mu\pi}\eta^{-\mu-1}e^{\frac{\eta^{2}}{4}}\Bigg{(}1+\frac{(\mu+1)(\mu+2)}{2\eta^{2}}+\frac{(\mu+1)(\mu+2)(\mu+3)(\mu+4)}{2\cdot 4\,\eta^{4}}....\Bigg{)}=\psi_{I}+\psi_{II}\.\]
We find that the first term goes as
\[\psi_{I}\sim e^{-\frac{\eta^{2}}{4}}=e^{-i\sqrt{\frac{\kappa}{4}}(x-x_{0})^{2}} \tag{109}\]
and the second term goes as
\[\psi_{II}\sim e^{\frac{\eta^{2}}{4}}=e^{i\sqrt{\frac{\kappa}{4}}(x-x_{0})^{2}}. \tag{110}\]
In the asymptotic limits, we require the solution to be of the form \(\psi\sim e^{-ia(x-x_{0})^{2}}\), where \(a\) is some function of \(Q(x)\) which in the asymptotic limit is a constant. So the \(\psi_{II}\) term must be zero. The only way out is to set \(\Gamma(-\mu)=\infty\), which implies that \(\mu\) must be a non-negative integer \(n\) (\(n=0,1,2,3,....\)). Thus, we obtain
\[n+\frac{1}{2}\equiv\frac{-iQ(x_{0})}{\sqrt{2Q^{\prime\prime}(x_{0})}} \tag{111}\]
which we designate as the condition for the determination of quasinormal frequencies or the quasi-normal mode (\(QNM\)) condition.
|
2302.03860
|
EVEN: An Event-Based Framework for Monocular Depth Estimation at Adverse
Night Conditions
|
Accurate depth estimation under adverse night conditions has practical impact
and applications, such as on autonomous driving and rescue robots. In this
work, we studied monocular depth estimation at night time in which various
adverse weather, light, and different road conditions exist, with data captured
in both RGB and event modalities. Event camera can better capture intensity
changes by virtue of its high dynamic range (HDR), which is particularly
suitable to be applied at adverse night conditions in which the amount of light
is limited in the scene. Although event data can retain visual perception that
conventional RGB camera may fail to capture, the lack of texture and color
information of event data hinders its applicability to accurately estimate
depth alone. To tackle this problem, we propose an event-vision based framework
that integrates low-light enhancement for the RGB source, and exploits the
complementary merits of RGB and event data. A dataset that includes paired RGB
and event streams, and ground truth depth maps has been constructed.
Comprehensive experiments have been conducted, and the impact of different
adverse weather combinations on the performance of framework has also been
investigated. The results have shown that our proposed framework can better
estimate monocular depth at adverse nights than six baselines.
|
Peilun Shi, Jiachuan Peng, Jianing Qiu, Xinwei Ju, Frank Po Wen Lo, Benny Lo
|
2023-02-08T03:35:47Z
|
http://arxiv.org/abs/2302.03860v1
|
# EVEN: An Event-Based Framework for Monocular Depth Estimation at Adverse Night Conditions
###### Abstract
Accurate depth estimation under adverse night conditions has practical impact and applications, such as on autonomous driving and rescue robots. In this work, we studied monocular depth estimation at night time in which various adverse weather, light, and different road conditions exist, with data captured in both RGB and event modalities. Event camera can better capture intensity changes by virtue of its high dynamic range (HDR), which is particularly suitable to be applied at adverse night conditions in which the amount of light is limited in the scene. Although event data can retain visual perception that conventional RGB camera may fail to capture, the lack of texture and color information of event data hinders its applicability to accurately estimate depth alone. To tackle this problem, we propose an event-vision based framework that integrates low-light enhancement for the RGB source, and exploits the complementary merits of RGB and event data. A dataset that includes paired RGB and event streams, and ground truth depth maps has been constructed. Comprehensive experiments have been conducted, and the impact of different adverse weather combinations on the performance of framework has also been investigated. The results have shown that our proposed framework can better estimate monocular depth at adverse nights than six baselines.
## I Introduction
Depth estimation with monocular cameras has been actively studied over the past decades [1, 2, 3], as it offers an efficient and economic way of obtaining depth. Compared to LiDAR, a monocular camera can be deployed pervasively, and due to its small scale, it can also be installed on an agent, e.g., an autonomous car, unobtrusively.
Albeit convenient and flexible, accurately estimating depth from a monocular camera is non-trivial, especially at night time, at which the visual perception of conventional RGB cameras degrades. The low dynamic range and sensitivity to motion blur of conventional cameras can lead to defective imaging at night, and the captured images/videos often exhibit underexposure due to low-lighting or backlighting [4]. For an autonomous car, when it is driving at night accompanied by adverse weather (e.g., rain and fog), the dual occurrence of adverse light and weather can cause a challenge for its RGB-based vision system.
Recently, event camera has gained popularity in visual perception and robotics. Event camera is a bio-inspired vision sensor that works in a different way than conventional cameras [5, 6]. Rather than capturing intensity images at a fixed rate, event cameras measure intensity changes asynchronously in the form of an event stream. Event cameras have distinct advantages over conventional RGB cameras, including very high dynamic range (HDR), high temporal resolution, less motion blur, and low power consumption. These features of the event camera can complement its RGB counterpart, providing extra visibility and leading to an enhanced visual perception system.
On the other hand, in depth estimation, texture and salient edges play more important roles than color as recognized by research in the computer vision community [7]. Texture can be well retained in RGB data whereas salient edges can be better captured by the event camera. Therefore, using both data modalities is a straightforward attempt to boost the overall depth estimation accuracy.
Although there are few studies [8, 9, 10] that have been proposed to jointly utilize RGB and event data for monocular depth estimation, they mainly focus on day time or normal weather conditions. Thus far, no research has been carried out on event-based monocular depth estimation under adverse night conditions, which is challenging as the RGB source does not contain as much effective visual information as it
Fig. 1: Data samples from our MonoANC dataset. We show paired RGB and event images, and the ground truth depth map for each sample. The adverse night scenarios from top to bottom are: 1) driving in the heavy rain on a city road; 2) driving under a bridge at a foggy night; 3) driving at the countryside at a rainy and foggy night.
does at day time, and how to effectively fuse RGB data with event stream at night time has yet to be addressed.
Despite practical applications, such as more intelligent and lightweight night-time autonomous driving and rescue robots, there is currently also no dataset that contains paired RGB, event and ground truth depth data captured at adverse night conditions to validate and benchmark research in this direction. Hence, in this work, we made the following two contributions:
1. We propose the first adverse night-time driving dataset that contains paired RGB images, event streams, and ground truth depth maps. The adverse night conditions in our dataset are diverse in a variety of aspects including adverse weather such as rain and fog, and different scenes such as driving on dim countryside roads.
2. We propose a novel three-phase framework, which employs low-light enhancement and multi-modal fusion to tackle the problem of monocular depth estimation at adverse night conditions with event-based vision. The entire framework has been thoroughly evaluated, with the results showing that it outperforms six baselines.
## II Related Work
### _Monocular Depth Estimation with Multi-Modal Fusion_
Monocular depth estimation can be achieved using RGB modality alone [1, 2, 3]. Recent advances in multiple data modalities have further improved the depth estimation accuracy. For instance, some research works proposed to use RGB and optical flow [11, 12, 13, 14], RGB combined with segmentation maps [15, 16, 17], or RGB with extra saliency features [18, 19] as the inputs, and use multi-modal fusion to enhance depth estimation.
LiDAR has been explored for enhancing monocular depth estimation recently. [20] and [21] proposed using late fusion methods to fuse depth data from LiDAR and monocular RGB inputs. Apart from pure visual signals, radar has also been used with RGB modality for monocular depth estimation [22, 23]. Recently, an attention-based method has been proposed for fusing radar signals with monocular RGB images [24].
### _Event-Based Depth Estimation_
Daniel et al. [8] combined event-based data and monocular RGB frames with a recurrent asynchronous network for depth estimation, which is also the first work to fuse the event and monocular RGB frames. Zhou et al. [25] investigated the use of stereo event cameras for semi-dense depth estimation by maximizing a temporal consistency between the corresponding event streams. Another event vision-based method was proposed by Zhu et al. [9] which eliminates disparity for depth estimation. The method proposed by [10] shows the first learning-based stereo depth estimation for event cameras which is also the first one that produces dense results. [26] is an unsupervised framework that learns motion information only from event streams, achieving multi-task objectives including optical flow, egomotion and depth estimation. Cui et al. [27] proposed a dense depth estimation method based on the fusion of dense event stream and sparse point cloud.
Despite the efforts being made in event-based depth estimation, existing works are not engineered to specifically tackle monocular depth estimation at adverse night conditions, but instead mainly target at day time and normal weather conditions. In this work, we target monocular depth estimation at adverse night conditions. In order to improve the illumination in the field of view (FOV) and to take advantage of the HDR property of the event-based camera, we propose to combine low-light enhancement and multi-modal fusion of event and RGB data for better depth estimation. To the best of our knowledge, we are the first work that uses the event-based vision along with low-light image enhancement to estimate monocular depth at adverse night conditions.
## III Method
Our framework decomposes monocular depth estimation at adverse night conditions into three phases as shown in Fig. 2. In phase one, the raw RGB image is first enlightened using low-light image enhancement; In phase two, the enhanced RGB image and the event image are fused to generate a fusion image; In phase three, depth estimation is carried out based on the fusion image. We denote our framework as **EVEN** as it is based on **EV**ent vision and low-light **EN**hancement. We elaborate our framework in the following.
### _Event Stream_
Asynchronous event streams reflect changes in light intensity. In order to efficiently make full use of the information from the event-based data, we convert event streams from the voxel grid format to an image format. Specifically, spatial points (indexed by \(x\) and \(y\) positions in image coordinates with the value being the polarity \(p\)) are stacked along the time axis \(t\) using a fixed time period \(\Delta t\) = 0.125 s. This produces a compact event image.
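A minimal sketch of the stacking described above, assuming events arrive as arrays of pixel coordinates, timestamps and polarities; the sensor resolution, array names and the single-channel output are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def events_to_image(x, y, t, p, height=480, width=640, delta_t=0.125):
    """Accumulate one delta_t slice of events (x, y, t, p) into an event image."""
    img = np.zeros((height, width), dtype=np.float32)
    t = t - t.min()
    mask = t < delta_t                                    # keep a single 0.125 s window
    np.add.at(img, (y[mask], x[mask]), p[mask].astype(np.float32))
    return img

# usage with dummy events
rng = np.random.default_rng(0)
n = 1000
img = events_to_image(rng.integers(0, 640, n), rng.integers(0, 480, n),
                      rng.random(n) * 0.25, rng.choice([-1, 1], n))
```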
### _Phase-1: Low-light Enhancement_
The visual perception of conventional RGB cameras degrades at night due to the limited amount of light. To recover the necessary scene color and texture information captured by the RGB camera, we utilize EnlightenGAN [28] to enhance the raw night-time RGB image. EnlightenGAN has an attention-based U-Net structure. The input RGB image is normalized by using its illumination channel as the attention map for the low-light enhancement.
### _Phase-2: Multi-modal Fusion_
Event data can capture much more HDR and temporal details of night-time scenes, whereas RGB data can provide necessary texture and color information. As these two modalities complement each other, and in order to leverage the merits of both, a novel fusion network (refer to Fig. 3), which is built on top of selective kernel network [29], is designed to integrate event data with RGB modality.
#### Iii-A1 Fusion Network
given an event image \(\mathbf{X}_{Event}\) and an enhanced RGB image \(\mathbf{X}_{Enhanced}\), we use two convolutional kernels with different kernel sizes to transform the input images into feature maps. After transformation, two feature maps \(\mathbf{F}_{Event}\) and \(\mathbf{F}_{Enhanced}\) are obtained:
\[\mathbf{F}_{Event}=g(\mathbf{X}_{Event}),\mathbf{F}_{Event}\in\mathbb{R}^{H \times W\times C} \tag{1}\]
\[\mathbf{F}_{Enhanced}=h(\mathbf{X}_{Enhanced}),\mathbf{F}_{Enhanced}\in \mathbb{R}^{H\times W\times C} \tag{2}\]
where \(g(\cdot)\) and \(h(\cdot)\) are separate convolutional neural network layers that conduct transformation. For the event image, we use a kernel size of \(5\times 5\) as the information carried in event modality is relatively sparse. Therefore, a large kernel size is used. For the enhanced RGB image, we use a kernel size of \(3\times 3\). Following convolutional transformation, the feature maps of the two modalities are merged using an element-wise summation:
\[\mathbf{F}_{sum}=\mathbf{F}_{Event}+\mathbf{F}_{Enhanced},\mathbf{F}_{sum}\in \mathbb{R}^{H\times W\times C} \tag{3}\]
We then apply global average pooling to conduct dimension reduction (along the \(H\) and \(W\) dimensions) for the merged feature map \(\mathbf{F}_{sum}\), which produces a vector \(\mathbf{V}\in\mathbb{R}^{1\times C}\). Similar to [29], we then use a simple fully connected layer \(f(\cdot)\) to create a compact vector \(\mathbf{k}\) on the basis of \(\mathbf{V}\):
\[\mathbf{k}=f(\mathbf{V}),\mathbf{k}\in\mathbb{R}^{d\times 1} \tag{4}\]
\(\mathbf{k}\) is then used to guide adaptive fusion of the two modalities. Specifically, we create soft attention across channel \(C\). For \(c\)-th element along the channel \(C\), the soft attention for fusing event and enhanced RGB feature maps can be formulated as follows:
\[a_{c}=\frac{e^{\mathbf{A}_{c}\mathbf{k}}}{e^{\mathbf{A}_{c}\mathbf{k}}+e^{ \mathbf{B}_{c}\mathbf{k}}},b_{c}=\frac{e^{\mathbf{B}_{c}\mathbf{k}}}{e^{ \mathbf{A}_{c}\mathbf{k}}+e^{\mathbf{B}_{c}\mathbf{k}}} \tag{5}\]
\[\mathbf{F}_{fused_{c}}=a_{c}\cdot\mathbf{F}_{Event_{c}}+b_{c}\cdot\mathbf{F}_ {Enhanced_{c}},a_{c}+b_{c}=1 \tag{6}\]
where \(\mathbf{A}_{c}\in\mathbb{R}^{1\times d}\) and \(\mathbf{B}_{c}\in\mathbb{R}^{1\times d}\) are learnable vectors.
The fused feature map \(\mathbf{F}_{fused}\) is then fed into an U-Net [30] followed by a group of convolution and ReLU operations to 1) further fuse features of the event and RGB modalities, and 2) reconstruct a fusion image \(\mathbf{Y}\) of the same resolution to the input event and enhanced RGB images:
\[\mathbf{Y}=\text{Conv}(\text{ReLU}(\text{Conv}(\text{U-Net}(\mathbf{F}_{fused })))) \tag{7}\]
The resulting fusion image, which has HDR property and better edge salience, also suppresses areas of overexposure caused by low-light enhancement as shown in Fig. 3.
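An illustrative PyTorch sketch of the selective fusion steps in Eqs. (1)-(6) is given below; the channel width \(C\), the reduced dimension \(d\), the single-channel event input, and the linear layers used for the attention logits are assumptions, and the U-Net reconstruction stage of Eq. (7) is omitted.

```python
import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    def __init__(self, in_ch=3, C=64, d=32):
        super().__init__()
        self.g = nn.Conv2d(1, C, kernel_size=5, padding=2)       # event branch, 5x5 kernel
        self.h = nn.Conv2d(in_ch, C, kernel_size=3, padding=1)   # enhanced RGB branch, 3x3 kernel
        self.fc = nn.Linear(C, d)                                # V -> k, Eq. (4)
        self.A = nn.Linear(d, C, bias=False)                     # per-channel logits for event
        self.B = nn.Linear(d, C, bias=False)                     # per-channel logits for RGB

    def forward(self, x_event, x_rgb):
        F_event, F_rgb = self.g(x_event), self.h(x_rgb)          # Eqs. (1)-(2)
        F_sum = F_event + F_rgb                                  # Eq. (3)
        v = F_sum.mean(dim=(2, 3))                               # global average pooling
        k = self.fc(v)
        logits = torch.stack([self.A(k), self.B(k)], dim=1)      # (B, 2, C)
        attn = torch.softmax(logits, dim=1)                      # Eq. (5): a_c + b_c = 1
        a, b = attn[:, 0, :, None, None], attn[:, 1, :, None, None]
        return a * F_event + b * F_rgb                           # Eq. (6)

fused = SelectiveFusion()(torch.randn(2, 1, 120, 160), torch.randn(2, 3, 120, 160))
print(fused.shape)   # torch.Size([2, 64, 120, 160])
```

The per-channel softmax guarantees \(a_{c}+b_{c}=1\), so the module interpolates between the event and RGB feature maps channel by channel before the reconstruction stage.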
#### Iii-A2 Fusion Loss
In order to allow the entire fusion network to effectively merge visual information from the two modalities, a joint loss \(\mathcal{L}_{joint}\) is designed as shown in Equation 8. We use the reconstruction loss between the fusion image and the enhanced RGB image as the primary loss (i.e., \(\mathcal{L}_{Enhanced}\)), and that between the fusion image and the event image as the auxiliary loss (i.e., \(\mathcal{L}_{Event}\)). Both reconstruction losses are implemented as an \(\mathcal{L}_{2}\) loss that measures the mean squared error between the fusion image and the respective event or enhanced RGB image. During training, the fusion network is trained to decrease \(\mathcal{L}_{joint}\).
Fig. 3: The multi-modal fusion network of EVEN.
Fig. 2: An overview of the proposed framework for monocular depth estimation at adverse night conditions (e.g., at foggy night). Our framework, named EVEN, leverages a three-phase process to estimate depth: 1) phase-1: enlightening the low-light RGB image; 2) phase-2: fusing visual information from enhanced RGB and event images; 3) phase-3: estimating depth based on reconstructed fusion image.
\[\mathcal{L}_{joint}=\beta\times\mathcal{L}_{Enhanced}+(1-\beta)\times\mathcal{L}_{ Event} \tag{8}\]
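A short sketch of Equation 8 follows, assuming the fusion image and both reference images are tensors of the same shape; the \(\beta=0.8\) default matches the value reported in the implementation details below.

```python
import torch
import torch.nn.functional as F

# Joint reconstruction loss of Equation 8: an L2 term against the enhanced RGB
# image (primary) and against the event image (auxiliary), weighted by beta.
def joint_loss(fusion, enhanced_rgb, event_img, beta=0.8):
    loss_enhanced = F.mse_loss(fusion, enhanced_rgb)
    loss_event = F.mse_loss(fusion, event_img)
    return beta * loss_enhanced + (1.0 - beta) * loss_event
```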
### _Phase-3: Depth Estimation_
The fusion image, which contains visual information from both event and RGB modalities, is then used as the source for depth estimation. We separately adopt two state-of-the-art depth estimation networks, i.e., Depthformer [31] and SimIPU [32] in our EVEN framework to carry out the depth estimation with the fusion image as their input.
## IV Dataset
To the best of our knowledge, there is currently no dataset that is proposed for monocular depth estimation at adverse night conditions, containing paired RGB, event and depth images. In order to validate the effectiveness of our proposed framework, and advance future research in this direction, we construct the first adverse night-time driving dataset that includes the aforementioned data modalities and the ground truth depth maps. The dataset was constructed using CARLA [33], a popular simulator in the field of autonomous driving, and the event camera plugin [34].
### _Data Collection and Statistics_
We collect the data through a sensor suite that contains an RGB camera, an event-based camera, and a depth camera. The positive and negative thresholds for triggering an event of the event camera were set to 0.4. All the sensors were set with a FOV of 90 degrees, a resolution of \(640\times 480\) pixels, and a data generation rate of 8 Hz. All sensors were mounted on a vehicle with an egocentric view while it was driving around. The start and end points of the vehicle were manually selected and the routes of the vehicle were accurately planned. The speed and following distance of the vehicles, as well as the lights at night, were based on road traffic guidelines to achieve the maximum realism of night-time driving. The driving scenes are diverse, including typical city and rural roads, as well as tunnel and highway. The statistics of the driving scenarios are shown in Fig. 4(a). We also apply adverse weather conditions such as rain, fog, and the combination of rain and fog to the scene, and Fig. 4(b) shows the distribution of the adverse weather in our dataset. The dataset contains 11,191 samples. Each sample has paired RGB, event and ground truth depth images. We split the entire dataset into 70% for training, 15% for validation and the rest 15% for testing. We name our dataset as **MonoANC** (**Monocular depth estimation at **A**dverse **N**ight **C**onditions).
## V Experiment
In this section, we first describe the implementation details of our framework - EVEN, and then the evaluation metrics, followed by the baseline methods that are used to compare against our framework. We then show overall results of all methods on MonoANC, and present the results of cross validation of the performance of EVEN on different adverse weather combinations at the end.
### _Implementation Details_
We implement our EVEN framework using PyTorch. The learning rate for training the multi-modal fusion network was 1e-3. AdamW [35] was used as the optimizer. Weight decay was set to 1e-3. Step size of scheduler was 5 during the training of the fusion network and we trained it for 100 epochs. We set \(\beta\) to 0.8 in Equation 8. After the fusion network was properly trained, we pre-generated the fusion images, and trained depth estimation network (i.e., Depthformer and SimIPU) using their default settings [36].
### _Evaluation Metrics_
We use standard evaluation protocols following [37] to evaluate our framework and all baseline methods. Specifically, we measure the mean absolute relative error (Abs. Rel.), mean squared relative error (Sq. Rel.), root mean squared error (RMSE), and mean log10 error (Log10). Apart from the error metrics, we also adopted three different thresholds as the accuracy metrics which are the common practice in the literature, i.e., \(\alpha=1.25^{i},i=1,2,3\).
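For reference, the metrics listed above can be computed as in the common protocol; the sketch below assumes `pred` and `gt` are positive depth arrays of the same shape (valid-pixel masking is omitted).

```python
import numpy as np

def depth_metrics(pred, gt):
    # standard monocular depth error and accuracy metrics
    thresh = np.maximum(pred/gt, gt/pred)
    return {
        'Abs. Rel.': np.mean(np.abs(pred - gt)/gt),
        'Sq. Rel.':  np.mean(((pred - gt)**2)/gt),
        'RMSE':      np.sqrt(np.mean((pred - gt)**2)),
        'Log10':     np.mean(np.abs(np.log10(pred) - np.log10(gt))),
        'a1':        np.mean(thresh < 1.25),
        'a2':        np.mean(thresh < 1.25**2),
        'a3':        np.mean(thresh < 1.25**3),
    }
```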
### _Baseline Methods_
We implement six baselines to compare and examine the effectiveness of our framework on boosting depth estimation, i.e., the use of low-light enhancement and fusion of event and RGB modalities. As mentioned earlier, salient edges and texture are core features for depth estimation. We therefore adopted the Sobel operator [38], which is an edge detector, to process the RGB modality, and use the resulting image as an alternative to the event image in our framework to justify the use of event data, which is also able to retain salient edge information.
1. RGB: the raw RGB image is fed directly into the depth estimation network as the only input for depth estimation.
2. Event: the event image is fed directly into the depth estimation network as the only input.
3. RGB + Sobel: the paired raw RGB and Sobel operator processed images are used as the inputs to the phase-2 of EVEN, followed by depth estimation of phase-3.
Fig. 4: Distribution of different night-time driving environments (a) and different adverse weather conditions (b).
4. RGB + Event: the paired raw RGB and event images are used as the inputs to the phase-2 of EVEN, followed by depth estimation of phase-3.
5. RGB\(Enhanced\): the enhanced RGB image after phase-1 is fed directly into the depth estimation network as the only input for depth estimation.
6. RGB\(Enhanced\) + Sobel: the paired enhanced RGB image after phase-1 and Sobel operator processed image are used as the inputs to the phase-2 of EVEN, followed by depth estimation of phase-3.
### _Overall Results_
As we instantiate the depth estimation network separately as either a Depthformer or a SimIPU, we run six baselines accordingly based on the instantiated depth estimation network. Table I summarizes the overall results. Our complete EVEN framework outperforms the baseline methods, and its performance improvement is consistent across Depthformer and SimIPU. The absolute relative error (Abs. Rel.) is reduced by 41.7% and 57.3% respectively compared to a single RGB input to the Depthformer and SimIPU. An 11.5% relative improvement on \(\alpha\)1 accuracy metric can also be observed for EVEN using Depthformer, and a 20.7% increase for EVEN using SimIPU, compared to a single RGB image as the input to these two depth estimation networks.
Fig. 5 shows four qualitative results of depth estimation on MonoANC. It can be observed that the depth maps estimated by EVEN have a noticeable improvement in detail at the edges, as well as for objects in the far distance, compared to those of the baselines. As indicated by the red boxes in Fig. 5, our complete EVEN framework can produce depth maps without much artifact, and they are closer to the ground truth. These results visually confirm that the fusion of the edge information and HDR features of event data in EVEN is effective. When we replace the event image with the Sobel operator processed image, i.e., indicated by RGB\(Enhanced\) + Sobel, the quality of the estimated depth map slightly degrades, but is still better than those of the remaining baseline methods.
### _Cross Validation on Adverse Weather_
We further split MonoANC based on different weather conditions. Specifically, there are three adverse weather conditions as shown in Fig. 4(b): 1) rain only; 2) fog only; 3) rain and fog occurring together. We split the dataset into two sets. One set contains samples of rain only and fog only, and the other set contains samples with the simultaneous occurrence of rain and fog in the scene. A two-fold cross-validation is then conducted to evaluate the performance of EVEN. Table II shows the results. When the framework has seen each individual weather condition during training, it can well estimate the depth of the scene with mixed adverse weather conditions, i.e., rain and fog occurring at the same time in the scene. Conversely, it becomes difficult for the framework to estimate depth for scenes with only a single adverse weather condition if the training data contains only scenes of mixed adverse weather. Hence, a cost function that decomposes adverse weather combinations is worth investigating for better depth estimation in future work.
## VI Conclusion
In this paper, we have proposed a framework that integrates low-light enhancement and fuses RGB and event modalities for effective monocular depth estimation under adverse night conditions. A synthetic night-time driving dataset that contains paired RGB, event and depth images has also been constructed, which includes scenes that encountering adverse weather, light and road conditions. The experiment results have shown that our proposed framework is able to achieve satisfactory depth estimation results in various adverse night scenarios.
TABLE II: Cross validation results of EVEN on different adverse weather conditions. The first seven metric columns are for Depthformer and the last seven for SimIPU; error metrics (Abs. Rel., Sq. Rel., RMSE, Log10) are lower-is-better, accuracy metrics (\(\alpha\)1-\(\alpha\)3) are higher-is-better.

| Train Set | Test Set | Abs. Rel. | Sq. Rel. | RMSE | Log10 | \(\alpha\)1 | \(\alpha\)2 | \(\alpha\)3 | Abs. Rel. | Sq. Rel. | RMSE | Log10 | \(\alpha\)1 | \(\alpha\)2 | \(\alpha\)3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| rain and fog at the same time | rain only and fog only | 0.325 | 1.987 | 8.475 | 0.187 | 0.471 | 0.645 | 0.797 | 0.330 | 1.865 | 8.710 | 0.187 | 0.420 | 0.655 | 0.786 |
| rain only and fog only | rain and fog at the same time | **0.267** | **0.315** | **4.934** | **0.031** | **0.646** | **0.833** | **0.937** | **0.250** | **0.307** | **4.933** | **0.031** | **0.680** | **0.844** | **0.939** |
TABLE I: Results on the MonoANC dataset when the depth estimation network in EVEN is instantiated as Depthformer and SimIPU, respectively (error metrics \(\downarrow\), accuracy metrics \(\uparrow\)).

Depthformer:

| Input Sequence | Abs. Rel. | Sq. Rel. | RMSE | Log10 | \(\alpha\)1 | \(\alpha\)2 | \(\alpha\)3 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RGB | 0.192 | 0.310 | 4.973 | 0.069 | 0.810 | 0.911 | 0.985 |
| Event | 0.452 | 0.220 | 7.775 | 0.172 | 0.390 | 0.622 | 0.795 |
| RGB + Sobel | 0.180 | 0.340 | 5.304 | 0.064 | 0.808 | 0.908 | 0.956 |
| RGB + Event | 0.179 | 0.340 | 5.992 | 0.067 | 0.795 | 0.920 | 0.956 |
| RGB\(_{Enhanced}\) | 0.181 | 0.390 | 5.737 | 0.074 | 0.765 | 0.924 | 0.971 |
| RGB\(_{Enhanced}\) + Sobel | 0.139 | **0.280** | 5.023 | 0.063 | 0.806 | 0.970 | 0.988 |
| EVEN (Ours) | **0.112** | **0.280** | **4.335** | **0.049** | **0.903** | **0.976** | **0.993** |

SimIPU:

| Input Sequence | Abs. Rel. | Sq. Rel. | RMSE | Log10 | \(\alpha\)1 | \(\alpha\)2 | \(\alpha\)3 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RGB | 0.293 | 0.370 | 5.177 | 0.079 | 0.710 | 0.921 | 0.972 |
| Event | 0.594 | 1.240 | 9.180 | 0.116 | 0.552 | 0.828 | 0.932 |
| RGB + Sobel | 0.266 | 0.310 | 4.947 | 0.067 | 0.773 | 0.930 | 0.976 |
| RGB + Event | 0.229 | 0.280 | 5.151 | 0.057 | 0.837 | 0.953 | 0.984 |
| RGB\(_{Enhanced}\) | 0.263 | 0.300 | 4.998 | 0.058 | 0.824 | 0.948 | 0.984 |
| RGB\(_{Enhanced}\) + Sobel | 0.216 | **0.240** | **4.080** | 0.063 | 0.846 | 0.954 | 0.986 |
| EVEN (Ours) | **0.125** | 0.280 | 4.845 | **0.049** | **0.857** | **0.959** | **0.988** |
## Appendix A
Fig. 5: Qualitative results of depth estimation on MonoANC dataset. Top two examples are the results when Depthformer is adopted as the depth estimation network, and the bottom two examples are the results when SimIPU is adopted as the depth estimation network. Areas indicated by the red boxes show that our EVEN framework can better estimate monocular depth than other baseline methods.
|
2304.07988
|
Reward-free Policy Imitation Learning for Conversational Search
|
Existing conversational search studies mainly focused on asking better
clarifying questions and/or improving search result quality. These works aim at
retrieving better responses according to the search context, and their
performances are evaluated on either single-turn tasks or multi-turn tasks
under naive conversation policy settings. This leaves some questions about
their applicability in real-world multi-turn conversations where realistically,
each and every action needs to be made by the system itself, and search session
efficiency is often an important concern of conversational search systems.
While some recent works have identified the need for improving search
efficiency in conversational search, they mostly require extensive data
annotations and use hand-crafted rewards or heuristics to train systems that
can achieve reasonable performance in a restricted number of turns, which has
limited generalizability in practice.
In this paper, we propose a reward-free conversation policy imitation
learning framework, which can train a conversation policy without annotated
conversation data or manually designed rewards. The trained conversation policy
can be used to guide the conversational retrieval models to balance
conversational search quality and efficiency. To evaluate the proposed
conversational search system, we propose a new multi-turn-multi-response
conversational evaluation metric named Expected Conversational Reciprocal Rank
(ECRR). ECRR is designed to evaluate entire multi-turn conversational search
sessions towards comprehensively evaluating both search result quality and
search efficiency.
|
Zhenduo Wang, Zhichao Xu, Qingyao Ai
|
2023-04-17T04:49:01Z
|
http://arxiv.org/abs/2304.07988v1
|
# Reward-free Policy Imitation Learning for Conversational Search
###### Abstract.
Existing conversational search studies mainly focused on asking better clarifying questions and/or improving search result quality. These works aim at retrieving better responses according to the search context, and their performances are evaluated on either single-turn tasks or multi-turn tasks under naive conversation policy settings. This leaves some questions about their applicability in real-world multi-turn conversations where realistically, each and every action needs to be made by the system itself, and search session efficiency is often an important concern of conversational search systems. While some recent works have identified the need for improving search efficiency in conversational search, they mostly require extensive data annotations and use hand-crafted rewards or heuristics to train systems that can achieve reasonable performance in a restricted number of turns, which has limited generalizability in practice.
In this paper, we propose a reward-free conversation policy imitation learning framework, which can train a conversation policy without annotated conversation data or manually designed rewards. The trained conversation policy can be used to guide the conversational retrieval models to balance conversational search quality and efficiency. To evaluate the proposed conversational search system, we propose a new multi-turn-multi-response conversational evaluation metric named Expected Conversational Reciprocal Rank (ECRR). ECRR is designed to evaluate entire multi-turn conversational search sessions towards comprehensively evaluating both search result quality and search efficiency.
conversational search, imitation learning

## 1. Introduction
In this paper, we propose a novel reward-free conversational search policy imitation learning (IL) framework. Our IL framework trains a simple conversation policy model, which decides between asking the retrieved clarifying question and returning the retrieved results to the user at each conversation stage, given the current conversation context and retrieval results. Inspired by recent advances in Generative Adversarial Imitation Learning (GAIL) (Sundundur et al., 2017), to train the conversation policy model, we compute and identify an expert trajectory (a multi-turn-multi-response tuple) from all possible conversation trajectories using an automatic evaluation metric, and we use the expert trajectories as the training samples without any data annotations or manually-crafted rewards. Our IL framework also allows us to train policies according to different conversational search evaluation metrics that represent different user assumptions. Hence, our policy learning framework could potentially generalize to other future conversational search tasks, datasets, and user assumptions.
To better evaluate the multi-turn-multi-response conversational search sessions, we also propose a new automatic conversational evaluation metric called Expected Conversational Reciprocal Rank (ECRR). Our metric can evaluate the entire response trajectory of the search agent in the multi-turn conversational search session. This is fundamentally different from naive metrics which evaluate one single-turn retrieved response quality at a time. Our metric is also an attempt toward a comprehensive evaluation metric for conversational search, which evaluates both the search result quality and search session efficiency.
We consider our work to make three contributions:
* We propose a new reward-free conversational search policy imitation learning framework, which can learn a conversation policy to jointly optimize search effectiveness and efficiency without data annotations or manually designed rewards.
* To comprehensively evaluate search effectiveness and efficiency, we propose a new multi-turn-multi-response conversational search evaluation metric named Expected Conversation Reciprocal Rank, which can better approximate system performances in real-world scenarios.
* To the best of our knowledge, we are also the first to use multi-turn metrics as an (indirect) learning objective for conversational search policy learning.
The rest of the paper is organized as follows: we first review the related work (§2); then introduce the proposed IL-based framework (§3) and propose to evaluate the conversational search task with the proposed ECRR metric (§4). We cover the detailed experimental setup (§5), and report and analyze the experimental results (§6). Finally, we wrap this work up and point out future directions (§7). To facilitate reproducibility in the IR community, our code will be made public once this work is accepted.
## 2. Related Works
**Conversational Search.** Studies on conversational search can be traced back to the very beginning of research on interactive information retrieval (Sundur et al., 2017; Krizhevsky et al., 2014; Krizhevsky et al., 2014). The framework proposed by Radlinski and Craswell (Craswell, 2015), along with other conceptualization works including (Krizhevsky et al., 2014; Krizhevsky et al., 2014), are some examples of the recent surge of conversational search system studies. These works have taken various approaches and covered multiple aspects of conversational search. For example, Yu et al. (Yu et al., 2017; Yu et al., 2017) study effective query rewriting and learning contextualized query representations from an ad hoc teacher model. Zhang et al. (Zhang et al., 2017) study conversational preference elicitation. Vakulenko et al. (Vakulenko et al., 2016) study knowledge-based conversational search. Some works (Zhu et al., 2017; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) provide datasets with different focuses for these studies. Other works such as (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) provide seminars and tutorials from different standpoints.
Among these topics, a popular line of work (e.g. (Zhu et al., 2017; Krizhevsky et al., 2014; Krizhevsky et al., 2014)) about asking clarifying questions is highly related to this paper. Asking clarifying questions in conversational search can be traced back to the TREC 2004 HARD track (Brock et al., 2015), where asking clarifying questions was a system option to get additional information. Recently, more works have shown that asking clarifying questions can benefit conversational search systems. A study in 2018 (Sundur et al., 2017) showed that users enjoyed interacting with conversational systems. Following this work, several other works (Zhu et al., 2017; Krizhevsky et al., 2014; Krizhevsky et al., 2014) demonstrate that asking clarifying questions is a convenient alternative when the query information is insufficient for retrieving good results. Today, we may need to reconsider how long we can count on users' tolerance and patience for the mistakes of our conversational systems once the novelty of conversational AI wears off, as interacting with conversational AI will soon become, if it has not already become, a norm of daily life. However, only a few works (Zhu et al., 2017; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) have addressed the problem of deciding whether to ask clarifying questions in conversational search systems. Our work extends these works and generalizes the problem of identifying the need for asking clarifying questions to conversational search policy modeling.
**Conversational Search Evaluation.** The evaluation of conversational search remains open and diverse because of (1) conversational intelligent systems being a common research interest of both the natural language processing and information retrieval communities, (2) the term-sharing between conversational search and other related areas like dialog systems and conversational QA, and (3) the variety of task configurations, solutions, and evaluations (e.g. slot filling/response retrieval/generation, single/multiple responses per turn, single/multi-turn evaluation), among others. Due to the cost and complexity of involving human users in the loop, apart from a few works which study online evaluation frameworks and interfaces (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014), most works simulate users' behaviors and evaluate the conversation sessions with offline methods for their simplicity and reproducibility. Metrics evaluated on single-turn single-response tasks can be categorized as word-overlap-based (BLEU (Sundur et al., 2017), METEOR (Krizhevsky et al., 2014)), embedding-based (BERT-Score (Krizhevsky et al., 2014)), learning-based (BERT-RUBERT (Krizhevsky et al., 2014)), or F1 score for slot filling tasks. Metrics evaluated on single-turn multi-response tasks are mainly ranking-based metrics (nDCG (Sundur et al., 2017), RBP (Sundur et al., 2017), ERR (Sundur et al., 2017)).
The evaluation of conversational search still faces many challenges (Krizhevsky et al., 2014). Multi-turn session evaluation is usually done by combining single-turn metrics with or without session-based weighting (SWF (Sundur et al., 2017), sDCG (Sundur et al., 2017)). In fact, adapting single-turn evaluation metrics for multi-turn evaluation is more complicated than the above methods suggest. The reason is that a conversational search system, by definition, interacts with the user and generates indefinite search sessions, which means that different system responses can lead to completely different conversation trajectories. Thus, combining single-turn evaluation metrics through weighting is questionable, as it overlooks the effect of each turn on future turns due to the turn-independence assumption. Recently, there have been some works on multi-turn evaluation. sRBP (Wang et al., 2017) evaluates session ranking based on a user model. Wang and Ai (Wang and Ai, 2018; Wang et al., 2019) evaluate conversational search sessions by simulating various types of users. ECS (Wang et al., 2017) suggests evaluating the entire conversation session by identifying sub-topics. We follow the same setup as these works and use a cascade user model for multi-turn evaluation.
While most of the above metrics focus on evaluating result quality or search effectiveness, other works also mention measuring search efficiency. Optimizing search efficiency (or search effort) means promoting more efficient conversational systems which can achieve the same goal with less interaction with the user, and it is usually measured by the number of conversation turns. Early works like (Wang et al., 2019; Wang et al., 2019) highlight efficiency as another major measure besides effectiveness in interactive QA evaluation. Among more recent works, RBP (Wang et al., 2019) and ERR (Wang et al., 2019) are two single-turn multi-response evaluation metrics with an efficiency-measuring aspect. They are both based on an explicit user behavior simulator that models the user's patience in exhausting the search result list during the search.
**Task-oriented Dialog System.** With the ability to interact with the user, conversational search systems are closely related to dialog systems, as both aim to help the user through multi-turn conversations. However, a task-oriented dialog system usually knows the exact task it needs to solve, such as flight booking or weather queries, while conversational search tasks are more open. One way to connect the two tasks and think about their similarities is to regard conversational search as a goal-oriented dialogue whose goal is information seeking (Wang et al., 2019). Anand et al. (Anand et al., 2019) also characterize a conversational search system as _an information-seeking dialogue system with information retrieval capabilities_. Works like these have shown that conversational search and task-oriented dialog systems are highly similar in many aspects. Recently, a system-ask-user-respond scheme that is common in task-oriented dialog systems has also been seen in conversational search and recommendation system studies like (Anand et al., 2019; Anand et al., 2019). While task-oriented dialog systems usually respond to the user by generating natural language responses, conversational search systems focus on retrieving relevant information for the user from massive web sources. Besides this difference, a task-oriented dialog system often involves solving multiple sub-tasks, while the conversational search problem we focus on in this paper is to clarify and answer the user's initial information request through multi-turn conversation.
Because of the connections between task-oriented dialog systems and conversational search systems, these systems share many structural similarities. Previous research on task-oriented dialogue systems in NLP can be roughly categorized into two groups (Wang et al., 2019; Wang et al., 2019): (1) pipeline/modular system, and (2) end-to-end system. Pipeline systems regard the process of a task-oriented dialogue system as an iterative decision-making process with four major modules: natural language understanding (NLU), dialogue state tracking (DST), dialogue policy (POL), and natural language generation (NLG). During the iterative decision-making process, the system reads the dialogue (NLU) and processes the information to have an understanding of the current dialogue state (DST), then decides its next action concerning the understanding (POL), and finally generates natural language output implementing the action (NLG). Some works try to combine some of the modules such as combining NLU with DST (Wang et al., 2019; Wang et al., 2019), or policy with NLG (Wang et al., 2019; Wang et al., 2019).
End-to-end systems such as (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) model task-oriented dialogue as a response generation problem through language modeling, with the entire dialogue history modeled as a long text sequence, essentially serializing the pipeline.
**Imitation Learning.** Imitation learning (IL) aims to train a system to mimic human behavior by learning from human demonstrations (Wang et al., 2019). IL can be implemented as supervised learning, where the agent learns to behave like the expert on all training samples; this is known as behavior cloning (Wang et al., 2019). However, simply copying the behavior on the training set has been shown to be inefficient and not generalizable. IL is more often cast as an inverse reinforcement learning (IRL) problem (Beng et al., 2016; Wang et al., 2019; Wang et al., 2019), where the system tries to infer the reasoning behind the expert demonstrations and then learns to apply it to unseen scenarios. General reinforcement learning (RL) algorithms usually require explicit or manually designed reward signals for training. However, in real-world tasks such as autonomous driving and robotics, such signals rarely exist and defining rewards is usually difficult or even impossible. IRL is a learning paradigm that learns a policy with no available rewards by inferring the rewards from expert demonstrations for the RL algorithm to use.
The IRL algorithms and their deep neural network versions iteratively perform the IRL-RL learning steps from expert demonstrations and are generally considered to be costly in terms of time. With the advance of studies about generative adversarial nets (GAN) (Finn et al., 2017), Finn et al. (Finn et al., 2017) suggest the equivalencies between GAN, IRL, and energy-based models, and Ho et al. (Ho et al., 2017) propose that the IRL-RL learning iteration can be simplified by solving the dual problem of occupancy measure matching. They subsequently propose a generative adversarial imitation learning (GAIL) algorithm, which could reduce the cost of imitation learning by a large margin. Till today, GAIL has been studied and extended by many works (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019).
The applications of imitation learning methods can mostly be found in robotics and autonomous driving. To the best of our knowledge, our work is among the very first efforts to apply imitation learning methods to conversational search system studies.
## 3. Proposed System
We start this section with an overview of our conversational search system (§3.1); then we walk through the system from the retrieval models (§3.2) to the policy model (§3.3). Finally, we introduce the inference procedure of the proposed model (§3.4).
### Conversational Search System with Policy
Our conversational search system models a conversational search agent which aims to retrieve relevant information for the user's information need by interacting with the user through multi-turn conversation. Our system consists of two modules:
The first module is a collection of retrieval models, each modeling a specific action that the agent may take, e.g. retrieving results, or
asking clarifying questions. We assume that most conversational search systems such as (Bang et al., 2017; Wang et al., 2018) have the same or similar structure as the first module. However, most of them do not have a model to decide which action to make, which is our second module.
The second module is a conversational search policy model, which decides the next agent action given the user's search query, current conversation context, and the retrieved candidates of each action from the first module. In our work, we only consider two main agent actions, returning the retrieved result to the user or asking a clarifying question to the user. Therefore, our policy model only needs to make decisions between the two actions, and there will be only two retrieval models, namely the result retriever and the clarifying question retriever. However, given enough community attention and studies on other possible agent actions and related datasets, our agent should easily be extended and adapted. Our conversational search agent is also an implementation of recent conversational search conceptualization studies, which suggest categorizing agent actions and modeling them individually as discussed in Section 1.
It has been proposed and shown in (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) that integrating the retriever outputs in the policy model can improve the overall agent performance and control the risk in clarifying questions compared to the popular baseline of first predicting the agent action and then running the corresponding retrieval model. Following their works, our system also runs the retrieval models before the policy model. Hence, we will first introduce the retrieval models (§3.2) and then the policy model (§3.3).
### Retrieval Models
We have two retrieval models for modeling the search result retrieval and clarifying question retrieval tasks, respectively. The goal of the retrieval models is to retrieve the best result or clarifying question based on the search query and conversation context. The result and clarifying question retrievers use the same poly-encoder (Wang et al., 2018) structure but do not share parameters and are trained separately.
**Poly-encoder.** Assume the conversation context between user and agent can be represented as \(\{U_{0},A_{0},..,U_{N},A_{N}\}\), where \(U\)s are user utterances, and \(A\)s are agent utterances. The response candidates can be represented as \(P=\{p_{1},..,p_{K}\}\). In the poly-encoder, the concatenated search context \(q=\texttt{[S]}\,U_{0}\,\texttt{[SEP]}\,A_{0}\,\texttt{[SEP]}\,U_{1}\ldots\) is first encoded into vectors \((h_{q_{1}},..,h_{q_{2N}})\) with a pretrained transformer, where \(\texttt{[S]}\) represents the start-of-sentence token, and \(\texttt{[SEP]}\) represents the sentence segmentation token. Then it attends to \(m\) learnable codes \((C^{1},..,C^{m})\) to generate \(m\) attended context vectors \((q^{1},..,q^{m})\):
\[q^{i}=\sum_{j=1}^{2N}w_{j}^{i}h_{q_{j}} \tag{1}\]
where \((w_{1}^{i},..,w_{2N}^{i})=\text{softmax}(C^{i}\cdot h_{q_{1}},..,C^{i}\cdot h _{q_{2N}})\).
The poly-encoder then encodes each candidate \(p_{k}\) into a vector \(E_{p_{k}}\) using pretrained transformer \(T\) and dimension reduction function \(red(\cdot)\), which can aggregate a sequence of vectors into one vector:
\[E_{p_{k}}=red(T(p_{k})) \tag{2}\]
Finally, the \(m\) context vectors \((q^{1},..,q^{m})\) attend to the candidate vector \(E_{p_{k}}\) to compute candidate-attended context vector \(E_{q}\):
\[E_{q}=\sum_{i}^{m}w_{i}q^{i} \tag{3}\]
where \((w_{1},..,w_{m})=\text{softmax}(E_{p_{k}}\cdot q^{1},..,E_{p_{k}}\cdot q^{m})\). Then the ranking score of candidate \(p_{k}\) is computed as the dot product \(s_{k}=E_{p_{k}}\cdot E_{q}\). The output of the retriever is the rank of all the candidates and their ranking scores.
The result retriever and clarifying question retriever are trained separately. Both of them are first initialized using pretrained checkpoints, and then fine-tuned on batches of (search context, result) or (search context, clarifying question) pairs to minimize the cross-entropy loss between the softmax result of ranking score vector and the true relevance label vector.
\[L_{\text{poly}}=\sum^{B}\text{CE}(\text{softmax}(s_{1},..,s_{K}),I(p_{1},..,p_ {K})) \tag{4}\]
After the retrieval models are trained, their parameters will be fixed during the training of policy models.
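As a minimal sketch of Eqs. (1)-(4), the following PyTorch-style code computes the candidate ranking scores for one search context and the corresponding cross-entropy training loss. The tensor shapes, function names, and the aggregation of each candidate into a single embedding (the \(red(\cdot)\) step) are our assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def poly_encoder_scores(h_q, codes, cand_emb):
    """Candidate ranking scores for one context, following Eqs. (1)-(3).
    h_q:      (L, d) token-level context vectors from the transformer
    codes:    (m, d) learnable context codes C^1..C^m
    cand_emb: (K, d) aggregated candidate embeddings E_{p_k} (the red(T(p_k)) step)
    returns:  (K,)   scores s_k = E_{p_k} . E_q"""
    attn = F.softmax(codes @ h_q.T, dim=-1)        # Eq. (1): weights over the 2N context vectors
    q_vecs = attn @ h_q                            # m attended context vectors q^1..q^m
    w = F.softmax(cand_emb @ q_vecs.T, dim=-1)     # Eq. (3): weights over the m codes, per candidate
    e_q = w @ q_vecs                               # candidate-attended context vectors E_q
    return (cand_emb * e_q).sum(-1)                # dot products E_{p_k} . E_q

def retrieval_loss(scores, relevant_idx):
    """Eq. (4): cross-entropy between the softmaxed scores and the index of the relevant candidate."""
    return F.cross_entropy(scores.unsqueeze(0), torch.tensor([relevant_idx]))
```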
### Policy Model and Imitation Learning
**Markov Decision Process.** We consider the task of choosing which retrieval result to return in each stage of conversational search as a Markov Decision Process (MDP) problem. The conversational search MDP is a tuple \(\{\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R}\}\) where:
* \(\mathcal{S}\) is a set of conversational states
* \(\mathcal{A}\) is a set of system actions
* \(\mathcal{T}(s,a)\) is the transition probability distribution over the next state \(s^{\prime}\) after taking action \(a\) in state \(s\)
* \(\mathcal{R}(s,a)\) is the immediate reward of taking action \(a\) in state \(s\)
To be more specific, the state \(\mathcal{S}\) comprises the initial query, the utterance history context, and the retrieved clarifying questions and results. The system action set is \(\mathcal{A}=\{\text{return results},\text{ask question}\}\). The transition probability \(\mathcal{T}\) is modeled by a user cascade model, which will be explained in Section 5.
A trajectory \(\tau\) in MDP is a series of system actions together with their resulting states. \(\tau\) can be efficiently represented as \(\tau=\{(s_{1},a_{1}),..,(s_{t},a_{t})\}\). In conversational search, the system only has two actions. We assume that the action of returning results will end the conversation, and the action of asking a question will continue the conversation. Therefore, any conversational search trajectory will be an arbitrary number of asking-question actions followed by one returning-result action.
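The MDP ingredients above can be written down compactly; the sketch below fixes illustrative Python types for states, actions, and trajectories (the names are ours, not from the paper).

```python
from dataclasses import dataclass
from typing import List, Tuple

ASK_QUESTION, RETURN_RESULT = 0, 1     # the two actions in A

@dataclass
class ConvState:
    """A state s in S: initial query, utterance history, and both retrievers' ranked outputs."""
    initial_query: str
    history: List[str]
    result_ranklist: List[Tuple[str, float]]     # (candidate, score) from the result retriever
    question_ranklist: List[Tuple[str, float]]   # (candidate, score) from the question retriever

# A trajectory tau = [(s_1, a_1), ..., (s_T, a_T)]: zero or more ASK_QUESTION actions
# followed by exactly one terminal RETURN_RESULT action.
Trajectory = List[Tuple[ConvState, int]]
```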
**Policy Model.** The job of our conversational search policy model is to decide which agent action to take at any state \(S\). In our setting, the agent only has two actions, either returning retrieved results for the user's query or asking a clarifying question. Hence, the output of our policy model is a probability distribution over the two actions. To represent the input state \(S\), we first use a pretrained transformer to encode the initial query, \(iq\), the current conversation utterances \(q=\texttt{[CLS]}\,U_{0}\,\texttt{[SEP]}\,A_{0}\,\texttt{[SEP]}\,U_{1}\ldots\), the retrieved result candidates \((re_{1},..,re_{K})\), and the retrieved clarifying question candidates \((cq_{1},..,cq_{K})\). Then, we concatenate all these encodings together with the retrieved result and clarifying question scores as
the state representation \(S=(\textit{iq},\textit{q},\textit{re}_{1},..,\textit{re}_{K},\textit{cq}_{1},..,\textit{cq}_{K},s_{\textit{re}}^{1:K},s_{\textit{cq}}^{1:K})\). The policy model \(G\) is a 2-layer feed-forward neural net:
\[G(S)=\text{softmax}(W_{G2}\cdot\phi(W_{G1}\cdot S+b_{G1})+b_{G2}) \tag{5}\]
where \(W_{G1},W_{G2},b_{G1},b_{G2}\) are learnable parameters.
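A minimal PyTorch sketch of the policy \(G\) in Eq. (5) is given below; the hidden dimension and the choice of ReLU for \(\phi\) are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """2-layer feed-forward policy G of Eq. (5): state vector -> P(return result / ask question)."""
    def __init__(self, state_dim, hidden_dim=256):   # hidden_dim is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),   # W_G1, b_G1
            nn.ReLU(),                          # phi (activation assumed to be ReLU)
            nn.Linear(hidden_dim, 2),           # W_G2, b_G2
        )

    def forward(self, state_vec):
        return torch.softmax(self.net(state_vec), dim=-1)
```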
**Learning.** Policy models are usually trained by reinforcement learning methods such as policy gradient or Q-learning, which require manually designing and tuning the reward function of each action. For example, Wang and Ai (Wang and Ai, 2018) empirically determine the rewards of asking clarifying questions and returning answers with the intention of controlling the risks of agent actions. However, this could lead to a rather conservative conversational search policy, which further weakens the possibility of improving result quality by asking more clarifying questions.
RL algorithms are notoriously hard to train due to their requirement of manually designing and tuning reward functions. To avoid this effort, we propose to use imitation learning (IL), which learns the policy directly from expert demonstrations of the task without the help of additional training signals. The imitation learning algorithm we use is Generative Adversarial Imitation Learning (GAIL), which alternately updates and optimizes a (state, action) evaluator as a discriminator \(D\) and the policy model as a generator \(G\). We use GAIL instead of other IL algorithms as it can best approximate the expert policy by minimizing a real metric defined on the policy space and is more efficient compared to other IL algorithms (Wang and Ai, 2018). The algorithm suited for our task is outlined in Algorithm 1. Compared to the original GAIL, we change the discriminator loss function to a least-squares objective (Wang and Ai, 2018) for better performance. Therefore, we denote our framework as Least Square-GAIL or LSGAIL.
The GAIL algorithm requires an expert demonstration \(\tau_{E}\) as input, which is a trajectory of (state, action) pairs that can provide the best conversational search quality. \(\tau_{E}\) can be represented as \(\tau_{E}=\{(s_{0},a_{0}),..,(s_{T},a_{T})\}\). However, most conversational search datasets only contain the raw conversations, which can be represented as \(\{U_{0},A_{0},..,U_{N},A_{N}\}\). To get \(\tau_{E}\), we evaluate all possible conversational search trajectories in terms of the resulting conversational search quality and select the best one as the expert demonstration. The metric we use to evaluate the conversational search quality will be discussed in §4. The sampled trajectory \(\tau_{i}\) is the trajectory decided by the policy model parameters in the current iteration.
The input of the (state, action) discriminator \(D\) in the GAIL algorithm is the concatenated vector \(S_{a}\) of the initial query \(\textit{iq}\), the current utterance history \(\textit{q}\), the candidate texts of the action, and the candidate ranking scores \(s\) of the action. For example, if the action is to return the result, then \(S_{a}=(\textit{iq},\textit{q},\textit{re}_{1},..,\textit{re}_{K},s_{\textit{re}}^{1:K})\). If the action is to ask the clarifying question, then \(S_{a}=(\textit{iq},\textit{q},\textit{cq}_{1},..,\textit{cq}_{K},s_{\textit{cq}}^{1:K})\). The output of \(D\) is a scalar indicating the probability of the (state, action) pair being from the expert trajectory. \(D\) is a 2-layer feed-forward neural net:
\[D(S_{a})=\text{Sigmoid}(W_{D2}\cdot\phi(W_{D1}\cdot S_{a}+b_{D1})+b_{D2}) \tag{6}\]
where \(W_{D1},W_{D2},b_{D1},b_{D2}\) are learnable parameters.
Our policy module is theoretically superior to previous RL-based conversational search models (Wang and Ai, 2018; Wang and Ai, 2018) as the training of the discriminator \(D\) and the policy \(G\) is reward-free, thus avoiding the burdensome reward-design and tuning steps. The idea behind GAIL is to train \(D\) with expert demonstrations and demonstrations generated by \(G\), and to use \(\log(D(s,a))\) as the estimated reward during the update of \(G\).
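To make the training procedure concrete, the sketch below implements one LSGAIL epoch: several least-squares discriminator updates followed by a single policy update that uses \(\log D(s,a)\) as the estimated reward, with the entropy weight and the 5:1 update ratio taken from the experimental settings in §5. The policy-gradient surrogate, batching, and variable names are our assumptions; the exact details of Algorithm 1 (e.g., trajectory sampling) are omitted.

```python
import torch

def lsgail_step(policy, disc, pol_opt, disc_opt,
                expert_sa, sampled_sa, sampled_states, sampled_actions,
                entropy_weight=1e-2, d_steps=5):
    """One LSGAIL epoch: d_steps discriminator updates, then one policy update.
    expert_sa, sampled_sa : (B, d_sa) discriminator inputs S_a for expert / sampled pairs
    sampled_states        : (B, d_s)  policy inputs S for the sampled pairs
    sampled_actions       : (B,)      actions taken by the current policy (0/1)"""
    # Least-squares discriminator objective: push expert pairs towards 1, sampled pairs towards 0.
    for _ in range(d_steps):
        d_loss = ((disc(expert_sa) - 1.0) ** 2).mean() + (disc(sampled_sa) ** 2).mean()
        disc_opt.zero_grad(); d_loss.backward(); disc_opt.step()

    # Policy (generator) update: log D(s, a) serves as the estimated per-step reward.
    with torch.no_grad():
        reward = torch.log(disc(sampled_sa).clamp_min(1e-8)).squeeze(-1)          # (B,)
    probs = policy(sampled_states)                                                # (B, 2)
    log_pi = torch.log(probs.gather(1, sampled_actions.view(-1, 1)).squeeze(1) + 1e-8)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)                      # exploration bonus
    g_loss = -(reward * log_pi).mean() - entropy_weight * entropy.mean()
    pol_opt.zero_grad(); g_loss.backward(); pol_opt.step()
    return d_loss.item(), g_loss.item()
```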
### Inference
During inference, an initial user query is first fed into the conversational search system; the system then enters the interaction loop with the user, where in each turn it decides between asking a clarifying question and returning the search result to the user. The result and question retrieval models are run once per turn to generate the input for the policy model, and then the policy model makes the final decision. The interaction loop continues if the user responds to the clarifying question, and stops if the system returns the search result or its retrieved clarifying question list does not contain any useful questions. Finally, the output of our system is a simulated conversational search trajectory, which is a series of clarifying question and search result retrieval outputs and can be represented as \(\{\pi_{cq}^{1:T-1},\pi_{res}^{T}\}\), where \(\pi_{cq}^{1:T-1}\) represents the retrieved clarifying question ranklists for the first \((T-1)\) clarifying turns, and \(\pi_{res}^{T}\) is the retrieved result ranklist for the last turn. We will evaluate our system using the entire trajectory it produces.
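The interaction loop described above can be summarized as follows, reusing the action constants from the earlier sketch; `build_state`, the retriever and user interfaces, and the turn limit are illustrative assumptions rather than the authors' code.

```python
def run_conversation(policy, result_retriever, question_retriever, user,
                     initial_query, max_turns=10):
    """Simulated inference loop: ask clarifying questions until the policy returns results."""
    history = [initial_query]
    cq_ranklists = []                                      # pi_cq^{1:T-1}
    for _ in range(max_turns):
        results = result_retriever(history)                # ranked (candidate, score) results
        questions = question_retriever(history)            # ranked (candidate, score) questions
        state = build_state(initial_query, history, results, questions)  # concatenated encodings
        action = policy(state).argmax().item()
        if action == RETURN_RESULT or not questions:
            return cq_ranklists, results                   # trajectory {pi_cq^{1:T-1}, pi_res^T}
        cq_ranklists.append(questions)
        top_question = questions[0][0]
        history += [top_question, user.answer(top_question)]
    return cq_ranklists, result_retriever(history)         # fall back to returning results
```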
## 4. Evaluation Metrics
### Expected Conversational Reciprocal Rank
In §3, we briefly mentioned an evaluation metric we use to select the expert demonstration from all possible conversation trajectories. We now explain this metric, which we use both for selecting the expert demonstration and for evaluating the performance of our system and the baseline models during inference.
The output of our conversational search system is a conversational search trajectory which can be represented as \(\{\pi_{cq}^{1:T-1},\pi_{res}^{T}\}\), where \(\pi_{cq}^{1:T-1}\) represents the retrieved clarifying question ranklists for the first \((T-1)\) clarifying turns, and \(\pi_{res}^{T}\) is the retrieved result ranklist for the last turn. With the above notation, we define the evaluation metric Mean Expected Conversational Reciprocal Rank based on the cascade model (Mean ECRR) as:
\[\text{Mean ECRR}=\frac{1}{M}\sum_{i=1}^{M}\text{ECRR}_{\mathcal{U}}(\pi_{cq}^{1:T-1},\pi_{res}^{T})\]
where ECRR is the Expected Conversational Reciprocal Rank for a single conversation, and \(\mathcal{U}\) is a cascade model that simulates user behavior in response to a retrieved clarifying question list. This is very similar to previous works such as (Kumar et al., 2017), which also use a user cascade model in their metric. To better understand the cascade model and ECRR, we recommend imagining the term _user_ not as a single user but as the entire user base. The cascade model assumes that (1) the user examines the clarifying question list in its rank order, and has a probability \(\alpha\) of examining each clarifying question; (2) when seeing a relevant clarifying question, the user answers the question and waits for the next response from the conversational system; (3) when seeing an irrelevant answer or clarifying question, the user continues to examine the next response with probability \(\alpha\) or leaves the conversation with probability \((1-\alpha)\). The parameter \(\alpha\) reflects the fraction of users in the user base that we assume to be tolerant and patient.
With the cascade model assumption, during all but the last conversation turn \(t\leq(T-1)\), when a retrieved clarifying question ranklist \(\pi_{cq}^{t}\) is returned to the user, we can compute the following probability:

\[p^{t}=\alpha^{r(\pi_{cq}^{t}=1)} \tag{7}\]

This is the probability that the user will examine the true relevant clarifying question under the cascade model on \(\pi_{cq}^{t}\), where \(r(\pi_{cq}^{t}=1)\) denotes the rank of the relevant clarifying question. Then, the user will either examine the clarifying question and continue the conversation with this probability \(p^{t}\), which leaves the quality of the conversation to be determined by future turns, or leave the conversation with probability \((1-p^{t})\) without seeing a result, which is a disappointing scenario for both the user and our system. The above process can be described using the following equation:
\[\text{ECRR}^{t}=p^{t}\cdot\text{ECRR}^{t+1}+(1-p^{t})\cdot 0 \tag{8}\]
In the last turn, when a retrieved result ranklist \(\pi_{res}^{T}\) is returned to the user, we can finally evaluate the quality of the retrieved result ranklist using the reciprocal rank of the true relevant result:

\[\text{ECRR}^{T}=\text{RR}(\pi_{res}^{T}) \tag{9}\]

From equations (8) and (9), we can derive that the ECRR of the entire multi-turn conversation is computed as:

\[\text{ECRR}_{\mathcal{U}}(\pi_{cq}^{1:T-1},\pi_{res}^{T})=\prod_{t=1}^{T-1}\alpha^{r(\pi_{cq}^{t}=1)}\cdot\text{RR}(\pi_{res}^{T}) \tag{10}\]
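Eq. (10) translates directly into a few lines of code. The sketch below computes ECRR for one conversation from the ranks of the relevant clarifying questions and the rank of the relevant result; the function and argument names are ours.

```python
def ecrr(question_ranks, result_rank, alpha):
    """Expected Conversational Reciprocal Rank, Eq. (10).
    question_ranks: rank of the relevant clarifying question in each of the T-1 clarifying turns
    result_rank:    rank of the relevant result in the final turn
    alpha:          per-question examination probability of the cascade user model"""
    prob_reach_end = 1.0
    for r in question_ranks:
        prob_reach_end *= alpha ** r      # probability the user reaches and answers this question
    return prob_reach_end / result_rank   # reciprocal rank of the final result list
```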
As mentioned earlier, the Ubuntu Dialog Corpus dataset only contains raw conversations. To train the imitation learning framework, we need to generate and compute expert trajectories from the raw data using ECRR. This can be done in the following steps (using one conversation as an example):
1. In each turn of the conversation, we run both of the retrieval models to generate a result rank list \(\pi_{res}\) and clarifying question rank list \(\pi_{cq}\).
2. Assume the conversation has \(T\) turns; then there are \(T\) possible trajectories \(\{(\pi_{cq}^{1:t-1},\pi_{res}^{t})\}_{t=1}^{T}\) in total, each ending at one of the \(T\) turns. We use formula (10) to compute the ECRR of all these trajectories, and then select the trajectory with the highest ECRR as the expert trajectory (see the sketch below). Notice that different \(\alpha\) can lead to different expert trajectories. For example, if we have a 2-turn conversation, the retrieved result reciprocal ranks for the 1st and 2nd turns are 0.33 and 1, and the retrieved question rank for the 1st turn is 3: using ECRR with \(\alpha=0.5\), the expert trajectory is to not ask the clarifying question, because asking the question has \(\text{ECRR}=0.5^{3}\times 1=0.125<0.33\); using ECRR with \(\alpha=0.7\), the expert trajectory is to ask the clarifying question, since \(0.7^{3}\times 1=0.343>0.33\).
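A sketch of the expert-trajectory selection in step 2, together with the two-turn example above, is given below; the argument names are illustrative, and `result_rrs[t]` denotes the reciprocal rank of the result retrieved at turn \(t\).

```python
def select_expert_trajectory(question_ranks, result_rrs, alpha):
    """Among the T candidate trajectories (stop at turn t after asking the first t-1 questions),
    return the stopping turn index and ECRR of the trajectory with the highest ECRR."""
    best_t, best_score = 0, -1.0
    for t, rr in enumerate(result_rrs):
        score = rr
        for r in question_ranks[:t]:        # clarifying questions asked before stopping at turn t
            score *= alpha ** r
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Two-turn example from the text: result RRs (0.33, 1.0), relevant question at rank 3.
print(select_expert_trajectory([3], [0.33, 1.0], alpha=0.5))  # (0, 0.33): return results immediately
print(select_expert_trajectory([3], [0.33, 1.0], alpha=0.7))  # (1, ~0.343): ask the question first
```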
ECRR can be seen as the average user satisfaction given the conversational search system's response trajectory. This metric evaluates the full trajectory without assuming any specific user type. The entire user population is modeled by the parameterized cascade model. Some users may go through the full trajectory and receive a search result, and others may not. Because of this, this metric is more realistic and applicable compared to the metrics used in (Zhu et al., 2017).
According to the definition of ECRR (Equation 10), the final score is determined by the number and quality of clarifying questions (cumulative product term) and the final retrieval result quality (reciprocal rank term). A policy trained to optimize ECRR is encouraged to get the best retrieved result (high effectiveness) while minimizing the number of clarifying questions asked to user (high efficiency). Thus ECRR accounts for both search effectiveness and efficiency. The parameter \(\alpha\) in the cascade model of ECRR controls the trade-off between effectiveness and efficiency.
As discussed in Section 2, most existing works on conversational search systems use single-turn metrics or stack single-turn metrics under a naive conversation policy; our work is among the few that evaluate conversational search with multi-turn metrics. Among these few works, the majority use multi-turn evaluation metrics only for evaluation, while we propose to also use the evaluation metric during conversational search policy training.
### Recall and MRR
Besides the ECRR metric, we also include recall and MRR as evaluation metrics, which are more commonly used when evaluating ranked lists. However, it is important to note that we use them on the entire conversational search system trajectory \(\{\pi_{cq}^{1:T-1},\pi_{res}^{T}\}\), instead of any single-turn ranking. In this case, it is also worth mentioning that MRR is a special case of ECRR when the cascade user model has a **binary** \(\alpha\). Specifically, \(\alpha=1\) when the top question is relevant, and \(\alpha=0\) otherwise, which means that users will always leave on seeing an irrelevant clarifying question.
## 5. Experiments and Evaluations
### Experiment Design
The goal of our experiments is to show that the proposed reward-free IL framework can (1) work reasonably well without reward tuning, and (2) generalize to different evaluation metrics or user assumptions. To show (1), we compare our framework with a risk-aware conversational search policy model, the Risk-aware Conversational Search agent with Q-learning (RCSQ) (Zhu et al., 2017), which requires manually tuning the reward. We will denote this model as RCSQ for the rest of the paper. With the retrieval models fixed, we run our framework and the RCSQ model with different rewards and compare their performances using various evaluation metrics. To show (2), we train our conversational search policy with respect to different evaluation metrics and compare its performance with the RCSQ model's performance on these different evaluation metrics.
### Dataset
We use the Ubuntu Dialog Corpus (UDC) dataset (Vinyals et al., 2017) in our experiments. The UDC dataset consists of question answering conversations about the Ubuntu system. We adopt the same processed and filtered dataset and simulation experiments as (Zhu et al., 2017; Zhu et al., 2018). The processing and filtering of the UDC dataset ensure that (1) all the conversations involve only two participants, i.e., the user and the agent, who speak in alternating turns, and (2) all the conversations have at least two turns, which means that there will be at least one clarifying question in each conversation. We use the filtered and randomly sampled 10000 conversations from (Zhu et al., 2017) as our dataset. Further details of the dataset can be found in Table 1.
### Baselines
Following our experiment design, we include two groups of baseline models in our experiments. The first group comprises 3 naive conversational search policies and 1 simple policy learning model:
1. **Q0A**, a baseline that always returns results without asking a clarifying question. Hence, it always uses the initial query as the only information for answer retrieval.
2. **Q1A**, a baseline that always returns results after asking exactly one clarifying question. It will always ask a clarifying question in the 1st turn, and then return results.
3. **Q2A**, a baseline similar to Q1A, but it will always return results in the 3rd turn after asking clarifying questions in the first two turns.
4. **CtxPred**, a simple conversational search policy learning model using behavior cloning (Zhu et al., 2017). This is similar to the clarifying-question-need classification models in works such as (Beng et al., 2017; Zhu et al., 2018).
The second group of baselines comprises several untuned variations of the RCSQ model in (Zhu et al., 2017). We use RCSQ as a representative of reinforcement learning policy models in general. The purpose of including untuned versions of RCSQ is to compare the proposed reward-free IL framework with reward-tuning methods such as RCSQ.
Finally, we include an oracle policy that can foresee all possible conversation trajectories and can always take the best action by backtracking. It is worth mentioning that a conversational search policy does not improve the retrieval models it works with. It only decides which retrieval model output (results or questions) to return to the user. Hence, given fixed retrieval models, there will be an upper bound for any conversational search policy. This oracle policy is the upper bound and is only meant to be used for reference.
### Technical Details
We use the poly-encoder implementation from ParlAI1. We download their pretrained checkpoints and finetune them on the UDC dataset. We implement our proposed IL framework from scratch based on PyTorch. We use the RCSQ model implementation from their GitHub repository. Our main experiments are run on a single GeForce RTX 2080 Ti GPU with 11GB of memory. The pre-training of the poly-encoder retrievers is done on 4 of these GPUs with batch size = 8.
Footnote 1: [https://github.com/facebookresearch/ParIAI/tree/master/projects/polyencoder](https://github.com/facebookresearch/ParIAI/tree/master/projects/polyencoder)
In our experiments, we use the train/val/test split of the dataset with an 8:1:1 ratio from (Zhu et al., 2017). The number of negative samples is \(k=99\). The original RCSQ model uses \(r=0.11\) as its tuned reward. We test their model with untuned rewards \(r=-0.1,0.3,0.5,0.7,0.9\). We train our model with a learning rate \(lr=10^{-4}\) and entropy weight \(\lambda=10^{-2}\). In each epoch of GAIL training, we train the discriminator 5 times and the generator once.
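For concreteness, a schematic version of one GAIL training epoch under this schedule is sketched below in PyTorch. The least-squares discriminator objective is assumed from the LSGAIL name, the `policy` and `discriminator` modules and the batch iterators are placeholders, and the actual implementation may differ.

```python
import torch
import torch.nn.functional as F

def train_gail_epoch(policy, discriminator, expert_batches, policy_batches,
                     opt_d, opt_pi, entropy_weight=1e-2, d_steps=5):
    """One schematic LSGAIL epoch: d_steps discriminator updates, then one policy update."""
    # --- discriminator: separate expert (state, action) pairs from policy rollouts ---
    for _ in range(d_steps):
        s_e, a_e = next(expert_batches)        # expert trajectories from raw search logs
        s_p, a_p = next(policy_batches)        # rollouts of the current policy
        d_expert = discriminator(s_e, a_e)
        d_policy = discriminator(s_p, a_p)
        # least-squares (LSGAN-style) objective: expert scores -> 1, policy scores -> 0
        loss_d = (F.mse_loss(d_expert, torch.ones_like(d_expert))
                  + F.mse_loss(d_policy, torch.zeros_like(d_policy)))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

    # --- generator (policy): one policy-gradient step; the discriminator score of the
    #     sampled action acts as a surrogate reward, with entropy regularization ---
    s_p, a_p = next(policy_batches)
    log_probs = F.log_softmax(policy(s_p), dim=-1)     # actions: ask a question / return results
    chosen_log_prob = log_probs.gather(1, a_p.unsqueeze(1)).squeeze(1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    reward = discriminator(s_p, a_p).detach().squeeze(-1)
    loss_pi = -(chosen_log_prob * reward).mean() - entropy_weight * entropy
    opt_pi.zero_grad()
    loss_pi.backward()
    opt_pi.step()
```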
## 6. Results and Analysis
### Experiment Results
Our experiment results are shown in Table 3. The rows are different conversational search policies. The 1st to 4th rows are the naive baseline models. Q0A, Q1A, and Q2A are deterministic policies, which ask exactly 0, 1, and 2 questions respectively before returning results to the user. CtxPred is a behavior cloning baseline that is trained using expert trajectories. These 4 baseline models in the first block represent the performance obtained when using the state-of-the-art dense retrieval models directly without a conversational search policy. The second block contains the untuned risk-control policies (Zhu et al., 2017), denoted as RCSQ. For example, the row RCSQ \(r=0.11\) is RCSQ with the reward table shown in Table 2. The third block contains our proposed IL framework trained toward different training targets, denoted as LSGAIL. For example, the row LSGAIL \(\alpha=0.3\) is LSGAIL trained under ECRR with cascade \(\alpha=0.3\), where \(\alpha\) is the probability that the user remains in the conversation after seeing an irrelevant clarifying question. The last row is the oracle policy, which is the performance upper bound of any conversational search policy given the fixed retrieval models. This upper bound is only meant to be used for reference, since it is almost unachievable.
The columns are different evaluation metrics. R@1/100 means recall@1 from 100 candidates, and MRR is the mean reciprocal rank. Notice that both are computed on the entire conversational search trajectory. They are followed by four ECRRs with \(\alpha=0.3,0.5,0.7,0.9\). From the table, we can see that each LSGAIL variation performs best on the ECRR with the same \(\alpha\) (bolded in the table). This shows that the different LSGAIL variations are trained
\begin{table}
\begin{tabular}{l|l|l} \hline item & Ubuntu Dialog & Ours \\ \hline \#conversations & 930,000 & 10,000 \\ \hline Max. turns & - & 10 \\ \hline Min. turns & 3 & 4 \\ \hline Avg. turns & 7.71 & 4.77 \\ \hline Avg. \#words per utterance & 10.34 & 20.85 \\ \hline \end{tabular}
\end{table}
Table 1. Original Ubuntu Dialog and our dataset statistics
\begin{table}
\begin{tabular}{c|c|c} \hline & Relevant & Irrelevant \\ \hline Search results & \multicolumn{2}{c}{Result Reciprocal Rank} \\ \hline Clarifying question & \(r_{cq}=0.11\) & \(P_{cq}=-0.89\) \\ \hline \end{tabular}
\end{table}
Table 2. RCSQ model reward table
indirectly using different evaluation metrics as their training targets for conversation policy. Most of the time, LSGAIL can train the policy to optimize the designated evaluation metric.
Now, we answer three research questions to explain the result table and to show that the claims in our experiment design hold.
_RQ1: How do different user types (\(\alpha\)) affect RCSQ?_ From Table 3, we can see that none of the RCSQ variations gives the best-performing policy on all the ECRR metrics. RCSQ with \(r=0.3\) outperforms the other variations on all the metrics except Recall@1/100 and ECRR \(\alpha=0.9\); the best-performing model on these two metrics is RCSQ with \(r=0.5\). We can also see how the performances of RCSQ with different rewards vary in Fig 1.
The fact that no single universal RCSQ reward fits all user types is why RCSQ is not good enough for conversational search policy learning. We now explain why this result is reasonable. The idea behind the RCSQ model is to treat conversational search policy as a risk-control task over conversational search actions. (1) The parameter \(r\) in RCSQ is the reward for clarifying questions. A small reward makes the policy more conservative about asking clarifying questions, and a higher reward makes it more optimistic about asking questions. (2) The parameter \(\alpha\) in ECRR reflects the actual degree of conversation action risk in the form of users' patience for clarifying questions. A smaller \(\alpha\) represents impatient users, who are more likely to leave the conversation when seeing bad clarifying questions. A larger \(\alpha\) represents patient users, who will spend more time on clarifying questions and interact more with the system.
As a result of (1) and (2), RCSQ with smaller \(r\) should theoretically work better on ECRR with smaller \(\alpha\) and worse on ECRR with larger \(\alpha\), and vice versa, which is what our experiments show (this can be seen from the figure and from the cells where RCSQ with \(r=0.5\), \(0.7\), \(0.9\) underperforms the baselines on ECRR \(\alpha=0.3,0.5\), while RCSQ with \(r=0.11,0.3\) underperforms on ECRR \(\alpha=0.9\)). However, in real-world scenarios, we do not know our search system's users in advance. Even if we did, RCSQ has no reward selection mechanism that utilizes such knowledge about the users besides grid search. This makes RCSQ (and reinforcement learning models in general) hard to generalize to various user scenarios.
_RQ2: How does LSGAIL compare to RCSQ?_ From Table 3, by comparing the performances of the LSGAIL framework and other
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline Policies & R@1/100 (binary \(\alpha\)) & MRR (binary \(\alpha\)) & ECRR \(\alpha=0.3\) & ECRR \(\alpha=0.5\) & ECRR \(\alpha=0.7\) & ECRR \(\alpha=0.9\) \\ \hline Q0A & **0.1580** & **0.2381** & **0.2381** & **0.2381** & **0.2381** & 0.2381 \\ Q1A & 0.1200 & 0.1630 & 0.1799 & 0.1938 & 0.2214 & **0.2693** \\ Q2A & 0.0230 & 0.0284 & 0.0359 & 0.0387 & 0.0530 & 0.0761 \\ CtxPred & 0.1165 & 0.1581 & 0.1531 & 0.1841 & 0.1951 & 0.2182 \\ \hline RCSQ r=-0.1\(\star\) & 0.1580 & 0.2381 & 0.2381 & 0.2381 & 0.2381 & 0.2381 \\ RCSQ r=0.11\(\star\) & 0.1675 & 0.2449 & 0.2452 & 0.2479 & 0.2524 & 0.2612 \\ RCSQ r=0.3 & 0.1700 & **0.2504\(\ddagger\)** & **0.2471\(\ddagger\)** & **0.2495\(\ddagger\)** & **0.2535\(\ddagger\)** & 0.2613 \\ RCSQ r=0.5 & **0.1710\(\ddagger\)** & 0.2348 & 0.2190 & 0.2291 & 0.2455 & **0.2767\(\ddagger\)** \\ RCSQ r=0.7 & 0.1630 & 0.2204 & 0.2015 & 0.2262 & 0.2347 & 0.2735 \\ RCSQ r=0.9 & 0.1470 & 0.1999 & 0.1771 & 0.2220 & 0.2263 & 0.2764 \\ \hline LSGAIL \(\alpha=0.3\) & 0.1600 & 0.2404 & **0.2397** & 0.2399 & 0.2403 & 0.2407 \\ LSGAIL \(\alpha=0.5\) & **0.1630** & **0.2410** & 0.2346 & **0.2405** & 0.2408 & 0.2412 \\ LSGAIL \(\alpha=0.7\) & 0.1620 & 0.2406 & 0.2394 & 0.2400 & **0.2410** & 0.2428 \\ LSGAIL \(\alpha=0.9\) & 0.1600 & 0.2123 & 0.1929 & 0.2066 & 0.2286 & **0.2696** \\ \hline Oracle & 0.2330 & 0.3233 & 0.3152 & 0.3208 & 0.3298 & 0.3570 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Comparison of all models and baselines on the sampled UDC dataset using the poly-encoder as re-ranker. The GAIL model is trained using the ECRR cascade user model with \(\alpha=0.3,0.5,0.7,0.9\). \(\rho\) is the user patience for total clarifying questions, and \(\tau\) is the user tolerance for irrelevant clarifying questions. Numbers in bold mark the best result among all variations. \(\dagger\) and \(\ddagger\) mean \(p<0.1\) and \(p<0.05\) statistical significance over all other models. \(\star\) denotes the model variation used by (Kumar et al., 2017).
Figure 1. Comparing LSGAIL with different \(\alpha\) and RCSQ model with different rewards.
policies vertically, we can draw the following conclusions: LSGAIL outperforms all the baseline policies most of the time. Using LSGAIL \(\alpha=0.5\) as an example, LSGAIL outperforms all the baseline policies including Q0A, Q1A, Q2A, and CtxPred on R@1/100, MRR, and ECRR \(\alpha=0.3\) significantly. From the table, the best baseline model is Q0A, with R@1/100 = 0.1580, MRR = 0.2381, and ECRR = 0.2381. LSGAIL gets R@1/100 = 0.1600, MRR = 0.2403, and ECRR = 0.2397, 0.2399, 0.2403. This implies that LSGAIL itself is an effective conversation policy training algorithm.
In general, LSGAIL's performance is on par with RCSQ. When compared with RCSQ variations whose rewards are too small or too large, LSGAIL achieves better performance, as shown in Fig 1. It does perform slightly worse than the finetuned RCSQ variations with \(r\)=0.11 or 0.3, due to the lack of a reward signal during training. However, when used on a new dataset or with unknown user types, LSGAIL does not need manual reward tuning to achieve similar performance.
_RQ3: How do different user types (\(\alpha\)) change the comparison?_ In RQ2, we saw that LSGAIL keeps on par with RCSQ, although it always slightly underperforms the best RCSQ variation. In RQ3, we further study their performances by comparing the result table and Fig 1 horizontally. Surprisingly, we cannot find any single RCSQ reward that outperforms LSGAIL on ECRR for every \(\alpha\). Specifically, RCSQ with \(r=0.11,0.3\) is better than LSGAIL on ECRR with \(\alpha=0.3,0.5,0.7\), but these variations do not generalize to ECRR with \(\alpha=0.9\), which represents the case of more patient users. To perform well there, RCSQ needs its reward finetuned to the range \([0.5,0.9]\). In contrast, LSGAIL never needs reward tuning and achieves comparable performance regardless of whether \(\alpha\) is small or large. This further shows that LSGAIL is more generalizable and easier to deploy than RCSQ.
### Case Study
We conduct case studies to understand why LSGAIL can outperform the untuned RCSQ model. Table 4 shows an example from our experiments that compares LSGAIL \(\alpha=0.9\) and RCSQ \(r=0.1\):
In this example, LSGAIL asks one clarifying question and gets ECRR = 1, while RCSQ directly returns the result and gets ECRR = 0.167. From this example, we can see that RCSQ tends to ask a clarifying question only when the reciprocal rank of the retrieved result is lower than the finetuned reward. Because of this mechanism, the RCSQ policy can improve the search result when it is completely irrelevant, e.g., when there are no relevant results in the top 10. However, its shortcoming is that it cannot improve sub-optimal results into good ones, e.g., move the correct result from 6th to 1st. The training of LSGAIL does not set a hard reward for clarifying questions; hence it does not have this problem and can further improve sub-optimal results into good results.
## 7. Conclusions
In this paper, we highlight the necessity of a conversational search policy in conversational search systems. As a solution, we propose a reward-free imitation learning framework for conversational search policy learning that addresses the problems of reinforcement learning methods: they require heavy reward tuning and are hard to generalize to different tasks and user types. The reward-free imitation learning framework trains the policy by inferring the best rewards from expert trajectories, which can be computed at low cost from raw conversational search logs. The algorithm can potentially generalize to any user assumption; hence, it addresses both of these problems.
To show our proposed framework can solve the two problems, we design experiments on the Ubuntu Dialog Corpus dataset and compare our proposed framework with three naive baseline policies, one behavior cloning policy learning method, and one representative reward-tuning reinforcement learning model. To evaluate the
\begin{table}
\begin{tabular}{p{56.9pt}|p{56.9pt}|p{113.8pt}|p{113.8pt}} \hline \hline
**Turn** & \multicolumn{2}{c|}{**Conversation**} & **Analysis** \\ \hline Turn 1 & User & \multicolumn{1}{p{113.8pt}|}{(Initial Query) Well i logged out and logged back in and somedo, the new user, still cannot sudo without being told off that this incident will be reported.} & This retrieved result is the correct result, but is ranked 6th by the retrieval model. This clarifying question is relevant, and is ranked 1st by the question retrieval model. In this case, the reward of returning the result is \(1/6=0.167\), while the reward of asking the question is \(r=0.1<0.167\). Because of this, RCSQ chooses to directly return the result to the user. Hence the ECRR of the RCSQ model is equal to \(1/6=0.167\). LSGAIL chooses to ask the clarifying question ranked 1st. Let's see what happens in the next turn. \\ \hline Turn 2 & & & This retrieved result is the same as in turn 1, but it is now ranked 1st by the result retrieval model thanks to more context. The retrieved question is also relevant, but it is ranked 7th by the question retrieval model. LSGAIL chooses to ask no more questions and returns the result, hence ECRR = 0.7\({}^{1}*1=0.7\). If the system asked the question, which is still relevant, and then went to turn 3, it would get an ECRR = 0.7\({}^{7}*1\approx 0.08\), which is lower because it would consume the user's time and degrade the user's search experience. \\ \hline \hline \end{tabular}
\end{table}
Table 4. An example of LSGAIL choosing the best search system actions.
entire conversational search trajectory, we propose a new multi-turn evaluation metric called ECRR. Our experiment results show that the proposed framework works reasonably well without reward tuning and generalizes well to different user assumptions. Our paper provides a useful reward-free imitation learning framework for conversational search policy training, which is easier to deploy than traditional reinforcement learning methods and more flexible with respect to various user assumptions.
|
2306.11993
|
Early Structure Formation from Primordial Density Fluctuations with a
Blue, Tilted Power Spectrum: High-Redshift Galaxies
|
Recent observations by the James Webb Space Telescope (JWST) discovered
unexpectedly abundant luminous galaxies at high redshift, posing possibly a
severe challenge to popular galaxy formation models. We study early structure
formation in a cosmological model with a blue, tilted power spectrum (BTPS)
given by $P(k) \propto k^{m_{\rm s}}$ with $m_{\rm s} > 1$ at small length
scales. We run a set of cosmological $N$-body simulations and derive the
abundance of dark matter halos and galaxies under simplified assumptions on
star formation efficiency. The enhanced small-scale power allows rapid
nonlinear structure formation at $z>7$, and galaxies with stellar mass
exceeding $10^{10}\,M_\odot$ can be formed by $z=9$. Because of frequent
mergers, the structure of galaxies and galaxy groups appears clumpy. The BTPS
model reproduces the observed stellar mass density at $z=7-9$, and thus eases
the claimed tension between galaxy formation theory and recent JWST
observations. The large-scale structure of the present-day Universe is largely
unaffected by the modification of the small-scale power spectrum. We conduct a
systematic study by varying the slope of the small-scale power spectrum to
derive constraints on the BTPS model from a set of observations of
high-redshift galaxies.
|
Shingo Hirano, Naoki Yoshida
|
2023-06-21T03:07:33Z
|
http://arxiv.org/abs/2306.11993v3
|
# Early Structure Formation from Primordial Density Fluctuations
###### Abstract
The first series of observations by the James Webb Space Telescope (JWST) discovered unexpectedly abundant luminous galaxies at high redshift, posing possibly a serious challenge to popular galaxy formation models. We study early structure formation in a cosmological model with a blue, tilted power spectrum (BTPS) given by \(P(k)\propto k^{m_{\rm s}}\) with \(m_{\rm s}>1\) at small length scales. We run a set of cosmological \(N\)-body simulations and derive the abundance of dark matter halos and of galaxies under simplified assumptions on star formation efficiency. The enhanced small-scale power allows rapid formation of nonlinear structure at \(z>7\), and galaxies with stellar mass exceeding \(10^{10}\,M_{\odot}\) can be formed by \(z=9\). Because of frequent mergers, the structure of galaxies and galaxy groups appears overall clumpy. The BTPS model reproduces the observed stellar mass density at \(z=7-9\), and thus eases the claimed tension between galaxy formation theory and recent JWST observations. Large-scale structure of the present-day Universe is largely unaffected by the modification of the small-scale power spectrum. Finally, we discuss the formation of the first stars and early super-massive black holes in the BTPS model.
Cosmology (343) -- Dark matter (353) -- Early universe (435) -- Galaxy formation (595) -- Population III stars (1285)
Shingo Hirano
Naoki Yoshida
## 1 Introduction
The so-called \(\Lambda\) Cold Dark Matter (\(\Lambda\)CDM) cosmology successfully reproduces a broad range of observations of the large-scale structure of the Universe, and thus has been established as the standard cosmological model. An important element of the standard model is the primordial density fluctuations generated in the very early universe with a nearly scale-independent power spectrum. While the large-scale density fluctuations have been observationally probed to \(k\sim 1\,{\rm Mpc}^{-1}\), the amplitude and the shape of the power spectrum on smaller, (sub-)galactic length scales are poorly constrained (e.g., Hlozek et al., 2012; Bullock & Boylan-Kolchin, 2017). Hence theoretical studies on galaxy formation often rely on significant extrapolation of the assumed scale-invariant primordial power spectrum (PPS).
Various possibilities have been proposed from the physics of the early universe that posit deviations from scale-invariance. Blue, tilted or enhanced power spectra can arise in beyond-standard cosmological models (e.g., Covi & Lyth, 1999; Martin & Brandenberger, 2001; Gong & Sasaki, 2011; Inman & Kohri, 2022), and yield a number of interesting cosmological and astrophysical consequences (e.g., Clesse & Garcia-Bellido, 2015; Germani & Prokopec, 2017). Earlier in Hirano et al. (2015), we studied early structure formation in models with a blue, tilted power spectrum (BTPS). It was shown that the enhanced small-scale density fluctuations drive the formation of nonlinear structure and of the first stars at very early epochs.
In this _Letter_, we study the formation and abundance of the first galaxies in the BTPS model in light of re
cent observations by the James Webb Space Telescope (JWST). A number of galaxies (candidates) with unexpectedly high stellar masses have been discovered (e.g., Finkelstein et al., 2022; Labbe et al., 2023). Boylan-Kolchin (2023) concludes that the inferred high stellar masses of the observed galaxy candidates require an extremely high star formation efficiency far exceeding plausible values of \(\epsilon\lesssim 0.3\) suggested by popular galaxy formation models (Gribel et al., 2017; Tacchella et al., 2018; Behroozi et al., 2020). The challenge brought by recent JWST observations motivates us to reconsider either the detailed physics of galaxy formation in the early universe or the standard cosmology model.
Modification of PPS may provide a viable solution by promoting early structure formation (Parashari and Laha, 2023; Padmanabhan and Loeb, 2023). Parashari and Laha (2023) compute the cumulative comoving stellar mass density (CCSMD) by adopting a modified form of PPS similar to Hirano et al. (2015). Their model can successfully reproduce the observed CCSMD _without_ assuming an unrealistically high star formation efficiency. They further argue that observations of high-redshift galaxies can provide invaluable information on small-scale density fluctuations that otherwise cannot be probed directly. Clearly, it is important and timely to study the formation of high-redshift galaxies by performing cosmological simulations for the BTPS model.
Throughout the present _Letter_, we adopt the cosmological parameters with total matter density \(\Omega_{\rm m}=0.3153\), baryon density \(\Omega_{\rm b}=0.0493\) in units of the critical density, a Hubble constant \(H_{0}=67.36\,{\rm km\,s^{-1}\,Mpc^{-1}}\), the root-mean-square matter fluctuation averaged over a sphere of radius \(8\,h^{-1}\,{\rm Mpc}\)\(\sigma_{8}=0.8111\), and primordial index \(n_{\rm s}=0.9649\)(Planck Collaboration et al., 2020).
## 2 Numerical simulations
We largely follow the method of our previous study (Hirano et al., 2015) to perform cosmological \(N\)-body simulations for models with three different PPS.
### Primordial power spectrum
The standard, scale-independent PPS is given by
\[P_{\rm prim}(k)\propto k^{n_{\rm s}}\,, \tag{1}\]
whereas the one with enhancement at small scales is given by
\[P_{\rm prim}(k) \propto k^{n_{\rm s}}\ ({\rm for}\ k\leq k_{\rm p})\,, \tag{2}\] \[\propto k_{\rm p}^{n_{\rm s}-m_{\rm s}}\cdot k^{m_{\rm s}}\ ({\rm for}\ k>k_{\rm p})\,. \tag{3}\]
We fix the pivot scale \(k_{\rm p}=1\,h\,{\rm cMpc^{-1}}\) (\(h\) comoving Mpc\({}^{-1}\)) and adopt two values \(m_{\rm s}=1.5\) and 2.0, which
Figure 1: The matter power spectra at \(z=1089\) that we use to generate the cosmological initial conditions. The gray line is for the standard \(\Lambda\)CDM model, whereas the other lines show the blue-tilted models with the pivot scale \(k_{\rm p}=1\,h\,{\rm cMpc^{-1}}\) and the tilt \(m_{\rm s}=1.5\) (soft-tilt) and 2.0 (hard-tilt), respectively.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multicolumn{1}{c}{ PPS model} & \(k_{\rm p}\) & \(m_{\rm s}\) & \(\sigma_{8}\) & \(L_{\rm box}\) & \(\{\epsilon_{\rm min},\,\epsilon_{\rm mean},\,\epsilon_{\rm max}\}\) & \(\{\epsilon_{\rm min},\,\epsilon_{\rm mean},\,\epsilon_{\rm max}\}\) \\ & (\(h\,{\rm cMpc^{-1}}\)) & & & (\({\rm cMpc}\,h^{-1}\)) & at \(z=9\) & at \(z=7.5\) \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline Hard-tilt & 1.0 & 2.0 & 0.8114093 & 10, 25, 50 & \{0.07, 0.12, 0.24\} & \{0.13, 0.30, 0.70\} \\ Soft-tilt & 1.0 & 1.5 & 0.8112334 & 10, 25, 50 & \{0.10, 0.18, 0.32\} & \{0.17, 0.40, 0.97\} \\ \multicolumn{1}{c}{\(\Lambda\)CDM} & - & - & 0.8111 & 10, 25, 50 & \{0.17, 0.35, 0.64\} & \{0.25, 0.60, 1.46\} \\ \hline \end{tabular} Note. – Column (1): model name. Column (2): pivot scale (\(k_{\rm p}\)). Column (3): tilt index (\(m_{\rm s}\)). Column (4): root-mean-square matter fluctuation averaged over a sphere of radius \(8\,h^{-1}\,{\rm Mpc}\) (\(\sigma_{8}\)). Column (5): periodic box length (\(L_{\rm box}\)). Columns (6) and (7): star formation efficiencies required to exceed the lower limit, center, and upper limit of the observation-limited CCSMD region (\(\epsilon_{\rm min}\), \(\epsilon_{\rm mean}\), \(\epsilon_{\rm max}\)) at \(z=9\) and 7.5.
\end{table}
Table 1: List of three models
Figure 2: Projected density distributions \(\delta+1=\rho/\bar{\rho}\) and DM halos with \(M_{\rm halo}\geq 10^{9}\,M_{\odot}\). The top and middle panels show the structure at \(z=9\) in a volume of side-length \(10\,{\rm cMpc}\,h^{-1}\), whereas the bottom panels show large-scale structure at \(z=0\) with a side-length of \(50\,{\rm cMpc}\,h^{-1}\). The left, center, and right panels are for \(\Lambda\)CDM, soft-tilt, and hard-tilt models. The circle size in the middle panels scales with the halo mass. The inset in panel (c) is a zoom-in image of one of the most massive halos.
we call "soft-tilt" and "hard-tilt", respectively. We note that our soft-tilt model is in marginally acceptable parameter space from available observational constraints (see Figure 2 in Parashari and Laha, 2023).
For the BTPS models, we adjust the normalization \(\sigma_{8}\) to ensure that the amplitude of the fluctuations at large wavelength above the pivot scale is the same as in the standard \(\Lambda\)CDM model. Figure 1 shows the resulting matter power spectra at \(z=1089\).
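A minimal sketch of the piecewise primordial spectrum in Equations (1)-(3) is given below; the overall amplitude is left arbitrary (in practice it is fixed by the \(\sigma_{8}\) normalization described above), and no transfer function is applied, so this is the primordial shape rather than the matter spectra of Figure 1.

```python
import numpy as np

def primordial_power(k, n_s=0.9649, m_s=2.0, k_p=1.0, amplitude=1.0):
    """Piecewise primordial spectrum of Eqs. (1)-(3):
    P(k) ~ k^n_s for k <= k_p and ~ k_p^(n_s - m_s) * k^m_s for k > k_p,
    so the two branches match continuously at the pivot k_p.
    k and k_p are in h/cMpc; 'amplitude' is an arbitrary normalization."""
    k = np.asarray(k, dtype=float)
    low = amplitude * k**n_s
    high = amplitude * k_p**(n_s - m_s) * k**m_s
    return np.where(k <= k_p, low, high)

k = np.logspace(-2, 2, 5)               # h/cMpc
for m_s in (0.9649, 1.5, 2.0):          # m_s = n_s recovers the scale-free case
    print(m_s, primordial_power(k, m_s=m_s))
```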
### Cosmological simulations
We use the public code MUSIC (Hahn and Abel, 2011) to generate cosmological initial conditions. We employ \(512^{3}\) dark matter particles in comoving cubes of \(L_{\rm box}=10\), \(25\), and \(50\,{\rm cMpc}\,h^{-1}\). Three different simulation volumes are adopted to derive the statistics of nonlinear structure accurately over a wide range of length and mass scales. The particle mass is \(6.52\times 10^{5}\), \(1.02\times 10^{7}\), and \(8.15\times 10^{7}\,M_{\odot}\), respectively. Dark matter halos with \(M_{\rm halo}=10^{8}\,M_{\odot}\) are resolved by more than 150 particles in our highest-resolution runs. Table 1 summarizes the basic simulation parameters. We use the parallel \(N\)-body code GADGET-2 (Springel, 2005) to follow structure formation from redshift \(z=99\) to 0.
### Analysis
We follow Boylan-Kolchin (2023) and Parashari and Laha (2023) to compute the cumulative comoving stellar mass density (CCSMD) from the simulation outputs. We determine the required star formation efficiency (\(\epsilon\)) to reconcile the recent JWST observation.
We first run a Friends-of-Friends group finder with linking length \(b=0.164\,(m/\bar{\rho}(z))^{1/3}\), where \(m\) is the \(N\)-body particle mass and \(\bar{\rho}(z)\) is the mean density of the universe at redshift \(z\), to obtain the halo mass function \({\rm d}n(M,z)/{\rm d}M\). We combine the halo mass functions from our simulations with different volumes (particle masses) to construct the halo mass function for each model over a wide mass range. Next, we calculate the cumulative comoving mass density of halos
\[\rho(\geq M_{\rm halo},z)=\int_{M_{\rm halo}}^{\infty}{\rm d}MM\frac{{\rm d}n (M,z)}{{\rm d}M}\,. \tag{4}\]
Finally, we compute the CCSMD with stellar mass larger than \(M_{*}\) at each redshift, \(\rho_{*}(\geq M_{*},z)=\epsilon f_{\rm b}\rho(\geq M_{\rm halo},z)\). We assume \(M_{*}=\epsilon f_{\rm b}M_{\rm halo}\), where \(f_{\rm b}=\Omega_{\rm b}/\Omega_{\rm m}\), and also assume that the star formation efficiency \(\epsilon(\leq\!1)\) is constant over the redshift range we consider.
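Schematically, this last step reduces to a reverse cumulative integral over a binned halo mass function followed by the \(\epsilon f_{\rm b}\) rescaling; the sketch below illustrates it with a toy power-law mass function standing in for the combined FoF measurements, and the function name and binning are illustrative assumptions.

```python
import numpy as np

def cumulative_stellar_mass_density(M, dn_dM, epsilon, f_b=0.0493 / 0.3153):
    """CCSMD rho_*(>= M_*) with M_* = epsilon * f_b * M_halo (Eq. 4 plus the epsilon*f_b scaling).
    M     : halo-mass bin centers [Msun], ascending
    dn_dM : halo mass function dn/dM [cMpc^-3 Msun^-1] at those bins"""
    integrand = M * dn_dM
    seg = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(M)        # trapezoid per bin
    rho_halo = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])   # integral from M[i] to M[-1]
    return epsilon * f_b * M, epsilon * f_b * rho_halo

# toy power-law mass function standing in for the combined FoF measurements
M = np.logspace(8, 12, 200)                     # Msun
dn_dM = 1e-3 * (M / 1e10) ** (-1.9) / M         # cMpc^-3 Msun^-1 (illustrative only)
M_star, rho_star = cumulative_stellar_mass_density(M, dn_dM, epsilon=0.1)
print(M_star[0], rho_star[0])                   # lowest stellar-mass threshold and its CCSMD
```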
## 3 Results
Figure 2 shows the projected density distributions for the three models. The enhanced small-scale density fluctuations yield stronger density contrast in the BTPS models than in the \(\Lambda\)CDM model; numerous nonlinear objects (halos) are already formed by \(z=9\). We compare the halo mass functions in Figure 3, which will be used later to discuss the cumulative stellar mass. The number density of halos with mass \(10^{9}<M/M_{\odot}<10^{11}\) differs by an order of magnitude between the hard-tilt model and the \(\Lambda\)CDM model.
The middle panels of Figure 2 show that massive galaxies (halos) are strongly clustered in the BTPS models. The relative galaxy bias with respect to the underlying dark matter distribution will provide crucial information on the nature of the early galaxy population. Future wide-field observations of galaxy distribution and clustering by JWST or the Roman Space Telescope will enable us to discriminate theoretical models of galaxy formation (Munoz et al., 2023).
The bottom panels of Figure 2 suggest that the large-scale structure on scales of several tens of Mpc in the present-day universe (\(z=0\)) remains essentially the same. Clearly, the enhancement of the PPS at small length scales does not ruin the success of the \(\Lambda\)CDM model. The same holds for the halo mass function at \(z=0\) plotted in Figure 3(c).
Figure 4 shows the CCSMDs for the three models at \(z=9\) and 7.5. There we assume \(\epsilon=0.1\) (solid lines) and 0.3 (dashed lines) with the latter corresponding to the plausible upper limit suggested by theoretical models of galaxy formation (Gribel et al., 2017; Tacchella et al., 2018; Behroozi et al., 2020). The green regions indicate the JWST observations adopted from Parashari and Laha (2023), who calculated the CCSMD from the observation of Labbe et al. (2023) with the corresponding spectroscopic updates in two redshift bins \(z\in[7,8.5]\) and \([8.5,10]\) using the three most massive galaxies.
As has been suggested in the recent literature, a large value of \(\epsilon\sim 0.3\) is required for the \(\Lambda\)CDM model to reproduce the mean value of the observed CCSMD at \(z=9\) (Figure 4a). Even with the "maximal" \(\epsilon\), the CCSMD does not reach the lower limit of the uncertainty range of the observations at \(z=7.5\) (Figure 4b). The CCSMD of the soft-tilt model is about ten and three times larger than the \(\Lambda\)CDM model at \(z=9\) and 7.5, respectively. Thus the soft-tilt model requires a moderate star formation efficiency of \(\epsilon\sim 0.1-0.3\) to match the observed CCSMDs both at \(z=9\) and 7.5. Table 1 summarizes the star formation efficiency required for the CCSMD of each model to exceed the observationally inferred lower limit (\(\epsilon_{\rm min}\)), mean (\(\epsilon_{\rm mean}\)), and upper limit (\(\epsilon_{\rm max}\)).
## 4 Discussion
The result of our cosmological simulations is largely consistent with the estimate of Parashari & Laha (2023) based on analytic halo mass functions. Our simulations confirm that the standard \(\Lambda\)CDM model requires unrealistically high star formation efficiencies of \(\epsilon>0.3\) to reconcile the observed CCSMD. Although there have already been proposals for galaxy formation physics that can realize a high star formation efficiency (e.g., Dekel et al., 2023), a slight modification of the PPS may be another promising solution that alleviates the need for large deviation from the currently popular galaxy formation models.
Interestingly, recent JWST observations also show that there are many galaxies with clumpy structures and also galaxies in the process of mergers in proto-cluster environments at \(z=7-9\)(Hashimoto et al., 2023; Hainline et al., 2023). Nonlinear structure forms early in the BTPS model, assembles via mergers, and thus galaxies at \(z=7-9\) tend to appear clumpy, as seen in Figure 2.
Figure 4: Cumulative comoving stellar mass density (CCSMD) for the \(\Lambda\)CDM (gray), soft-tilted (light blue), and hard-tilted (blue) models at \(z=9\) (panel a) and \(7.5\) (b). We adopt moderate star formation efficiency of \(\epsilon=0.1\) (solid lines) and \(0.3\) (dashed lines). The green regions are the CCSMD adopted from Parashari & Laha (2023) for the observations of Labbe et al. (2023).
Figure 3: Halo mass functions at \(z=9\), \(7.5\), and \(0\) from left to right. The solid lines show our simulation results and the dashed lines represent the analytical Sheth–Tormen mass functions (Sheth & Tormen, 1999) calculated by the public code genmf(Reed et al., 2007). The gray, light blue, and dark blue lines are for the \(\Lambda\)CDM, soft-tilted, and hard-tilted models. We construct the simulation data by combining results for different box sizes, \(L_{\rm box}=10\), \(25\), and \(50\,{\rm cMpc}\,h^{-1}\).
The formation epoch and the properties of the first stars are strongly affected by the PPS. Hirano et al. (2015) run a set of hydrodynamics simulations starting from nearly the same BTPS, and find that very massive stars with mass exceeding \(100\,M_{\odot}\) are formed early. Xing et al. (2023) discovered a metal-poor star that shows definite chemical signatures of the so-called pair-instability supernova caused by a very massive star early in the formation history of the Milky Way.
The existence of super-massive black holes in the early universe generally suggests rapid growth of structure and formation of appropriate seed black holes at early epochs (e.g., Inayoshi et al., 2020). Larson et al. (2023) discovered a massive black hole candidate at \(z=8.6\). Stellar-mass black holes are formed as remnants of massive Population III stars as early as \(z\sim 50-100\) in the BTPS model, leaving enough time for the seeds to grow by mass accretion to massive black holes by \(z=8\). Frequent halo mergers realized in the BTPS model may also enhance the formation of massive black holes in the early universe (Wise et al., 2019; Regan, 2023).
Tight constraints can be placed on the slope or the amplitude of PPS at sub-galactic length scales from the epoch of reionization. Contribution to reionization from individual Population III stars is severely limited by the measurement of the Thomson optical depth by Planck (Visbal et al., 2015). Since there still remains substantial uncertainty in both observations of reionization history (Hinshaw et al., 2013; Planck Collaboration et al., 2020; Forconi et al., 2023) and in astrophysical modeling (Fialkov, 2022), it would be necessary to perform detailed numerical simulations of galaxy formation and reionization for the BTPS model.
JWST has opened a new window into the distant universe. Future observations by JWST and by other cosmology surveys will reveal the physical properties and the formation history of galaxy _populations_ at high redshift, and will ultimately provide new insight into the early universe physics that generates the primordial density fluctuations from which the first galaxies are formed.
Numerical computations were carried out on Cray XC50 at CfCA in National Astronomical Observatory of Japan and Yukawa-21 at YITP in Kyoto University. Numerical analyses were in part carried out on the analysis servers at CfCA in National Astronomical Observatory of Japan. This work was supported by JSPS KAKENHI Grant Numbers JP21K13960, JP21H01123, and JP22H01259 (S.H.), and MEXT as "Program for Promoting Researches on the Supercomputer Fugaku" (Structure and Evolution of the Universe Unraveled by Fusion of Simulation and AI; Grant Number JPMXP1020230406, Project ID hp230204) (S.H. and N.Y).
|
2308.07485
|
Prospecting effective Yang-Mills-Higgs models for the asymptotic
confining flux tube
|
In this work, we analyze a large class of effective Yang-Mills-Higgs models
constructed in terms of adjoint scalars. In particular, we reproduce asymptotic
properties of the confining string, suggested by lattice simulations of $SU(N)$
pure Yang-Mills theory, in models that are stable in the whole range of
Higgs-field mass parameters. These properties include $N$-ality, Abelian-like
flux-tube profiles, independence of the profiles with the $N$-ality of the
quark representation, and Casimir scaling. We find that although these models
are formulated in terms of many fields and possible Higgs potentials, a
collective behavior can be established in a large region of parameter space,
where the desired asymptotic behavior is realized.
|
David R. Junior, Luis E. Oxman, Gustavo M. Simões
|
2023-08-14T22:34:08Z
|
http://arxiv.org/abs/2308.07485v2
|
# Prospecting effective Yang-Mills-Higgs models for the asymptotic confining flux tube
###### Abstract
In this work, we analyze a large class of effective Yang-Mills-Higgs models constructed in terms of adjoint scalars. In particular, we reproduce asymptotic properties of the confining string, suggested by lattice simulations of \(SU(N)\) pure Yang-Mills theory, in models that are stable in the whole range of Higgs-field mass parameters. These properties include \(N\)-ality, Abelian-like flux-tube profiles, independence of the profiles with the \(N\)-ality of the quark representation, and Casimir scaling. We find that although these models are formulated in terms of many fields and possible Higgs potentials, a collective behavior can be established in a large region of parameter space, where the desired asymptotic behavior is realized.
## I Introduction
The dual superconductivity scenario to describe confinement in pure Yang-Mills (YM) theory has been a subject of intense research for several decades [1; 2; 3; 4; 5; 6]. According to this mechanism, the Yang-Mills vacuum behaves as a condensate of chromomagnetic objects that gives rise to a confining flux tube between quark probes. This idea has been explored extensively using lattice simulations. For example, along the transverse direction to the flux tube, the profile for the longitudinal component of the chromoelectric field has been fitted with the solitonic Abelian Nielsen-Olesen vortex [7]. The underlying objects that could condense in four-dimensional spacetime have also been studied [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. Ensembles formed by monopoles that propagate along worldlines and thin center-vortices, which are gauge field configurations characterized by loops that propagate along worldsurfaces, have been identified in the YM vacuum [13]. In particular, the \(N\)-ality property observed in large Wilson loops was reproduced when the average over Monte Carlo configurations is replaced by one over simpler thin center-vortex configurations, extracted from the complete link variables, which happen to percolate in the continuum limit. Then, one important question is how to conciliate the Abelian-like behavior of the flux tube and \(N\)-ality. These features can be accommodated in effective Yang-Mills-Higgs (YMH) field models with \(SU(N)\to Z(N)\) spontaneous symmetry breaking (SSB) [20; 21; 22; 23; 24; 25; 26; 27], with or without \(SU(N)\) flavor symmetry. Moreover, \(SU(N)\) gauge field models constructed with adjoint Higgs fields effectively describe the asymptotic behavior of the different condensates observed in the lattice [28] (see also [19]). In these models, the effective \(SU(N)\) gauge field \(\Lambda_{\mu}\) represents the Goldstone modes for the percolating thin center vortices, with the natural \(N\)-matching rule among center-vortex worldsurfaces. The adjoint Higgs fields, minimally coupled to \(\Lambda_{\mu}\), effectively describe monopole worldlines attached to worldsurfaces, thus including the nonoriented (in the Lie algebra) center-vortex component. The Higgs potential contains a mass term \(\mu^{2}(\psi_{I},\psi_{I})\) and the natural matching rules among monopole worldlines. On this direction, an \(SU(N)\) color and flavor symmetric model based on adjoint fields \(\psi_{I}\in\mathfrak{su}(N)\) with \(N^{2}-1\) flavors, \(I=1,\ldots,N^{2}-1\), was analyzed in Refs. [29; 30; 31]. Within this framework, when \(\mu^{2}=0\), the flux tube between external probes coincides with the Abelian Nielsen-Olesen (finite) vortex. In addition, at asymptotic distances, as the group representation of the external probes is varied, the string tension satisfies a Casimir scaling law.
From a phenomenological point of view, studies in the lattice cast some doubts about whether field models with Nielsen-Olesen profiles are suitable to describe the confining string. The analysis of the energy-momentum tensor showed deviations from the Abelian counterparts at intermediate
distances [32]. A possible way out could be the consideration of non-Abelian models away from the Abelianization point. However, it could also happen that the intermediate confining regime lies outside the domain of applicability of the effective field model. Being originated from thin objects, it could only be used at asymptotic distances. In this case, there is still an issue with the model studied in Ref. [30]: when moving from the Abelianization point at \(\mu^{2}=0\), there is a neighboring region (\(\mu^{2}<0\)) where the model becomes unstable. In fact, it would be interesting if the \(SU(N)\to Z(N)\) SSB pattern could coexist with a negative \(\mu^{2}\), which is naturally obtained in ensembles where monopole worldlines have negative tension (monopole proliferation) and positive stiffness [33; 34]. On this direction, this state could be stabilized by additional quartic terms not considered in the original formulation. Indeed, the model described in Ref. [30] does not contain all possible terms compatible with color and flavor symmetry. In this work, we present a thorough investigation of the most general flavor-symmetric \(SU(N)\) model with \(N^{2}-1\) adjoint Higgs flavors, studying the possibility of coexistence of asymptotic \(N\)-ality, Abelian-like profiles, Casimir scaling, and stable regions in parameter space. These are important properties compatible with present lattice simulations of pure YM theory. The observed independence of the flux-tube cross-section with respect to the \(N\)-ality of the quark representation [35] will also be discussed.
## II General \(Su(n)\) model with \(N^{2}-1\) flavors
In this section, we shall initially review an effective SU(N) YMH model with \(N^{2}-1\) adjoint scalar fields, which was proposed in Ref. [39]. Its action reads
\[S=\int d^{4}x\left(\frac{1}{4}\langle F_{\mu\nu},F_{\mu\nu} \rangle+\frac{1}{2}\langle D_{\mu}\psi_{I},D_{\mu}\psi_{I}\rangle+V_{\rm H}( \psi)\right)\ \ \ \,\ \ \ \ \psi_{I}\in\mathfrak{su}(N)\;,\] \[F_{\mu\nu}=\frac{i}{g}\left[D_{\mu},D_{\nu}\right]\ \ \ \,\ \ \ \ D_{\mu}= \partial_{\mu}+g\Lambda_{\mu}\!\wedge\;,\] \[V_{\rm H}(\psi)=c+\frac{\mu^{2}}{2}\langle\psi_{A},\psi_{A} \rangle+\frac{\kappa}{3}f_{ABC}\langle\psi_{A}\wedge\psi_{B},\psi_{C}\rangle+ \frac{\lambda}{4}\langle\psi_{A}\wedge\psi_{B},\psi_{A}\wedge\psi_{B}\rangle\;. \tag{1}\]
Here, we used the notation \(X\wedge Y=-i[X,Y]\), while the brackets denote the Killing product between two Lie Algebra elements, defined by
\[\langle X,Y\rangle={\rm Tr}\left({\rm Ad}(X){\rm Ad}(Y)\right)\ \ \ \ \,\ \ \ \ X,Y\in\mathfrak{su}(N)\;, \tag{2}\]
where \({\rm Ad}(X)\) refers to the adjoint representation. In our conventions, the basis \(T_{A}\) satisfies \(\langle T_{A},T_{B}\rangle=\delta_{AB}\). This model is invariant under color transformations
\[\psi_{I}\to U\psi_{I}U^{-1}\ \ \ \,\ \ \ \ U\in SU(N) \tag{3}\]
and under flavor transformations
\[\psi_{I}\rightarrow{\rm Ad}(U)_{IJ}\psi_{J}\;. \tag{4}\]
For an appropriate choice of the parameters, an \(SU(N)\to Z(N)\) spontaneous symmetry breaking (SSB) pattern is triggered. Then, as the first homotopy group of the vacuum manifold \({\cal M}=SU(N)/Z(N)\) is \(Z(N)\), the topologically stable vortex solutions display \(N\)-ality. At \(\mu^{2}=0\), they are Abelian-like and have an exact Casimir law at asymptotic distances for the \(k\)-Antisymmetric representations [40]. Furthermore, at \(\lambda=g^{2}\), this scaling law was shown to be stable as the energy of the \(k\)-Antisymmetric irrep is the smallest among the irreps with \(N\)-ality \(k\)[41]. These properties are compatible with those of the confining string observed in lattice simulations [35; 42]. Note that
the \(SU(N)\to Z(N)\) SSB becomes unstable for \(\mu^{2}<0\), as the energy of aligned configurations (\(\psi_{A}=\psi_{B}\,\) for all \(A\), \(B\)) is arbitrarily negative for large \(\langle\psi_{A},\psi_{A}\rangle\). This happens because both the cubic and the quartic terms are zero in this case. Thus, we are led to look for new models with additional relevant terms to stabilize the desired phase. Initially, it is interesting to consider a potential that depends on \(\psi_{A}\) through the real variable
\[\phi(x,g)=\langle\psi_{A},gT_{A}g^{-1}\rangle\;\;\;\;\;,\;\;\;\;g\,\in SU(N) \tag{5}\]
as follows
\[V(\psi)=c_{0}+\int d\mu(g)\,\left(\tfrac{a}{2}\phi^{2}+\tfrac{b}{3}\phi^{3}+ \tfrac{c}{4}\phi^{4}\right)\;. \tag{6}\]
This potential can be shown to be invariant under both color and flavor transformations (cf. eqs. (3), (4)) after using the invariance of the Haar measure \(\int d\mu(g)=\int d\mu(gU)\), \(U\in SU(N)\). Here, it is also clear that the quartic term is non-negative and the potential is bounded from below. In particular, that term would vanish only if \(\phi(g)\) itself vanished for every \(g\), in which case the quadratic term would also be zero. Therefore, the above-mentioned stability issue does not exist in this case. Indeed, when compared with eq. (1), this model generates additional terms, which happen to be all possible terms that are compatible with color and flavor symmetry up to quartic order, given in a specific combination. In what follows, we will show this statement while, in the next section, we will study the most general model with this symmetry, obtained by assigning arbitrary coefficients to all generated terms.
Since \(\phi(g)=R_{AA^{\prime}}(g)\psi_{AA^{\prime}}\), where \(R(g)\) stands for the adjoint representation matrix of \(g\), the integrand contains two, three and four tensor products of adjoint representations of \(SU(N)\). Only the singlet part of these tensor products yields a nonvanishing contribution to the integral. According to Refs. [37; 38], the tensor product of two adjoint representations \(R(g)_{ab}R(g)_{cd}\) can be decomposed into 7 different irreps1 (the Young tableaux are shown in Fig. 1). The associated Hermitian projectors are
Footnote 1: This is for \(N>3\). For \(N=2\), only the representations \(S\), \(F\) and \(X\) exist and are associated with spins 0, 1, and 2 respectively. For \(N=3\), the representation \(Y\) does not exist. Notice that this does not mean that the corresponding projectors vanish if one naively sets \(N=2\) or \(N=3\). Simply, they should not be considered in these particular cases.
\[P_{S}|_{CD}^{AB}= \frac{1}{N^{2}-1}\delta_{AB}\delta_{CD}\;, \tag{7a}\] \[P_{F}|_{CD}^{AB}= f_{ABE}f_{CDE}\;,\] (7b) \[P_{D}|_{CD}^{AB}= \frac{N^{2}}{N^{2}-4}d_{ABE}d_{CDE}\;,\] (7c) \[P_{X}|_{CD}^{AB}= \frac{1}{4}\delta_{AC}\delta_{BD}+\frac{N+2}{4N}\delta_{AD}\delta_{BC}-\frac{1}{2N(N+1)}\delta_{AB}\delta_{CD}\] \[-\frac{N}{4}f_{ADE}f_{BCE}+\frac{N}{4}d_{ADE}d_{BCE}-\frac{N}{2(N+2)}d_{ABE}d_{CDE}\;,\] (7d) \[P_{Y}|_{CD}^{AB}= \frac{N-2}{4N}(\delta_{AC}\delta_{BD}+\delta_{AD}\delta_{BC})+\frac{N-2}{2N(N-1)}\delta_{AB}\delta_{CD}\] \[-\frac{N}{4}(d_{ACE}d_{BDE}+d_{ADE}d_{BCE})+\frac{N(N-4)}{4(N-2)}d_{ABE}d_{CDE}\;,\] (7e) \[P_{T}|_{CD}^{AB}= \frac{1}{4}(\delta_{AC}\delta_{BD}-\delta_{AD}\delta_{BC})-\frac{1}{2}f_{ABE}f_{CDE}+i\frac{N}{4}(f_{ADE}d_{BCE}+d_{ADE}f_{BCE})\;,\] (7f) \[P_{\bar{T}}|_{CD}^{AB}= P_{T}^{*}|_{CD}^{AB}\;, \tag{7g}\]
Here the symmetric and antisymmetric structure constants \(d_{ABC}\) and \(f_{ABC}\) are defined as
\[T_{A}\wedge T_{B} =f_{ABC}T_{C}\;, \tag{8a}\] \[\{T_{A},T_{B}\} =\frac{\delta_{AB}}{N^{2}}\mathbb{I}+d_{ABC}T_{C}\;. \tag{8b}\]
These projectors allow us to obtain explicit expressions for the different integrals by decreasing the number of matrices in the integrand up to a point where the orthogonality relations [36],
\[\int d\mu(g)D^{(i)}(g)|_{\xi\xi^{\prime}}D^{(j)}(g^{-1})|_{\zeta^{\prime}\zeta }=\frac{\delta_{ij}\delta_{\xi\zeta}\delta_{\xi^{\prime}\zeta^{\prime}}}{d_{i }}\;, \tag{9}\]
can be used. The indices \(i,j\) label irreducible representations, while \(d_{i}\) is the dimension of \(D^{(i)}\). Since these representations are unitary, their components satisfy \(D(g^{-1})_{B^{\prime}B}=D(g)^{\dagger}_{B^{\prime}B}=D^{*}(g)_{BB^{\prime}}\). For the quadratic term, we can directly use Eq, (9) to obtain
\[\int d\mu(g)\,\phi^{2}=\langle\psi_{A},T_{A^{\prime}}\rangle\langle\psi_{B},T _{B^{\prime}}\rangle\int d\mu(g)R(g)_{AA^{\prime}}R(g)_{BB^{\prime}}=\frac{ \langle\psi_{A},\psi_{A}\rangle}{N^{2}-1}\;. \tag{10}\]
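Relations such as (9) and (10) can be spot-checked numerically by Monte Carlo sampling over the Haar measure. The sketch below does this for \(N=3\), assuming a generalized Gell-Mann construction of the generators (rescaled to the Killing normalization used here) and exploiting the fact that the overall \(U(1)\) phase of a Haar-random unitary drops out of the adjoint action; it is a numerical illustration, not part of the derivation.

```python
import numpy as np
from scipy.stats import unitary_group

def su_generators(N):
    """Generalized Gell-Mann matrices t_a with Tr(t_a t_b) = delta_ab/2,
    rescaled to T_A = t_a/sqrt(N) so that <T_A,T_B> = 2N Tr(T_A T_B) = delta_AB."""
    gens = []
    for i in range(N):
        for j in range(i + 1, N):
            s = np.zeros((N, N), complex); s[i, j] = s[j, i] = 0.5
            a = np.zeros((N, N), complex); a[i, j] = -0.5j; a[j, i] = 0.5j
            gens += [s, a]
    for k in range(1, N):
        d = np.zeros((N, N), complex); d[:k, :k] = np.eye(k); d[k, k] = -k
        gens.append(d / np.sqrt(2.0 * k * (k + 1)))
    return [t / np.sqrt(N) for t in gens]

def killing(X, Y, N):
    """Killing product <X,Y> = 2N Tr_F(XY) for su(N) elements in the fundamental."""
    return (2.0 * N * np.trace(X @ Y)).real

N, n_samples = 3, 20000
T = su_generators(N)
dim = N * N - 1
rng = np.random.default_rng(0)
c = rng.normal(size=(dim, dim))            # a fixed random configuration psi_A = c_AB T_B
psi = [sum(c[A, B] * T[B] for B in range(dim)) for A in range(dim)]

acc = 0.0
for _ in range(n_samples):
    g = unitary_group.rvs(N)               # the U(1) phase cancels in g T g^{-1}
    gi = g.conj().T
    phi = sum(killing(psi[A], g @ T[A] @ gi, N) for A in range(dim))
    acc += phi ** 2
mc_estimate = acc / n_samples
exact = sum(killing(psi[A], psi[A], N) for A in range(dim)) / dim
print(mc_estimate, exact)                  # agree to Monte Carlo accuracy (~1%)
```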
To evaluate the cubic term, the completeness property
\[\delta_{AA^{\prime}}\delta_{BB^{\prime}}=P_{S}|^{AB}_{A^{\prime}B^{\prime}}+P_ {F}|^{AB}_{A^{\prime}B^{\prime}}+\ldots \tag{11}\]
leads to
\[I_{3}|^{ABC}_{A^{\prime}B^{\prime}C^{\prime}}=\int d\mu(g)R(g)_{ AA^{\prime}}R(g)_{BB^{\prime}}R(g)_{CC^{\prime}}\] \[=\int d\mu(g)R(g)_{AA^{\prime\prime}}R(g)_{BB^{\prime\prime}} \left(P_{S}|^{A^{\prime\prime}B^{\prime\prime}}_{A^{\prime}B^{\prime}}+P_{F}| ^{A^{\prime\prime}B^{\prime\prime}}_{A^{\prime}B^{\prime}}+P_{D}|^{A^{\prime \prime}B^{\prime\prime}}_{A^{\prime}B^{\prime}}+...\right)R(g)_{CC^{\prime}} \tag{12}\] \[=\frac{1}{d_{Ad}}P_{F}|^{AB}_{A^{\prime}B^{\prime}}+\frac{1}{d_{ Ad}}P_{D}|^{AB}_{A^{\prime}B^{\prime}}\;. \tag{13}\]
Here, we used that \(P_{i}\) selects the subspace carrying the irreducible representation \(i\) so that, in each term, the product \(R_{AA^{\prime}}R_{BB^{\prime}}P_{(i)}|^{A^{\prime\prime}B^{\prime\prime}}_{A^{ \prime}B^{\prime\prime}}\) can be thought of as components of a single \(D^{(i)}\), which allows us to use the orthogonality relations to evaluate the group integral. As the last factor in eq.
Figure 1: Young Tableaux of all irreps. contained in the tensor product of two \(Ad(SU(N))\) representations.
(12) is in the adjoint, the only contribution is originated from the two independent subspaces that carry an adjoint representation in \(\mathrm{Ad}\otimes\mathrm{Ad}\). In this manner, we get
\[\int d\mu(g)\,\phi^{3} =\frac{f_{ABC}\langle\psi_{A}\wedge\psi_{B},\psi_{C}\rangle}{N^{2}-1}+\frac{N^{2}d_{ABC}\langle\psi_{A}\vee\psi_{B},\psi_{C}\rangle}{(N^{2}-1)(N^{2}-4)}\;. \tag{14}\]
Similarly, we can proceed with the quartic term by computing
\[I_{4}|^{ABCD}_{A^{\prime}B^{\prime}C^{\prime}D^{\prime}} =\int d\mu(g)R(g)_{AA^{\prime}}R(g)_{BB^{\prime}}R(g)_{CC^{\prime}}R(g)_{DD^{\prime}}\,. \tag{15}\]
This time we can introduce a pair of completeness relations to reduce the components of \(R(g)\) to components of \(D^{(i)}\) and then use the orthogonality relation in eq. (9). Most of the contributions originate from products reduced by the same projectors. Two notable exceptions are the products between the representations \(T\) and \(\bar{T}\) and between the two different adjoints, one associated with \(f_{ABC}\) and the other with \(d_{ABC}\). The result for the quartic term is
\[\int d\mu(g)\,\phi^{4} =\langle\psi_{A},T_{A^{\prime}}\rangle\langle\psi_{B},T_{B^{\prime}}\rangle\langle\psi_{C},T_{C^{\prime}}\rangle\langle\psi_{D},T_{D^{\prime}}\rangle\,I_{4}|^{ABCD}_{A^{\prime}B^{\prime}C^{\prime}D^{\prime}}\,, \tag{16a}\] \[I_{4}|^{ABCD}_{A^{\prime}B^{\prime}C^{\prime}D^{\prime}} =\frac{N^{2}f_{ABE}d_{CDE}f_{A^{\prime}B^{\prime}E^{\prime}}d_{C^{\prime}D^{\prime}E^{\prime}}+N^{2}d_{ABE}f_{CDE}d_{A^{\prime}B^{\prime}E^{\prime}}f_{C^{\prime}D^{\prime}E^{\prime}}}{(N^{2}-1)(N^{2}-4)}+\] \[+\frac{4P_{T}|^{AB}_{CD}P_{\bar{T}}|^{A^{\prime}B^{\prime}}_{C^{\prime}D^{\prime}}+4P_{\bar{T}}|^{AB}_{CD}P_{T}|^{A^{\prime}B^{\prime}}_{C^{\prime}D^{\prime}}}{(N^{2}-1)(N^{2}-4)}+\sum_{i=S,F,D,X,Y}\frac{1}{d_{i}}P_{i}|^{AB}_{CD}P_{i}|^{A^{\prime}B^{\prime}}_{C^{\prime}D^{\prime}}\,. \tag{16b}\]
Now, let us see that the terms generated by the model in eq. (6) are all possible terms invariant under the desired symmetry. It is clear that the quadratic term on the right-hand side of eq. (10) is the only possibility. As for the cubic term, the most general combination is given by
\[C^{ABC}_{A^{\prime}B^{\prime}C^{\prime}}\psi_{AA^{\prime}}\psi_{ BB^{\prime}}\psi_{CC^{\prime}}\, \tag{17}\]
with \(\psi_{AA^{\prime}}=\langle\psi_{A},T_{A^{\prime}}\rangle\), i.e. primed indices refer to color and unprimed refer to flavor. To ensure that color and flavor symmetries
\[\psi_{AA^{\prime}}\to R_{A^{\prime}B^{\prime}}\psi_{AB^{\prime}} \ \ \ \,\ \ \ \ \psi_{AA^{\prime}}\to R_{AB}\psi_{BA^{\prime}}\, \tag{18}\]
are independently present, \(C^{ABC}_{A^{\prime}B^{\prime}C^{\prime}}\) must be a linear combination of the antisymmetric and symmetric structure constants of \(\mathfrak{su}(N)\) in both sets of prime and unprimed indices. Indeed, these structure constants are the only invariant tensors with three indices, that is, the only singlets in \(\mathrm{Ad}^{\otimes 3}\equiv\mathrm{Ad}\otimes\mathrm{Ad}\otimes\mathrm{Ad}\). Therefore, the most general cubic term can be parametrized by
\[\frac{\kappa_{f}}{3}f_{ABC}\langle\psi_{A}\wedge\psi_{B},\psi_{C} \rangle+\frac{\kappa_{d}}{3}d_{ABC}\langle\psi_{A}\vee\psi_{B},\psi_{C} \rangle\, \tag{19}\]
which corresponds to assigning arbitrary coefficients to the terms obtained in eq. (14). Regarding the quartic term, the most general possibility is
\[C^{ABCD}_{A^{\prime}B^{\prime}C^{\prime}D^{\prime}}\psi_{AA^{\prime}}\psi_{ BB^{\prime}}\psi_{CC^{\prime}}\psi_{DD^{\prime}}. \tag{20}\]
This time, \(C^{ABCD}_{A^{\prime}B^{\prime}C^{\prime}D^{\prime}}\) must be a linear combination of terms of the form
\[T^{ABCD}_{\mathrm{f}}T^{A^{\prime}B^{\prime}C^{\prime}D^{\prime}}_{\mathrm{c}}\, \tag{21}\]
with both tensors \(T_{\rm f}\) and \(T_{\rm c}\) being invariant under the adjoint action of \(SU(N)\). The space of invariants with four adjoint indices (singlets in \({\rm Ad}^{\otimes 4}\)) has basis2 \(\delta_{AB}\delta_{CD}\), \(\delta_{AC}\delta_{BD}\), \(\delta_{AD}\delta_{BC}\), \(d_{ABE}d_{CDE}\), \(d_{ACE}d_{BDE}\), \(d_{ADE}d_{BCE}\), \(f_{ABE}f_{CDE}\), \(f_{ACE}f_{BDE}\), and \(f_{ADE}f_{BCE}\) [37]. In principle, with 9 invariants, the most general \(C^{ABCD}_{A^{\prime}B^{\prime}C^{\prime}D^{\prime}}\) is written as a linear combination of 81 interactions. However, at most 9 of these 81 interactions3 are non-vanishing and linearly independent. The key equations to eliminate these redundancies are
Footnote 2: In fact, for \(SU(2)\) and \(SU(3)\) this basis is overcomplete and should include only 3 and 8 elements, respectively. For \(SU(2)\), \(d_{ABC}=0\) and \(f_{ABE}f_{CDE}=\delta_{AC}\delta_{BD}-\delta_{AD}\delta_{BC}\). For \(SU(3)\), Burgoyne’s identity \(3(d_{ABE}d_{CDE}+d_{ACE}d_{BDE}+d_{ADE}d_{BCE})=\delta_{AB}\delta_{CD}+\delta_{AC}\delta_{BD}+\delta_{AD}\delta_{BC}\) can be used.
\[f_{ABE}f_{CDE} = d_{ACE}d_{BDE}-d_{ADE}d_{BCE}+\frac{2}{N^{2}}(\delta_{AC}\delta _{BD}-\delta_{AD}\delta_{BC})\;, \tag{22}\] \[f_{ABE}d_{CDE} = d_{ADE}f_{BCE}+d_{ACE}f_{BDE}\;. \tag{23}\]
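The identity (22) in the present normalization can be verified numerically by building \(f_{ABC}\) and \(d_{ABC}\) directly from Eq. (8); the short check below does this for \(N=3\), again assuming a generalized Gell-Mann construction of the Killing-normalized generators (as in the sketch after Eq. (10)), purely as a consistency check.

```python
import numpy as np
from itertools import product

def su_generators(N):
    """Killing-normalized generators T_A with <T_A,T_B> = delta_AB."""
    gens = []
    for i in range(N):
        for j in range(i + 1, N):
            s = np.zeros((N, N), complex); s[i, j] = s[j, i] = 0.5
            a = np.zeros((N, N), complex); a[i, j] = -0.5j; a[j, i] = 0.5j
            gens += [s, a]
    for k in range(1, N):
        d = np.zeros((N, N), complex); d[:k, :k] = np.eye(k); d[k, k] = -k
        gens.append(d / np.sqrt(2.0 * k * (k + 1)))
    return [t / np.sqrt(N) for t in gens]

N = 3
T = su_generators(N)
dim = N * N - 1
kil = lambda X, Y: (2.0 * N * np.trace(X @ Y)).real

# structure constants in the conventions of Eq. (8)
f = np.zeros((dim, dim, dim))
d = np.zeros((dim, dim, dim))
for A, B, C in product(range(dim), repeat=3):
    comm = -1j * (T[A] @ T[B] - T[B] @ T[A])        # T_A ^ T_B
    anti = T[A] @ T[B] + T[B] @ T[A]                # {T_A,T_B}; the identity part drops in the trace
    f[A, B, C] = kil(comm, T[C])
    d[A, B, C] = kil(anti, T[C])

delta = np.eye(dim)
lhs = np.einsum('abe,cde->abcd', f, f)
rhs = (np.einsum('ace,bde->abcd', d, d) - np.einsum('ade,bce->abcd', d, d)
       + (2.0 / N**2) * (np.einsum('ac,bd->abcd', delta, delta)
                         - np.einsum('ad,bc->abcd', delta, delta)))
print(np.max(np.abs(lhs - rhs)))                    # ~1e-15: Eq. (22) holds in this normalization
```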
For organizational purposes, these 9 interactions can be subdivided into four sets, depending on how the flavor indices of the Higgs fields are matched. In the flavor-singlet set, the flavor indices are contracted with each other:
\[V_{1}^{(4)}=(\langle\psi_{A},\psi_{A}\rangle)^{2} ; \quad V_{2}^{(4)}=\langle\psi_{A},\psi_{B}\rangle\langle\psi_{A}, \psi_{B}\rangle\;; \tag{24a}\] \[V_{3}^{(4)}=\langle\psi_{A}\wedge\psi_{B},\psi_{A}\wedge\psi_{B}\rangle ; \quad V_{4}^{(4)}=\langle\psi_{A}\vee\psi_{A},\psi_{B}\vee\psi_{B} \rangle\;. \tag{24b}\]
Notice \(V_{3}\) is the interaction that was analyzed in previous papers [39; 40; 41]. In the f-adjoint set, the flavor indices are matched with two copies of the antisymmetric structure constants \(f_{ABC}\):
\[V_{5}^{(4)} =f_{ABE}f_{CDE}\langle\psi_{A},\psi_{C}\rangle\langle\psi_{B}, \psi_{D}\rangle\;; \tag{25a}\] \[V_{6}^{(4)} =f_{ABE}f_{CDE}\langle\psi_{A}\wedge\psi_{B},\psi_{C}\wedge\psi_{ D}\rangle\;. \tag{25b}\]
The d-adjoint set is analogous to the f-adjoint one, but with symmetric structure constants \(d_{ABC}\) instead:
\[V_{7}^{(4)} =d_{ABE}d_{CDE}\langle\psi_{A},\psi_{B}\rangle\langle\psi_{C}, \psi_{D}\rangle\;; \tag{26a}\] \[V_{8}^{(4)} =d_{ABE}d_{CDE}\langle\psi_{A}\vee\psi_{B},\psi_{C}\vee\psi_{D} \rangle\;, \tag{26b}\]
where we defined
\[X\lor Y=\{X,Y\}-\frac{\langle X,Y\rangle}{N^{2}}\;. \tag{27}\]
Finally, there is the mixed adjoint set, where the flavor indices are matched with both \(f_{ABC}\) and \(d_{ABC}\):
\[V_{9}^{(4)}=f_{ABE}d_{CDE}\langle\psi_{A}\wedge\psi_{B},\psi_{C}\vee\psi_{D} \rangle\;,\] (28a) (28b)
These terms are all generated in eq. (16a).
## III Ansatz for the vortex solutions
In this section, we shall study the most general color and flavor symmetric model for a set of \(N^{2}-1\) \(SU(N)\) adjoint Higgs fields. The total energy for static configurations with \(\Lambda_{0}=0\) is
\[E=\int d^{3}x\,\left(\frac{1}{2}\langle B_{i},B_{i}\rangle+\langle D_{i}(\Lambda)\psi_{A},D_{i}(\Lambda)\psi_{A}\rangle+V_{\rm gen}(\psi)\right)\;. \tag{29}\]
The most general potential, as discussed in the previous section, is given by
\[V_{\rm gen}(\psi)=c_{0}+\frac{1}{2}\mu^{2}V^{(2)}+\frac{1}{3}\kappa_{f}V^{(3)-f}+ \frac{1}{3}\kappa_{d}V^{(3)-d}+\frac{1}{4}\sum_{i=1}^{9}\lambda_{i}V_{i}^{(4)}( \psi)\;. \tag{30}\]
Here, we defined
\[V^{(2)}=\langle\psi_{A},\psi_{A}\rangle\;,\] \[V^{(3)-f}=f_{ABC}\langle\psi_{A}\wedge\psi_{B},\psi_{C}\rangle\;,\] \[V^{(3)-d}=d_{ABC}\langle\psi_{A}\vee\psi_{B},\psi_{C}\rangle\;. \tag{31}\]
Regarding the stability of the potential in eq. (30), all of the interactions \(V_{i}^{(4)}\) are positive, except for \(V_{9}^{(4)}\), which was verified numerically to have an indefinite sign. However, it is clear that this term can be present in a stable model as it occurs in the one defined in eq. (6). Moreover, the instability problem pointed out in Sec. I when \(\mu^{2}<0\) can be easily overcome in a large region of parameter space while keeping Abelian-like behavior and Casimir Scaling. This will be revisited at the end of Secs. IV and V.2.
In Ref. [40] an ansatz for a vortex with charge \(k\) was proposed for the model defined by eq. (1). Here, we will show that the same ansatz closes the equations of motion for the general model of eq. (29). The ansatz is proposed in the Cartan-Weyl basis of the Lie algebra, which consists of \(N-1\) diagonal generators \(T_{q}\), \(q=1,\ldots,N-1\), and \(\frac{N(N-1)}{2}\) pairs of off-diagonal generators \(T_{\alpha},T_{\bar{\alpha}}\), labeled by the positive roots \(\alpha\) of \(SU(N)\).\({}^{4}\) The ansatz reads
Footnote 4: See Appendices A, B for a review of some aspects of the Lie Algebra of SU(N) which are relevant for this work.
\[\Lambda_{i} =S\mathcal{A}_{i}S^{-1}+\frac{i}{g}S\partial_{i}S^{-1}\;, \tag{32a}\] \[\mathcal{A}_{i} =(a-1)\frac{k}{g}\partial_{i}\phi\beta\cdot T\;,\] (32b) \[\psi_{q} =h_{qp}ST_{p}S^{-1}\;,\] (32c) \[\psi_{\alpha/\bar{\alpha}} =h_{\alpha}ST_{\alpha/\bar{\alpha}}S^{-1}\;,\] (32d) \[S =e^{i\phi\beta\cdot T}\;,\;\beta=2N\Omega\;. \tag{32e}\]
Here, \(\beta\) is the magnetic weight, and \(\Omega\) is the highest weight of the representation of the static quarks. We used the notation \(\beta\cdot T=\beta_{q}T_{q}\) and, for later convenience, we will assume \(h_{-\alpha}=h_{\alpha}\) even though the profiles \(h_{\alpha}\) were initially defined only for positive roots.
These profile functions depend only on the radial cylindrical coordinate \(\rho\) and must obey boundary conditions that reproduce smooth vortex configurations centered on the \(z\)-axis. At infinity, they must be such that the fields are in the vacuum manifold, that is
\[\text{When }\rho\rightarrow\infty\;,\;a\to 1\;,\;h_{qp} \to v\delta_{qp}\;,\;h_{\alpha}\to v\;. \tag{33}\]
Moreover, some of them must also obey regularity conditions in the vortex center \(\rho=0\). The gauge profile \(a(\rho)\) must vanish there to avoid a divergent magnetic field. As for the Higgs profiles, one must consider the behavior of the local frame
\[ST_{q}S^{-1}= T_{q}\;, \tag{34a}\] \[ST_{\alpha}S^{-1}= \cos\left(\beta\cdot\alpha\,\varphi\right)T_{\alpha}-\sin\left( \beta\cdot\alpha\,\varphi\right)T_{\bar{\alpha}}\;,\] (34b) \[ST_{\bar{\alpha}}S^{-1}= \sin\left(\beta\cdot\alpha\,\varphi\right)T_{\alpha}+\cos\left( \beta\cdot\alpha\,\varphi\right)T_{\bar{\alpha}}\;. \tag{34c}\]
This implies that if \(\beta\cdot\alpha\neq 0\), the field \(\psi_{\alpha}\) is ill defined at \(\rho=0\). This leads to the regularity condition at \(\rho=0\):
\[a=0\text{ and }h_{\alpha}=0\text{ if }\beta\cdot\alpha\neq 0\;. \tag{35}\]
We will restrict the analysis to the \(k-\)Antisymmetric representations, as these are expected to give rise to the most stable confining strings in the asymptotic regime. Their highest weights are given by (see Appendix A for a very brief review of the weights of SU(N))
\[\Omega=\Omega^{(k)}=\sum_{i=1}^{k}\omega_{i}\;, \tag{36}\]
where \(\omega_{i}\), \(i=1,...,N\) are the weights of the fundamental representation. We must now show that the full equations of motion of the model can be reduced to a set of scalar ones for the profiles \(a,h_{qp},h_{\alpha}\). In this respect, let us initially investigate the implications for the gauge field equation
\[D_{j}F_{ij}=ig[\psi_{A},D_{i}\psi_{A}]\;. \tag{37}\]
In Ref. [40], this ansatz was shown to work for a potential that corresponds to the particular choice \(\mu^{2}\geq 0,\kappa_{d}=0,\lambda_{i\neq 3}=0\) in the notation of the present paper. Since the equation for the gauge field is the same regardless of the potential, the same must hold for the general case. A nontrivial question is whether the equations for the Higgs fields close or not. These are
\[D^{2}\psi_{A}=\frac{\delta V_{\text{gen}}}{\delta\psi_{A}}\;. \tag{38}\]
Using eq. (34), the left-hand side can be evaluated as
\[D^{2}\psi_{q}=\nabla^{2}h_{qp}T_{p}\;,\quad D^{2}\psi_{\alpha/\overline{\alpha}}=\left(\nabla^{2}h_{\alpha}-(1-a)^{2}(\alpha\cdot\beta/\rho)^{2}h_{\alpha}\right)ST_{\alpha/\overline{\alpha}}S^{-1}\;. \tag{39}\]
To present the results obtained for the right-hand side of (38) (i.e. the forces), we define the following quantities
\[F^{(2)-A}=\frac{1}{2}\frac{\delta V^{(2)}}{\delta\psi^{A}}\;,\; F^{(4)-A}_{i}=\frac{1}{4}\frac{\delta V^{(4)}_{i}}{\delta\psi_{A}}\;,\;i=1,...,9 \tag{40a}\] \[F^{(3)-f-A}=\frac{1}{3}\frac{\delta V^{(3)-f}}{\delta\psi^{A}}\;,\;F^{(3)-d-A}=\frac{1}{3}\frac{\delta V^{(3)-d}}{\delta\psi^{A}}\;. \tag{40b}\]
We shall start by analyzing the Cartan sector, i.e. when \(A=q\). In light of eq. (39), the ansatz closes if the right-hand side is proportional to a combination of the Cartan generators.
In the ansatz, the expressions for the lower-order forces, as well as those in the flavor-singlet category, are easier to obtain than the ones in the other categories. For this reason, we simply present them below
\[F^{(2)-q}=h_{qp}T_{p}\;,\quad F^{(3)-f-q}=h_{\alpha}^{2}\alpha|_{q}\alpha\cdot T\;,\quad F^{(3)-d-q}=0\;, \tag{41a}\] \[F_{1}^{(4)-q}=\left(\mathrm{Tr}(\mathbb{H}^{\mathrm{T}}\mathbb{H})+2h_{\alpha}^{2}\right)h_{qp}T_{p}\;,\quad F_{2}^{(4)-q}=h_{ql}h_{pl}h_{pm}T_{m}\;,\quad F_{3}^{(4)-q}=2h_{\alpha}^{2}h_{qp}\alpha|_{p}\alpha\cdot T\;,\] (41b) \[F_{4}^{(4)-q}=(4N)^{2}h_{qp}\omega_{j}|_{p}(\omega_{i}\cdot\mathbb{H}^{\mathrm{T}}\mathbb{H}\cdot\omega_{i})(\omega_{i}\cdot\omega_{j})\omega_{j}\cdot T+8Nh_{\alpha}^{2}(\tilde{\alpha}\cdot\omega_{i})h_{qp}\omega_{i}|_{p}\omega_{i}\cdot T\;, \tag{41c}\]
where \(\mathbb{H}|_{qp}=h_{qp}\), \(i\) and \(j\) are summed from \(1\) to \(N\) and \(\alpha\) is summed over the positive roots.
When it comes to \(F_{i\geq 5}^{(4)-q}\), the calculations are more subtle. For illustrative purposes, we will show the main steps to compute \(F_{6}^{(4)}\) since it represents well the overall complexity. Recalling eq. (25b), we have
\[F_{6}^{(4)-A}=f_{ABE}f_{CDE}\,\psi_{B}\wedge(\psi_{C}\wedge\psi_{D})\;. \tag{42}\]
The first step is to consider all the non-vanishing possibilities for the indices \(B\), \(C\), and \(D\) given that \(A=q\), i.e.
\[F_{6}^{(4)-q} = 2f_{q\alpha\overline{\alpha}}f_{p\alpha\overline{\alpha}}\psi_{\alpha}\wedge(\psi_{p}\wedge\psi_{\alpha})+2f_{q\overline{\alpha}\alpha}f_{p\overline{\alpha}\alpha}\psi_{\overline{\alpha}}\wedge(\psi_{p}\wedge\psi_{\overline{\alpha}})+f_{q\alpha\overline{\alpha}}f_{\gamma\eta\overline{\alpha}}\psi_{\alpha}\wedge(\psi_{\gamma}\wedge\psi_{\eta}) \tag{43}\] \[+f_{q\alpha\overline{\alpha}}f_{\overline{\gamma}\,\overline{\eta}\,\overline{\alpha}}\psi_{\alpha}\wedge(\psi_{\overline{\gamma}}\wedge\psi_{\overline{\eta}})+2f_{q\overline{\alpha}\alpha}f_{\gamma\overline{\eta}\alpha}\psi_{\overline{\alpha}}\wedge(\psi_{\gamma}\wedge\psi_{\overline{\eta}})\;,\]
where the sum in \(\alpha\) is to be performed over all positive roots and in \(\gamma\) and \(\eta\) over all positive roots as long as \(\alpha\neq\eta\neq\gamma\neq\alpha\). Using the properties of the structure constants, we can reduce eq. (43) to a sum of two terms. The first one can be readily evaluated using eqs. (B5) and (B8a):
\[4f_{q\alpha\overline{\alpha}}f_{p\alpha\overline{\alpha}}\psi_{\alpha}\wedge( \psi_{p}\wedge\psi_{\alpha})=4\alpha|_{q}\alpha|_{p}h_{pl}h_{\alpha}^{2}\alpha _{l}\alpha_{r}T_{r}=4h_{\alpha}^{2}\left(\alpha\cdot\mathbb{H}\cdot\alpha \right)\alpha|_{q}\alpha\cdot T\;. \tag{44}\]
The second one reads
\[4f_{q\alpha\overline{\alpha}}f_{\gamma\eta\overline{\alpha}}\psi_{\alpha} \wedge(\psi_{\gamma}\wedge\psi_{\eta})=4h_{\alpha}h_{\gamma}h_{\eta}\alpha|_{ q}f_{\gamma\eta\overline{\alpha}}f_{\gamma\eta\overline{\delta}}S\left(T_{ \alpha}\wedge T_{\overline{\delta}}\right)S^{-1}\;, \tag{45}\]
where \(\alpha\), \(\gamma\), \(\eta\) and \(\delta\) are summed over the positive roots. To simplify this expression further, eq. (B8c) plays a crucial role. First, notice that the above expression vanishes unless \(\alpha=\delta\) since both \(\delta\) and \(\alpha\) are positive roots. The positivity is important here because, otherwise, eq. (B8c) would allow other possibilities like, for example, \(\alpha=\gamma-\eta=-\delta\). Also, using eq. (B10), we find
\[f_{\gamma\eta\overline{\alpha}}^{2}=\frac{1}{2}N_{\gamma,\eta}^{2}(\delta_{ \alpha,\gamma+\eta}+\delta_{\alpha,-\gamma-\eta})+\frac{1}{2}N_{\gamma,-\eta}^ {2}(\delta_{\alpha,\gamma-\eta}+\delta_{\alpha,\eta-\gamma})=\frac{1}{2}N_{ \gamma,\eta}^{2}\delta_{\alpha,\gamma+\eta}\;, \tag{46}\]
where, in the last equality, we changed the sum on \(\gamma\) and \(\eta\) to run over all roots, positive and negative, as long as \(\gamma\neq\eta\neq\alpha\neq\gamma\). This implies
\[4f_{q\alpha\overline{\alpha}}f_{\gamma\eta\overline{\alpha}}\psi_{\alpha} \wedge(\psi_{\gamma}\wedge\psi_{\eta})=2N_{\alpha,\gamma}^{2}h_{\alpha}h_{ \gamma}h_{\alpha+\gamma}\alpha|_{q}\alpha|_{p}T_{p}\;. \tag{47}\]
Additionally, provided one keeps in mind the properties of the symmetric constants (see eqs. (B26) and (B27)), the previous remarks can be readily applied to all of the other forces acting on the field \(\psi_{q}\). After doing so, the f-adjoint forces read
\[F_{5}^{(4)-q} =2h_{\alpha}^{2}\alpha|_{q}\alpha\cdot\mathbb{H}\cdot T\;, \tag{48a}\] \[F_{6}^{(4)-q} =4h_{\alpha}^{2}\alpha|_{q}\left(\alpha\cdot\mathbb{H}\cdot \alpha\right)\alpha\cdot T+2N_{\alpha,\gamma}^{2}h_{\alpha}h_{\gamma}h_{\alpha+ \gamma}\alpha|_{q}\alpha\cdot T\;. \tag{48b}\]
In the d-adjoint case, the forces after applying the ansatz become
\[F_{7}^{(4)-q}= (4N)^{2}\omega_{i}|_{q}(\omega_{i}\cdot\omega_{j})(\omega_{j}\cdot\mathbb{H}\mathbb{H}^{\mathrm{T}}\cdot\omega_{j})\left(\omega_{i}\cdot\mathbb{H}\cdot T\right)+8Nh_{\alpha}^{2}\omega_{i}|_{q}\left(\omega_{i}\cdot\tilde{\alpha}\right)\left(\omega_{i}\cdot\mathbb{H}\cdot T\right)\;, \tag{49a}\] \[F_{8}^{(4)-q}= (4N)^{4}\left(\omega_{i}\cdot\mathbb{H}\cdot\omega_{b}\right)\left(\omega_{i}\cdot\omega_{j}\right)\left(\omega_{j}\cdot\mathbb{H}\cdot\omega_{a}\right)^{2}\left(\omega_{a}\cdot\omega_{b}\right)\omega_{i}|_{q}\omega_{b}\cdot T+4h_{\alpha}^{2}\tilde{\alpha}|_{q}\left(\tilde{\alpha}\cdot\mathbb{H}\cdot\tilde{\alpha}\right)\tilde{\alpha}\cdot T\] \[+2(4N)^{2}h_{\alpha}^{2}(\omega_{i}\cdot\mathbb{H}\cdot\omega_{j})(\tilde{\alpha}\cdot\omega_{i})(\tilde{\alpha}\cdot\omega_{j})\omega_{i}|_{q}\omega_{j}\cdot T+2M_{\alpha,\gamma}^{2}h_{\alpha}h_{\gamma}h_{\alpha+\gamma}\tilde{\alpha}_{q}\tilde{\alpha}\cdot T\;, \tag{49b}\]
while the mixed adjoint one turns out to be
\[F_{9}^{(4)-q} = 2\alpha|_{q}\left(\tilde{\alpha}\cdot\mathbb{H}\cdot\tilde{\alpha} \right)h_{\alpha}^{2}\alpha\cdot T+N_{\alpha\gamma}^{2}h_{\alpha}h_{\gamma}h_{ \alpha+\gamma}\left(\alpha|_{q}\alpha|_{p}+\tilde{\alpha}_{q}\tilde{\alpha}_{p }\right)T_{p} \tag{50}\] \[+(4N)^{2}\left(\omega_{i}\cdot\mathbb{H}\cdot\omega_{j}\right) \left(\omega_{i}\cdot\alpha\right)(\omega_{j}\cdot\alpha)h_{\alpha}^{2}\omega_{i} |_{q}\omega_{j}\cdot T+2(\alpha\cdot\mathbb{H}\cdot\alpha)h_{\alpha}^{2}\tilde{ \alpha}|_{q}\tilde{\alpha}\cdot T\;.\]
Again, in the above expressions, \(\alpha\) must be summed over the positive roots while \(\gamma\) must be summed over all of the roots, provided \(\gamma\neq\pm\alpha\). The indices \(i\), \(j\), \(a\), and \(b\) label the weights of the fundamental representation, thus ranging from \(1\) to \(N\). Because of the terms involving the symmetric constants \(d_{ABC}\), we defined a vector \(\tilde{\alpha}=\omega_{a}+\omega_{b}\) for each root \(\alpha=\omega_{a}-\omega_{b}\) (for details, see Appendix B).
In the root sector, i.e. setting \(A=\alpha\), we have
\[F_{6}^{(4)-\alpha} = 2f_{\alpha q\bar{\alpha}}f_{p\alpha\bar{\alpha}}\psi_{q}\wedge(\psi_{p}\wedge\psi_{\alpha})+f_{\alpha q\bar{\alpha}}f_{\gamma\eta\bar{\alpha}}\psi_{q}\wedge(\psi_{\gamma}\wedge\psi_{\eta})+f_{\alpha q\bar{\alpha}}f_{\bar{\gamma}\bar{\eta}\bar{\alpha}}\psi_{q}\wedge(\psi_{\bar{\gamma}}\wedge\psi_{\bar{\eta}}) \tag{51}\] \[+2f_{\alpha\bar{\alpha}q}f_{\alpha^{\prime}\bar{\alpha}^{\prime}q}\psi_{\bar{\alpha}}\wedge(\psi_{\alpha^{\prime}}\wedge\psi_{\bar{\alpha}^{\prime}})+f_{\alpha\gamma\bar{\delta}}f_{\varepsilon\xi\bar{\delta}}\psi_{\gamma}\wedge(\psi_{\varepsilon}\wedge\psi_{\xi})+f_{\alpha\gamma\bar{\delta}}f_{\bar{\varepsilon}\bar{\xi}\bar{\delta}}\psi_{\gamma}\wedge(\psi_{\bar{\varepsilon}}\wedge\psi_{\bar{\xi}})\] \[+2f_{\alpha\bar{\gamma}\delta}f_{\varepsilon\bar{\xi}\delta}\psi_{\bar{\gamma}}\wedge\left(\psi_{\varepsilon}\wedge\psi_{\bar{\xi}}\right)\;.\]
Here, there is no sum over \(\alpha\), while there are sums over all \(\alpha^{\prime}>0\) and over all roots \(\gamma,\eta,\varepsilon,\xi,\delta\), both positive and negative. The above expression presents two notable differences when compared to its Cartan-sector counterpart. First, there are terms with summations over 5 positive roots, instead of just 3. These terms can be treated just like before but with the use of eq. (B8) twice. Second, a completely new kind of term shows up, namely
\[2f_{\alpha\bar{\alpha}q}f_{\alpha^{\prime}\bar{\alpha^{\prime}}q}\psi_{\bar{ \alpha}}\wedge(\psi_{\alpha^{\prime}}\wedge\psi_{\bar{\alpha}^{\prime}})=( \alpha\cdot\alpha^{\prime})^{2}h_{\alpha^{\prime}}^{2}h_{\alpha}ST_{\alpha}S^{ -1}\;. \tag{52}\]
Here, \(\alpha\) and \(\alpha^{\prime}\) are positive roots and can even coincide, which is why \(\alpha^{\prime}\) was used instead of a different Greek letter. Other than that, the steps to compute the forces in the root sector are similar to those above. We start again by exhibiting first the result for the lower-order forces and the flavor-singlet ones
\[F^{(2)-\alpha}= h_{\alpha}ST_{\alpha}S^{-1}\;, \tag{53a}\] \[F^{(3)-f-\alpha}= 2\left(\alpha\cdot\mathbb{H}\cdot\alpha\right)h_{\alpha}ST_{\alpha}S^{-1}+N_{\alpha,\gamma}^{2}h_{\gamma}h_{\alpha+\gamma}ST_{\alpha}S^{-1}\;,\] (53b) \[F^{(3)-d-\alpha}= 2\left(\tilde{\alpha}\cdot\mathbb{H}\cdot\tilde{\alpha}\right)h_{\alpha}ST_{\alpha}S^{-1}+N_{\alpha,\gamma}^{2}h_{\gamma}h_{\alpha+\gamma}ST_{\alpha}S^{-1}\;,\] (53c) \[F^{(4)-\alpha}_{1}= \big(\mathrm{Tr}(\mathbb{H}^{\mathrm{T}}\mathbb{H})+2h_{\alpha^{\prime}}^{2}\big)h_{\alpha}ST_{\alpha}S^{-1}\;,\] (53d) \[F^{(4)-\alpha}_{2}= h_{\alpha}^{3}ST_{\alpha}S^{-1}\;,\] (53e) \[F^{(4)-\alpha}_{3}= \big(\alpha\cdot\mathbb{H}^{\mathrm{T}}\mathbb{H}\cdot\alpha+N_{\alpha,\gamma}^{2}h_{\gamma}^{2}+|\alpha|^{2}h_{\alpha}^{2}\big)h_{\alpha}ST_{\alpha}S^{-1}\;,\] (53f) \[F^{(4)-\alpha}_{4}= \big(4N(\omega_{i}\cdot\mathbb{H}^{\mathrm{T}}\mathbb{H}\cdot\omega_{i})(\omega_{i}\cdot\tilde{\alpha})+2(\tilde{\alpha}\cdot\tilde{\alpha}^{\prime})h_{\alpha^{\prime}}^{2}\big)\,h_{\alpha}ST_{\alpha}S^{-1}\;. \tag{53g}\]
Then, we move to the f-adjoint set
\[F^{(4)-\alpha}_{5}= \big{(}\alpha\cdot\mathbb{H}\mathbb{H}^{\mathrm{T}}\cdot\alpha+N _{\alpha,\gamma}^{2}h_{\gamma}^{2}+|\alpha|^{2}h_{\alpha}^{2}\big{)}\,h_{ \alpha}ST_{\alpha}S^{-1}\;, \tag{54a}\] \[F^{(4)-\alpha}_{6}= \left(2(\alpha\cdot\mathbb{H}\cdot\alpha)^{2}+2\left(\alpha\cdot \alpha^{\prime}\right)^{2}h_{\alpha^{\prime}}^{2}\right)h_{\alpha}ST_{\alpha}S^{- 1}+N_{\alpha,\gamma}^{2}\big{(}\alpha\cdot\mathbb{H}\cdot\alpha\big{)}h_{\gamma }h_{\alpha+\gamma}ST_{\alpha}S^{-1}\] \[+\big{(}\gamma\cdot\mathbb{H}\cdot\gamma\big{)}N_{\alpha,\gamma}^ {2}h_{\gamma}h_{\alpha+\gamma}ST_{\alpha}S^{-1}+N_{\alpha,\gamma}^{2}N_{\gamma, \eta}^{2}h_{\eta}h_{\gamma+\eta}h_{\alpha+\gamma}ST_{\alpha}S^{-1}\;, \tag{54b}\]
while the d-adjoint forces read
\[F^{(4)-\alpha}_{7}= \big{(}4N(\tilde{\alpha}\cdot\omega_{i})(\omega_{i}\cdot\mathbb{H }\mathbb{H}^{\mathrm{T}}\cdot\omega_{i})+2(\tilde{\alpha}\cdot\tilde{\alpha}^ {\prime})h_{\alpha^{\prime}}^{2}\big{)}h_{\alpha}ST_{\alpha}S^{-1}\;, \tag{55a}\] \[F^{(4)-\alpha}_{8}= 2\left(\tilde{\alpha}\cdot\mathbb{H}\cdot\tilde{\alpha}\right)^{2 }h_{\alpha}ST_{\alpha}S^{-1}+(4N)^{2}\left(\tilde{\alpha}\cdot\omega_{i}\right) \left(\tilde{\alpha}\cdot\omega_{j}\right)\left(\omega_{i}\cdot\mathbb{H}\cdot \omega_{j}\right)^{2}h_{\alpha}ST_{\alpha}S^{-1}\] \[+M_{\alpha,\gamma}^{2}\left(\tilde{\alpha}\cdot\mathbb{H}\cdot \tilde{\alpha}\right)h_{\gamma}h_{\alpha+\gamma}ST_{\alpha}S^{-1}+2\left(\tilde {\gamma}\cdot\mathbb{H}\cdot\tilde{\gamma}\right)M_{\alpha,\gamma}^{2}h_{\alpha+ \gamma}h_{\gamma}ST_{\alpha}S^{-1}\] \[+2\left(\tilde{\alpha}\cdot\tilde{\alpha}^{\prime}\right)^{2}h_{ \alpha}h_{\alpha^{\prime}}^{2}ST_{\alpha}S^{-1}+M_{\alpha,\gamma}^{2}M_{\gamma, \eta}^{2}h_{\alpha+\gamma}h_{\eta}h_{\gamma+\eta}ST_{\alpha}S^{-1}\;. \tag{55b}\]
Finally, the mixed-adjoint force reads
\[F_{9}^{(4)-\alpha}= 2\left(\alpha\cdot\mathbb{H}\cdot\alpha\right)\left(\tilde{\alpha }\cdot\mathbb{H}\cdot\tilde{\alpha}\right)h_{\alpha}ST_{\alpha}S^{-1}+\left( \frac{1}{2}\alpha\cdot\mathbb{H}\cdot\alpha+\frac{1}{2}\tilde{\alpha}\cdot \mathbb{H}\cdot\tilde{\alpha}\right)M_{\alpha,\gamma}^{2}h_{\gamma}h_{\alpha+ \gamma}ST_{\alpha}S^{-1}\] \[+8N^{2}\left(\alpha\cdot\omega_{i}\right)\left(\alpha\cdot\omega _{j}\right)\left(\omega_{i}\cdot\mathbb{H}\cdot\omega_{j}\right)^{2}h_{\alpha} ST_{\alpha}S^{-1}+((\alpha\cdot\tilde{\alpha}^{\prime})^{2}+(\alpha^{\prime} \cdot\tilde{\alpha})^{2})h_{\alpha^{\prime}}^{2}h_{\alpha}ST_{\alpha}S^{-1}\] \[+\left(\gamma\cdot\mathbb{H}\cdot\gamma+\tilde{\gamma}\cdot \mathbb{H}\cdot\tilde{\gamma}\right)N_{\alpha,\gamma}^{2}h_{\gamma}h_{\alpha+ \gamma}ST_{\alpha}S^{-1}+N_{\gamma,\eta}^{2}N_{\alpha,\gamma}^{2}h_{\eta}h_{ \gamma+\eta}h_{\alpha+\gamma}ST_{\alpha}S^{-1}\;. \tag{56}\]
The summations are over \(i,j=1,...,N\), over \(\alpha^{\prime}>0\) (including \(\alpha^{\prime}=\alpha\)), and over positive and negative \(\gamma,\eta\), excluding \(\gamma=\pm\alpha\neq\eta\neq\gamma\). The expressions for the forces with index \(\bar{\alpha}\) are the same after replacing barred root indices with unbarred ones and vice-versa.
Since we have shown that both sides of eq. (38) point in the same direction in the Lie algebra, the ansatz closes and we are left with scalar equations for the profiles \(a\), \(h_{qp}\), and \(h_{\alpha}\).
## IV Abelianization and the asymptotic Casimir law
Here, we shall discuss the energy scaling of the solution with \(N-\)ality \(k\). A good starting point is to review some facts regarding the particular case of the model given by eq. (1), which was studied in Ref. [40]. It has a special point in parameter space, \(\mu^{2}=0\), where all the profiles \(h_{\alpha}\) with \(\alpha\cdot\beta=0\) freeze at their vacuum value \(v\). As for the other profiles, a collective behavior was shown to take place, i.e. \(h_{\alpha}=h\) for all \(\alpha\) with \(\alpha\cdot\beta\neq 0\). In this case, the model can be said to be Abelianized as the equations satisfied by the profiles \(a,h\) are those of the Ginzburg-Landau model, which gives rise to the well-known Nielsen-Olesen vortex. The string tension (energy per unit length) in this particular case is
\[\sigma_{\text{particular}}=k(N-k)\int d^{2}x\,\left(\frac{|\nabla a|^{2}}{\rho^{2}g^{2}}+\frac{h^{2}(1-a)^{2}}{\rho^{2}}+|\nabla h|^{2}\right)+\int d^{2}x\,V_{\text{particular}}\;. \tag{57}\]
Here, we can use Derrick's theorem in two dimensions, which states that the kinetic energy of the gauge field is equal to the potential energy of the Higgs fields, thus implying
\[\sigma_{\text{particular}}=k(N-k)\int d^{2}x\,\left(2\frac{|\nabla a|^{2}}{ \rho^{2}g^{2}}+\frac{h^{2}(1-a)^{2}}{\rho^{2}}+|\nabla h|^{2}\right)\;. \tag{58}\]
This string tension scales with the quadratic Casimir of the \(k-\)Antisymmetric representation, in accordance with the Casimir law
\[\frac{\sigma_{k}}{\sigma_{1}}=\frac{k(N-k)}{N-1} \tag{59}\]
approximately observed on the lattice. The above derivation makes it clear that a key ingredient for such a law is the existence of a region in parameter space where the following conditions are met
1. \(h_{\alpha}=v\), \(\forall\alpha|\alpha\cdot\beta=0\),
2. \(h_{\alpha}=h\), \(\forall\alpha|\alpha\cdot\beta=1\),
3. \(h\) must be independent of \(k\).
Keep in mind that only two values of \(\alpha\cdot\beta\) are being considered since the weights in eq. (32e) are those of the \(k\)-antisymmetric representations.
In the following, we will analyze the existence of such a region for the model of eq. (29). To this end, let us assume
\[h_{qp}=v\delta_{qp}\;,\] \[h_{\alpha}=\begin{cases}v\text{ if }\alpha\cdot\beta=0\\ h\text{ if }\alpha\cdot\beta=1\end{cases} \tag{60}\]
and evaluate the force expressions (41), (48)-(50), and (53)-(56).
Once again, we will illustrate the calculations using \(F_{6}\) as an example. In the Cartan sector,
\[F_{6}^{q}=4h_{\alpha}^{2}\alpha|_{q}\left(\alpha\cdot\mathbb{H}\cdot\alpha \right)\alpha\cdot T+2N_{\alpha,\gamma}^{2}h_{\alpha}h_{\gamma}h_{\alpha+ \gamma}\alpha|_{q}\alpha\cdot T \tag{61}\]
To evaluate these terms, we first need to write down explicitly, for each \(k\), which positive roots yield the value \(0\) or \(1\) for the product \(\alpha\cdot\beta\). If we express the roots as differences of weights of the fundamental representation, i.e. \(\alpha=\alpha_{ij}=\omega_{i}-\omega_{j}\), we have
\[\alpha_{ij}\cdot\beta=\begin{cases}0,\text{ if }i,j=1,...,k\text{ or }i,j=k+1,...,N\;,\\ 1,\text{ if }i=1,...,k\text{ and }j=k+1,...,N\;,\end{cases} \tag{62}\]
and the positivity of the root is guaranteed by \(i<j\). Now, we can define the matrices
\[A|_{qp}^{(k)}= \sum_{i=1}^{k}\sum_{j=k+1}^{N}\alpha_{ij}|_{q}\alpha_{ij}|_{p}\;, \tag{63a}\] \[A_{0}|_{qp}^{(k)}= \sum_{i=k+1}^{N}\sum_{j=i+1}^{N}\alpha_{ij}|_{q}\alpha_{ij}|_{p}\;,\] (63b) \[\tilde{A}_{0}|_{qp}^{(k)}= \sum_{i=1}^{k}\sum_{j=i+1}^{k}\alpha_{ij}|_{q}\alpha_{ij}|_{p}\;, \tag{63c}\]
which sum up to half the identity matrix. Then, noticing every root has length equal to \(1/\sqrt{N}\), the first term in eq. (61) is
\[4h_{\alpha}^{2}\alpha|_{q}\left(\alpha\cdot\mathbb{H}\cdot\alpha\right)\alpha\cdot T = \frac{4}{N}v^{3}\sum_{\substack{\alpha>0\\ \alpha\cdot\beta=0}}\alpha|_{q}\alpha\cdot T+\frac{4}{N}vh^{2}\sum_{\substack{\alpha>0\\ \alpha\cdot\beta\neq 0}}\alpha|_{q}\alpha\cdot T\;, \tag{64}\] \[= \frac{2}{N}v^{3}T_{q}+\frac{4}{N}v(h^{2}-v^{2})A|_{qp}^{(k)}T_{p}\;.\]
To evaluate the second term in eq. (61), we need to split the sum over \(\alpha\) and \(\gamma\) into different cases to take into account all the possibilities for the profiles \(h_{\alpha}h_{\gamma}h_{\alpha+\gamma}\). However, the matrix part of the term depends only on \(\alpha\) and not on \(\gamma\). This means that in each case the sum over \(\gamma\) only contributes a numerical factor. The following table summarizes the different cases. The first column shows how the roots \(\alpha\) and \(\gamma\) must match to ensure \(\alpha+\gamma\) is a valid root. The second column shows the range of the indices \(i,j,l\) that label the roots. The third one shows the associated profiles, and the last one shows the multiplicity for each case, i.e. how many \(\gamma\) possibilities there are for each \(\alpha\):
| Root type | Indices range | Profiles \((h_{\alpha},\,h_{\gamma},\,h_{\alpha+\gamma})\) | Multiplicity |
| --- | --- | --- | --- |
| \(\alpha_{ij}\,,\;\gamma_{jl}\) | \(i\leq k\,,\;j>k\,,\;l\leq k\) | \((h,h,\tilde{h}_{0})\) | \(k-1\) |
| \(\alpha_{ij}\,,\;\gamma_{li}\) | \(i\leq k\,,\;j>k\,,\;l>k\) | \((h,h,h_{0})\) | \(N-k-1\) |
| \(\alpha_{ij}\,,\;\gamma_{li}\) | \(i\leq k\,,\;j>k\,,\;l\leq k\) | \((h,\tilde{h}_{0},h)\) | \(k-1\) |
| \(\alpha_{ij}\,,\;\gamma_{jl}\) | \(i\leq k\,,\;j>k\,,\;l>k\) | \((h,h_{0},h)\) | \(N-k-1\) |
| \(\alpha_{ij}\,,\;\gamma_{li}\) or \(\gamma_{jl}\) | \(i\leq k\,,\;j\leq k\,,\;l>k\) | \((\tilde{h}_{0},h,h)\) | \(2(N-k)\) |
| \(\alpha_{ij}\,,\;\gamma_{li}\) or \(\gamma_{jl}\) | \(i\leq k\,,\;j\leq k\,,\;l\leq k\) | \((\tilde{h}_{0},\tilde{h}_{0},\tilde{h}_{0})\) | \(2(k-2)\) |
| \(\alpha_{ij}\,,\;\gamma_{li}\) or \(\gamma_{jl}\) | \(i>k\,,\;j>k\,,\;l\leq k\) | \((h_{0},h,h)\) | \(2k\) |
| \(\alpha_{ij}\,,\;\gamma_{li}\) or \(\gamma_{jl}\) | \(i>k\,,\;j>k\,,\;l>k\) | \((h_{0},h_{0},h_{0})\) | \(2(N-k-2)\) |
Then, the second term reads
\[2N_{\alpha,\gamma}^{2}h_{\alpha}h_{\gamma}h_{\alpha+\gamma} \alpha|_{q}\alpha\cdot T = \frac{2(h^{2}-v^{2})v}{N}\left((N-2)A|_{qp}^{(k)}T_{p}+(N-k)\tilde{ A}_{0}|_{qp}^{(k)}T_{p}+kA_{0}|_{qp}^{(k)}T_{p}\right) \tag{65}\] \[+\frac{2v^{3}(N-2)}{N}\left(\tilde{A}_{0}|_{qp}^{(k)}T_{p}+A_{0}| _{qp}^{(k)}T_{p}+A|_{qp}^{(k)}T_{p}\right)\;.\]
We can eliminate \(A_{0}^{(k)}\) and \(\tilde{A}_{0}^{(k)}\) by using the identities
\[A_{0}|_{qp}^{(k)} = \frac{2N-k}{4N}\delta_{qp}-\frac{N}{2}\beta|_{q}\beta|_{p}-\frac{ A|_{qp}^{(k)}}{2}\;, \tag{66}\] \[\tilde{A}_{0}|_{qp}^{(k)} = \frac{k}{4N}\delta_{qp}+\frac{N}{2}\beta|_{q}\beta|_{p}-\frac{A|_{ qp}^{(k)}}{2}\;, \tag{67}\]
which leads to
\[2N_{\alpha,\gamma}^{2}h_{\alpha}h_{\gamma}h_{\alpha+\gamma} \alpha|_{q}\alpha\cdot T = \frac{v^{3}(N-2)}{N}T_{q}+\frac{k(3N-2k)}{2N^{2}}v(h^{2}-v^{2})T_ {q} \tag{68}\] \[+\frac{N-4}{N}v(h^{2}-v^{2})A|_{qp}^{(k)}T_{p}+(N-2k)v(h^{2}-v^{2 })\beta|_{q}\beta\cdot T\]
A similar analysis can be carried out for all of the other forces. Just as before, we start by showing the result for the lower order and flavor-singlet interactions
\[F^{(2)-q}= vT_{q}\;, \tag{69a}\] \[F^{(3)-f-q}= v^{2}T_{q}+2(h^{2}-v^{2})A|_{qp}^{(k)}T_{p}\;,\] (69b) \[F^{(3)-d-q}= \frac{N^{2}-4}{N^{2}}v^{2}T_{q}+\frac{2k}{N}(h^{2}-v^{2})T_{q}-2 (h^{2}-v^{2})A|_{qp}^{(k)}T_{p}\] \[+4(h^{2}-v^{2})(N-2k)\beta|_{q}\beta\cdot T\;,\] (69c) \[F_{1}^{(4)-q}= (N^{2}-1)v^{3}T_{q}+2k(N-k)v(h^{2}-v^{2})T_{q}\;,\] (69d) \[F_{2}^{(4)-q}= v^{3}T_{q}\;,\] (69e) \[F_{3}^{(4)-q}= v^{3}T_{q}+2v(h^{2}-v^{2})A|_{qp}^{(k)}T_{p}\;,\] (69f) \[F_{4}^{(4)-q}= -\frac{2k(N-2k)}{N^{2}}v(h^{2}-v^{2})T_{q}+4(N-2k)v(h^{2}-v^{2}) \beta|_{q}\beta\cdot T\;. \tag{69g}\]
Next, we present the result for the f-adjoint set of interactions
\[F_{5}^{(4)-q}= v^{3}T_{q}+2v(h^{2}-v^{2})A|_{qp}^{(k)}T_{p}\;, \tag{70a}\] \[F_{6}^{(4)-q}= v^{3}T_{q}+\frac{k(3N-2k)}{2N^{2}}v(h^{2}-v^{2})T_{q}+v(h^{2}-v^{2}) A|_{qp}^{(k)}T_{p}\] \[+(N-2k)v(h^{2}-v^{2})\beta|_{q}\beta\cdot T\;, \tag{70b}\]
then the d-adjoint forces
\[F_{7}^{(4)-q} =-\frac{2k(N-2k)}{N^{2}}v(h^{2}-v^{2})T_{q}+4(N-2k)v(h^{2}-v^{2}) \beta|_{q}\beta\cdot T\;, \tag{71a}\] \[F_{8}^{(4)-q} =\left(\frac{N^{2}-4}{N^{2}}\right)^{2}v^{3}T_{q}-\frac{k(24N-5N^{ 3})+k^{2}(16+2N^{2})}{2N^{4}}v(h^{2}-v^{2})T_{q}\] \[-\frac{N^{2}-12}{N^{2}}v(h^{2}-v^{2})A|_{qp}^{(k)}T_{p}+\frac{(3N^ {2}-40)(N-2k)}{N^{2}}v(h^{2}-v^{2})\beta|_{q}\beta\cdot T\;, \tag{71b}\]
and the mixed-adjoint one
\[F_{9}^{(4)-q} = \frac{N^{2}-4}{N^{2}}v^{3}T_{q}+\frac{k(2N-k)}{N^{2}}v(h^{2}-v^{2 })T_{q} \tag{72}\] \[-\frac{6}{N^{2}}v(h^{2}-v^{2})A|_{qp}^{(k)}T_{p}+2(N-2k)v(h^{2}-v ^{2})\beta|_{q}\beta\cdot T\;.\]
Notice that these forces are combinations of the following five expressions: \(T_{q}\), \(k(h^{2}-v^{2})T_{q}\), \(k^{2}(h^{2}-v^{2})T_{q}\), \((h^{2}-v^{2})A|_{qp}^{(k)}T_{p}\), and \((N-2k)(h^{2}-v^{2})\beta|_{q}\beta\cdot T\). As discussed before, if the profiles \(h_{qp}\) freeze at their vacuum values, Casimir scaling can be obtained. Because of eq. (39), this means that the total force on the fields \(\psi_{q}\) must vanish, which implies equating the total coefficients of each piece to \(0\). Doing so for the coefficient of \(T_{q}\) leads to
\[0 = \mu^{2}+\kappa_{f}v+\frac{N^{2}-4}{N^{2}}\kappa_{d}v+(N^{2}-1) \lambda_{1}v^{2}+\lambda_{2}v^{2}+\lambda_{3}v^{2} \tag{73}\] \[+\lambda_{5}v^{2}+\lambda_{6}v^{2}+\left(\frac{N^{2}-4}{N^{2}} \right)^{2}\lambda_{8}v^{2}+\frac{N^{2}-4}{N^{2}}\lambda_{9}v^{2}\;,\]
which is actually what defines the value of \(v\neq 0\) that minimizes the potential. As for the other pieces, they yield a set of four conditions, out of which only three are independent:
\[2\kappa_{d}+2N^{2}\lambda_{1}v-2\lambda_{4}v+\frac{3}{2}\lambda_ {6}v-2\lambda_{7}v+\frac{5N^{2}-24}{2N^{2}}\lambda_{8}v+2\lambda_{9}v=0 \tag{74a}\] \[2N^{2}\lambda_{1}-4\lambda_{4}+\lambda_{6}-4\lambda_{7}+\frac{N^ {2}+8}{N^{2}}\lambda_{8}+\lambda_{9}=0\] (74b) \[2\kappa_{f}-2\kappa_{d}+2\lambda_{3}v+2\lambda_{5}v+\lambda_{6}v -\frac{N^{2}-12}{N^{2}}\lambda_{8}v-\frac{6}{N^{2}}\lambda_{9}v=0 \tag{74c}\]
In the root sector, \(F_{\alpha}\) was shown to be proportional to \(ST_{\alpha}S^{-1}\) in eqs. (53)-(56), but the expression for the forces changes depending on the type of root as defined in eq. (62). If the roots \(\alpha=\omega_{i}-\omega_{j}\) are perpendicular to the magnetic weight and \(i,j>k\), the lower order and flavor-singlet forces
are\({}^{5}\)
Footnote 5: These expressions are the same for all values of \(i\) and \(j\), provided \(i,j>k\). Additionally, when \(i,j\leq k\), the forces can be obtained from the former case by the simple change \(k\to N-k\). As these properties hold for all of the forces acting on \(\psi_{\alpha/\bar{\alpha}}\), we will omit the case \(i,j\leq k\).
\[\langle F^{(2)-\alpha},ST_{\alpha}S^{-1}\rangle =v\;, \tag{75a}\] \[\langle F^{(3)-f-\alpha},ST_{\alpha}S^{-1}\rangle =v^{2}+\frac{k}{N}(h^{2}-v^{2})\;,\] (75b) \[\langle F^{(3)-d-\alpha},ST_{\alpha}S^{-1}\rangle =\frac{N^{2}-4}{N^{2}}v^{2}+\frac{k}{N}(h^{2}-v^{2})\;,\] (75c) \[\langle F^{\alpha}_{1},ST_{\alpha}S^{-1}\rangle =(N^{2}-1)v^{3}+2k(N-k)v(h^{2}-v^{2})\;,\] (75d) \[\langle F^{\alpha}_{2},ST_{\alpha}S^{-1}\rangle =v^{3}\] (75e) \[\langle F^{\alpha}_{3},ST_{\alpha}S^{-1}\rangle =v^{3}+\frac{kv(h^{2}-v^{2})}{N}\;,\] (75f) \[\langle F^{\alpha}_{4},ST_{\alpha}S^{-1}\rangle =-2\frac{k(N-2k)}{N^{2}}v(h^{2}-v^{2})\;. \tag{75g}\]
The forces in the f-adjoint set are
\[\langle F^{\alpha}_{5},ST_{\alpha}S^{-1}\rangle =v^{3}+\frac{k}{N}v(h^{2}-v^{2})\;, \tag{76a}\] \[\langle F^{\alpha}_{6},ST_{\alpha}S^{-1}\rangle =v^{3}+\frac{k(2N-k)}{N^{2}}v(h^{2}-v^{2})\;, \tag{76b}\]
while those in the d-adjoint set are
\[\langle F^{\alpha}_{7},ST_{\alpha}S^{-1}\rangle =-2\frac{k(N-2k)}{N^{2}}v(h^{2}-v^{2})\;, \tag{77a}\] \[\langle F^{\alpha}_{8},ST_{\alpha}S^{-1}\rangle =\frac{(N^{2}-4)^{2}}{N^{4}}v^{3}+\frac{k(2N^{3}-8k-N^{2}k-6N)}{N^ {4}}v(h^{2}-v^{2})\;. \tag{77b}\]
The force in the mixed-adjoint set is
\[\langle F^{\alpha}_{9},ST_{\alpha}S^{-1}\rangle =\frac{N^{2}-4}{N^{2}}v^{3}+\frac{k(2N^{2}-Nk-3)}{N^{3}}v(h^{2}-v^ {2})\;. \tag{78}\]
Now, for a Casimir scaling, we should also equate to \(0\) the total force on \(\psi_{\alpha}\) with \(\alpha\cdot\beta=0\). It turns out that this is automatically satisfied after imposing the \(h_{qp}\)-freezing conditions in eq. (74). Finally, for roots \(\alpha\) such that \(\alpha\cdot\beta=1\), we show the expressions for the lower order and flavor-singlet forces
\[\langle F^{(2)-\alpha},ST_{\alpha}S^{-1}\rangle =h \tag{79a}\] \[\langle F^{(3)-f-\alpha},ST_{\alpha}S^{-1}\rangle =hv\] (79b) \[\langle F^{(3)-d-\alpha},ST_{\alpha}S^{-1}\rangle =\frac{N^{2}-4}{N^{2}}hv\;,\] (79c) \[\langle F^{\alpha}_{1},ST_{\alpha}S^{-1}\rangle =(N^{2}-1)v^{2}h+2k(N-k)h(h^{2}-v^{2})\] (79d) \[\langle F^{\alpha}_{2},ST_{\alpha}S^{-1}\rangle =h^{3}\] (79e) \[\langle F^{\alpha}_{3},ST_{\alpha}S^{-1}\rangle =\frac{h(h^{2}+v^{2})}{2}\] (79f) \[\langle F^{\alpha}_{4},ST_{\alpha}S^{-1}\rangle =\frac{(N-2k)^{2}}{N^{2}}h(h^{2}-v^{2})\;, \tag{79g}\]
the f-adjoint forces
\[\langle F_{5}^{\alpha},ST_{\alpha}S^{-1}\rangle =\frac{1}{2}h(h^{2}+v^{2}) \tag{80a}\] \[\langle F_{6}^{\alpha},ST_{\alpha}S^{-1}\rangle =v^{2}h+\frac{1+Nk-k^{2}}{N^{2}}h\left(h^{2}-v^{2}\right) \tag{80b}\]
the d-adjoint forces
\[\langle F_{7}^{\alpha},ST_{\alpha}S^{-1}\rangle =\frac{(N-2k)^{2}}{N^{2}}h(h^{2}-v^{2})\;, \tag{81a}\] \[\langle F_{8}^{\alpha},ST_{\alpha}S^{-1}\rangle =\frac{N^{4}-8N^{2}+16}{N^{4}}v^{2}h+\frac{N^{3}k-N^{2}k^{2}-3N^{ 2}+8Nk-8k^{2}}{N^{4}}h^{3}\;, \tag{81b}\]
and the mixed-adjoint force
\[\langle F_{9}^{\alpha},ST_{\alpha}S^{-1}\rangle=\frac{Nk-k^{2}-1}{N^{2}}h(h^{2 }-v^{2})+\frac{N^{2}-4}{N^{2}}v^{2}h\;. \tag{82}\]
Since the profile \(h\) is nontrivial, there is no condition associated with the total force vanishing. However, it is important that \(h\) does not depend on \(k\) so as to guarantee a Casimir law. This would entail equating to \(0\) the coefficients of \(kh\), \(k^{2}h\), \(kh^{3}\), and \(k^{2}h^{3}\) in the total force. Just like before, the resulting conditions are not independent of eqs. (74). That is, after freezing \(h_{qp}\), the equation satisfied by \(h\) is automatically \(k\)-independent and reads
\[\nabla^{2}h=\left(\mu^{2}+\kappa_{f}v+\kappa_{d}\frac{N^{2}-4}{N^ {2}}v+\lambda_{1}(N^{2}-1)v^{2}+\frac{\lambda_{3}v^{2}}{2}-\lambda_{4}v^{2}+ \lambda_{5}\frac{v^{2}}{2}+\lambda_{6}\frac{N^{2}-1}{N^{2}}v^{2}-\lambda_{7}v^ {2}+\right.\] \[\left.+\lambda_{8}\frac{N^{4}-5N^{2}+16}{N^{4}}v^{2}+\lambda_{9} \frac{N^{2}-3}{N^{2}}v^{2}\right)h+\left(\lambda_{2}+\frac{\lambda_{3}}{2}+ \lambda_{4}+\frac{\lambda_{5}}{2}+\frac{\lambda_{6}}{N^{2}}+\lambda_{7}-\frac {3}{N^{2}}\lambda_{8}-\frac{\lambda_{9}}{N^{2}}\right)h^{3}\;. \tag{83}\]
For completeness, we also show the resulting equation for the gauge field
\[\frac{1}{\rho}\frac{da}{d\rho}-\frac{d^{2}a}{d\rho^{2}}=g^{2}h^{2}(1-a)\;. \tag{84}\]
These equations are those of an Abrikosov-Nielsen-Olesen (ANO) model after an appropriate redefinition of the parameters.
## V Stability
In the simple model given by eq. (1), when moving from the Abelianization point at \(\mu^{2}=0\), there is a neighboring region (\(\mu^{2}<0\)) where it becomes unstable (see Sec. I). In this region, the fields prefer to align along a common direction in the Lie algebra and arbitrarily increase their norm. This way, the cubic and quartic terms are nullified and the energy due to the mass term becomes arbitrarily negative. Although we shall not analyze the parameter space in detail, we would like to note that this issue can be easily fixed in the general color and flavor symmetric setting, and even in a class of models where the field-content is reduced by disregarding the \(\psi_{q}\) Higgs-sector. Moreover, this can be done while keeping the Abelian-like profiles as well as the Casimir scaling law.
### Models with color and flavor symmetry
For example, let us consider the model in Eqs. (29), (30), with \(\mu^{2}\), \(\kappa_{f}\), \(\lambda_{2}\), and \(\lambda_{3}\) being the only nonvanishing parameters. In the new quartic contribution
\[\lambda_{2}\left(\langle\psi_{1},\psi_{1}\rangle^{2}+\langle\psi_{2},\psi_{2} \rangle^{2}+\cdots+2\langle\psi_{1},\psi_{2}\rangle^{2}+2\langle\psi_{2},\psi_ {3}\rangle^{2}+\dots\right)\;,\]
when \(\lambda_{2}>0\), the terms with a single flavor index prevent the energy from being lowered indefinitely by an arbitrarily large norm, thus leading to a stable model for positive and sufficiently large \(\lambda_{2}\). Note also that the remaining mixed terms favor the orthogonality between different fields. This favors the \(SU(N)\to Z(N)\) SSB vacua \(\psi_{A}=vST_{A}S^{-1}\) considered in our previous analysis. In this respect, for positive \(\lambda_{2},\lambda_{3}\), these vacua are favored with respect to the trivial one when
\[\mu^{2}<\frac{2}{9}\frac{\kappa_{f}^{2}}{\lambda_{2}+\lambda_{3}}\;. \tag{85}\]
In addition, it can be easily seen that for sufficiently large \(\lambda_{2}\), the SSB vacua are favored when compared with the aligned configuration. Indeed, this is the case for
\[\lambda_{2}>\frac{\lambda_{3}}{N^{2}-2}\;. \tag{86}\]
Moreover, the freezing conditions for \(\psi_{q}\) in Eqs. (74a)-(74c) are satisfied at
\[\mu^{2}=-\lambda_{2}\left(\frac{\kappa_{f}}{\lambda_{3}}\right)^{2}\;,\]
which corresponds to \(v=-\frac{\kappa_{f}}{\lambda_{3}}\). According to the analysis in Sec. IV, this freezing automatically implies Nielsen-Olesen profiles and asymptotic Casimir scaling.
### Reduced models without \(\psi_{q}\)
From the ensemble point of view [28], the Higgs fields \(\psi_{\alpha}\), \(\psi_{\bar{\alpha}}\) labeled by roots are naturally associated with worldlines carrying an adjoint charge \(\alpha\). On the other hand, the adjoint Higgs fields \(\psi_{q}\) labeled by Cartan indices were introduced to cope with possible matching rules in the \(\mathfrak{su}(2)\) subalgebras of \(\mathfrak{su}(N)\). If these matching rules were absent, it would be appropriate to limit the Higgs field content of the effective model to \(\psi_{\alpha}\), \(\psi_{\bar{\alpha}}\). Let us analyze what would change in this scenario. This can be achieved by setting \(\psi_{q}=0\) in the energy functional and the ansatz. Of course, we do not have to worry about the conditions derived from the equations for \(\psi_{q}\) (cf. (69)-(72)). In the root sector, on the other hand, eqs. (53)-(56) with \(\psi_{q}=0\) are still valid and, for that reason, the ansatz still closes. The main changes originate from eqs. (75)-(78) and (79)-(82), since the absence of the fields \(\psi_{q}\) drastically modifies the coefficients therein. Consequently, new conditions emerge when equating to \(0\) the coefficients of the new total forces on \(\psi_{\alpha}\) with \(\alpha\cdot\beta=0\). Nevertheless, a similar analysis can be carried out and, just as before, not all conditions are independent. The freezing conditions can be chosen as
\[\kappa_{f}+\kappa_{d}+2N^{2}\lambda_{1}v+\lambda_{3}v-2(\lambda_{ 4}+\lambda_{7})v+\lambda_{5}v+\frac{2N-3}{N}(\lambda_{6}+\lambda_{8}+\lambda_ {9})v=0 \tag{87a}\] \[-2N^{2}\lambda_{1}+4(\lambda_{4}+\lambda_{7})-(\lambda_{6}+ \lambda_{9})-\frac{N^{2}+8}{N^{2}}\lambda_{8}=0\;, \tag{87b}\]
while the new equation that defines \(v\) is
\[0 = \mu^{2}+\frac{N-2}{N}v(\kappa_{f}+\kappa_{d})+N(N-1)v^{2}\lambda_{1} +v^{2}\lambda_{2}+\frac{N-1}{N}v^{2}\lambda_{3}+\frac{N-1}{N}v^{2}\lambda_{5} \tag{88}\] \[+\frac{N^{2}-3N+4}{N^{2}}v^{2}\lambda_{6}+\frac{N^{3}-3N^{2}+4}{N ^{3}}v^{2}\lambda_{8}+\frac{N^{2}-3N+2}{N^{2}}v^{2}\lambda_{9}\;.\]
Again, the freezing conditions lead to a collective behavior where the nontrivial profiles \(h_{\alpha}\) (\(\alpha\cdot\beta=1\)) are equal to a single one \(h\), which satisfies a \(k\)-independent Nielsen-Olesen equation
\[\nabla^{2}h = \left(\mu^{2}+\kappa_{f}\frac{N-2}{N}v+\kappa_{d}\frac{N-2}{N}v+ \lambda_{1}N(N-1)v^{2}+\lambda_{3}\frac{N-2}{2N}v^{2}-\lambda_{4}v^{2}+\lambda _{5}\frac{N-2}{2N}v^{2}\right.\] \[\left.+\lambda_{6}\frac{N^{2}-3N+3}{N^{2}}v^{2}-\lambda_{7}v^{2}+ \lambda_{8}\frac{N^{3}-3N^{2}+3N+4}{N^{3}}v^{2}+\lambda_{9}\frac{N^{2}-3N+3}{ N^{2}}v^{2}\right)h\] \[+\left(\lambda_{2}+\frac{\lambda_{3}}{2}+\lambda_{4}+\frac{ \lambda_{5}}{2}+\frac{\lambda_{6}}{N^{2}}+\lambda_{7}-\frac{3\lambda_{8}}{N^{2 }}-\frac{\lambda_{9}}{N^{2}}\right)h^{3}\;.\]
In addition, when \(\mu^{2}\), \(\kappa_{f}\), \(\lambda_{2}\), and \(\lambda_{3}\) are the only nonvanishing parameters, the above analysis is expected to hold for sufficiently large \(\lambda_{2}\). In that region the favored vacua would be \(\psi_{\alpha}=vST_{\alpha}S^{-1}\), \(\psi_{\bar{\alpha}}=vST_{\bar{\alpha}}S^{-1}\). For positive \(\lambda_{2},\lambda_{3}\), this vacuum has a lower energy than the trivial one when
\[\mu^{2}<\frac{2}{9N}\kappa_{f}^{2}\frac{N^{2}-4N+4}{N\lambda_{2}+\lambda_{3}(N -1)}\;. \tag{90}\]
Moreover, we checked that in the region
\[\lambda_{2}>\frac{\lambda_{3}(N-1)}{N^{3}-N^{2}-N} \tag{91}\]
these vacua are favored with respect to the aligned configuration. In this example, the freezing condition for the fields \(\psi_{\alpha}\) (\(\alpha\cdot\beta=0\)) occurs at
\[\mu^{2}=-\frac{\kappa_{f}^{2}}{N\lambda_{3}}-\frac{\lambda_{2}\kappa_{f}^{2}}{ \lambda_{3}^{2}}\;, \tag{92}\]
which corresponds to \(v=-\frac{\kappa_{f}}{\lambda_{3}}\). At this point, besides stability, the reduced model displays Abelian-like vortex profiles and Casimir scaling, as the general conditions given in Sec. IV are also realized.
At the freezing point, in both the color and flavor symmetric model and the reduced model, the energy difference between the preferred \(SU(N)\to Z(N)\) SSB configuration, the aligned configuration, and the trivial one is finite. Thus, we may conclude that the \(SU(N)\to Z(N)\) SSB pattern is stable with respect to small deviations from the freezing point. In this case, the flux tubes only receive perturbative corrections. Also, because of the additional quartic term considered, all possible phases obtained when the mass and cubic parameters are arbitrarily varied become correctly stabilized, as the energy of the global minima will be bounded from below.
## VI Discussion
In this work, we analyzed two classes of YMH models with a set of adjoint Higgs flavors. Initially, we considered the most general case with \(SU(N)\) color and flavor symmetry constructed in terms of
\(N^{2}-1\) adjoint real scalars. Next, we also analyzed models derived from the former by disregarding Higgs flavor labels in the Cartan sector, only keeping Higgs fields labeled by the adjoint weights of \(SU(N)\), which can be readily associated with the different monopole charges. In this case, the cubic and quartic interactions effectively describe the matching rules for these charges when three and four monopole worldlines meet at a point. This, together with the minimal coupling to the \(SU(N)\) gauge field Goldstone modes \(\Lambda_{\mu}\), describes a mixed ensemble of oriented and nonoriented center vortices [28]. In both cases, the \(SU(N)\to Z(N)\) SSB pattern, essential to reproduce the observed \(N\)-ality properties of the confining states at asymptotic distances, can be realized. Here, we showed that the different properties suggested by the lattice can be accommodated in a class of models that remain stable under variations of the Higgs-field mass parameter. These properties include asymptotic Abelian profiles [32], the Casimir scaling law [42], and the independence of the flux-tube cross-section from the \(N\)-ality of the quark representation [35]. For each class, the generation of Abelian profiles was traced back to the possibility of freezing the Higgs fields having labels that are trivially transformed by Cartan transformations along the \(k\)-antisymmetric weights. This freezing automatically implies that the profiles \(h_{\alpha}\) associated with Higgs fields that do rotate under this type of transformation (there are \(k(N-k)\) such fields) can be equated to a single profile \(h\). The latter satisfies a Nielsen-Olesen equation that turns out to be \(k\)-independent. As the regularity conditions are also \(k\)-independent, the above-mentioned cross-section property is then implied. Therefore, although the models are formulated in terms of many fields, a collective behavior arises where the \(k\)-vortex energy is proportional to \(k(N-k)\), which coincides with the quadratic Casimir of the \(k\)-antisymmetric representation. In both classes of models there are relatively few freezing conditions on the parameters. In addition, for small deviations from the freezing point, the vortex properties are only perturbatively modified. It is then satisfying to see that properties observed or suggested in lattice simulations of \(SU(N)\) YM theory are ubiquitous in YMH models with adjoint flavors, which in turn provide an effective description of mixed ensembles of oriented and nonoriented center vortices also observed on the lattice.
###### Acknowledgements.
The Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) is acknowledged for financial support.
## Appendix A Weights of SU(N)
The weights \(\lambda\) of a given representation \(D\) of \(SU(N)\) are \((N-1)\)-tuples defined in terms of the eigenvectors of the Cartan generators, as follows:
\[D(T_{q})|\lambda\rangle=\lambda|_{q}|\lambda\rangle\;. \tag{10}\]
When \(D\) is the fundamental (defining) representation, these weights are denoted by \(\omega_{i}\), \(i=1,\ldots,N\). It is convenient to define an ordering relation for these tuples, where a given weight is said to be positive if its last nonzero component is positive. It is also convenient to define the magnetic weights \(\beta_{i}=2N\omega_{i}\), which are labeled such that \(\beta_{1}>\beta_{2}>\cdots>\beta_{N}\). They all have the same length, i.e. \(|\beta_{i}|^{2}=2(N-1)\), and different magnetic weights have the scalar product
\[\beta_{i}\cdot\beta_{j}=-2\;,\;i\neq j\;. \tag{11}\]
Another important particular case is when D is the adjoint representation, defined by
\[{\rm Ad}(T_{A})|_{BC}=-if_{ABC}\;. \tag{10}\]
The corresponding weights are known as the roots of SU(N). They are given by differences of fundamental weights, i.e., all roots can be written as
\[\alpha_{ij}=\omega_{i}-\omega_{j}\;,\;i\neq j\;. \tag{11}\]
Notice that \(\alpha_{ij}\) is positive if and only if \(i<j\).
## Appendix B The structure constants of SU(N)
This section is dedicated to recalling the definition and properties of the symmetric and antisymmetric structure constants \(d_{ABC}\) and \(f_{ABC}\) of SU(N). We define the antisymmetric constants in terms of the commutators
\[[T_{A},T_{B}]=if_{ABC}T_{C}\;. \tag{12}\]
It is more elegant to define these constants in terms of an operation, which we will denote by the symbol \(\wedge\), that is entirely closed in the algebra
\[T_{A}\wedge T_{B}=-i[T_{A},T_{B}]=f_{ABC}T_{C}\;. \tag{13}\]
The actual values of the constants \(f_{ABC}\) depend on a choice of basis. Throughout this work we will always use the Cartan-Weyl basis, which consists of \(N-1\) diagonal generators \(T_{q}\,,\,q=1,\ldots,N-1\), known as the Cartan generators, and the off-diagonal generators \(T_{\alpha},T_{\bar{\alpha}}\), which are labeled by the positive roots \(\alpha\) of \(SU(N)\). The off-diagonal generators are defined in terms of the root vectors \(E_{\alpha}\), which satisfy
\[[T_{q},E_{\alpha}]=\alpha|_{q}E_{\alpha}\;\;\;\;\;,\;\;\;\;[E_{ \alpha},E_{-\alpha}]=\alpha|_{q}T_{q}\;,\] \[[E_{\alpha},E_{\gamma}]=N_{\alpha\gamma}E_{\alpha+\gamma},\,{ \rm for}\;\alpha+\gamma\neq 0\;. \tag{14}\]
The constant \(N_{\alpha\gamma}\) is zero if \(\alpha+\gamma\) is not a root. The Hermitian off-diagonal generators are then defined by
\[T_{\alpha}=\frac{E_{\alpha}+E_{-\alpha}}{\sqrt{2}}\;\;\;\;\;\;,\;\;\;\;\;T_{ \bar{\alpha}}=\frac{E_{\alpha}-E_{-\alpha}}{\sqrt{2}i}\;. \tag{15}\]
The nontrivial commutation relations in the Cartan-Weyl basis are
\[T_{q}\wedge T_{\alpha}= \alpha|_{q}T_{\overline{\alpha}}\;, \tag{16a}\] \[T_{q}\wedge T_{\overline{\alpha}}= -\alpha|_{q}T_{\alpha}\;,\] (16b) \[T_{\alpha}\wedge T_{\overline{\alpha}}= \alpha|_{q}T_{q}\;,\] (16c) \[T_{\alpha}\wedge T_{\beta}= \frac{1}{\sqrt{2}}\left(N_{\alpha,\beta}T_{\overline{\alpha+ \beta}}+N_{\alpha,-\beta}T_{\overline{\alpha-\beta}}\right)\;,\] (16d) \[T_{\alpha}\wedge T_{\overline{\beta}}= \frac{1}{\sqrt{2}}\left(-N_{\alpha,\beta}T_{\alpha+\beta}+N_{ \alpha,-\beta}T_{\alpha-\beta}\right)\;,\] (16e) \[T_{\overline{\alpha}}\wedge T_{\overline{\beta}}= \frac{1}{\sqrt{2}}\left(-N_{\alpha,\beta}T_{\overline{\alpha+ \beta}}+N_{\alpha,-\beta}T_{\overline{\alpha-\beta}}\right)\;. \tag{16f}\]
To evaluate the constants \(f_{ABC}\), we use the identity
\[f_{ABC}=\langle T_{A}\wedge T_{B},T_{C}\rangle\;, \tag{100}\]
although one caveat is worth mentioning: because of the property \(T_{-\alpha}=T_{\alpha}\), \(T_{\overline{-\alpha}}=-T_{\overline{\alpha}}\), the Killing products between generators associated with roots are
\[\langle T_{\alpha},T_{\beta}\rangle =\delta_{\alpha,\beta}+\delta_{\alpha,-\beta}\;, \tag{101a}\] \[\langle T_{\overline{\alpha}},T_{\overline{\beta}}\rangle =\delta_{\alpha,\beta}-\delta_{\alpha,-\beta}\;. \tag{101b}\]
With this in mind, the final result for the non-zero antisymmetric constants is
\[f_{q\alpha\overline{\alpha}} =-f_{q\overline{\alpha}\alpha}=\alpha|_{q}\;, \tag{102a}\] \[f_{\gamma\eta\overline{\alpha}} =\frac{1}{\sqrt{2}}\big{(}N_{\gamma,\eta}(\delta_{\alpha,\gamma+ \eta}-\delta_{\alpha,-\gamma-\eta})+N_{\gamma,-\eta}(\delta_{\alpha,\gamma- \eta}-\delta_{\alpha,\eta-\gamma})\big{)}\;,\] (102b) \[f_{\gamma\overline{\eta}\alpha} =\frac{1}{\sqrt{2}}\big{(}-N_{\gamma,\eta}(\delta_{\alpha,\gamma+ \eta}+\delta_{\alpha,-\gamma-\eta})+N_{\gamma,-\eta}(\delta_{\alpha,\gamma- \eta}+\delta_{\alpha,\eta-\gamma})\big{)}\;,\] (102c) \[f_{\overline{\gamma}\,\overline{\eta}\,\overline{\alpha}} =\frac{1}{\sqrt{2}}\big{(}-N_{\gamma,\eta}(\delta_{\alpha,\gamma+ \eta}-\delta_{\alpha,-\gamma-\eta})+N_{\gamma,-\eta}(\delta_{\alpha,\gamma- \eta}-\delta_{\alpha,\eta-\gamma})\big{)}\;. \tag{102d}\]
In our convention, the constants \(N_{\alpha,\beta}\) are given by
\[|N_{\alpha,\beta}|=\begin{cases}\frac{1}{\sqrt{2N}},&\text{if $\alpha+\beta$ is a root}\\ 0,&\text{otherwise}\end{cases} \tag{103}\]
They also have the useful properties
\[N_{-\alpha,-\beta} =N_{\beta,\alpha}=-N_{\alpha,\beta}\;, \tag{104a}\] \[N_{\alpha,\beta} =N_{\gamma,\alpha}=N_{\beta,\gamma}\;\text{ if }\;\alpha+\beta+\gamma=0\;. \tag{104b}\]
The roots \(\alpha\) and the weights \(\omega\) have a few properties worth noticing:
\[\alpha=\omega_{i}-\omega_{j}\;, \tag{105}\] \[\omega_{i}\cdot\omega_{j}=\frac{N\delta_{ij}-1}{2N^{2}}\Rightarrow|\alpha|^{2}=\frac{1}{N}\;,\] (106) \[\sum_{i=1}^{N}\omega_{i}|_{q}\omega_{i}|_{p}=\frac{\delta_{qp}}{2N}\;,\qquad\sum_{\alpha>0}\alpha|_{q}\alpha|_{p}=\frac{\delta_{qp}}{2}\;. \tag{107}\]
The symmetric constants are defined in terms of the anticommutators
\[\{T_{A},T_{B}\}=c\mathbb{I}+d_{ABC}T_{C} \tag{108}\]
The appearance of a component in the direction of the identity matrix \(\mathbb{I}\) comes from the fact that the anticommutator is not traceless. In fact, the constant \(c\) can be found via the trace of this equation
\[2\text{Tr}(T_{A}T_{B})=c\,N\;. \tag{109}\]
The basis \(T_{A}\) is normalized in the sense of the Killing product, which can be realized as
\[\langle T_{A},T_{B}\rangle=2N\text{Tr}(T_{A}T_{B})=\delta_{AB}\;. \tag{110}\]
This leads to
\[c=\frac{\langle T_{A},T_{B}\rangle}{N^{2}}=\frac{\delta_{AB}}{N^{2}} \tag{101}\]
Once again, it is more elegant to define the constants \(d_{ABC}\) in terms of a product closed in the algebra. We denote this product by \(\vee\) and set
\[T_{A}\lor T_{B}=\{T_{A},T_{B}\}-\frac{\langle T_{A},T_{B}\rangle}{N^{2}}\,\mathbb{I}=d_{ABC}T_{C}\;. \tag{102}\]
Because the basis \(T_{A}\) is traceless, we can also obtain these constants by
\[d_{ABC}=2N\text{Tr}(\{T_{A},T_{B}\}T_{C})\;. \tag{103}\]
This expression makes clear the cyclic property \(d_{ABC}=d_{BCA}\).
The constants \(d_{ABC}\) have fewer interesting properties which makes it desirable to replace them with \(f_{ABC}\) whenever possible. To do so, the following relations are useful [37]
\[f_{ABE}f_{CDE} = d_{ACE}d_{BDE}-d_{ADE}d_{BCE}+\frac{2}{N^{2}}(\delta_{AC}\delta_ {BD}-\delta_{AD}\delta_{BC})\;, \tag{104}\] \[f_{ABE}d_{CDE} = d_{ADE}f_{BCE}+d_{ACE}f_{BDE}\;,\] (105) \[d_{AEF}d_{BEF} = \frac{N^{2}-4}{N^{2}}\delta_{AB}\;. \tag{106}\]
Fortunately, since we are only interested in SU(N), more can be said about the symmetric constants. For that purpose, we first write the matrix realization of the Weyl-Cartan basis in terms of roots and weights
\[T_{q}|_{ij} = \omega_{i}|_{q}\delta_{ij}\;, \tag{107}\] \[T_{\alpha_{ab}}|_{ij} = \frac{1}{2\sqrt{N}}\left(\delta_{ia}\delta_{jb}+\delta_{ib}\delta _{ja}\right)\;,\] (108) \[T_{\overline{\alpha}_{ab}}|_{ij} = \frac{i}{2\sqrt{N}}\left(-\delta_{ia}\delta_{jb}+\delta_{ib}\delta _{ja}\right)\;. \tag{109}\]
Using the components of the generators, it is possible to show
\[T_{q}\lor T_{p}= \sum_{i=1}^{N}\omega_{i}|_{q}\omega_{i}|_{p}\omega_{i}\cdot T\;, \tag{110a}\] \[T_{q}\lor T_{\alpha}= \tilde{\alpha}|_{q}T_{\alpha}\;,\] (110b) \[T_{q}\lor T_{\overline{\alpha}}= \tilde{\alpha}|_{q}T_{\overline{\alpha}}\;,\] (110c) \[T_{\alpha}\lor T_{\overline{\alpha}}= 0\;,\] (110d) \[T_{\alpha}\lor T_{\beta}= \frac{1}{\sqrt{2}}\left(|N_{\alpha,\beta}|T_{\alpha+\beta}+|N_{ \alpha,-\beta}|T_{\overline{\alpha-\beta}}\right)\;,\] (110e) \[T_{\alpha}\lor T_{\overline{\beta}}= \frac{1}{\sqrt{2}}\left(|N_{\alpha,\beta}|T_{\overline{\alpha+ \beta}}-|N_{\alpha,-\beta}|T_{\overline{\alpha-\beta}}\right)\;,\] (110f) \[T_{\overline{\alpha}}\lor T_{\overline{\beta}}= \frac{1}{\sqrt{2}}\left(-|N_{\alpha,\beta}|T_{\alpha+\beta}+|N_{ \alpha,-\beta}|T_{\alpha-\beta}\right)\;. \tag{110g}\]
For each \(\alpha=\omega_{i}-\omega_{j}\), we define \(\tilde{\alpha}=\omega_{i}+\omega_{j}\).
We can now use the relations above, together with their commutator analogues, to evaluate the constants \(d_{ABC}\)\({}^{6}\)
Footnote 6: All the roots are assumed to be positive
\[d_{qpl} =4N\sum_{i=1}^{N}\omega_{i}|_{q}\omega_{i}|_{p}\omega_{i}|_{l}\;, \tag{127a}\] \[d_{q\alpha\alpha} =d_{q\overline{\alpha}\,\overline{\alpha}}=\tilde{\alpha}|_{q}\;,\] (127b) \[d_{\gamma\eta\alpha} =\frac{1}{\sqrt{2}}\big{(}|N_{\gamma,\eta}|(\delta_{\alpha, \gamma+\eta}+\delta_{\alpha,-\gamma-\eta})+|N_{\gamma,-\eta}|(\delta_{\alpha, \gamma-\eta}+\delta_{\alpha,\eta-\gamma})\big{)}\;,\] (127c) \[d_{\gamma\overline{\eta\alpha}} =\frac{1}{\sqrt{2}}\big{(}|N_{\gamma,\eta}|(\delta_{\alpha, \gamma+\eta}-\delta_{\alpha,-\gamma-\eta})-|N_{\gamma,-\eta}|(\delta_{\alpha, \gamma-\eta}-\delta_{\alpha,\eta-\gamma})\big{)}\;,\] (127d) \[d_{\overline{\gamma}\,\overline{\eta}\alpha} =\frac{1}{\sqrt{2}}\big{(}-|N_{\gamma,\eta}|(\delta_{\alpha, \gamma+\eta}+\delta_{\alpha,-\gamma-\eta})+|N_{\gamma,-\eta}|(\delta_{\alpha, \gamma-\eta}+\delta_{\alpha,\eta-\gamma})\big{)}\;. \tag{127e}\]
|
2305.16574
|
Measures of contextuality in cyclic systems and the negative
probabilities measure CNT3
|
Several principled measures of contextuality have been proposed for general
systems of random variables (i.e. inconsistently connected systems). The first
of such measures was based on quasi-couplings using negative probabilities
(here denoted by CNT3, Dzhafarov & Kujala, 2016). Dzhafarov and Kujala (2019)
introduced a measure of contextuality, CNT2, that naturally generalizes to a
measure of non-contextuality. Dzhafarov and Kujala (2019) additionally
conjectured that in the class of cyclic systems these two measures are
proportional. Here we prove that that conjecture is correct. Recently,
Cervantes (2023) showed the proportionality of CNT2 and the Contextual Fraction
measure (CNTF) introduced by Abramsky, Barbosa, and Mansfield (2017). The
present proof completes the description of the interrelations of all
contextuality measures as they pertain to cyclic systems.
|
Giulio Camillo, Víctor H. Cervantes
|
2023-05-26T01:49:35Z
|
http://arxiv.org/abs/2305.16574v1
|
Measures of contextuality in cyclic systems and the negative probabilities measure \(\text{CNT}_{3}\)
###### Abstract
Several principled measures of contextuality have been proposed for general systems of random variables (i.e. inconsistently connected systems). The first of such measures was based on quasi-couplings using negative probabilities (here denoted by \(\text{CNT}_{3}\), Dzhafarov & Kujala, 2016). Dzhafarov and Kujala (2019) introduced a measure of contextuality, \(\text{CNT}_{2}\), that naturally generalizes to a measure of non-contextuality. Dzhafarov and Kujala (2019) additionally conjectured that in the class of cyclic systems these two measures are proportional. Here we prove that that conjecture is correct. Recently, Cervantes (2023) showed the proportionality of \(\text{CNT}_{2}\) and the Contextual Fraction measure (CNTF) introduced by Abramsky, Barbosa, and Mansfield (2017). The present proof completes the description of the interrelations of all contextuality measures as they pertain to cyclic systems.
Contextuality is a property of systems of random variables. A system is contextual when the observed joint distributions within different contexts are incompatible with the equality in probability of variables across contexts (we shall give a formal definition of contextuality below). In the contextuality literature, several measures or indexes of the degree of contextuality of a system have been introduced. Each of these measures reflects a unique aspect of contextuality, and together they provide a pattern of the system's contextuality. The class of cyclic systems is prominent in applications of contextuality and is used to represent many important scenarios of quantum contextuality, such as the EPR/Bohm scenario [1, 2], or the Klyachko-Can-Binicioglu-Shumovsky scenario [3].
For cyclic systems, it has been conjectured that all measures in the literature are proportional to each other. In Ref. [4], the equality of three of these measures \((\text{CNT}_{1},\text{CNT}_{2},\) and \(\text{CNT}_{0})\) was proved, and in Ref. [5], the proportionality of \(\text{CNT}_{2}\) and the Contextual Fraction (CNTF) was demonstrated. Together, these two results show the proportionality of all but one of the measures found in the literature. The measure \(\text{CNT}_{3}\) based on negative probabilities was conjectured to be proportional to \(\text{CNT}_{2}\) in Ref. [6]. In this paper, we prove the truth of this conjecture by showing that \((n-1)\text{CNT}_{3}=\text{CNTF}\). Thus, this paper culminates the theoretical description of the interrelations of all contextuality measures as they pertain to cyclic systems.
## 1 Contextuality-by-Default
In this section, we present the Contextuality-by-Default (CbD) approach to contextuality analysis [7, 8, 9, 4, 6, 10]. A _system_ of random variables is a set of double-indexed random variables \(R_{q}^{c}\), where \(c\in C\) is the _context_ of the random variable, the conditions under which it is recorded, and \(q\in Q\) denotes its _content_, the property of which the random variable is a measurement. The following is a presentation of a system:
\[\mathcal{R}=\left\{R_{q}^{c}:c\in C,q\in Q,q\prec c\right\}, \tag{1}\]
where \(q\prec c\) denotes that content \(q\) is measured in context \(c\).
For each \(c\in C\), the subset
\[\mathrm{R}^{c}=\left\{R_{q}^{c}:q\in Q,q\prec c\right\} \tag{2}\]
is referred to as the _bunch_ for context \(c\). The variables within a bunch are jointly distributed. That is, bunches are random vectors with a given probability distribution. For each \(q\in Q\), the subset
\[\mathcal{R}_{q}=\left\{R_{q}^{c}:c\in C,q\prec c\right\} \tag{3}\]
is referred to as the _connection_ corresponding to content \(q\). However, no two random variables within a connection \(\mathcal{R}_{q}\) are jointly distributed; thus, they are said to be _stochastically unrelated_.1
Footnote 1: More generally, any two \(R_{q}^{c},R_{q^{\prime}}^{c^{\prime}}\in\mathcal{R}\) with \(c\neq c^{\prime}\) are stochastically unrelated. We emphasize that variables within a connection (and within a system) are stochastically unrelated by using calligraphic script for their names, and that variables of a bunch do possess a joint distribution by using roman script.
Cyclic systems are a prominent class of systems of random variables. They are the object of Bell's theorem [11, 12], the Leggett-Garg theorem [13], Suppes and Zanotti's theorem [14], the Klyachko-Can-Binicioglu-Shumovsky theorem [3], as well as many other theoretical results (see e.g., [15, 16]). Cyclic systems are used to model most applications that empirically explore contextuality (e.g., [17, 18, 19, 20, 21, 22]). Furthermore, as shown in Refs. [10, 23], a system without cyclic subsystems is necessarily noncontextual. A system \(\mathcal{R}\) is said to be _cyclic_ if
1. each of its contexts contains two jointly distributed _binary_ random variables,
2. each content is measured in two contexts, and
3. there is no proper subsystem of \(\mathcal{R}\) that satisfies (i) and (ii).
The number \(n\) of contexts (and contents) in a cyclic system is known as its _rank_. For any cyclic system, a rearrangement and numbering of its contexts and contents can always be found so that the system can be given the presentation
\[\mathcal{R}_{n}=\left\{\left\{R_{i}^{i},R_{i\oplus 1}^{i}\right\}:i=1,\ldots,n\right\}, \tag{4}\]
where \(R_{j}^{i}\) stands for \(R_{q_{j}}^{c_{i}}\), and \(\oplus 1\) denotes cyclic shift \(1\mapsto 2,\ldots,n-1\mapsto n,n\mapsto 1\).2 In this way, the variables \(\left\{R_{i}^{i},R_{i\oplus 1}^{i}\right\}\) constitute the bunch corresponding to context \(c_{i}\). The following matrices depict the format of two cyclic systems: a cyclic system of rank 3, and a cyclic system of rank 6.
Footnote 2: Similarly, \(\ominus 1\) will denote the inverse shift of \(\oplus 1\).
[The matrices depicting the formats of a rank-3 and a rank-6 cyclic system are not reproduced here.]
The vector \(\mathbf{c}_{(\cdot)}\) contains imposed probabilities. These probabilities define a _coupling_ of the variables within each connection. A coupling of a set of random variables \(\left\{X_{i}\right\}_{i\in I}\), where \(I\) indexes the variables in the set, is a new set of jointly distributed random variables \(\left\{Y_{i}\right\}_{i\in I}\) such that for each \(i\in I\), the distributions of \(X_{i}\) and \(Y_{i}\) coincide. In a _multimaximal coupling_ of a set of random variables, for any two \(Y_{i},Y_{i^{\prime}}\), the probability \(\Pr(Y_{i}=Y_{i^{\prime}})\) is the maximal possible given their individual distributions. If we denote the variables of the multimaximal coupling of connection \(\mathcal{R}_{q_{j}}\) by
\[\mathrm{T}_{q_{j}}=\left\{T_{j}^{i}:c_{i}\in C,q_{j}\prec c_{i}\right\}, \tag{8}\]
one obtains the vector
\[\mathbf{c}_{(\cdot)}=\left(\Pr(T_{i}^{i}=r_{i}^{i},T_{i}^{i\ominus 1}=r_{i}^{i\ominus 1})\right)_{i=1,\ldots,n}, \tag{9}\]
with \(r_{i}^{i},r_{i}^{i\ominus 1}=0,1\), and where
\[\begin{array}{ll}\Pr(T_{i}^{i}=0,T_{i}^{i\ominus 1}=0)&=\min(\Pr(R_{i}^{i}=0), \Pr(R_{i}^{i\ominus 1}=0)),\\ \Pr(T_{i}^{i}=1,T_{i}^{i\ominus 1}=1)&=\min(\Pr(R_{i}^{i}=1),\Pr(R_{i}^{i \ominus 1}=1)),\\ \Pr(T_{i}^{i}=0,T_{i}^{i\ominus 1}=1)&=\Pr(R_{i}^{i\ominus 1}=1)-\Pr(T_{i}^{i }=1,T_{i}^{i\ominus 1}=1),\\ \text{and}&\\ \Pr(T_{i}^{i}=1,T_{i}^{i\ominus 1}=0)&=\Pr(R_{i}^{i}=1)-\Pr(T_{i}^{i}=1,T_{i}^{i \ominus 1}=1).\end{array} \tag{10}\]
Clearly, in the multimaximal coupling \(\mathrm{T}_{q_{j}}\), \(\Pr(T_{i}^{i}=0,T_{i}^{i\ominus 1}=1)=0\) or \(\Pr(T_{i}^{i}=1,T_{i}^{i\ominus 1}=0)=0\). Note that whenever a system \(\mathcal{R}\) is consistently connected, then for any two \(R_{q}^{c},R_{q}^{c^{\prime}}\), the corresponding variables of a multimaximal coupling of \(\mathcal{R}_{q}\) are almost surely equal (that is, \(\Pr(T_{q}^{c}=T_{q}^{c^{\prime}})=1\)).
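For concreteness, the multimaximal coupling of Eq. (10) can be computed from the two marginals alone. The following is a minimal sketch in Python; the function name and the example values for the inconsistently connected case are ours, not the paper's.

```python
def multimaximal_coupling(p1, p2):
    """Joint distribution of the coupling (T1, T2) of a two-variable binary
    connection, following Eq. (10): Pr(T1 = T2) is maximal given the marginals
    Pr(T1 = 1) = p1 and Pr(T2 = 1) = p2."""
    pr11 = min(p1, p2)
    pr00 = min(1 - p1, 1 - p2)
    pr01 = p2 - pr11  # Pr(T1 = 0, T2 = 1); zero whenever p2 <= p1
    pr10 = p1 - pr11  # Pr(T1 = 1, T2 = 0); zero whenever p1 <= p2
    return {(0, 0): pr00, (0, 1): pr01, (1, 0): pr10, (1, 1): pr11}

# Consistently connected case (equal marginals): all off-diagonal mass vanishes.
print(multimaximal_coupling(0.5, 0.5))     # {(0,0): 0.5, (0,1): 0.0, (1,0): 0.0, (1,1): 0.5}
# Inconsistently connected case: exactly one off-diagonal probability is zero.
print(multimaximal_coupling(0.5, 0.4375))  # Pr(T1=1, T2=0) = 0.0625, Pr(T1=0, T2=1) = 0
```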
### Consistification
_Consistification_ is a procedure that can be applied to any system of binary random variables that will create a new system \(\mathcal{R}^{\ddagger}\) that is consistently connected and whose contextual status is the same as that of the original system \(\mathcal{R}\). Here we introduce the procedure closely following the presentation given in Ref. [5]. The consistification of system \(\mathcal{R}\) is obtained by constructing a new system \(\mathcal{R}^{\ddagger}\) in the following manner. First, define the set of contents \(Q^{\ddagger}\) of the new system as
\[Q^{\ddagger}=\left\{q_{ij}:c_{i}\in C,q_{j}\in Q,q_{j}\prec c_{i}\right\}. \tag{11}\]
That is, for each content \(q_{j}\) and each of the contexts \(c_{i}\) in which it is recorded, we define a content \(q_{ij}\)="\(q_{j}\) recorded in context \(c_{i}\)". Next, define the new set of contexts \(C^{\ddagger}\) as
\[C^{\ddagger}=C\sqcup Q, \tag{12}\]
the disjoint union of the contexts and the contents of the system \(\mathcal{R}\). Then, define the new relation
\[\prec^{\ddagger}=\left\{(q_{ij},c_{i}):q_{j}\in Q,c_{i}\in C,q_{j}\prec c_{i} \right\}\sqcup\left\{(q_{ij},q_{j}):q_{j}\in Q,c_{i}\in C,q_{j}\prec c_{i} \right\}. \tag{13}\]
That is, the new content \(q_{ij}\) is recorded in precisely two of the new contexts, \(c_{i},q_{j}\in C^{\ddagger}\). Therefore, the bunch
\[\mathrm{R}^{c_{i}}=\left\{R_{q_{ij}}^{c_{i}}:q_{ij}\in Q^{\ddagger},q_{ij} \prec^{\ddagger}c_{i}\right\} \tag{14}\]
coincides with the bunch
\[\mathrm{R}^{c_{i}}=\left\{R_{q}^{c_{i}}:q\in Q,q\prec c_{i}\right\} \tag{15}\]
of the original system; while the bunch
\[\mathrm{R}^{q_{ij}}=\left\{R_{q_{ij}}^{q_{j}}:q_{ij}\in Q^{\ddagger},q_{ij} \prec^{\ddagger}q_{j}\right\} \tag{16}\]
is constructed by defining new jointly distributed random variables \(\left\{R_{q_{ij}}^{q_{j}}\right\}_{q_{ij}\prec^{\ddagger}q_{j}}\) such that \(\mathrm{R}^{q_{j}}\) is the multimaximal coupling of the connection \(\mathcal{R}_{q_{j}}\) of system \(\mathcal{R}\).
In particular, if \(\mathcal{R}\) is a cyclic system of rank \(n\), then its consistified system \(\mathcal{R}^{\ddagger}\) is a consistently connected cyclic system of rank \(2n\). The following matrices show the consistification of such a system and how its bunches relate to the bunches of the original system and the multimaximal couplings of its connections.
[The matrices showing the consistified system and its bunches are not reproduced here.]
where \(\mathbf{l}^{*}\) and \(\mathbf{b}^{*}\) are the empirical probabilities of the system, and \(\mathbf{c}^{*}\) are the probabilities found from the multimaximal couplings of each of its connections. Let \(\mathbf{M}\) be the incidence matrix found by taking the rows of \(\mathbf{M}_{(.)}\) corresponding to the elements of \(\mathbf{p}^{*}\). Note that the system is noncontextual if and only if there is a vector \(\mathbf{h}\geq 0\) (component-wise) such that
\[\mathbf{M}\mathbf{h}=\mathbf{p}^{*}, \tag{21}\]
subject to \(\mathbf{1}^{\intercal}\mathbf{h}=1\) [9]. Denoting the rows of \(\mathbf{M}\) that correspond to \(\mathbf{l}^{*}\), \(\mathbf{b}^{*}\), \(\mathbf{c}^{*}\) by, respectively, \(\mathbf{M}_{\mathbf{l}}\), \(\mathbf{M}_{\mathbf{b}}\), \(\mathbf{M}_{\mathbf{c}}\), we can rewrite (21) as
\[\left(\begin{array}{c}\mathbf{M}_{\mathbf{l}}\\ \mathbf{M}_{\mathbf{b}}\\ \mathbf{M}_{\mathbf{c}}\end{array}\right)\mathbf{h}=\left(\begin{array}{c} \mathbf{l}^{*}\\ \mathbf{b}^{*}\\ \mathbf{c}^{*}\end{array}\right). \tag{22}\]
An example matrix \(\mathbf{M}_{(.)}\) for cyclic systems of rank 2 can also be found in Ref. [9], whereas Ref. [5] illustrates matrix \(\mathbf{M}\) for cyclic systems of rank 4.
Let \(\mathbf{M}^{\prime}=(\mathbf{M}|\mathbf{M})\), \(\mathbf{y}^{\prime}=(\mathbf{y}_{+}^{\intercal}|-\mathbf{y}_{-}^{\intercal})^ {\intercal}\), and \(\mathbf{y}=\mathbf{y}_{+}-\mathbf{y}_{-}\), where \(\mathbf{y}_{+},\mathbf{y}_{-}\) are vectors of \(2^{2n}\) nonnegative components. Clearly,
\[\mathbf{M}^{\prime}\mathbf{y}^{\prime}=\mathbf{M}\mathbf{y}.\]
The contextuality measure \(\mathrm{CNT}_{3}(\mathcal{R}_{n})\) can be computed solving the linear programming task [9]:
\[\begin{array}{|ccc|}\hline\text{find}&\text{minimizing}&\text{subject to}\\ \hline\mathbf{y}^{\prime}&\mathbf{1}^{\intercal}\mathbf{y}_{-}&\mathbf{M}^{\prime}\mathbf{y}^{\prime}=\mathbf{p}^{*}\\ &&\mathbf{1}^{\intercal}\mathbf{y}^{\prime}=1\\ &&\mathbf{y}_{+},\mathbf{y}_{-}\geq 0\\ \hline\end{array} \tag{23}\]
For any solution \(\mathbf{y}^{\prime*}\), we compute \(\mathbf{y}^{*}=\mathbf{y}_{+}^{*}-\mathbf{y}_{-}^{*}\) and \(\mathrm{CNT}_{3}(\mathcal{R}_{n})=\left\|\mathbf{y}^{*}\right\|_{1}-1\). For brevity and due to the uniqueness of the Hahn-Jordan decomposition, we shall abuse notation and also call \(\mathbf{y}^{*}\) a solution of task (23). A solution \(\mathbf{y}^{*}\) generally does not define a probability distribution; instead, it provides a signed \(\sigma\)-additive measure whose total variation is smallest among all signed measures with marginals that agree both with the bunches and with the multimaximal couplings of the connections of the system \(\mathcal{R}_{n}\). A solution of this task gives a true probability measure if and only if the system is noncontextual.
If a system \(\mathcal{R}_{n}\) is consistently connected, the contextual fraction proposed by Abramsky et al. [24] can be computed solving the following linear programming task:
\[\begin{array}{|ccc|}\hline\text{find}&\text{maximizing}&\text{subject to}\\ \hline\mathbf{z}&\mathbf{1}^{\intercal}\mathbf{z}&\mathbf{M}_{(.)}\mathbf{z}\leq\mathbf{p}_{(.)}^{*}\\ &&\mathbf{z}\geq 0\\ &&\mathbf{1}^{\intercal}\mathbf{z}\leq 1\\ \hline\end{array} \tag{24}\]
For any solution \(\mathbf{z}^{*}\), \(\mathrm{CNTF}(\mathcal{R}_{n})=1-\mathbf{1}^{\intercal}\mathbf{z}^{*}\). The previous task is equivalent to the one proposed in [24], which uses a simpler representation of the system [6]. A solution \(\mathbf{z}^{*}\) generally does not define a probability distribution. It is a defective \(\sigma\)-additive measure with total measure \(0\leq T\leq 1\), which is a true probability measure if and only if the system is noncontextual. Note that both tasks (23) and (24), used to compute \(\mathrm{CNT}_{3}\) and \(\mathrm{CNTF}\), respectively, have in general infinitely many solutions.
Now consider the consistified system \(\mathcal{R}_{n}^{\ddagger}\) of a cyclic system \(\mathcal{R}_{n}\). If \(\mathcal{R}_{n}\) is consistently connected, then equality (25) is satisfied by Th. 7 of Ref. [6]:
\[\mathrm{CNTF}(\mathcal{R}_{n})=\mathrm{CNTF}(\mathcal{R}_{n}^{\ddagger}). \tag{25}\]
Moreover, Th. 7 of Ref. [6] also shows that, regardless of consistent connectedness, the linear programming task to compute \(\mathrm{CNTF}(\mathcal{R}_{n}^{\ddagger})\) is equivalent to the task in expression (24) where \(\mathbf{p}_{(.)}^{*}\) describes system \(\mathcal{R}_{n}^{\ddagger}\). Hence, we will use equality (25) as the definition of the contextual fraction for inconsistently connected systems and compute it using task (24).
## 2 Relating \(\mathrm{CNT}_{3}\) and \(\mathrm{CNTF}\) in cyclic systems
To relate the two measures of degree of contextuality \(\mathrm{CNT}_{3}\) and \(\mathrm{CNTF}\) of a system \(\mathcal{R}_{n}\), we consider the set of its defective quasi-couplings. Let
\[\mathcal{Q}_{n}=\left\{\mathbf{x}\in\mathbb{R}^{2^{2n}}:\mathbf{M}_{(.)}\mathbf{ x}\leq\mathbf{p}_{(.)}^{*}\;\;\text{and}\;\;\mathbf{1}^{\intercal}\mathbf{x}\leq 1\right\}, \tag{26}\]
the convex pyramid obtained by the intersection of the convex polyhedral cone--that is, a space closed under addition and multiplication by non-negative scalars generated by the intersection of a finite number of half-spaces which have \(\mathbf{0}\) on their boundary [25, 26]--defined by the half-spaces \(\mathbf{M}_{(.)}\mathbf{x}\leq\mathbf{p}_{(.)}^{*}\) and the half-space \(\mathbf{1}^{\intercal}\mathbf{x}\leq 1\). Figure 1 schematically illustrates the set \(\mathcal{Q}_{n}\). We see that the intersection of hyperplane \(\mathbf{1}^{\intercal}\mathbf{y}=1\) and \(\mathcal{Q}_{n}\) defines the face of the pyramid on which all solutions to task (23) used to compute \(\mathrm{CNT}_{3}\) lie. Similarly, the intersection of hyperplane \(\mathbf{1}^{\intercal}\mathbf{z}=1-\mathrm{CNTF},\ \mathcal{Q}_{n}\), and the nonnegative orthant of \(\mathbb{R}^{2^{2n}}\), defines a slice on whose surface lie all solutions to task (24) used to compute \(\mathrm{CNTF}\).
**Lemma 2.1**.: _If a cyclic system \(\mathcal{R}_{n}\) is contextual, there exists some solution \(\mathbf{y}^{*}\) of task (23) with a single negative component._
Proof.: Fix \(i\in\{1,\ldots,n\}\), and choose an event \(S=\left\{S_{i}^{i}=1,S_{i}^{i\ominus 1}=0\right\}\) such that a multimaximal coupling of \(\mathcal{R}_{i}\) has, without loss of generality, \(\Pr\left(T_{i}^{i}=1,T_{i}^{i\ominus 1}=0\right)=0\).3 Look at the row \(u\) of \(\mathbf{M}_{(.)}\) corresponding to \(\Pr\left(T_{i}^{i}=1,T_{i}^{i\ominus 1}=0\right)=0\) and let \(V\) be the set of indices \(j\in\{1,\ldots,2^{2n}\}\) such that \(\mathbf{M}_{(.),u,j}=1\). Choose any \(v\in V\), and let \(s_{v}\) be the \(v\)th component of \(S\). Define \(\mathbf{q}_{(.)}^{*}\) component-wise by taking \(\mathbf{q}_{(.),i}^{*}=\mathbf{p}_{(.),i}^{*}+\frac{1}{2}\mathrm{CNT}_{3}\) if the event \(s^{\prime}\) whose probability is the \(i\)th component of \(\mathbf{p}_{(.)}^{*}\) is contained in \(s_{v}\), and \(\mathbf{q}_{(.),i}^{*}=\mathbf{p}_{(.),i}^{*}\), otherwise. Lastly, let
Footnote 3: If for no \(\mathcal{R}_{i}\), \(\Pr\left(T_{i}^{i}=1,T_{i}^{i\ominus 1}=0\right)=0\), replace \(R_{i}^{c}\) in the system with \(1-R_{i}^{c}\) for some \(i\).
\[\mathcal{H}_{v}=\left\{\mathbf{x}\in\mathbb{R}^{2^{2n}}:\mathbf{1}^{\intercal }(\mathbf{x}-\mathbf{e}_{v})=1+\frac{1}{2}\mathrm{CNT}_{3}\ \text{and}\ \mathbf{M}_{(.)}(\mathbf{x}-\mathbf{e}_{v})=\mathbf{q}_{(.)}^{*}\right\}, \tag{27}\]
where \(\mathbf{e}_{v}\) is the unit vector with a \(1\) on its \(v\)th component, and choose a point \(\mathbf{w}^{*}\) with zero \(v\)th component in the intersection of \(\mathcal{H}_{v}\) and the nonnegative orthant of \(\mathbb{R}^{2^{2n}}\). Clearly, the point \(\mathbf{y}^{*}=\mathbf{w}^{*}-\frac{1}{2}\mathrm{CNT}_{3}\mathbf{e}_{v}\) is a solution of task (23) with \(\mathbf{y}_{v}^{*}=-\frac{1}{2}\mathrm{CNT}_{3}\) its sole negative component.
**Lemma 2.2**.: _Let \(\mathcal{R}_{n}\) be a contextual cyclic system. Given a solution \(\mathbf{y}^{*}\) of task (23) as in Lemma 2.1, a solution \(\mathbf{z}^{*}\) of task (24) can be constructed such that \(|\mathbf{y}_{i}^{*}|\geq\mathbf{z}_{i}^{*}\), \(i=1,\ldots,2^{2n}\), and \(||\mathbf{y}^{*}-\mathbf{z}^{*}||_{1}=n\mathrm{\mathit{CNT}}_{3}\)._
Proof.: Choose a solution \(\mathbf{y}^{*}\) in accordance to Lemma 2.1. Let \(\hat{\mathbf{x}}_{1}=\mathbf{y}_{v}^{*}\mathbf{e}_{v}\) where \(v\) is the index of the only negative component of \(\mathbf{y}^{*}\). Using this \(v\), let \(s_{v}\) and \(\mathbf{q}_{(.)}^{*}\) be defined as in Lemma 2.1, and let \(U\) be the set of indices \(u\in\{1,\ldots,12n\}\) such that \(\mathbf{M}_{(.),u,v}=1\). Note that \(|U|=4n\), where
Figure 1: Scheme of the pyramid of defective quasi-couplings \(\mathcal{Q}_{n}\). The intersection of \(\mathcal{Q}_{n}\) and the nonnegative orthant of \(\mathbb{R}^{2^{2n}}\) is illustrated via the blue lines on the two depicted slices cutting through \(\mathcal{Q}_{n}\). Quasi-couplings \(\mathbf{y}^{*}\) lie on the slice \(\mathbf{1}^{\intercal}\mathbf{y}=1\) and defective couplings \(\mathbf{z}^{*}\) that are solutions to task (24) lie within the closed region delimited by blue edges on the slice \(\mathbf{1}^{\intercal}\mathbf{z}=1-\mathrm{CNTF}\).
there are \(n\) indices such that \(\mathbf{p}_{(.),u}\) corresponds to \(\Pr(R_{i}^{i}=r_{i}^{i},R_{i\oplus 1}^{i}=r_{i\oplus 1}^{i})\), one for each of the \(n\) contexts of \(\mathcal{R}_{n}\); another \(n\) correspond to \(\Pr(T_{i}^{i}=r_{i}^{i},T_{i}^{i\ominus 1}=r_{i}^{i\ominus 1})\), one per content; and \(2n\) correspond to one probability \(\Pr(R_{i}^{i}=r_{i}^{i})\) for each random variable in the system.
Let \(\mathbf{M}_{U}\) be the submatrix of \(\mathbf{M}_{(.)}\) whose rows are indexed by \(U\), and \(\mathbf{M}_{U^{\prime}}\) the matrix with the remaining rows of \(\mathbf{M}_{(.)}\). (Note that matrix \(\mathbf{M}_{U}\) is a reduction of matrix \(\mathbf{M}_{(.)}\) in the same manner as \(\mathbf{M}\), with the event \(s_{v}\) taking the place of the event \(\left\{S_{i}^{i}=1,S_{i\oplus 1}^{i}=1\right\}_{i=1,\ldots,n}\) for its construction, see Ref. [9].) Define \(\mathbf{p}_{U}^{*}\) and \(\mathbf{p}_{U^{\prime}}^{*}\) analogously. We can then rewrite \(\mathcal{Q}_{n}\) as the intersection of
\[\mathcal{Q}_{U}=\left\{\mathbf{x}\in\mathbb{R}^{2^{2n}}:\mathbf{M}_{U}\mathbf{ x}\leq\mathbf{p}_{U}^{*}\;\;\text{and}\;\;\mathbf{1}^{\intercal}\mathbf{x}\leq 1 \right\}, \tag{28}\]
and
\[\mathcal{Q}_{U^{\prime}}=\left\{\mathbf{x}\in\mathbb{R}^{2^{2n}}:\mathbf{M}_{U ^{\prime}}\mathbf{x}\leq\mathbf{p}_{U^{\prime}}^{*}\;\;\text{and}\;\;\mathbf{ 1}^{\intercal}\mathbf{x}\leq 1\right\}. \tag{29}\]
From the definition of \(\mathbf{M}\) (see Ref.[9]), we have that the dimension of \(\mathcal{Q}_{n}\) is \(4n+1\). Similarly, the dimension of \(\mathcal{Q}_{U}\) is \(4n+1\) because it is constructed by a minimal subset of defining inequalities of \(\mathcal{Q}_{n}\).
Define \(\mathbf{w}^{*}=\mathbf{y}^{*}-\hat{\mathbf{x}}_{1}\). Since \(\mathbf{M}_{(.)}\mathbf{w}^{*}=\mathbf{q}_{(.)}^{*}\), \(\mathbf{w}^{*}\notin\mathcal{Q}_{n}\). Clearly, \(\mathbf{w}^{*}\notin\mathcal{Q}_{U}\) and \(\mathbf{w}^{*}\in\mathcal{Q}_{U^{\prime}}\). Let us next consider the task
\begin{tabular}{|c c c|} \hline find & minimizing & subject to \\ \hline \(\mathbf{x}\) & \(\mathbf{1}^{\intercal}\mathbf{x}\) & \(\mathbf{M}_{U}(\mathbf{w}^{*}-\mathbf{x})\leq\mathbf{p}_{U}^{*}\) \\ & & \(\mathbf{M}_{U^{\prime}}(\mathbf{w}^{*}-\mathbf{x})\leq\mathbf{p}_{U^{\prime}}^{*}\) \\ & & \(\mathbf{x}\geq 0\) \\ & & \(\mathbf{1}^{\intercal}(\mathbf{w}^{*}-\mathbf{x})\leq 1\) \\ & & \(\mathbf{e}_{v}^{\intercal}\mathbf{x}=0\) \\ \hline \end{tabular} (30)
This task must have a solution, since \(\mathbf{x}^{*}=\mathbf{w}^{*}\) satisfies all its constraints. Additionally, it is evident that the second set of restrictions (those associated with \(\mathbf{p}_{U^{\prime}}^{*}\)) places no restriction on finding the solution because, by construction, \(\mathbf{M}_{U^{\prime}}\mathbf{w}^{*}=\mathbf{p}_{U^{\prime}}^{*}\); hence, any vector \(\mathbf{x}\geq 0\) satisfies that set of inequalities. Further examination of the constraints shows immediately that for any solution \(\mathbf{x}^{*}\), \(\mathbf{1}^{\intercal}\mathbf{x}^{*}\geq-2\mathbf{y}_{v}^{*}\). Similarly, inspecting the constraints associated with \(\mathbf{p}_{U}^{*}\) reveals that whenever a vector \(\mathbf{x}^{\prime}\) satisfies \(\mathbf{M}_{U,u}^{\intercal}(\mathbf{w}^{*}-\mathbf{x}^{\prime})\leq\mathbf{p}_{Uu}^{*}\), where \(\mathbf{p}_{Uu}^{*}\) is a probability \(\Pr(R_{i}^{i}=r_{i}^{i},R_{i\oplus 1}^{i}=r_{i\oplus 1}^{i})\), then \(\mathbf{M}_{U,t}^{\intercal}(\mathbf{w}^{*}-\mathbf{x}^{\prime})\leq\mathbf{p}_{Ut}^{*}\) is also satisfied, where \(\mathbf{p}_{Ut}^{*}\) is a probability \(\Pr(R_{i}^{i}=r_{i}^{i})\) or \(\Pr(R_{i\oplus 1}^{i}=r_{i\oplus 1}^{i})\) for the same \(i\). An analogous observation can be made when \(\mathbf{p}_{Uu}^{*}\) is a probability \(\Pr(T_{i}^{i}=r_{i}^{i},T_{i}^{i\ominus 1}=r_{i}^{i\ominus 1})\). Therefore, at most \(2n\) of the constraints imposed via matrix \(\mathbf{M}_{U}\) are active in determining the solution space of task (30).
Let \(\mathbf{M}_{w}\) and \(\mathbf{p}_{w}^{*}\) contain the rows and probabilities of \(\mathbf{M}_{U}\) and \(\mathbf{p}_{U}^{*}\), respectively, corresponding to bunch and connection probabilities. Since the rows of \(\mathbf{M}_{U}\) are linearly independent, so are the rows of \(\mathbf{M}_{w}\), and the latter has full row rank \(2n\). Given the considerations in the above paragraph, task (30) is equivalent to task
\begin{tabular}{|c c c|} \hline find & minimizing & subject to \\ \hline \(\mathbf{x}\) & \(\mathbf{1}^{\intercal}\mathbf{x}\) & \(\mathbf{M}_{w}(\mathbf{w}^{*}-\mathbf{x})\leq\mathbf{p}_{w}^{*}\) \\ & & \(\mathbf{x}\geq 0\) \\ & & \(\mathbf{e}_{v}^{\intercal}\mathbf{x}=0\) \\ \hline \end{tabular} (31)
Now, the constraint \(\mathbf{e}_{v}^{\intercal}\mathbf{x}=0\) can be replaced by a modification of column \(v\) of matrix \(\mathbf{M}_{w}\) in which the column is replaced by a vector of zeros. This effectively reduces its rank to \(2n-1\). We further note that, in standard form, the constraints for task (31) are \(\mathbf{M}_{w}\mathbf{x}\geq\mathbf{M}_{w}\mathbf{w}^{*}-\mathbf{p}_{w}^{*}\), and the deficiency in rank just introduced implies that there is some row of \(\mathbf{M}_{w}\) that may be safely removed for purposes of finding a solution \(\mathbf{x}^{*}\). Since (assuming the modified matrix) \(\mathbf{M}_{w}\mathbf{x}\geq\mathbf{M}_{w}\mathbf{w}^{*}-\mathbf{p}_{w}^{*}\) is an underdetermined system with \(2n-1\) inequalities, there exists a solution \(\mathbf{x}^{*}\) such that all components of \(\mathbf{x}^{*}\) but \(2n-1\) are zero. Therefore, we see that for a solution \(\mathbf{x}^{*}\), \(\mathbf{M}_{w}\mathbf{x}^{*}=\mathbf{M}_{w}\mathbf{w}^{*}-\mathbf{p}_{w}^{*}\), and that \(\mathbf{1}^{\intercal}\mathbf{x}^{*}=\mathbf{1}^{\intercal}(\mathbf{M}_{w} \mathbf{w}^{*}-\mathbf{p}_{w}^{*})=-(2n-1)\mathbf{y}_{v}^{*}\). The statement is obtained by noting that task (30) is equivalent to maximizing \(\mathbf{1}^{\intercal}(\mathbf{w}^{*}-\mathbf{x})\) under the same constraints, which is essentially task (24). In other words, \(\mathbf{z}^{*}\equiv\mathbf{y}^{*}-\hat{\mathbf{x}}_{1}-\mathbf{x}^{*}\) is an optimal solution to task (24).
**Lemma 2.3**.: \(\left\|\mathbf{y}^{*}-\mathbf{z}^{*}\right\|_{1}=\text{{CNTF}}+\text{{CNT}}_{3}\)__
Proof.: Choose solutions \(\mathbf{y}^{*}\) and \(\mathbf{z}^{*}\) in accordance to Lemmas 2.1 and 2.2. Then
\[\left\|\mathbf{y}^{*}\right\|_{1} =\left\|\mathbf{y}^{*}-\mathbf{z}^{*}+\mathbf{z}^{*}\right\|_{1}\] \[=\left\|\mathbf{y}^{*}-\mathbf{z}^{*}\right\|_{1}+\left\|\mathbf{ z}^{*}\right\|_{1}\] \[=\left\|\mathbf{y}^{*}-\mathbf{z}^{*}\right\|_{1}+1-\text{CNTF}\]
where the second line follows by the choice of \(\mathbf{y}^{*}\) and \(\mathbf{z}^{*}\). The statement follows immediately by noting that \(\left\|\mathbf{y}^{*}\right\|_{1}=1+\text{CNT}_{3}\).
**Theorem 2.1**.: _If \(\mathcal{R}_{n}\) is a cyclic system of rank \(n\), then \(\text{CNTF}(\mathcal{R}_{n})=(n-1)\text{CNT}_{3}(\mathcal{R}_{n})\)_
Proof.: The relation in the statement is trivially true for any noncontextual system; hence, assume that \(\mathcal{R}_{n}\) is a contextual cyclic system of rank \(n\). Choose solutions \(\mathbf{y}^{*}\) and \(\mathbf{z}^{*}\) in accordance to Lemmas 2.1 and 2.2. By Lemma 2.2,
\[\left\|\mathbf{y}^{*}-\mathbf{z}^{*}\right\|_{1}=n\text{CNT}_{3}.\]
And from Lemma 2.3, it follows that
\[\text{CNTF}=(n-1)\text{CNT}_{3}. \tag{32}\]
**Corollary 2.1**.: \(\text{CNT}_{3}\) _is not invariant to consistification. If \(\mathcal{R}_{n}\) is a contextual cyclic system and \(\mathcal{R}_{n}^{\ddagger}\) is its consistification, then_
\[\text{CNT}_{3}(\mathcal{R}_{n}^{\ddagger})=\frac{n-1}{2n-1}\text{CNT}_{3}(\mathcal{R}_{n}).\]
Proof.: \((n-1)\text{CNT}_{3}(\mathcal{R}_{n})=\text{CNTF}(\mathcal{R}_{n})=\text{CNTF}(\mathcal{R}_{n}^{\ddagger})=(2n-1)\text{CNT}_{3}(\mathcal{R}_{n}^{\ddagger}).\)
**Example 2.1** (Consistently connected system).: _Consider a cyclic system \(\mathcal{R}_{3}\) with bunch joint distributions_
\[\begin{array}{c|c|c}&R_{i}^{i}=0&R_{i}^{i}=1\\ \hline R_{i\oplus 1}^{i}=0&\nicefrac{{1}}{{8}}&\nicefrac{{3}}{{8}}\\ \hline R_{i\oplus 1}^{i}=1&\nicefrac{{3}}{{8}}&\nicefrac{{1}}{{8}}\\ \end{array}. \tag{33}\]
_The system \(\mathcal{R}_{3}\) is consistently connected and can be represented by the vector:_
\[\mathbf{p}^{*\intercal}=(\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}},\nicefrac{{ 1}}{{2}},\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}}, \nicefrac{{1}}{{2}},\nicefrac{{1}}{{8}},\nicefrac{{1}}{{8}},\nicefrac{{1}}{{8} },\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}}).\]
_Let \(\{\mathbf{e}_{j}\}_{j=1,\ldots,64}\) be the standard basis of \(\mathbb{R}^{64}\). We can then write a solution to task (23) with a single negative mass (as in Lemma 2.1):_
\[\mathbf{y}^{*}=\frac{1}{16}\left(3\mathbf{e}_{7}-\mathbf{e}_{14}+2\mathbf{e}_{ 25}+\mathbf{e}_{26}+3\mathbf{e}_{31}+2\mathbf{e}_{34}+\mathbf{e}_{38}+2 \mathbf{e}_{40}+\mathbf{e}_{42}+2\mathbf{e}_{58}\right).\]
_To highlight the dimension of the solution space \(\mathcal{Q}_{U}\), this solution can be further re-expressed as a linear combination of the following \(L_{1}\)-orthonormal vectors \(\{\hat{\mathbf{x}}_{j}\}_{j=1,\ldots,6}\):_
\[\hat{\mathbf{x}}_{1} =-\mathbf{e}_{14}, \hat{\mathbf{x}}_{4} =(\mathbf{e}_{26}+\mathbf{e}_{38}+\mathbf{e}_{42})/3,\] \[\hat{\mathbf{x}}_{2} =\mathbf{e}_{7}, \hat{\mathbf{x}}_{5} =\mathbf{e}_{31},\] \[\hat{\mathbf{x}}_{3} =(\mathbf{e}_{25}+\mathbf{e}_{40})/2, \hat{\mathbf{x}}_{6} =(\mathbf{e}_{34}+\mathbf{e}_{58})/2.\]
_In terms of these vectors, we have_
\[\mathbf{y}^{*}=\frac{1}{16}\left(\hat{\mathbf{x}}_{1}+3\hat{\mathbf{x}}_{2}+4 \hat{\mathbf{x}}_{3}+3\hat{\mathbf{x}}_{4}+3\hat{\mathbf{x}}_{5}+4\hat{ \mathbf{x}}_{6}\right).\]
_Now, we can use the construction in Lemma 2.2 to find the point_
\[\mathbf{z}^{*}=\frac{1}{16}\left(0\hat{\mathbf{x}}_{1}+2\hat{\mathbf{x}}_{2}+3 \hat{\mathbf{x}}_{3}+2\hat{\mathbf{x}}_{4}+2\hat{\mathbf{x}}_{5}+3\hat{ \mathbf{x}}_{6}\right).\]
_which is a solution to task (24) to compute CNTF. For this system \(\text{CNT}_{3}=\nicefrac{{1}}{{8}}\) and_
\[\text{CNTF}=\frac{1}{4}=(3-1)\text{CNT}_{3}.\]
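These numbers can be checked numerically. The sketch below is ours, not part of the paper; it assumes NumPy and SciPy are available. It enumerates the \(2^{2n}=64\) deterministic assignments of the rank-3 system above, builds the bunch and connection rows of the incidence matrix, and solves tasks (24) and (23) with a generic linear-programming solver. The marginal (\(\mathbf{l}^{*}\)) rows are omitted because they are implied by the joint bunch rows and therefore do not change either feasible set.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

n = 3
# One coupling variable per (context i, content j) pair, j in {i, i+1 mod n}.
var_index = {}
for i in range(n):
    var_index[(i, i)] = len(var_index)
    var_index[(i, (i + 1) % n)] = len(var_index)
assignments = list(itertools.product([0, 1], repeat=2 * n))  # 2^(2n) columns

# Bunch probabilities of Example 2.1: Pr(R_i^i = a, R_{i+1}^i = b), same for every context.
bunch = {(0, 0): 1 / 8, (1, 0): 3 / 8, (0, 1): 3 / 8, (1, 1): 1 / 8}
# Multimaximal coupling of every connection (all marginals equal 1/2).
conn = {(0, 0): 1 / 2, (1, 1): 1 / 2, (0, 1): 0.0, (1, 0): 0.0}

rows, p = [], []
for i in range(n):                                       # bunch rows
    u, v = var_index[(i, i)], var_index[(i, (i + 1) % n)]
    for a, b in itertools.product([0, 1], repeat=2):
        rows.append([float((s[u], s[v]) == (a, b)) for s in assignments])
        p.append(bunch[(a, b)])
for j in range(n):                                       # connection rows
    u, v = var_index[(j, j)], var_index[((j - 1) % n, j)]
    for a, b in itertools.product([0, 1], repeat=2):
        rows.append([float((s[u], s[v]) == (a, b)) for s in assignments])
        p.append(conn[(a, b)])
M, p = np.array(rows), np.array(p)
N = M.shape[1]

# Task (24): maximize 1'z subject to M z <= p, 1'z <= 1, z >= 0.
r24 = linprog(-np.ones(N), A_ub=np.vstack([M, np.ones((1, N))]),
              b_ub=np.append(p, 1.0), bounds=(0, None), method="highs")
cntf = 1.0 + r24.fun                                     # r24.fun = -(max 1'z)

# Task (23): split y = y_plus - y_minus and minimize 1'y_minus subject to
# M y = p (the constraint 1'y = 1 already follows from the bunch equalities).
r23 = linprog(np.concatenate([np.zeros(N), np.ones(N)]),
              A_eq=np.hstack([M, -M]), b_eq=p, bounds=(0, None), method="highs")
cnt3 = 2 * r23.fun                                       # ||y*||_1 - 1 = 2 * minimal negative mass

print(f"CNTF = {cntf:.4f}, CNT3 = {cnt3:.4f}, (n-1)*CNT3 = {(n - 1) * cnt3:.4f}")
```

Running this should reproduce the values stated in Example 2.1, \(\text{CNTF}=\nicefrac{{1}}{{4}}\) and \(\text{CNT}_{3}=\nicefrac{{1}}{{8}}\). Example 2.2 can be checked the same way by replacing the third bunch and the coupling of the one connection whose marginal changes.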
**Example 2.2** (Inconsistently connected system).: _Consider the system \(\mathcal{R}^{\prime}_{3}\) in which the distribution of the third bunch of system \(\mathcal{R}_{3}\) from example 2.1 is replaced by_
\[\begin{array}{c|c|c}&R_{3}^{3}=0&R_{3}^{3}=1\\ \hline R_{1}^{3}=0&\nicefrac{{1}}{{8}}&\nicefrac{{7}}{{16}}\\ \hline R_{1}^{3}=1&\nicefrac{{3}}{{8}}&\nicefrac{{1}}{{16}}\end{array}. \tag{34}\]
_The system \(\mathcal{R}^{\prime}_{3}\) is inconsistently connected and can be represented by the vector:_
\[\mathbf{p}^{*\intercal}=(\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}},\nicefrac{{ 1}}{{2}},\nicefrac{{1}}{{2}},\nicefrac{{7}}{{16}},\nicefrac{{1}}{{2}}, \nicefrac{{1}}{{8}},\nicefrac{{1}}{{8}},\nicefrac{{1}}{{16}},\nicefrac{{1}}{ {2}},\nicefrac{{1}}{{2}},\nicefrac{{7}}{{16}}).\]
_One possible solution \(\mathbf{y}^{*}\) of task (23) for system \(\mathcal{R}^{\prime}_{3}\) can be written as a linear combination of the following \(L_{1}\)-orthonormal vectors \(\{\hat{\mathbf{x}}_{j}\}_{j=1,\ldots,6}\):_
\[\hat{\mathbf{x}}_{1} =-\mathbf{e}_{49}, \hat{\mathbf{x}}_{4} =(\mathbf{e}_{23}+\mathbf{e}_{39})/2,\] \[\hat{\mathbf{x}}_{2} =(\mathbf{e}_{7}+\mathbf{e}_{31}+\mathbf{e}_{40})/3, \hat{\mathbf{x}}_{5} =\mathbf{e}_{25},\] \[\hat{\mathbf{x}}_{3} =\mathbf{e}_{27}, \hat{\mathbf{x}}_{6} =(\mathbf{e}_{34}+\mathbf{e}_{58})/2.\]
_with_
\[\mathbf{y}^{*}=\frac{1}{16}\left(\hat{\mathbf{x}}_{1}+6\hat{\mathbf{x}}_{2}+ \hat{\mathbf{x}}_{3}+2\hat{\mathbf{x}}_{4}+2\hat{\mathbf{x}}_{5}+6\hat{ \mathbf{x}}_{6}\right).\]
_Similarly to the previous example, use the construction in Lemma 2.2 to find the point_
\[\mathbf{z}^{*}=\frac{1}{16}\left(0\hat{\mathbf{x}}_{1}+6\hat{\mathbf{x}}_{2}+ \hat{\mathbf{x}}_{3}+0\hat{\mathbf{x}}_{4}+\hat{\mathbf{x}}_{5}+4\hat{ \mathbf{x}}_{6}\right).\]
_which is a solution to task (24) to compute CNTF. For this system CNT\({}_{3}=\nicefrac{{1}}{{8}}\) and_
\[\text{CNTF}=\frac{1}{4}=(3-1)\text{CNT}_{3}.\]
## 3 Discussion
The result presented in this paper proves that the conjecture in [4] is indeed true. Moreover, we can now affirm that all the fundamentally different approaches to quantify contextuality currently found in the literature are proportional to each other within the class of cyclic systems. The proportionality relations among the measures are:
\[2\text{CNT}_{0}=2\text{CNT}_{1}=2\text{CNT}_{2}=\text{CNTF}=(n-1)\text{CNT }_{3}. \tag{35}\]
The equality of the first three measures was shown in Ref. [10], the third equality was proved in Ref. [5], and the last equality, in this paper. It should also be noticed that the hierarchical measure of contextuality proposed in Ref. [27] reduces to CNT\({}_{2}\) for cyclic systems; therefore, it also satisfies the proportionality to the other measures. This result therefore completes the contextuality theory of cyclic systems and its measures.
However, as noted in Refs.[10, 28, 5], the relations among these measures are not as simple as in other types of systems of random variables. In Ref. [10], one can find examples of non-cyclic systems for which CNT\({}_{1}\) and CNT\({}_{2}\) are not functions of each other. A class of examples is considered in Ref. [5, 29] to show the same lack of functional relation between CNT\({}_{2}\) and CNTF outside of cyclic systems. Lastly, in Ref. [28], some examples of hypercyclic systems of order higher than 2--cyclic systems are a special case of this class where order equals 2--that show that in general there is no functional relation among any of the measures here considered.
Acknowledgments. The authors would like to thank Ehtibar N. Dzhafarov, Alisson Tezzin and Barbara Amaral for helpful discussions. GC worked under the financial support of the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) - Programa de Excelência Acadêmica (PROEX) - Brasil and the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) - Brasil.
|
2304.02628
|
What Affects Learned Equivariance in Deep Image Recognition Models?
|
Equivariance w.r.t. geometric transformations in neural networks improves
data efficiency, parameter efficiency and robustness to out-of-domain
perspective shifts. When equivariance is not designed into a neural network,
the network can still learn equivariant functions from the data. We quantify
this learned equivariance, by proposing an improved measure for equivariance.
We find evidence for a correlation between learned translation equivariance and
validation accuracy on ImageNet. We therefore investigate what can increase the
learned equivariance in neural networks, and find that data augmentation,
reduced model capacity and inductive bias in the form of convolutions induce
higher learned equivariance in neural networks.
|
Robert-Jan Bruintjes, Tomasz Motyka, Jan van Gemert
|
2023-04-05T17:54:25Z
|
http://arxiv.org/abs/2304.02628v2
|
# What Affects Learned Equivariance in Deep Image Recognition Models?
###### Abstract
Equivariance w.r.t. geometric transformations in neural networks improves data efficiency, parameter efficiency and robustness to out-of-domain perspective shifts. When equivariance is not designed into a neural network, the network can still learn equivariant functions from the data. We quantify this learned equivariance, by proposing an improved measure for equivariance. We find evidence for a correlation between learned translation equivariance and validation accuracy on ImageNet. We therefore investigate what can increase the learned equivariance in neural networks, and find that data augmentation, reduced model capacity and inductive bias in the form of convolutions induce higher learned equivariance in neural networks.
## 1 Introduction
Equivariance in neural network features allows invariance to geometric transformations [6, 23], making such networks more data efficient [38, 40, 25], parameter efficient [6] and robust to out-of-distribution transformations [13, 1, 33].
Equivariance with respect to specific geometric transformations can be designed into the neural network architecture [5, 37, 6]. However, even with careful design, it may happen that the resulting architecture is not as equivariant as intended [13, 17, 19, 45]. An example is the convolution operator in Convolutional Neural Networks (CNNs) for translation equivariance, which can be broken by border effects [17, 19] or pooling [45]. On the other hand, even if neural networks are not designed to be equivariant, they can still _learn_ equivariance naturally. Existing works demonstrate qualitative examples of learned equivariant features [2, 10, 29]. However, how much equivariance is learned, and which factors affect equivariance, are open questions.
In this work, we quantify learned equivariance in image recognition neural networks that have and have not been explicitly designed for equivariance. Where existing works [3, 15, 27, 46, 18] typically only measure equivariance at the output of the network, we measure equivariance for all intermediate layers. To do so, we deviate from existing measures of learned equivariance which are inconsistent across network depths, and we design a consistent measure.
Using our measure for learned equivariance, we find evidence that learned translation equivariance in intermediate features of neural networks correlates with increased validation accuracy on ImageNet. We therefore investigate how we can increase learned equivariance by changing how we train neural networks. In particular, we find that 1) making the task equivariant does not increase learned equivariance; 2) data augmentations designed for invariance indeed increase learned equivariance, even in early and middle layers; 3) reducing model capacity increases learned equivariance, suggesting that equivariant features arise from a need to compress representations; 4) CNNs learn more translation and rotation equivariance in intermediate features than
Figure 1: Neural networks can learn features that are invariant or equivariant w.r.t. a geometric transformation of the data, such as rotation. We measure learned equivariance w.r.t. translation and rotation in neural networks.
the Vision Transformers (ViTs).
We make the following contributions:
* We propose a new measure for learned equivariance that allows comparing learned equivariance of features at different depths of the network.
* We show evidence for a positive correlation between learned translation equivariance in intermediate features and validation accuracy on ImageNet.
* We test how several aspects of neural network training affect learned equivariance. In summary, we find that data augmentation, reduced model capacity and the inductive biases of CNNs positively affect learned equivariance.
## 2 Related Works
Neural networks can learn equivariant features from data [24, 28, 29]. Particularly inspiring is the work by Olah _et al_. [29], which demonstrates by precise and meticulous manual investigations that learned equivariant features exist in networks that were not designed to be equivariant. Inspired by this work, we here investigate how to move beyond laborious manual qualitative investigations, and instead offer a quantitative approach, by giving an automatic measure for learned equivariance.
A number of existing works measure equivariance in neural networks. [15] study models from the pre-Deep Learning era which have since been superseded by the models we study. More recent works measure equivariance in Convolutional Neural Networks, with KL divergence on class probabilities [46], with Euclidean distance [18], or cosine similarity [3, 27] on feature maps. In our work, we show that the cosine similarity is not appropriate for measuring equivariance in intermediate feature maps, and offer a correlation-based measure.
Several works study how neural network hyperparameters and datasets affect learned translation equivariance in the final output of the network. The kernel and padding sizes of the architecture affect translation invariance [27], although data augmentation might have a bigger effect on translation invariance than the network architecture [18]. Similar conclusions about the importance of the data were found by others [3, 4]. Here, we follow these investigations, and extend them by analyzing the impact on the intermediate layers.
There are some works that study equivariant properties of intermediate features. Recently, [26] proposed a method to detect invariance to any learned Lie group for intermediate features. However, they do not study equivariance, like we do. Other works study only the transformation group of translations (\(\mathbb{Z}^{2}\)). [45] measures the translation equivariance by computing cosine similarity between feature maps to show how max pooling violates the translation equivariance property. [17, 19] show that some padding methods disrupt the translation equivariance property in CNNs. [32] measure the invariance of intermediate representations using normalized cosine similarity to study the effect of pooling on deformation stability. Where these works diagnose issues with designed equivariance and test for their effects, we consider learned equivariance in a more general sense, including transformation groups not designed into the network, such as rotations.
## 3 Method
Neural networks can learn to be equivariant in two ways: either by learning invariant features or by learning equivariant groups of features, as shown in Fig 1. In this section we detail how we can measure the quantity of invariant features and equivariant groups of features. We discuss which similarity measure is appropriate for measurements of learned equivariance in features at different depths of a neural network. Finally, we verify our measures using artificially engineered equivariant CNNs.
In the following we will refer to invariant features and equivariant groups of features under the single predicate "learned equivariance", as invariance is a special case of equivariance.
### Invariant features
We derive a measure of learned equivariance from inspecting the definition of equivariance [6] applied to a single neural network layer:
\[f(T_{g}(X))=T_{g}^{\prime}(f(X)), \tag{1}\]
where \(X\in\mathbb{R}^{C_{\text{in}}\times H\times W}\) is an image or a feature map, \(f(X)\in\mathbb{R}^{C\times H\times W}\) is the output of a neural network operation with \(C\) output features and \(T_{g}\) is the application of a transformation \(g\) from a transformation group \(G\). For example, if \(G=\mathbb{Z}^{2}\), then \(g\) is a translation with a particular integer-valued \((x,y)\) offset. If \(T_{g}^{\prime}\) is the identity function for all \(g\), the layer \(f\) is _invariant_ w.r.t. transformation \(T\):
\[f(T(X))=f(X). \tag{2}\]
Without designing invariance to \(T\) into neural network layer \(f\), each individual feature in \(f(X)\) can learn to behave invariant or not invariant with respect to \(T\). We therefore define invariance for each feature \(c\in C\) independently:
\[f(T(X))_{c}=f(X)_{c}. \tag{3}\]
In Fig. 1 we show an example where \(T\) is a \(90^{\circ}\) rotation.
To measure a feature's invariance w.r.t. \(g\), we compute the similarity between \(f(T(X))_{c}\) and \(f(X)_{c}\):
\[\text{Invariance}(f_{c},g)=S(f(T(X))_{c},f(X)_{c}), \tag{4}\]
given a similarity function \(S:\mathbb{R}^{H\times W}\times\mathbb{R}^{H\times W}\rightarrow[0,1]\). Given invariance measures for each feature, we can average these measures for all features in a layer to compute a layer's invariance.
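As an illustration, Eq. (4) and the layer average can be written in a few lines. This is a sketch with our own naming (PyTorch assumed), not code released with the paper:

```python
import torch

def feature_invariance(feat, feat_t, similarity):
    """Eq. (4): per-feature invariance scores.
    feat   : f(X),    tensor of shape (C, H, W)
    feat_t : f(T(X)), tensor of shape (C, H, W)
    similarity : maps two (H, W) tensors to a scalar tensor."""
    return torch.stack([similarity(feat[c], feat_t[c]) for c in range(feat.shape[0])])

def layer_invariance(feat, feat_t, similarity):
    """Layer score: the per-feature scores of Eq. (4) averaged over all channels."""
    return feature_invariance(feat, feat_t, similarity).mean()
```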
### Equivariant features
A group of features \(C_{G}\subseteq C\) in a neural network layer is equivariant with respect to a transformation group \(G\) if each feature \(c^{\prime}\in C_{G}\) activates for a different transformation \(g\) from the group. In other words, for a sample \(T_{g}(X)\) transformed with any transformation from the group, the group of feature maps \(f(X)_{c^{\prime}\in C_{G}}\) will have one feature \(c^{\prime}\) whose transformed feature map \(T_{g}(f(X))_{c^{\prime}}\) matches \(f(T_{g}(X))_{c^{\prime}}\):
\[f(T_{g}(X))_{c^{\prime}}=T_{g}(f(X))_{c^{\prime}},\quad\exists c^{\prime}\in C_{G}. \tag{5}\]
In Fig. 1 we show an example where \(g\) is a \(90^{\circ}\) rotation.
To fit this with the definition of equivariance (Eq. 1) we define \(T_{g}^{\prime}\):
\[f(T_{g}(X))_{c^{\prime}\in C_{G}}=T_{g}^{\prime}(f(X))_{c^{\prime}\in C_{G}} \tag{6}\]
where \(T_{g}^{\prime}\) transforms with \(T_{g}\) and selects feature \(c^{\prime}\) that matches the transformation \(g\). When this equation holds, the feature group \(C_{G}\) is equivariant w.r.t. \(T\).
To measure the equivariance of a feature \(c\) we find the maximum similarity between \(f(T_{g}(x))_{c}\) and \(T_{g}(f(x))_{c^{\prime}}\) over all features \(c^{\prime}\in C\), for a given transformation \(g\):
\[\text{Equivariance}(f_{c},g)=\max_{c^{\prime}\in C}S(f(T_{g}(X))_ {c},T_{g}(f(X))_{c^{\prime}}) \tag{7}\]
given a similarity function \(S:\mathbb{R}^{H\times W}\times\mathbb{R}^{H\times W}\rightarrow[0,1]\). Given equivariance measures for each feature, we can average these measures for all features in a layer to compute a layer's equivariance. Note that if a feature is invariant w.r.t. \(T_{g}\), we will measure an equivariance score that is at least as high as the invariance score. As invariance is a special case of equivariance, this behavior of our measure is intended.
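A corresponding sketch for Eq. (7), again with assumed names; `similarity` is expected to return a scalar tensor:

```python
import torch

def feature_equivariance(feat_of_tx, t_of_feat, similarity):
    """Eq. (7): per-feature equivariance scores.
    feat_of_tx : f(T_g(X)), tensor of shape (C, H, W)
    t_of_feat  : T_g(f(X)), tensor of shape (C, H, W)
    For each channel c, take the maximum similarity over all channels c'."""
    C = feat_of_tx.shape[0]
    scores = []
    for c in range(C):
        best = torch.stack([similarity(feat_of_tx[c], t_of_feat[cp]) for cp in range(C)]).max()
        scores.append(best)
    return torch.stack(scores)

def layer_equivariance(feat_of_tx, t_of_feat, similarity):
    """Layer score: per-feature equivariance averaged over all channels."""
    return feature_equivariance(feat_of_tx, t_of_feat, similarity).mean()
```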
### Measuring similarity
We need to choose a similarity measure \(S:\mathbb{R}^{H\times W}\times\mathbb{R}^{H\times W}\rightarrow[0,1]\) with which to compare feature maps when measuring invariance and equivariance. In existing works, cosine similarity is commonly used to compute the invariance or equivariance of a network's representations [4, 27, 45]. However, cosine similarity is sensitive to the mean values of its input vectors. This behaviour is depicted in Figure 2(a). As different layers in a neural network have different mean activation values (see Fig. 2(b)), this biases the similarity measure.
We propose to use Pearson correlation [14] instead. Pearson correlation, also called centered cosine similarity, is a similarity measure that does not suffer from sensitivity to the mean of the inputs, as it computes the covariance of the inputs normalized by their standard deviations. It is the basis of many methods for comparing network representations [20, 22, 31].
To motivate our choice, we visualize the difference between using cosine similarity and correlation for measuring equivariance in the following example. We train a ResNet-44 [16] model on the CIFAR-10 [21] dataset and compute invariance w.r.t. \(90^{\circ}\) rotation after each residual block. In Figure 2 we show the qualitative comparison between the scores, computed using cosine similarity and correlation, and the mean of the activations. Additionally, we compute a correlation between the magnitude of the activations and equivariance scores computed with cosine similarity (**0.63**) and correlation (**0.11**). Scores computed using cosine similarity correlate visibly with the mean of the activations while, for the scores computed using correlation, this effect is less prevalent. In our experiments, we therefore use correlation as a measure to quantify equivariance.
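Both candidate similarity functions act on flattened \(H\times W\) maps; the Pearson correlation is simply the cosine similarity of mean-centred maps. The following is a minimal sketch of our own (both functions range over \([-1,1]\); in practice the scores can be clipped or rescaled to \([0,1]\) to match the definition of \(S\)):

```python
import torch

def cosine_similarity(a, b, eps=1e-8):
    """Cosine similarity of two feature maps; sensitive to their mean values."""
    a, b = a.flatten(), b.flatten()
    return torch.dot(a, b) / (a.norm() * b.norm() + eps)

def pearson_correlation(a, b, eps=1e-8):
    """Centred cosine similarity: subtract the mean of each map first."""
    a, b = a.flatten() - a.flatten().mean(), b.flatten() - b.flatten().mean()
    return torch.dot(a, b) / (a.norm() * b.norm() + eps)

# A map with a large constant offset looks similar to almost anything under
# cosine similarity, but not under correlation.
x = torch.randn(8, 8)
y = torch.randn(8, 8) + 5.0
print(cosine_similarity(x + 5.0, y), pearson_correlation(x + 5.0, y))
```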
## 4 Experiments
### Controlled experiments
To verify that our method captures equivariance, we apply it to two controlled toy settings. We create two 3-layer CNNs with hand-crafted filters such that we expect to measure perfect learned rotation invariance and equivariance respectively. For the invariant model, we set all the filters to be rotationally symmetric, using a 2D isotropic Gaussian function, and measure the invariance after each layer (Fig. 2(a)). For the equivariant model, we cut out corners of the filters from the invariant model such that all the filters are rotations of one another (Fig. 2(b)). Our measure finds both models capture exactly the intended learned equivariances, demonstrating the validity of our measure.
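The two toy models can be rebuilt from hand-crafted filters along the following lines (a sketch under assumed filter sizes, not the exact filters used in the paper):

```python
import torch

def isotropic_gaussian(size=5, sigma=1.0):
    """A rotationally symmetric 2D Gaussian; a convolution whose filters are all
    of this form produces channel-wise rotation-invariant responses (the
    'invariant' toy model)."""
    c = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    yy, xx = torch.meshgrid(c, c, indexing="ij")
    k = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def rotated_group(size=5, sigma=1.0):
    """Cut one corner out of the Gaussian and collect its four 90-degree
    rotations: a group of filters that is rotation-equivariant by construction
    (the 'equivariant' toy model)."""
    base = isotropic_gaussian(size, sigma).clone()
    base[: size // 2, : size // 2] = 0.0  # remove one corner
    return torch.stack([torch.rot90(base, r, dims=(0, 1)) for r in range(4)])
```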
### Does learned equivariance improve accuracy?
We study the relationship between the validation accuracy and the amount of learned equivariance in large-scale seminal models. For each part of each trained model we compute Spearman's rank correlation between the amount of invariance or equivariance and the ImageNet validation accuracy of the model.
We test four CNNs (EfficientNet-B6 & EfficientNet-B7 [35], ResNeXT-101 [41] and Inception-V3 [34]) and two Vision Transformer variants (Vision Transformer [11] and MLP-Mixer [36]). We measure invariance and equivariance for both translation and rotation for 2000 images from the ImageNet validation set. We do not train the models ourselves but instead use available checkpoints from torchvision [30] or timm [39]. Since the studied model families do not have the same exact number of layers, we
divide each model into depth-wise parts and report the average equivariance measures over all layers in each part. Feature maps from the beginning of the network until the global average pooling (GAP) layer are uniformly partitioned into _Early_, _Middle_ and _Late_ parts. _Pool_ captures the feature maps directly after the GAP layer and _Final_ is the feature map directly before the softmax layer. We discriminate between the _Pool_ part and the _Final_ part to identify what role in achieving equivariance the global pooling and final classifier have.
Figure 4 shows there is some correlation between translation equivariance in _Early_ and _Middle_ layers and accuracy on ImageNet, while attaining almost perfect correlation in the _Final_ part. In contrast, for rotations there is little correlation between the equivariance in the representation before global pooling and the validation accuracy.
Even though the sample size (six models) for this correlation test is small, we conclude that there is some evidence for the benefit of learning translation equivariance in intermediate features of neural networks trained on ImageNet. In the following we therefore study what can increase the learned equivariance in such networks.
### Equivariance in the data
On tasks where invariant responses are beneficial to solve the task, e.g. translation invariance in image recognition, one may wonder how this invariance is achieved. We study how learned equivariance in intermediate features is affected by adding transformations to the data and therefore into the task. We choose to study rotation transformations on CNNs, as rotation equivariance is not designed into CNNs. We study whether there is a difference if the task is invariant or equivariant with respect to introduced transformations.
We train a 7-layer CNN taken from [6], consisting of 7 layers of \(3\times 3\) convolutions, 20 channels in each layer, ReLU activation functions, batch normalization, and max-pooling after layer 2, on three different datasets. The first dataset is _MNIST6_, which is the regular MNIST [9] without the \(\{0,1,6,8\}\) classes, to get rid of rotational transformations that these classes have. For example, digit 8 is very similar to its \(180^{\circ}\) rotation, so, by default, this class would introduce some rotation invariance, which is undesirable as we want to control for rotation invariance in this setting. Second is _MNIST6-Rot-Inv_, where every digit in _MNIST6_ is randomly rotated by \(r\in\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\}\) upfront.
Figure 3: We create two controlled toy CNNs, each designed to be perfectly invariance and equivariant respectively, to test if our method measures equivariance correctly, which it does.
Figure 2: Analysis of the influence of magnitude of weights on different similarity measures.
This dataset imposes invariance into the task as, for every transformation, the predicted class should be the same. The last dataset, _MNIST6-Rot-Eq_, is created in the same way as _MNIST6-Rot-Inv_, but now the classes are made up of all combinations of digit number and rotation (e.g. a class corresponds to a pair such as (digit, \(180^{\circ}\))). This dataset imposes equivariance into the task. We compute in- and equivariance of the trained model for 2000 images from the validation set. We average the score over \(90^{\circ},180^{\circ},270^{\circ}\) rotations. Each experiment is repeated three times using different random seeds. We train for 100 epochs using Adam with a batch size of 128 and a learning rate of 0.01, L2 regularization at 0.0005, and learning rate decay at epochs 25 and 50 by a factor of 10.0.
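The three dataset variants can be generated along these lines (a sketch assuming torchvision; the relabelling scheme for the equivariant variant is our own illustrative choice):

```python
import torch
from torchvision import datasets, transforms

KEEP = [2, 3, 4, 5, 7, 9]   # MNIST digits kept in MNIST6 (drop 0, 1, 6, 8)
ROTS = [0, 90, 180, 270]

def make_mnist6(root, variant="plain", train=True, seed=0):
    """variant: 'plain' (MNIST6), 'rot-inv' (MNIST6-Rot-Inv) or 'rot-eq' (MNIST6-Rot-Eq)."""
    g = torch.Generator().manual_seed(seed)
    base = datasets.MNIST(root, train=train, download=True, transform=transforms.ToTensor())
    images, labels = [], []
    for img, y in base:
        if y not in KEEP:
            continue
        digit = KEEP.index(y)                         # relabel digits to 0..5
        if variant != "plain":
            r = int(torch.randint(len(ROTS), (1,), generator=g))
            img = torch.rot90(img, r, dims=(-2, -1))  # rotate by ROTS[r] degrees
            # rot-inv: the label ignores the rotation; rot-eq: (digit, rotation) pairs.
            digit = digit if variant == "rot-inv" else digit * len(ROTS) + r
        images.append(img)
        labels.append(digit)
    return torch.stack(images), torch.tensor(labels)
```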
In Figure 5 we show the learned rotation equivariance. Firstly, we observe that the equivariance decreases with the depth, up to the final part after global average pooling (GAP), regardless of the task. For the tasks where the equivariance or invariance is imposed in the task, we see an increase in the final part, which suggests that GAP plays a significant role in achieving equivariance. Secondly, we do not see any significant differences between _MNIST6_, _MNIST6-Rot-Inv_ and _MNIST6-Rot-Eq_, up to a later stage, which may indicate that early convolutional layers learn features with some amount of rotation equivariance regardless of the rotation invariance of the task. Finally, we observe that rotation equivariance is much larger than rotation invariance in the early and middle layers, which shows that CNNs do learn more rotated versions of the same feature in different channels rather than learning invariant, symmetrical features in a single channel. We conclude that introducing equivariance into the task does not significantly affect the learned equivariance of intermediate features.
### Data augmentations
By duplicating input samples under some transformation, data augmentation can induce invariance in the neural network. We study what the effect of data augmentation is on learned equivariance in intermediate representations: does data augmentation result in more invariant or equivariant intermediate features? For example, we know that the random crops data augmentation method, which essentially introduces random translations into the data, increases the model performance and translation invariance at the last layer [18]. The question is whether random crops increase the learned equivariance of intermediate features as well. In this experiment we study how translation equivariance is affected by different data augmentations.
We train ResNet-44 [16], adapted for CIFAR-10 [21], on the CIFAR-10 dataset using one of the following augmentations: random crops, horizontal flips, CutMix [43], RandAugment [7]. In each experiment we compute equivariance of the trained model over 2000 images from the validation set and average the score over diagonal shifts from one to 16 pixels. Each experiment is repeated three times by training the network with different random seeds. We train for 200 epochs using SGD with a batch size of 128, a learning rate of 0.1 and a momentum of 0.9, L2 regularization at 0.0001, and learning rate decay at epochs 100 and 150 by a factor of 10.0.
In principle we expect the in- and equivariance to be the same since translation equivariance should be provided by the convolution. However, we include equivariance in our experiments since there are works showing that the information about location can be encoded in different channels [17, 19].
In Figure 5(a) we show learned translation equivariance for the tested data augmentations. Random crops and RandAugment increase the equivariance of learned features in the _Middle_, _Late_ and _Final_ parts, while the other data augmentation methods do not have any significant effect, with CutMix even having less equivariance than the baseline in the _Middle_ part. We complement the finding of [18] by showing that random crops increase not only translation invariance but also translation equivariance in the intermediate layers. Also, we do not see any difference between invariance and equivariance for any data augmentation, which means that any equivariance learned is just invariance.
Figure 4: Spearman’s rank correlation between learned equivariance and ImageNet validation accuracy. Translation equivariance in intermediate features correlates with increased accuracy on ImageNet, while rotation equivariance does not.
Figure 5: Learned rotation equivariance for rotation invariance/equivariance in the data. Invariance or equivariance in the task does not induce learning more equivariant features up until the late part of the network. Also there is no visible difference, up until the late part of the network, in the learned equivariance between the invariant and equivariant tasks.
### Model capacity
We hypothesize that a smaller model in principle benefits from a more efficient representation and hence may learn more equivariant features. We therefore study whether model capacity influences learning translation equivariant representations. We train WideResNet-40 (WRN-40) [44] models, where we scale the number of channels (the "width") in each layer by a factor \(s\in\{1,2,4,8\}\). We train on the CIFAR-10 dataset and measure learned translation equivariance. The hyperparameters used for training are the same as in the data augmentation experiment of Sec 4.4.
In Figure 5(b) we show learned translation equivariance for different model capacities. We observe that the amount of translation equivariance is lower for the wider models, even though the amount of invariance in the final part is the same, which matches our hypothesis: an efficient representation learns to be equivariant.
### Architectures
The architecture of a neural network determines which biases can be learned in training. Vision Transformers (ViTs) [11] lack certain inductive biases present in CNNs, which has been linked to their reduced data efficiency [12, 42]. We are interested in the extent to which the difference in inductive bias between CNNs and ViTs affects learned equivariance.
We test architectures as they were designed for the ImageNet dataset [8], to faithfully represent their intended inductive bias. We use the same architectures and pre-trained model weights as tested in Sec. 4.2: four CNNs (EfficientNet-B6 & EfficientNet-B7 [35], ResNeXT-101 [41] and Inception-V3 [34]) and two Vision Transformer variants (Vision Transformer [11] and MLP-Mixer [36]). We measure both translation and rotation equivariance on trained models for 2000 images from the ImageNet validation set. We also use the same depth-wise partitioning of feature maps into parts as used in Sec. 4.2. We measure translation equivariance over diagonal shifts of size 1 to 32 and rotation equivariance for \(90^{\circ},180^{\circ},270^{\circ}\) rotations.
In Figure 6(a) we present the results for learned translation equivariance. We can see that ViT and MLP-Mixer have less translation equivariance than CNNs in _Early_ and _Middle_ layers. This is not unexpected, as convolutions directly integrate translation equivariance, whereas Vision
Figure 6: Measuring learned translation equivariance for (a) data augmentations and (b) model capacity. For data augmentations (a), random crops and RandAugment increase channel equivariance the most, while other strategies have no discernible improvements. For model capacity (b), smaller models learn more in- and equivariance, although the amount of in- and equivariance in the end is similar.
Figure 7: Measuring learned equivariance for inductive biases. For translation (a), the CNN variants exhibit more equivariance in the intermediate representation than the Vision Transformer variants. Global pooling seems to play an important role in achieving invariance. For rotation (b), the CNN variants exhibit more equivariance in the intermediate representation than the Vision Transformer variants. The _Early_ and _Middle_ parts have more equivariance than invariance.
have to learn position embeddings that are translation equivariant. This reduced translation equivariance could be the reason for the poor data efficiency of ViT and MLP-Mixer [11, 36], since translation equivariance improves data efficiency [19]. Finally, we note that learned invariance and equivariance are identical for the tested models, meaning that these networks do not learn to represent different translations in different channels.
In Figure 7(b) we present the results for learned rotation equivariance. We observe that the ViT and MLP-Mixer have lower rotation equivariance than the CNNs in intermediate features, while after the GAP layer the ViT exhibits the most rotation equivariance out of all the models. Secondly, we note that early parts of all networks learn equivariant features that are not invariant, more so than late parts of the networks. In contrast to the results for translation equivariance, we see that models with low rotation equivariance throughout the _Early_, _Middle_ and _Late_ parts (ViT, EfficientNet-B6/B7) have the highest rotation equivariance in the _Final_ part, while the models with the highest equivariance in the _Early_, _Middle_ and _Late_ parts (ResNeXT-101, Inception-v3) have the least equivariance in the _Final_ part. This shows that high learned equivariance in the final model representation does not imply that intermediate representations are also highly equivariant.
## 5 Conclusion
We conduct a quantitative study on learned equivariance in intermediate features of CNNs and Vision Transformers trained for image recognition, using an improved measure of equivariance. We find evidence that translation equivariance in intermediate representations correlates with ImageNet validation accuracy. We show that data augmentations and reduced model capacity can increase learned equivariance in intermediate features. Also, the CNNs we test learn more translation and rotation equivariance in intermediate features than the ViTs we test.
**Limitations.** Our method can measure equivariance only w.r.t. affine transformations. The reason is that the transformation \(g\) with respect to which we measure equivariance has to map a discrete domain, e.g. a feature map, onto an identical discrete domain. This restriction rules out continuous transformations such as rotations by angles other than multiples of 90 degrees, or scaling with non-integer scaling factors.
**Future work.** Learned equivariance benefits image recognition models. However, applying equivariant priors usually adds additional cost in terms of memory or computation. Future work could study whether one can apply equivariant priors selectively within a neural network, saving computing cost where networks already learn to be equivariant. Additionally, we show that Vision Transformers learn less translation equivariance than CNNs. Future work could explore methods to increase translation invariance in Vision Transformers, to aid in their data efficiency.
## Acknowledgements
Robert-Jan Bruintjes and Jan van Gemert are financed by the Dutch Research Council (NWO) (project VI.Vidi.192.100). All authors sincerely thank everyone involved in funding this work.
|
2306.14633
|
JSEEGraph: Joint Structured Event Extraction as Graph Parsing
|
We propose a graph-based event extraction framework JSEEGraph that approaches
the task of event extraction as general graph parsing in the tradition of
Meaning Representation Parsing. It explicitly encodes entities and events in a
single semantic graph, and further has the flexibility to encode a wider range
of additional IE relations and jointly infer individual tasks. JSEEGraph
performs in an end-to-end manner via general graph parsing: (1) instead of flat
sequence labelling, nested structures between entities/triggers are efficiently
encoded as separate nodes in the graph, allowing for nested and overlapping
entities and triggers; (2) both entities, relations, and events can be encoded
in the same graph, where entities and event triggers are represented as nodes
and entity relations and event arguments are constructed via edges; (3) joint
inference avoids error propagation and enhances the interpolation of different
IE tasks. We experiment on two benchmark datasets of varying structural
complexities; ACE05 and Rich ERE, covering three languages: English, Chinese,
and Spanish. Experimental results show that JSEEGraph can handle nested event
structures, that it is beneficial to solve different IE tasks jointly, and that
event argument extraction in particular benefits from entity extraction. Our
code and models are released as open-source.
|
Huiling You, Samia Touileb, Lilja Øvrelid
|
2023-06-26T12:12:54Z
|
http://arxiv.org/abs/2306.14633v1
|
# JSEEGraph: Joint Structured Event Extraction as Graph Parsing
###### Abstract
We propose a graph-based event extraction framework JSEEGraph that approaches the task of event extraction as general graph parsing in the tradition of Meaning Representation Parsing. It explicitly encodes entities and events in a single semantic graph, and further has the flexibility to encode a wider range of additional IE relations and jointly infer individual tasks. JSEEGraph performs in an end-to-end manner via general graph parsing: (1) instead of flat sequence labelling, nested structures between entities/triggers are efficiently encoded as separate nodes in the graph, allowing for nested and overlapping entities and triggers; (2) both entities, relations, and events can be encoded in the same graph, where entities and event triggers are represented as nodes and entity relations and event arguments are constructed via edges; (3) joint inference avoids error propagation and enhances the interpolation of different IE tasks. We experiment on two benchmark datasets of varying structural complexities; ACE05 and Rich ERE, covering three languages: English, Chinese, and Spanish. Experimental results show that JSEEGraph can handle nested event structures, that it is beneficial to solve different IE tasks jointly, and that event argument extraction in particular benefits from entity extraction. Our code and models are released as open-source1.
Footnote 1: [https://github.com/huiling-y/JSEEGraph](https://github.com/huiling-y/JSEEGraph)
## 1 Introduction
Event extraction (EE) deals with the extraction of complex, structured representations of events from text, including overlapping and nested structures [2, 1]. While there are existing datasets annotated with such rich representations [1, 10], a majority of current approaches model this task using simplified versions of these datasets or sequence-labeling-based encodings which are not capable of capturing the full complexity of the events. Figure 1 shows an example from the Rich ERE dataset [1] of a sentence containing both nested and overlapping events: _"buy"_ serves as trigger for two overlapping events, Transfer-Money and Transfer-Ownership with their respective argument roles, and similarly _"made"_ triggers two Artifact events through the coordination of the two GPE entities _Canada_ and _USA_; at the same time, the event trigger _"made"_ is nested inside the entity span _"things made in Canada or USA"_. For this example, models based on token tagging (such as the commonly used BIO-encoding) would fail completely when a token contributes to multiple information extraction elements. In this case, the version of the ACE05 dataset widely employed for EE would not fully capture the double-tagged event triggers, simply disregarding one of the two events, and the nested entity _"things made in Canada or USA"_ would be reduced to its head _"things"_.
Event extraction is a subtask of a wider set of Information Extraction (IE) tasks, which jointly deal with extracting various types of structured information from unstructured texts, ranging from named entities and relations to events. There have been continued efforts in creating benchmark datasets that can be used for evaluating a wide range of IE tasks. Both ACE05 [10] and Rich
Figure 1: Example of nested and overlapping events in the sentence “_I, purposely buy things made in Canada or USA.”, taken from Rich ERE [1].
ERE (Song et al., 2015) provide consistent annotations of entities, relations, and events. While there are clear inter-relations between these different elements, and despite the availability of rich annotations, existing works often deal with individual tasks, such as named entity recognition (NER) Chiu and Nichols (2016); Bekoulis et al. (2018) or event extraction (EE) Yang and Mitchell (2016); Du and Cardie (2020); Li et al. (2020). Recently there have been some efforts in jointly modelling multiple IE tasks Wadden et al. (2019); Lin et al. (2020); Nguyen et al. (2022), but these methods explicitly avoid nested instances.
Footnote 3: [https://catalog.ldc.upenn.edu/LDC2020T18](https://catalog.ldc.upenn.edu/LDC2020T18)
We here propose to represent events, along with entities and relations, as general graphs and approach the task of event extraction as Meaning Representation Parsing Oepen et al. (2020); Samuel and Straka (2020). As shown in Figure 2, in such an information graph, event triggers and entities are represented as nodes; event types, argument roles, and relations are constrained edges; and nested/overlapped structures are straightforwardly represented, since a surface string can be abstracted into an unlimited number of nodes, as illustrated by the two separate nodes for the event triggers for _"cost"_. Our approach does not rely on ontology- or language-specific features or any external syntactic/semantic parsers, but directly parses raw text into an information graph. We experiment on the benchmark datasets ACE05 Doddington et al. (2004) and Rich ERE Song et al. (2015), zooming in on nested structures. Our results show JSEEGraph to be versatile in solving entity, relation, and event extraction jointly, even for heavily nested instances and across three different languages. Ablation studies consistently show that event extraction especially benefits from entity extraction.
The paper is structured as follows: section 2 provides the relevant background for our work, and section 3 further describes the tasks addressed and the datasets we employ, focusing in particular on their complexity, as measured by level of nesting. Section 4 presents the JSEE graph parsing framework and section 5 the experimental setup for evaluating the JSEE parser. Section 6 presents the results of our evaluations and provides a study of the performance for nested structures, as well as an ablation study assessing the effect of joint IE modeling and an error analysis. Finally we provide conclusions (Section 7) and discuss limitations of our work.
## 2 Related work
Event extraction is commonly approached as supervised classification, even though other approaches relying on generation Paolini et al. (2021); Lu et al. (2021); Li et al. (2021); Hsu et al. (2022) or prompt tuning inspired by natural language understanding tasks Shin et al. (2020); Gao et al. (2021); Li and Liang (2021); Liu et al. (2022) also are gaining ground. Classification-based methods break event extraction into several subtasks (trigger detection/classification, argument detection/classification), and either solve them separately in a pipeline-based manner Ji and Grishman (2008); Li et al. (2013); Liu et al. (2020); Du and Cardie (2020); Li et al. (2020) or jointly infer them as multiple subtasks Yang and Mitchell (2016); Nguyen et al. (2016); Liu et al. (2018); Wadden et al. (2019); Lin et al. (2020). Classification-based joint methods typically apply sequence-labeling-based encoding and extract all event components in one pass, whereas pipeline methods break the problem into separate stages which are performed sequentially. Whereas sequence-labeling approaches cannot distinguish overlapping events/arguments by the nature of the BIO-encoding, pipeline methods may in principle detect these. However, they typically suffer from error propagation and are not equipped to model the interactions between the different event elements (triggers, arguments).
**Nested events.** Some previous work addresses the problem of overlapping or nested arguments in EE. Xu et al. (2020) address overlapping arguments in the Chinese part of the ACE05 dataset and jointly perform predictions for event triggers and arguments
Figure 2: Example of graph representation for entities, relations, and events from the sentence _“School district officials have estimated the cost of rebuilding an intermediate school at $40 million.”_, from Rich ERE Song et al. (2015).
based on common feature representations derived from a pre-trained language model. Sheng et al. (2021) propose a joint framework with cascaded decoding to tackle overlapping events, and sequentially perform type detection, event and argument extraction on a Chinese financial event dataset. They deal with cases of both "overlapping events" and "overlapping arguments"; however, their approach may suffer from error propagation due to the cascaded decoding. Cao et al. (2022) distinguish between overlapped and nested events and propose the OneEE tagging scheme, which formulates EE as word-to-word relation recognition, distinguishing separate span and role relations. OneEE is evaluated on the FewFC Chinese financial event dataset and the biomedical event datasets Genia11 and Genia13. While specifically focusing on nested events, these previous works are limited by focusing only on one language or on specialized (financial/biomedical) domains. In this work we aim to provide a more comprehensive evaluation over two datasets in several versions with increasing levels of structural complexity (see below) and across three different languages.
**Joint IE approaches.** Wadden et al. (2019) propose the DyGIE++ model, which approaches joint modeling of IE entities and relations via span-based prediction of entities and event triggers, and subsequent dynamic graph propagation based on relations. They evaluate on the ACE05 and Genia datasets and limit their experiments to English only. Their approach is restricted to a certain span width, limiting the length of possible entities. OneIE Lin et al. (2020) is a joint system for IE using global features to model cross-subtask or cross-instance interactions between the subtasks and predict an information graph. They propose the E+ extension of ACE05 which includes multi-token events (E\({}^{+}\)) as we do. As in our work, they also present results on Spanish and Chinese and develop a multilingual model, but their experiments avoid nested structures, by using only the head of entity mentions and specifically removing overlapped entities. Nguyen et al. (2022) model joint IE in a two-stage procedure which first identifies entities and event triggers and subsequently classifies relations between these, starting from a fully connected dependency graph; a GCN is employed to encode the resulting dependency graphs for computation of the joint distribution. While the approach is shown to be effective, it is still a pipeline approach which can suffer from error propagation. Since it relies on sequence labeling for entity/event detection, it cannot identify overlapping entities/event triggers. Furthermore, the approach relies on syntactic information from an external parser and focuses only on English and Spanish in the Light ERE dataset Song et al. (2015).
**Meaning Representation Parsing.** Meaning Representation Parsing (MRP) Oepen et al. (2014, 2015, 2020) is a framework covering several types of dependency-based semantic graph frameworks. Unlike syntactic dependency representations, these semantic representations are not trees, but rather general graphs, characterised by potentially having multiple top nodes (_roots_) and not necessarily being connected, since not every token is necessarily a node in the graph. The semantic frameworks include representations with varying levels of "anchoring" to the input string Oepen et al. (2020), ranging from the so-called "bi-lexical" representations, where every node in the graph corresponds to a token in the input string, to a framework like AMR Banarescu et al. (2013), which constitutes the most abstract and unanchored type of framework, such that the correspondence between the nodes in a graph and tokens in the string is completely flexible. This allows for straightforward representation of nesting and overlapping structures, where multiple nodes may be anchored to overlapping sub-strings. There has been considerable progress in developing variants of both transition-based and graph-based dependency parsers capable of producing such semantic graphs Hershcovich et al. (2017); Dozat and Manning (2018); Samuel and Straka (2020). Previous research has further made use of AMR-based input representations to constrain the tasks of event extraction Huang et al. (2018) and more recently joint information extraction Zhang and Ji (2021), where an off-the-shelf AMR parser is used to derive candidate entity and event trigger nodes before classifying pairwise relations guided by the AMR hierarchical structure. While there are clear parallels between the MRP semantic frameworks and the tasks proposed in IE, little work has focused on the direct application of MRP parsing techniques to these tasks. You et al. (2022) are a notable exception in this respect, presenting an adaptation of the PERIN semantic parser Samuel and Straka (2020) to the event extraction task. While their work is promising, it is limited to only one dataset (ACE05), which does
not contain a lot of nested structures, and is further limited to English event extraction only. In this work we extend their approach to the task of joint information extraction, covering entities, events and relations taken from two different datasets in several versions and for three languages, and further demonstrate the effectiveness of approaching general information extraction from text via graph parsing and the interpolation of different IE tasks.
## 3 Task and Data
While the main focus of this work is on event extraction, we hypothesize that our graph-based approach lends itself to dealing with two challenging aspects of current research on this task: the processing of nested and overlapping event structures, and the joint modeling of inter-related IE structures. In the following we quantify the level of nesting in two widely used datasets which contain rich annotations for both entities, events, and relations. We further propose two versions of each dataset with varying potential for nesting, which allows us to focus on this aspect during evaluation.
**Event Extraction** is the task of extracting events into structured forms, namely event triggers and their arguments. An event trigger is the word(s) that most clearly describes an event, such as _"buy"_, which evokes a Transfer-Ownership and a Transfer-Money event in Figure 1. Event arguments are the participants and attributes of an event, and can be tagged as entities at the same time, as demonstrated in Figure 2.
We use the benchmark datasets ACE05 Doddington et al. (2004) and Rich ERE Song et al. (2015), both containing consistent annotations for entities, relations, and events, for joint evaluation of multiple IE tasks and in multiple languages (ACE05 in English and Chinese, and ERE in English, Chinese, and Spanish). Table 1 summarizes the relevant statistics of the datasets. The inventory of event types, argument roles, entity types and relation types are listed in Table 2. Despite targeting the same IE tasks, from ACE05 to Rich ERE, the annotation guidelines have shifted towards more sophisticated representations, resulting in more complex structures in Rich ERE Song et al. (2015). Prominent differences between ACE05 and Rich ERE are:
* **Entities**, and hence event arguments, are more fine-grained in Rich ERE, with 15 entity types, as compared to 7 types in ACE05. In terms of entity spans, ACE05 explicitly marks the head of the entity versus the entire mention, providing the possibility of solving a simpler task for entity extraction by recognizing only the head token as opposed to the full span of the entity in question. This is commonly done for this task in previous work on EE. However, in Rich ERE, the entire string of text is annotated for entity mentions, and heads are only marked explicitly for nominal mentions that are not named entities or pronominal entities.
* **Event triggers** can be double-tagged in Rich ERE, namely one trigger can serve multiple event mentions, giving rise to overlapping events, as shown in Figure 1, while in ACE05, an event trigger only evokes one event. This means that Rich ERE presents a more complex task of event extraction.
We measure the nested instances in ACE05 and Rich ERE as a way to showcase different levels of complexity for extracting entities, relations, and events. More specifically, we quantify nested instances in two versions of each dataset, one using only the head of an entity mention (when it is annotated), and the other with the entire mention text. Following Lin et al. (2020) we dub the version which only marks the head of entities ACE-E\({}^{+}\) and
| Lang | Split | #Sents | #Events | #Roles | #Entities | #Relations |
| --- | --- | --- | --- | --- | --- | --- |
| **Dataset: ACE05** | | | | | | |
| en | Train | 19 371 | 4 419 | 6 609 | 47 546 | 7 172 |
| en | Dev | 896 | 468 | 759 | 3 421 | 729 |
| en | Test | 777 | 461 | 735 | 3 828 | 822 |
| zh | Train | 6 706 | 2 928 | 5 576 | 29 674 | 8 003 |
| zh | Dev | 511 | 217 | 406 | 2 246 | 601 |
| zh | Test | 521 | 190 | 336 | 2 389 | 686 |
| **Dataset: Rich ERE** | | | | | | |
| en | Train | 12 421 | 8 368 | 15 197 | 34 611 | 7 498 |
| en | Dev | 692 | 459 | 797 | 1 998 | 366 |
| en | Test | 745 | 566 | 1 195 | 2 286 | 544 |
| zh | Train | 9 253 | 5 325 | 9 066 | 26 128 | 6 044 |
| zh | Dev | 541 | 366 | 52 | 1 609 | 379 |
| zh | Test | 483 | 439 | 776 | 2 022 | 502 |
| es | Train | 8 292 | 5 013 | 8 575 | 20 347 | 4 140 |
| es | Dev | 383 | 254 | 447 | 1 068 | 199 |
| es | Test | 598 | 334 | 609 | 1 438 | 287 |

Table 1: Statistics of the preprocessed datasets.
| Dataset | #Event types | #Argument roles | #Entity types | #Relation types |
| --- | --- | --- | --- | --- |
| ACE05 | 33 | 22 | 7 | 6 |
| Rich ERE | 38 | 20 | 15 | 6 |

Table 2: Inventory of event types, argument roles, entity types and relation types in ACE05 and Rich ERE.
Rich ERE-E\({}^{+}\), and introduce two additional versions of the datasets, dubbed ACE-E\({}^{++}\) and Rich ERE-E\({}^{++}\), which retain the full annotated mention text span. Nesting is measured between any pair of triggers and entities. Note that our notion of nesting subsumes both _overlapping_ and _nested_ triggers/entities Cao et al. (2022), _i.e._ both full and partial overlap of text spans. As shown in Table 3, Rich ERE features many cases of nested triggers, while these are not found in ACE05, due to the aforementioned double-tagging in Rich ERE (see Figure 1); when only considering the head of an entity, ACE05 exhibits very little nesting, but Rich ERE exhibits a considerable amount of nesting within entities, as well as between entities and triggers. The reason for this is that in Rich ERE, only certain nominal mentions are marked with explicit heads; when the full entity mentions are considered, both datasets are heavily nested.
As mentioned above, this work deals with three IE tasks, as exemplified by Figure 2: entities, relations, and events. Given a sentence, our JSEEGraph framework extracts its entity mentions, relations, and event mentions. In addition to event extraction, we thus target two additional IE tasks in our graph-based model:
**Entity Extraction** is to identify entity mentions in text and classify them into types according to a pre-defined ontology. For example, in Figure 2, _"district"_ is an organization (ORG) entity.
**Relation Extraction** aims to assign a relation type to an ordered pair of entity mentions, based on a pre-defined relation ontology. For example, in Figure 2, the relation between PER _"officials"_ and ORG _"district"_ is orgaffiliation.
## 4 Graph parsing framework
Our JSEEGraph framework is a text-to-graph parser tailored for EE tasks, additionally with different IE components explicitly encoded in a single graph, as shown in Figure 2. Our framework builds on Samuel and Straka (2020) who developed the PERIN parser in the context of Meaning Representation Parsing Oepen et al. (2020), as well as You et al. (2022) who applied PERIN to the task of event extraction. We here extend this parser to the IE graphs shown in Figure 2 in a multilingual setting.
Given a sentence, such as the example shown in Figure 3, JSEEGraph encodes the input tokens with the pre-trained language model XLM-R Conneau et al. (2020) to obtain contextualized embeddings and further maps the embeddings onto queries; nodes (triggers and entities) are predicted by classifying the queries and anchored to surface tokens via a deep biaffine classifier Dozat and Manning (2017); edges are constructed between nodes with two biaffine classifiers, assigning arguments to predicted events and relations to entity pairs. We describe each module in detail in what follows.
### Sentence encoding
We use XLM-R Conneau et al. (2020) to obtain the contextualized embeddings of the input sequence. To be specific, a trainable weight \(w_{l}\) is used to get a weighted sum of representations of different layers, so the final contextual embedding \(\mathbf{e}=\sum_{l=1}^{L}\mathrm{softmax}(w_{l})\mathbf{e}_{l}\) with \(\mathbf{e}_{l}\) as the intermediate output from the \(l^{th}\) layer. If an input token consists of multiple subwords, the final contextual embedding will be the weighted sum over all subword embeddings with a learned subword attention.
Each contextual embedding is mapped into queries \(\mathbf{q}=\{\mathbf{q}_{1},\cdots,\mathbf{q}_{n}\}\) via a linear layer, and further transformed into hidden features \(\mathbf{h}=\{\mathbf{h}_{1},\cdots,\mathbf{h}_{n}\}\) with a stack of transformer encoder layers, which models inter-query dependency with multi-head self-attention.
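As a rough illustration of the layer-weighted encoding \(\mathbf{e}=\sum_{l}\mathrm{softmax}(w)_{l}\,\mathbf{e}_{l}\) described above, the sketch below uses HuggingFace's XLM-R; this is an assumption for illustration rather than the released implementation, and it omits the learned subword attention (outputs remain at the subword level).

```python
# Sketch, not the released code: softmax-weighted sum over all encoder layers of XLM-R.
import torch
import torch.nn as nn
from transformers import AutoModel

class WeightedLayerEncoder(nn.Module):
    def __init__(self, model_name="xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        num_layers = self.encoder.config.num_hidden_layers + 1  # embeddings + L layers
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        hidden = torch.stack(out.hidden_states, dim=0)        # (L+1, B, T, H)
        weights = torch.softmax(self.layer_weights, dim=0)    # (L+1,)
        # Weighted sum over layers gives the contextual embeddings e.
        return (weights[:, None, None, None] * hidden).sum(0)  # (B, T, H)
```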
### Node prediction
The node prediction module consists of a node label classifier and an anchor biaffine attention classifier.
The node label classifier is a linear classifier classifying each query into a node in the graph, and the node label is predicted by a single-layer feedforward network (FNN). If a query is classified
| Dataset | Lang | Trg-Trg | Ent-Ent | Trg-Ent | #Sents (Nested) | #Sents (All) |
| --- | --- | --- | --- | --- | --- | --- |
| ACE05-E\({}^{+}\) | en | 0 | 0 | 4 | 4 | 21044 |
| ACE05-E\({}^{+}\) | zh | 0 | 4 | 9 | 12 | 7738 |
| ACE05-E\({}^{++}\) | en | 0 | 13387 | 716 | 5315 | 21044 |
| ACE05-E\({}^{++}\) | zh | 0 | 10797 | 252 | 3748 | 7738 |
| Rich ERE-E\({}^{+}\) | en | 1066 | 1329 | 244 | 1529 | 13858 |
| Rich ERE-E\({}^{+}\) | zh | 301 | 1383 | 284 | 1266 | 10277 |
| Rich ERE-E\({}^{+}\) | es | 485 | 523 | 97 | 712 | 9273 |
| Rich ERE-E\({}^{++}\) | en | 1063 | 9453 | 1517 | 4277 | 13858 |
| Rich ERE-E\({}^{++}\) | zh | 301 | 7303 | 622 | 2993 | 10277 |
| Rich ERE-E\({}^{++}\) | es | 485 | 5526 | 854 | 2614 | 9273 |

Table 3: Nesting instances in ACE05 and Rich ERE. Nesting between a pair of event triggers is referred to as Trg-Trg, between a pair of entity mentions as Ent-Ent, and between an event trigger and an entity as Trg-Ent. For both datasets, in the E\({}^{+}\) version, entity mentions include only heads, while in the E\({}^{++}\) version, entity mentions include the full text spans.
into "null", no node is created from this query.
Node anchoring is performed by biaffine attention Dozat and Manning (2017), defined in Equations (1) and (2), between the contextual embeddings \(\mathbf{e}\) and the hidden features of the queries \(\mathbf{h}\), to map each query (a candidate node) to surface tokens, as shown in Equation (3). For each query, every input token is binary-classified as anchor or non-anchor.
\[\mathrm{Bilinear}(X_{1},X_{2})=X_{1}^{T}UX_{2} \tag{1}\]
\[\mathrm{Biaffine}(X_{1},X_{2})=X_{1}^{T}UX_{2}+W(X_{1}\oplus X_{2})+b \tag{2}\]
\[\mathrm{node}^{(\mathrm{anchor})}=\mathrm{Biaffine}^{(\mathrm{anchor})}(\mathbf{h},\mathbf{e}) \tag{3}\]
Node prediction is complete with queries that are classified into nodes and anchored to corresponding surface tokens. Predicted nodes are either event triggers or entities, labeled as "trigger" or entity type. A dummy node is randomly generated to add to predicted nodes to play the role of <root> node, and always holds the first position.
### Edge prediction
Edge prediction between nodes is performed with two deep biaffine classifiers, as in Equation (6), one to predict edge presence between a pair of nodes and the other to predict the corresponding edge label. To construct edges between nodes, only queries from which nodes have been constructed will be used, and the new hidden features are \(\mathbf{h}^{\prime}\), which are further split into two parts with a single-layer FNN, as shown in Equations (4) and (5).
\[\mathbf{h}_{1}^{\prime(\mathrm{edge})}=\mathrm{FNN}_{1}^{(\mathrm{edge})}(\mathbf{h}^{\prime}) \tag{4}\]
\[\mathbf{h}_{2}^{\prime(\mathrm{edge})}=\mathrm{FNN}_{2}^{(\mathrm{edge})}(\mathbf{h}^{\prime}) \tag{5}\]
\[\mathrm{edge}=\mathrm{Biaffine}^{(\mathrm{edge})}(\mathbf{h}_{1}^{\prime(\mathrm{edge})},\mathbf{h}_{2}^{\prime(\mathrm{edge})}) \tag{6}\]
The edge presence biaffine classifier performs binary classification, deciding whether or not an edge should be constructed between a pair of nodes. The edge label biaffine classifier performs multi-class classification, and the edge label set is the union of argument roles and relation types.
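The biaffine scoring used in Equations (1)–(6) can be sketched with a single generic module, as below. Dimensions, initialisation, and how the outputs are consumed are illustrative assumptions rather than the released implementation.

```python
# Generic deep biaffine scorer in the spirit of Equations (1)-(6): a bilinear term
# plus a linear term over the concatenation of the two inputs (illustrative sketch).
import torch
import torch.nn as nn

class Biaffine(nn.Module):
    def __init__(self, dim1, dim2, out_dim=1):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_dim, dim1, dim2) * 0.01)
        self.W = nn.Linear(dim1 + dim2, out_dim)  # W(x1 (+) x2) + b

    def forward(self, x1, x2):
        # x1: (B, N, dim1), x2: (B, M, dim2) -> scores: (B, N, M, out_dim)
        bilinear = torch.einsum("bnd,ode,bme->bnmo", x1, self.U, x2)
        linear = self.W(torch.cat([
            x1.unsqueeze(2).expand(-1, -1, x2.size(1), -1),
            x2.unsqueeze(1).expand(-1, x1.size(1), -1, -1),
        ], dim=-1))
        return bilinear + linear
```

Under these assumptions, the anchor classifier would instantiate the module with `out_dim=1` and a sigmoid over (query, token) pairs, the edge presence classifier likewise over (node, node) pairs, and the edge label classifier would set `out_dim` to the size of the union of argument roles and relation types.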
### Constrained decoding
During inference, we apply a set of constraints specifically developed for the correct treatment of event arguments and entity relations, based on the graph encoding we define for the information graph (Figure 2): 1) directed edges from the <root> node can only connect to a trigger node, and the corresponding edge label is an event type; 2) directed edges from a trigger node to an entity indicate an event argument, with the argument role placed as edge label; 3) directed edges between a pair of entities indicate an entity relation, and the corresponding relation type is assigned to the edge label.
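A minimal sketch of these three constraints as a decoding-time filter is given below; the node/edge data structures and the label inventories are illustrative assumptions, not the released code.

```python
# Sketch of the decoding constraints (1)-(3) above; label sets are assumed subsets.
EVENT_TYPES = {"Attack", "Transport", "Meet"}
ARGUMENT_ROLES = {"Agent", "Victim", "Place"}
RELATION_TYPES = {"orgaffiliation", "physical"}

def keep_edge(src, dst, label):
    """src/dst are dicts like {"kind": "root" | "trigger" | "entity"}."""
    if src["kind"] == "root":
        # (1) <root> edges may only attach to triggers and carry an event type.
        return dst["kind"] == "trigger" and label in EVENT_TYPES
    if src["kind"] == "trigger":
        # (2) trigger -> entity edges are event arguments labelled with a role.
        return dst["kind"] == "entity" and label in ARGUMENT_ROLES
    if src["kind"] == "entity":
        # (3) entity -> entity edges are relations labelled with a relation type.
        return dst["kind"] == "entity" and label in RELATION_TYPES
    return False

def constrained_decode(nodes, scored_edges):
    """Filter candidate edges (src_idx, dst_idx, label, score) by the constraints."""
    return [e for e in scored_edges if keep_edge(nodes[e[0]], nodes[e[1]], e[2])]
```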
Figure 3: An illustration of our JSEEGraph parsing the sentence _“Crowds march in Egypt to protest Morsi detention.”_, example from Rich ERE.
## 5 Experimental setup
### Data
We evaluate our system on the benchmark datasets ACE05 (LDC2006T06) and Rich ERE (LDC2020T18). As mentioned above, Table 1 summarizes the statistics of the pre-processed datasets.
Footnote 4: [https://catalog.ldc.upenn.edu/LDC2006T06](https://catalog.ldc.upenn.edu/LDC2006T06)
Footnote 5: [https://catalog.ldc.upenn.edu/LDC2020T18](https://catalog.ldc.upenn.edu/LDC2020T18)
Following Lin et al. (2020), we keep 33 event types, 22 argument roles, 7 entity types, and 6 relation types for both the English and Chinese parts of ACE05. We follow You et al. (2022) in employing the ACE-E\({}^{++}\) version of this data, which uses the full text span of entity mentions instead of only the head, as described in section 3 above.
For Rich ERE, we keep 18 out of 38 event types defined in the Rich ERE event ontology 6, 18 out of 21 argument roles 7, 15 entity types, and 6 relation types for English, Chinese, and Spanish. Given no existing data splits, we randomly sample similar proportions of documents for train, development, and testing as the split proportions in ACE05.
Footnote 6: The Rich ERE event ontology defines 38 event types, but for Chinese and Spanish data, only 18 event types are annotated. For consistency, we also use the same 18 event types for the English part.
Footnote 7: 3 argument roles for the reduced event types are thus excluded.
### Evaluation metrics
Following previous work (Lin et al., 2020; Nguyen et al., 2021), precision (P), recall (R), and F1 scores are reported for the following information elements (a minimal scoring sketch follows the list).
* **Entity** An entity mention is correctly extracted if its offsets and entity type match a reference entity.
* **Relation** A relation is correctly extracted if its relation type, and offsets of both entity mentions match those of reference entities.
* **Event trigger** An event trigger is correctly identified (Trg-I) if its offsets match a reference trigger, and correctly classified (Trg-C) if its event type also matches a reference trigger.
* **Event argument** The evaluation of an argument is conditioned on correct event type prediction; if a predicted argument plays a role in an event that does not match any reference event types, the argument is automatically considered a wrong prediction. An argument is correctly identified (Arg-I) if its offsets match a reference argument, and correctly classified (Arg-C) if its argument role also matches the reference argument.
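The following sketch illustrates the trigger matching criteria above (offset match for Trg-I, offset plus event-type match for Trg-C). The data structures are illustrative assumptions, and the released evaluation scripts may differ, e.g. in how duplicate predictions are handled.

```python
# Sketch of trigger scoring: Trg-I requires matching offsets, Trg-C also the event type.
def f1(num_correct, num_pred, num_gold):
    p = num_correct / num_pred if num_pred else 0.0
    r = num_correct / num_gold if num_gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def trigger_scores(pred, gold):
    """pred/gold: lists of (start, end, event_type) tuples over the whole test set."""
    gold_spans = {(s, e) for s, e, _ in gold}
    gold_typed = set(gold)
    trg_i = sum((s, e) in gold_spans for s, e, _ in pred)
    trg_c = sum(t in gold_typed for t in pred)
    return f1(trg_i, len(pred), len(gold)), f1(trg_c, len(pred), len(gold))
```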
### Implementation detail
We adopt multi-lingual training for each dataset for the reported results. Results of monolingual models are listed in Appendix B. Detailed hyperparameter settings and runtimes are included in Appendix A.
### System comparison
We compare our JSEEGraph to the following systems: 1) ONEIE (Lin et al., 2020); 2) GraphIE (Nguyen et al., 2022); 3) FourIE (Nguyen et al., 2021); 4) JMCEE (Xu et al., 2020); 5) EventGraph (You et al., 2022) on the ACE05 dataset. For Rich ERE there is little previous work to compare to; the only previously reported results (Li et al., 2022) for EE only solve the task of argument extraction, using gold entity and trigger information, hence their work is not included in our system comparison.
## 6 Results and discussion
We here present the results for our JSEEGraph model for the EE task, as well as its performance for the additional IE components: entities and relations, evaluated as described above. We further zoom in on the nested structures identified in Section 3 and assess the performance of our system on these rich structures which have largely been overlooked in previous work on event extraction. We go on to assess the influence of inter-related IE components in an ablation study. Finally we provide an error analysis of our model's predictions.
### Overall performance
As shown in Table 4, on ACE-E\({}^{+}\), our overall results align with other systems. Our JSEEGraph results are especially strong for event argument extraction, with an improvement of around 10 percentage points over the best-performing systems in our comparison.
On the newly introduced ACE-E\({}^{++}\), despite the more complex structures and a higher degree of nesting, the results of JSEEGraph on trigger extraction remain stable. We further note that our results on argument, entity, and relation extraction suffer some loss from highly nested entities, which is not surprising.
From Table 5, we find that the scores on Rich ERE are consistently lower compared to those on ACE05. The double-tagging of event triggers described in Section 3 clearly poses a certain level of difficulty for the model in disambiguating events with a shared trigger. Argument and entity extraction also suffer from the more fine-grained entity types.
### Nesting
In order to directly evaluate our model's performance on nested instances, we split each test set into nested and non-nested parts and report the corresponding scores, as shown in Table 6.
Footnote 8: ACE05-E\({}^{+}\) is not included as it lacks sufficient nested instances.
We observe that JSEEGraph is quite robust in tackling nested instances across different IE tasks and languages. On ACE05-E\({}^{++}\), more than half of the test data are nested for both English and Chinese, and the results on the nested parts are lower, but still comparable with the non-nested parts of the datasets. On Rich ERE-E\({}^{+}\), nested instances make up only a small part of the test data, but the results are still comparable to the non-nested part. On Rich ERE-E\({}^{++}\), about one third of the test data are nested; results on the nested parts are in fact consistently better for trigger, entity, and relation extraction, but inferior for argument extraction.
To conclude, JSEEGraph does not suffer considerable performance loss from nesting among different IE elements, and in many cases actually gains in performance from more complex structures, notably for trigger, entity, and relation extraction. It is clear that the system can make use of inter-relations between the different IE elements of the information graph in order to resolve these structures.
### Ablation study
In order to gauge the effect of the joint modeling of entities, events, and relations, we perform an ablation study where we remove the entity and relation information from our information graph, hence only performing the task of event extraction directly from text. In the reduced information graph, node labels for entity types are removed, and relation edges between entities are also removed. We find that event extraction clearly benefits from entity and relation extraction, especially for event argument extraction. As shown in Table 4 and Table 5, when we train our model only for event extraction, the performance on argument extraction drops consistently across different datasets and languages, but the performance on trigger extraction remains quite stable.
| Lang | Nested | #Sents | Trg-I | Trg-C | Arg-I | Arg-C | Entity | Relation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Dataset: ACE05-E\({}^{++}\)** | | | | | | | | |
| en | ✓ | 418 | 72.1 | 68.5 | 59.2 | 57.0 | 85.1 | 57.0 |
| en | ✗ | 359 | 77.0 | 74.0 | 73.2 | 69.0 | 87.4 | 47.5 |
| zh | ✓ | 277 | 72.2 | 69.7 | 68.9 | 65.5 | 85.4 | 60.8 |
| zh | ✗ | 244 | 57.6 | 57.6 | 57.9 | 77.3 | 84.5 | 33.6 |
| **Dataset: Rich ERE-E\({}^{+}\)** | | | | | | | | |
| en | ✓ | 93 | 81.3 | 71.6 | 54.8 | 51.4 | 81.3 | 49.8 |
| en | ✗ | 652 | 61.4 | 56.9 | 64.0 | 60.5 | 79.8 | 56.3 |
| zh | ✓ | 101 | 72.0 | 66.6 | 47.5 | 45.1 | 79.7 | 56.0 |
| zh | ✗ | 382 | 54.2 | 52.2 | 59.8 | 55.9 | 77.1 | 49.9 |
| es | ✓ | 51 | 78.1 | 64.7 | 55.5 | 52.3 | 78.4 | 51.8 |
| es | ✗ | 547 | 49.9 | 48.5 | 63.8 | 55.6 | 73.1 | 51.8 |
| **Dataset: Rich ERE-E\({}^{++}\)** | | | | | | | | |
| en | ✓ | 251 | 75.4 | 69.2 | 53.0 | 50.5 | 81.0 | 45.7 |
| en | ✗ | 944 | 46.0 | 45.3 | 75.0 | 70.6 | 71.0 | 49.4 |
| zh | ✓ | 197 | 70.4 | 67.0 | 49.0 | 46.8 | 80.4 | 57.2 |
| zh | ✗ | 286 | 45.9 | 41.8 | 63.9 | 61.1 | 69.7 | 23.3 |
| es | ✓ | 163 | 66.0 | 59.3 | 57.2 | 53.7 | 75.2 | 53.5 |
| es | ✗ | 435 | 47.0 | 43.0 | 65.3 | 61.7 | 61.4 | 30.0 |

Table 6: Experimental results on test data with nesting as compared to without nesting (F1-score, %).
| Model | Trg-I | Trg-C | Arg-I | Arg-C | Entity | Relation |
| --- | --- | --- | --- | --- | --- | --- |
| **Dataset: ACE05-E\({}^{+}\)** | | | | | | |
| EventGraph | — | 70.0 | — | 65.4 | — | — |
| GraphIE | — | **74.8** | — | 59.9 | 91.0 | **65.4** |
| OneIE | 75.6 | 72.8 | 57.3 | 54.8 | 89.6 | 58.6 |
| FourIE | **76.7** | 73.3 | 59.5 | 57.5 | **91.1** | 63.6 |
| JSEEGraph | 74.2 | 71.3 | **70.7** | **68.4** | 90.7 | 62.6 |
| JSEEGraph (events only) | 74.8 | 71.7 | 67.5 | 64.6 | — | — |
| **Dataset: ACE05-E\({}^{++}\)** | | | | | | |
| OneIE | — | 67.7 | — | 53.2 | **89.9** | 62.9 |
| FourIE | — | 70.3 | — | 56.1 | 89.1 | **65.9** |
| JSEEGraph | 71.9 | 69.6 | **74.3** | **70.1** | 87.4 | 63.3 |

Table 4: Results on ACE05 (F1-score, %).
### Error analysis
The experimental results show that JSEEGraph has an advantage when it comes to the task of argument extraction. In a manual error analysis we therefore focus on the errors of event trigger extraction. After a manual inspection of our model's predictions on the test data, we find that the errors fall into the following main categories.
**Over-predict non-event sentences.** Our system tends to be greedy in extracting event mentions, and wrongly classifies some tokens as event triggers even though the sentence does not contain event annotation. For instance, the sentence _"Anne-Marie will get the couple's 19-room home in New York state"_ (from ACE05) does not have annotated events, but our system extracts _"get"_ as trigger for a Transfer-Ownership event; in this case, however, one could argue that the Transfer-Ownership event should be annotated.
**Under-predict multi-event sentences.** When a sentence contains multiple event mentions, JSEEGraph sometimes fails to extract all of the event triggers. For example, the sentence _"Kelly, the US assistant secretary for East Asia and Pacific Affairs, arrived in Seoul from Beijing Friday to brief Yoon, the foreign minister"_ from ACE05 contains a Transport event triggered by _"arrived"_ and a Meet event triggered by _"brief"_, but our system fails to extract the trigger for the Meet event; in this example, it requires a certain level of knowledge to be able to identify _"brief"_ as an event trigger, which is beyond the capacity of our model.
**Wrong event types.** In some cases, even though our model successfully identifies an event trigger, it assigns a wrong event type. Some event types can easily be confused with each other. In this sentence from Rich ERE, _"The University of Arkansas campus was buzzing Friday after a student hurt himself when a gun went off in his backpack in the KUAF building"_, an Injure event is evoked by _"hurt"_, but our model assigns an event type of Attack. Clearly, Injure and Attack events are one typical case of event types that can be easily confused.
**Context beyond sentence.** This error applies specifically to Rich ERE: even though the annotation of events is on the sentence level, annotators were instructed to take into account the context of the whole article. Our model fails completely when a trigger requires context beyond the sentence. For instance, the sentence _"If Mickey can do it, so can we!"_ is taken from an article describing an on-going demonstration in Disney Land, and _"it"_ is the trigger for a Demonstrate event; without the context, our model fails to identify the trigger. These are cases which would require information about event coreference.
## 7 Conclusion
In this paper, we have proposed JSEEGraph, a graph-based approach for joint structured event extraction, alongside entity and relation extraction. We experiment on two benchmark datasets, ACE05 and Rich ERE, covering the three languages English, Chinese, and Spanish. We find that our proposed JSEEGraph is robust in solving nested event structures, and is especially strong in event argument extraction. We further demonstrate that it is beneficial to jointly perform EE with other IE tasks, and that event argument extraction especially gains from entity extraction.
**Limitations.** Our work has two main limitations. Firstly, we do not compare our system to previous works on the Rich ERE dataset. This is mainly due to the fact that most work uses the Light ERE Song et al. (2015) dataset. We were unfortunately not able to get access to this version of the data, which is why no experiments were carried out on it.
Footnote 9: Here we refer to the datasets with LDC codes: _LDC2015E29_, _LDC2015E68_, and _LDC2015E78_ for English ERE, and _LDC2015E107_ for the Spanish ERE.
Secondly, we only experiment with one language model, the multilingual model XLM-R. As our model is language agnostic, and we aimed to test its performance on datasets in different languages, the choice of a multilingual model was obvious. XLM-R has been chosen based on its good performance in other tasks, and to make our work comparable to previous work You et al. (2022). However, another approach would be to test our model with a selection of language-specific language models.
## Acknowledgements
This work was supported by industry partners and the Research Council of Norway with funding to _MediaFutures: Research Centre for Responsible_
Media Technology and Innovation_, through the centers for Research-based Innovation scheme, project number 309339.
|
2310.18937
|
The Utility of "Even if..." Semifactual Explanation to Optimise Positive
Outcomes
|
When users receive either a positive or negative outcome from an automated
system, Explainable AI (XAI) has almost exclusively focused on how to mutate
negative outcomes into positive ones by crossing a decision boundary using
counterfactuals (e.g., \textit{"If you earn 2k more, we will accept your loan
application"}). Here, we instead focus on \textit{positive} outcomes, and take
the novel step of using XAI to optimise them (e.g., \textit{"Even if you wish
to half your down-payment, we will still accept your loan application"}).
Explanations such as these that employ "even if..." reasoning, and do not cross
a decision boundary, are known as semifactuals. To instantiate semifactuals in
this context, we introduce the concept of \textit{Gain} (i.e., how much a user
stands to benefit from the explanation), and consider the first causal
formalisation of semifactuals. Tests on benchmark datasets show our algorithms
are better at maximising gain compared to prior work, and that causality is
important in the process. Most importantly however, a user study supports our
main hypothesis by showing people find semifactual explanations more useful
than counterfactuals when they receive the positive outcome of a loan
acceptance.
|
Eoin M. Kenny, Weipeng Huang
|
2023-10-29T08:52:23Z
|
http://arxiv.org/abs/2310.18937v1
|
# The Utility of "Even if..." Semifactual Explanation to Optimise Positive Outcomes+
###### Abstract
When users receive either a positive or negative outcome from an automated system, Explainable AI (XAI) has almost exclusively focused on how to mutate negative outcomes into positive ones by crossing a decision boundary using counterfactuals (e.g., _"If you earn 2k more, we will accept your loan application"_). Here, we instead focus on _positive_ outcomes, and take the novel step of using XAI to optimise them (e.g., _"Even if you wish to half your down-payment, we will still accept your loan application"_). Explanations such as these that employ "even if..." reasoning, and do not cross a decision boundary, are known as semifactuals. To instantiate semifactuals in this context, we introduce the concept of _Gain_ (i.e., how much a user stands to benefit from the explanation), and consider the first causal formalisation of semifactuals. Tests on benchmark datasets show our algorithms are better at maximising gain compared to prior work, and that causality is important in the process. Most importantly however, a user study supports our main hypothesis by showing people find semifactual explanations more useful than counterfactuals when they receive the positive outcome of a loan acceptance.
## 1 Introduction
Explainable AI (XAI) is broadly categorised into factual [4; 27; 30] and contrastive explanations [29; 38]. Within contrastive XAI, despite being neglected in comparison to counterfactuals, semifactuals are a major, fundamental part of human explanation, and have long been studied in psychology [9; 35; 49], philosophy [6; 8; 20], and lately computer science [1; 29; 34; 52; 62; 3]. They take the form of _"Even if \(x\) happened, \(y\) would still be the outcome"_. Such reasoning has many potential uses as demonstrated by these prior works, but here we are focused on how semifactuals can help optimise positive outcomes for users, which (to the best of our knowledge) remains completely unexplored.
Our definition of counterfactuals is in line with Wachter et al. [55], where a test instance classified as \(c\) must be mutated to cross a decision boundary into class \(c^{\prime}\). Likewise, as established in the literature [1; 29], we define a semifactual as an instance classified as \(c\), which must be modified in such a way as to _not_ cross a decision boundary (and hence remain class \(c\)) [29]. In recourse [25; 50], "negative outcomes" (e.g., a loan rejection) are generally mutated to produce "positive outcomes" (e.g., a loan acceptance) for users using counterfactuals. In our setting, we are assuming there was initially a positive outcome, and we are trying to mutate features to produce an even better situation for users, and in doing so _not_ cross the boundary into the negative outcome (i.e., using semifactuals).
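As a toy illustration of this definition (and not the algorithms proposed later in the paper), the sketch below moves a single mutable feature as far as an allowed range permits while the classifier's prediction stays in class \(c\); the classifier interface and the search grid are assumptions.

```python
# Toy illustration of a semifactual: change one mutable feature as much as possible
# without crossing the decision boundary. Didactic sketch only, under assumed interfaces.
import numpy as np

def single_feature_semifactual(clf, x, feature, grid):
    """clf.predict accepts 2-D arrays; `grid` is the allowed range of values to try."""
    original_class = clf.predict(x.reshape(1, -1))[0]
    best, best_dist = None, -1.0
    for value in grid:
        candidate = x.copy()
        candidate[feature] = value
        if clf.predict(candidate.reshape(1, -1))[0] == original_class:
            dist = abs(value - x[feature])
            if dist > best_dist:
                best, best_dist = candidate, dist
    # "Even if this feature changes by best_dist, the outcome stays the same."
    return best
```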
Historically, counterfactuals have had obvious applications in computer science, such as explaining how to have a bank loan accepted rather than rejected, but applications for semifactuals are less
clear. As such, the usage of semifactuals has often inadvertently defaulted to copying counterfactual research by also explaining negative outcomes (e.g., _"Even if you double your savings, your loan will still be rejected"_ [3, 29, 48]). However, such an application for semifactual explanation perhaps has two main issues. Firstly, it is debatable if these explanations convey useful information [1], whilst a counterfactual explaining how to cross a decision boundary and have a loan accepted has obvious utility [24]. Secondly, such explanations make the user's situation seem helpless [35], in that they cannot possibly have their loan accepted, which raises ethical concerns [1]. However, our proposed framework can be used to not only overcome both of these issues, but actively _contribute_ to fairness.
Firstly, to try to offer useful information for users, we flip the usual recourse problem and consider the user starting from a positive (rather than a negative) outcome. In this setting, consider a user that has had their loan accepted, but might prefer to make a smaller down-payment on a loan application. In this situation, our framework could present an explanation such as _"Even if you half your down-payment, your loan will still be accepted"_, which seems to be more useful than explaining negative outcomes (see Section 6). Secondly, because we are starting from a positive outcome, there is no danger of manipulating people into accepting a negative outcome, which guarantees fairness in this regard. Now, with regards to optimising fairness even further, note that banks are not motivated to share such explanations even though they may help people, because (for example) larger down-payments are associated with lower risk on their behalf [7]. So, the usage of semifactuals in this application has clear potential to actively _encourage_ fairness and transparency. As an aside, it is worth noting that although the focus of this paper is on financial applications, this research has broad impact on any domain for which the optimisation of a positive outcome is beneficial. For instance, in medical applications, our framework could present explanations of the form _"Even if you half your dose of drug \(x\), you will still be at a low risk for disease \(y\)"_. This is once again important for optimising fairness because people are frequently over-prescribed medicine with adverse side-effects [47], but due to profit Big Pharma has no incentive to actively encourage this type of transparency. Similar usage of semifactuals has also been proposed in smart agriculture to combat climate change [29].
Our main contributions are: (1) the first explicit exploration of how to optimise positive outcomes with XAI, (2) the problem formulation for this, which involved augmenting current semifactual research with the concept of _Gain_ (see Section 3.3), and (3) the first user test in the XAI literature for semifactuals, showing a clear application in which users find them more useful than counterfactuals.
## 2 Literature Review
When using contrastive explanation to explain loan acceptance decisions, to the best of our knowledge, this has only been explored by McGrath et al. [36]. Specifically, they suggest _positive counterfactuals_, which show "by how much" a user had their loan accepted to help inform them when making future financial decisions. While this is interesting information, we show that users find semifactual explanations more useful in loan acceptance situations than positive counterfactuals (see Section 6).
Semifactual explanation is growing in popularity [3]. Kenny & Keane [29] first explored the idea, but focused only on images.2 Artelt & Hammer [1] used diverse semifactuals to explain why an AI system refuses to make predictions due to having an unacceptably low certainty, but ignore how to explain either positive or negative outcomes. Lu et al. [34] explain spurious patterns with semifactuals using a human-in-the-loop framework in NLP. Zhao et al. [62] proposed a class-to-class variational encoder (C2C-VAR) with low computational cost that can generate semifactual images. Vats et al. [52] used generative models to produce semifactual image explanations for classifications of ulcers. Lastly, for model exploration, Xie et al. [59] sampled semifactual images with a joint Gaussian mixture model, and Dandl et al. [15] proposed deriving semifactual explanations from interpretable region descriptors. In contrast to all these approaches, we are showcasing how semifactuals can be used to optimise positive outcomes for users (notably in causal settings).
Footnote 2: Note there is other work on _a-fortiori_ explanations which have similar computational techniques to semifactuals [14, 19, 45], they are justifications of the form _“Because \(x\) it true, \(y\) must also be true”_.
From a user perspective, many have discussed the urgent need for comparative tests with semifactuals [29, 37, 40, 48, 56], with Aryal & Keane [3] pointing to the _'paucity of user studies'_ in the area. However, the only such tests we are aware of are in the psychological literature from over two decades ago [35]. Taking up this challenge, we conduct the first such test directly comparing semifactuals to counterfactuals in the XAI literature (see Section 6).
Our research is related to algorithmic recourse [50] in that we are trying to ensure users are treated fairly by automated systems [24]. In this area, Mothilal et al. [39] explored counterfactual diversity, arguing that we should be offering users several explanations. In addition, counterfactual robustness has been examined [18], which proposes that generated explanations should be robust to distributional shifts. Lastly, causality has been argued as essential to providing plausible recourse [26]. We see these three facets as being important to our problem setting, and instantiate them in our framework. There are other areas in recourse such as sequential decision making [16; 42], fairness [54], and privacy [43], but we leave their exploration within semifactual explanation for future work.
As an aside, the literature on sufficiency could be conflated with semifactual explanation, as it describes a set of "sufficient" features for a prediction such that, even if the other features mutate, the outcome is mostly unaffected [17; 46; 57]. However, these techniques offer no insight into how to generate a meaningful semifactual. More importantly though, if the sufficient features are the only actionable ones, then by definition we can't modify them to create a semifactual.
## 3 Semifactual Framework
In this section, we describe the basic definitions and assumptions for our semifactual framework to optimise positive outcomes for users, before formalising it under the concept of _Gain_ (i.e., how much a user stands to benefit from the explanation) in a causal setting, neither of which has been considered before. As an aside, we also show how the established concepts of plausibility, robustness, and diversity can be made to fit into the objective to offer better explanations. Finally, we reflect on the theoretical properties of the framework.
### Definitions
Let us denote an individual \(\mathbf{x}\in\mathcal{X}\) with \(k\)_mutable_ features \(\mathbf{X}=\{X_{1},\ldots,X_{k}\}\). Given the individual \(\mathbf{x}\), a set of actions can be applied to \(\mathbf{x}\), where each action \(a(\mathbf{x})\) is also a \(k\)-dim vector. As in prior work [25], we use \(a(\mathbf{x})\) and \(a\) interchangeably, since the individual \(\mathbf{x}\) will always be fixed. We explicitly exclude features that are either _immutable_ or _non-actionable_. Adopting Pearl's \(do()\) operator [44], an action can be defined as \(a(\mathbf{x})=do(\mathbf{X}\coloneqq\boldsymbol{\theta})\), or simply \(do(\boldsymbol{\theta})\), to force a hard
Figure 1: Semifactual Explanation to Optimise Positive Outcomes: An individual \(\mathbf{x}\) has their loan accepted, but there are several semifactual explanations which can help optimise their outcome. Our algorithm produces a set of semifactual explanations which _maximise_ the distance between \(\mathbf{x}\) and the final explanation \(\mathbb{SF}(\mathbf{x},a;\mathcal{M})\). This allows the largest _Gain_ to be achieved so that the user gets the maximum benefit. In contrast, counterfactual algorithms are not suitable because they are designed to target the shortest path across a decision boundary. In addition, the semifactuals are robust to distributional shifts by constraining an \(\epsilon\)-neighborhood between them and the decision boundary. Note \(\mathcal{M}\) is the Structural Causal Model (SCM), see Section 3.
intervention that replaces \(\mathbf{x}\) by \(\mathbf{\theta}\), where \(\mathbf{\theta}\in\mathcal{X}\). It implies that, for each feature, \(X_{i}\coloneqq\theta_{i}\) for the individual \(\mathbf{x}\). If the action \(do(\mathbf{\theta})\) imposes no change, \(\mathbf{x}=\mathbf{\theta}\) holds. We further denote a set of human-constrained actionable ranges \(\mathcal{A}=\{a(\mathbf{x})=do(\mathbf{\theta}):\forall\mathbf{\theta}\in\mathcal{X}\}\). Note that actions act only on mutable features, and we explicitly exclude any action which keeps the individual in the same position.
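As a small illustration of the notation above, the sketch below represents an individual and a hard \(do(\boldsymbol{\theta})\) intervention over the mutable features; the feature names and values are purely hypothetical and not taken from this work.

```python
import numpy as np

# Hypothetical individual with k = 3 mutable features (names are illustrative only).
feature_names = ["hours_per_week", "loan_amount", "num_credit_cards"]
x = np.array([40.0, 200_000.0, 2.0])

def do(theta):
    """A hard intervention do(X := theta): every mutable feature is forced to theta."""
    theta = np.asarray(theta, dtype=float)
    assert theta.shape == x.shape, "an action assigns a value to every mutable feature"
    return theta

def sf_non_causal(x, action):
    """Non-causal semifactual SF(x, a): the individual simply ends up at theta."""
    return action

a = do([30.0, 200_000.0, 2.0])   # e.g. work 10 fewer hours, everything else left unchanged
print(sf_non_causal(x, a))       # -> [3.0e+01 2.0e+05 2.0e+00]
```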
The non-causal semifactual interaction between \(\mathbf{x}\) and \(a(\mathbf{x})\) is defined by \(\mathbb{SF}:\mathcal{X}\times\mathcal{A}\mapsto\mathcal{X}\). That is, the individual \(\mathbf{x}\) taking action \(a(\mathbf{x})\) will lead to another representation \(\mathbf{\theta}\in\mathcal{X}\) representing that person's recourse. Now, a structural semifactual is defined which considers the dependence between the related features [18; 24]. We denote the structural causal model (SCM) by \(\mathcal{M}=(\mathbf{S},\mathbb{P}_{U})\) where \(\mathbf{S}\) are a set of structural equations and \(\mathbb{P}_{U}\) is the distribution over the exogenous variables \(U\in\mathcal{U}\). Consider that in a causal graph, there is a set of causal parents for each feature \(x_{i}\in\mathbf{x}\), denoted by \(\mathrm{Pa}_{i}\). We denote the structural equations as \(\mathbf{S}=\{x_{i}\coloneqq g_{i}(\mathrm{Pa}_{i},U_{i}):i=1,\dots,k\}\) where \(g_{i}(\cdot)\) is a deterministic function that describes the causal relationship for \(x_{i}\), and depends on the exogenous variable \(U_{i}\in\mathcal{U}\) alongside the corresponding parent set \(\mathrm{Pa}_{i}\). Hence, \(\mathbf{S}\) induces a mapping \(\mathbb{S}:\mathcal{U}\mapsto\mathcal{X}^{*}\) and its inverse mapping \(\mathbb{S}^{-1}:\mathcal{X}\mapsto\mathcal{U}\). Let \(f\circ g(x)=f(g(x))\) which can be extended to more functions. Hence, we specify the SCM-processed semifactual by \(\mathbb{SF}(\mathbf{x},do(\mathbf{\theta});\mathcal{M})\) to denote the transition between the states by taking a certain action through an SCM \(\mathcal{M}\), where
\[\mathbf{\theta}^{\prime}=\mathbb{SF}(\mathbf{x},do(\mathbf{\theta});\mathcal{M}) \coloneqq\mathbb{S}\circ\mathbb{S}^{-1}\circ\mathbb{SF}(\mathbf{x},do(\mathbf{ \theta});\mathcal{M})\,. \tag{1}\]
If all features are _independently manipulable_, we have \(\mathbf{\theta}^{\prime}=\mathbf{\theta}=\mathbb{SF}(\mathbf{x},do(\mathbf{\theta}); \mathcal{M})=\mathbb{SF}(\mathbf{x},do(\mathbf{\theta}))\). Therefore, \(\mathbb{SF}(\mathbf{x},do(\mathbf{\theta});\mathcal{M})\) is the more general formulation, which covers the non-causal case. Lastly, we assume a binary model \(h\) that generates the score for users, where \(h:\mathcal{X}\mapsto\{0,1\}\); we simply consider \(1\) to mean a positive outcome (e.g., a loan acceptance) and \(0\) a negative outcome (e.g., a loan rejection). We set a lower threshold \(\psi\) that defines the decision boundary. For the form defined above, \(\psi=0.5\) is a reasonable threshold that fits all situations well.
### Framework
We define our semifactual framework as one centred on gain (\(G\)), weighted by plausibility (\(P\)), with regularisation in the form of diversity (\(R\)) and hard constraints in the form of robustness (\(H\)), the latter indexed by \(j\). All of the components are parameterised with \(\mathbf{x}\) and a subset of suggestions \(\{a_{1},\dots,a_{m}\}\). Letting \(f(\cdot)\) be a function composed of gain and some weighting (i.e., plausibility for us), the causal semifactual framework is defined as
\[\max_{a_{1},\dots,a_{m}} \frac{1}{m}\sum_{i=1}^{m}f(G(\mathbf{x},a_{i}),P(\mathbf{x},a_{i} ))+\gamma R(\{\mathbf{\theta}_{1},\dots,\mathbf{\theta}_{m}\})\] s.t. \[\mathbf{\theta}_{i}=\mathbb{SF}(\mathbf{x},a_{i};\mathcal{M}),H_{j} (\mathbf{\theta}_{i})\geq 0,\forall i,j \tag{2}\]
where the regularisation and hard constraints can be multiple and indexed with \(i\) and \(j\), respectively. One may define a similar formulation for the non-causal case (see Section 4.2). We defer all details of the components until Section 3.4.
### Optimising Positive Outcomes with _Gain_
For the core of the objective we appeal to the notion of gain. Note that _gain_ is similar to the idea of _cost_ commonly used in recourse [50], but there are three crucial differences. First, we are trying to _maximise_ gain, rather than _minimise_ cost [24]. Second, gain ideally considers the causal dependencies between features, whilst cost typically only considers the user's action(s) [24]. Third, gain is further subdivided into positive and negative polarities. To elaborate on this last point, take the example of a user who has their loan application for buying a new house accepted. In this situation, if they desired to spend more time away from work with family, they would experience _positive gain_ if they could work fewer hours per week and still have their loan accepted (see Figure 1). Conversely, if this person increased the number of hours they worked, they would experience _negative gain_. Notably, positive/negative gain is not necessarily connected to the model's probabilities (see \(a_{1}\) in Figure 1 moving away from the decision boundary). Similar to actionability constraints, which can offer individualised recourse [50], what constitutes positive/negative gain must be manually defined for each individual. As prior work on semifactuals simply maximised the \(L_{2}\) distance between a test instance and an explanatory one to define good explanations [1; 29], we introduce the concept of gain to make semifactuals more meaningful in application.
More formally, we define the gain function by \(G:\mathcal{X}\times\mathcal{A}\to\mathbb{R}\). By denoting \(\mathbf{\theta}=\mathbb{S}\mathbb{F}(\mathbf{x},a;\mathcal{M})\), we decompose the function as follows:
\[G(\mathbf{x},a)\coloneqq\mathcal{P}_{SF}\circ\delta(\mathbf{x},\mathbf{\theta})= \mathcal{P}_{SF}\circ\delta(\mathbf{x},\mathbb{S}\mathbb{F}(\mathbf{x},a; \mathcal{M})) \tag{3}\]
where \(\mathcal{P}(\cdot,\cdot)\) is an oracle function that computes the payoff based on the vectorised difference between \(\mathbf{x}\) and \(\mathbf{\theta}\), i.e., \(\delta:\mathcal{X}\times\mathcal{X}\mapsto\mathbb{R}^{k}\) which is a symmetrical difference function between the two feature representations. The subscript of \(\mathcal{P}_{SF}\) denotes a semifactual. In interpretation, the gain function compares two states, (1) the original feature vector \(\mathbf{x}\), and (2) the SCM-processed end state \(\mathbf{\theta}\) which was led to through \(\mathbf{x}\) taking action \(a\).
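As a concrete illustration of Equation (3), the sketch below composes a hypothetical payoff oracle \(\mathcal{P}_{SF}\) with a vectorised difference \(\delta\); the per-feature polarity vector (which feature changes count as positive gain) and the example values are assumptions made for the illustration, not quantities specified by the framework.

```python
import numpy as np

def delta(x, theta):
    """Vectorised difference between two feature representations."""
    return theta - x

def payoff(diff, polarity):
    """Hypothetical oracle P_SF: weights each feature change by a user-defined polarity
    (+1 if increasing the feature is beneficial for this user, -1 if decreasing it is)."""
    return float(np.dot(polarity, diff))

def gain(x, a, scm_sf, polarity):
    """G(x, a) = P_SF(delta(x, SF(x, a; M))), as in Equation (3)."""
    theta = scm_sf(x, a)                      # SCM-processed end state
    return payoff(delta(x, theta), polarity)

# Usage with an identity SCM (all features independently manipulable):
identity_scm = lambda x, a: a
x = np.array([40.0, 200_000.0])            # (hours/week, loan amount), illustrative values
a = np.array([30.0, 200_000.0])            # work 10 fewer hours
polarity = np.array([-1.0, +1.0])          # fewer hours and a larger loan are both "positive"
print(gain(x, a, identity_scm, polarity))  # -> 10.0
```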
Why is _Gain_ not necessarily equivalent to _Cost_? Formally, to enable the comparison, we write the cost function (denoted by \(C(\mathbf{x},a)\)) as
\[C(\mathbf{x},a)=-\mathcal{P}_{CF}\circ\delta(\mathbf{x},\mathbb{S}\mathbb{F} (\mathbf{x},a)) \tag{4}\]
which builds on the fact that cost solely considers the feature change. Note that \(\mathbb{S}\mathbb{F}\) is equivalent to the notion \(\mathbb{C}\mathbb{F}\) in [25]; what makes our approach different is the consideration of positive outcomes and gain. Our finding is that gain in semifactuals (SFs) is not necessarily equivalent to cost in counterfactuals (CFs), where the equivalence ignores the sign of both quantities, as stated formally below.
**Theorem 3.1**.: _Even if \(\mathcal{P}_{SF}(\cdot,\cdot)\equiv\mathcal{P}_{CF}(\cdot,\cdot)\), gain and cost are not necessarily equivalent ignoring the sign._
Proof.: Note that SCMs are also considered in counterfactual recourse [18, 24, 26]. However, in this prior research SCMs are typically applied for enforcing hard plausibility constraints, not in the computation of a user's cost. In contrast, our gain function takes the SCM-processed semifactual \(\mathbf{\theta}^{\prime}\) as an input. We employ proof by contradiction here. Assume that cost and gain are equivalent ignoring sign so that, without loss of generality,
\[|G(\mathbf{x},a)|=|C(\mathbf{x},a)| \iff|\mathcal{P}\circ\delta(\mathbf{x},\mathbb{S}\mathbb{F}( \mathbf{x},a;\mathcal{M}))|=|\mathcal{P}\circ\delta(\mathbf{x},\mathbb{S} \mathbb{F}(\mathbf{x},a))|\] \[\iff\delta(\mathbf{x},\mathbb{S}\mathbb{F}(\mathbf{x},a; \mathcal{M}))=\delta(\mathbf{x},\mathbb{S}\mathbb{F}(\mathbf{x},a)) \tag{5}\]
holds. However, SCMs can result in possibly more features being changed, since some features could be others' causal parents and those causal children will change their values accordingly. By denoting \(\mathbf{\theta}=\mathbb{S}\mathbb{F}(\mathbf{x},a)\) and \(\mathbf{\theta}^{\prime}=\mathbb{S}\mathbb{F}(\mathbf{x},a;\mathcal{M})\), we consider the general case as follows:
\[|\delta(\mathbf{x},\mathbf{\theta})|-|\delta(\mathbf{x},\mathbf{\theta }^{\prime})|=\sum_{i}|\delta(\mathbf{x},\mathbf{\theta})_{i}|-\sum_{i}|\delta( \mathbf{x},\mathbf{\theta}^{\prime})_{i}|\\ =\sum_{\{i:\theta_{i}=\theta_{i}^{\prime}\}\cup\{i:\theta_{i}\neq \theta_{i}^{\prime}\}}|\delta(\mathbf{x},\mathbf{\theta})_{i}|-|\delta(\mathbf{x},\mathbf{\theta}^{\prime})_{i}|=0+\sum_{\{i:\theta_{i}\neq\theta_{i}^{\prime}\}}| \delta(\mathbf{x},\mathbf{\theta})_{i}|-|\delta(\mathbf{x},\mathbf{\theta}^{\prime})_ {i}|\leq 0\,, \tag{6}\]
which contradicts with Equation (5). Thus, even if the oracle function for calculating the payoff is the same, gain and cost are still not necessarily equivalent. Also, the equality in Equation (6) holds when all features are independently manipulable or the changed features are independently manipulable of the remaining features, so that \(\mathbb{S}\mathbb{F}(\mathbf{x},a)=\mathbb{S}\mathbb{F}(\mathbf{x},a;\mathcal{ M})\). The proof completes here.
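A small numeric illustration of the theorem, using a hypothetical two-feature SCM in which income is a causal child of hours worked; the structural equation and all numbers are assumptions made for the example only.

```python
import numpy as np

x = np.array([40.0, 4000.0])      # (hours/week, monthly income), illustrative values

def sf(x, a):
    """Non-causal semifactual: only the intervened features change."""
    return a

def sf_causal(x, a):
    """SCM-processed semifactual with the hypothetical structural equation
    income := 100 * hours, so income changes whenever hours does."""
    theta = a.copy()
    theta[1] = 100.0 * theta[0]
    return theta

a = np.array([30.0, 4000.0])      # do(hours := 30); income is not part of the action

cost_diff = np.abs(sf(x, a) - x)          # |delta| used by cost: [10, 0]
gain_diff = np.abs(sf_causal(x, a) - x)   # |delta| used by gain: [10, 1000]
print(cost_diff.sum(), gain_diff.sum())   # 10.0 vs 1010.0, not equivalent, as in Equation (6)
```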
### Semifactual Components
Here, we detail how to incorporate the concepts of plausibility, robustness, and diversity into our framework for maximising gain, because they are agreed upon as important in the literature and useful for evaluation. While plausibility and diversity have been explored in semifactual explanation [1, 29], robustness and causality (and indeed an objective balancing all of them together) have not; yet we argue and show that the subtleties of "even if..." thinking are perhaps better captured in a causal setting.
Plausible Gain. We define plausibility here as explanations which are within distribution. For example, an explanation saying a person could earn less and still have their loan accepted should also change their "debt-to-income ratio" feature, or it will lie outside the data manifold. Prior work on semifactuals has only considered Euclidean distance to training data as a heuristic for this [29]; in contrast, we posit (similar to the counterfactual literature [23]) that this is better approached with
SCMs. Hence, we define the plausibility for \(\mathbf{x}\) taking the action \(a\) by \(P(\mathbf{x},a)=\Pr(a=do(\mathbf{\theta})|\mathbf{x})\) where \(\mathbf{x}\) is fixed for an individual and \(\Pr(\cdot)\) is a density function. In our non-causal tests, we use the \(L_{2}\) norm to training data to approximate plausibility (i.e., being in distribution, similar to [29, 32, 51]). However, this issue of plausibility is naturally taken care of in our causal tests thanks to the SCM ensuring plausible feature mutations, so we don't explicitly consider plausibility there going forward.
Robust Gain. Continuing with the example of a person who has a loan accepted to buy a house, the semifactual should sometimes be robust to distribution shifts. For example, if the person uses the semifactual explanation to triple their loan amount (recall Figure 1), they will likely need upwards of six months to locate a new house, during which the semifactual should hold if the person e.g. gets an additional credit card. Hence, we define semifactual robustness such that, while taking action \(a\), any close neighbor of the generated semifactual \(\mathbb{SF}(\mathbf{x},a;\mathcal{M})\) can still receive a positive outcome. The \(\epsilon\)-neighborhood centered on an individual \(\mathbf{x}\) is
\[\mathcal{B}(\mathbf{x})=\{\mathbf{\theta}=\mathbb{SF}(\mathbf{x},a;\mathcal{M}): \forall a\in\mathcal{A},\delta(\mathbf{\theta},\mathbf{x})\leq\epsilon\} \tag{7}\]
which covers all neighbors that can be reached from \(\mathbf{x}\) by taking an _actionable_ feature change \(a\) through the SCM \(\mathcal{M}\). By definition, \(\mathbf{x}\) is also a neighbor of itself since \(\mathbf{x}\in\mathcal{B}(\mathbf{x})\) holds given \(\delta(\mathbf{x},\mathbf{x})=0\). Let us represent \(\mathcal{B}(\mathbb{SF}(\mathbf{x},a;\mathcal{M}))\) by \(\mathcal{B}_{s}(\mathbf{x},a)\) for simplicity. Given the predictive model \(h(\cdot)\) and an individual \(\mathbf{x}\), an action \(a\) is robust for individual \(\mathbf{x}\) if \(h(\mathbf{\theta})>\psi,\forall\mathbf{\theta}\in\mathcal{B}_{s}(\mathbf{x},a)\), which is equivalent to \(\min_{\mathbf{\theta}\in\mathcal{B}_{s}(\mathbf{x},a)}h(\mathbf{\theta})-\psi>0\). For instance, \(\psi=0.5\) works for a binary model case. We hence denote the term related to the robustness by
\[H(\mathbf{x},a)=\min_{\mathbf{\theta}\in\mathcal{B}_{s}(\mathbf{x},a)}h(\mathbf{ \theta})-\psi\,, \tag{8}\]
which will be useful for constructing the final objective.
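The robustness term of Equation (8) can be estimated by Monte Carlo sampling, as sketched below; the uniform box perturbation is a simplified stand-in for the actionable \(\epsilon\)-neighborhood \(\mathcal{B}_{s}(\mathbf{x},a)\), and the classifier used in the usage example is a placeholder rather than a trained model.

```python
import numpy as np

def robustness(theta, h, eps=0.1, n_samples=100, psi=0.5, rng=None):
    """Empirical H(x, a) = min over the sampled neighborhood of h(theta') - psi (Equation (8)).
    Perturbations are drawn inside an eps-box around theta as a simple approximation."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(-eps, eps, size=(n_samples, theta.shape[0]))
    neighbours = np.vstack([theta, theta + noise])   # theta is a neighbor of itself
    return float(np.min(h(neighbours)) - psi)

# Usage with a toy score function standing in for the trained model h(.):
h = lambda pts: 1.0 / (1.0 + np.exp(-pts.sum(axis=1)))
theta = np.array([1.0, 2.0])
print(robustness(theta, h) > 0)   # True when the whole sampled neighborhood stays positive
```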
Diverse Gain. It is generally preferred to offer a number of suggested actions \(\{a_{1},\dots,a_{m}\}\), rather than a single one [60]. Like prior work in counterfactuals, we define diversity as the average pair-wise distance among a set of entities [39, 61]. We reuse the distance function \(\delta\) and define the diversity objective within a set of SFs \(\{\mathbf{\theta}_{i}\}_{i=1}^{m}\subseteq\mathcal{X}^{m}\) as
\[R(\{\mathbf{\theta}_{i}\}_{i=1}^{m})=\begin{cases}\frac{2}{m(m-1)}\sum_{i=1}^{m} \sum_{j>i}^{m}L_{2}\circ\delta(\mathbf{\theta}_{i},\mathbf{\theta}_{j})&m>1\\ 0&m=1\end{cases} \tag{9}\]
which represents a pairwise mean distance among the set of data points, based on the \(L_{2}\) norm. One may accommodate \(m=1\) for the case when only a single semifactual is desired.
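A direct implementation of the pairwise mean distance in Equation (9) is sketched below.

```python
import numpy as np

def diversity(thetas):
    """R({theta_i}): mean pairwise L2 distance among m semifactuals, Equation (9)."""
    m = len(thetas)
    if m <= 1:
        return 0.0
    total = sum(np.linalg.norm(thetas[i] - thetas[j])
                for i in range(m) for j in range(i + 1, m))
    return 2.0 * total / (m * (m - 1))

print(diversity([np.array([0.0, 0.0]), np.array([3.0, 4.0]), np.array([0.0, 0.0])]))
# -> (5 + 0 + 5) * 2 / (3 * 2) = 10/3
```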
#### 3.4.1 Semifactual Objective
The final objective may be constructed as follows.
**Definition 3.2** (Semifactual Objective).: We use simple multiplication as the composition function \(f(\cdot)\). Considering gain, plausibility, robustness, and diversity, the semifactual objective function is:
\[\max \frac{1}{m}\sum_{i=1}^{m}P(\mathbf{x},a_{i})G(\mathbf{x},a_{i})+ \gamma R\left(\{\mathbb{SF}(\mathbf{x},a_{i};\mathcal{M})\}_{i=1}^{m}\right)\] (10) s.t. \[\forall i=1,\dots,m:a_{i}\in\mathcal{A},\ H(\mathbf{x},a_{i})\geq 0\,.\]
In optimisation [10], an adversarial interpretation from the perspective of a two-player zero sum game can further simplify Equation (10) to
\[\mathcal{J}\coloneqq\min_{\lambda_{1},\dots,\lambda_{m}\geq 0}\ \max_{a_{1},\dots,a_{m}\in\mathcal{A}}\frac{1}{m}\sum_{i=1}^{m}P(\mathbf{x},a _{i})G(\mathbf{x},a_{i})+\lambda_{i}H(\mathbf{x},a_{i})+\gamma R\left(\{ \mathbb{SF}(\mathbf{x},a_{i};\mathcal{M})\}_{i=1}^{m}\right)\,, \tag{11}\]
where the \(\lambda_{i}\geq 0\) are the Lagrange multipliers associated with the robustness constraints \(H(\mathbf{x},a_{i})\). The primal player tries to maximise the plausibility-weighted gain and diversity with respect to the actions \(a\), whilst the dual player minimises with respect to the set of \(\lambda\).
Since there are \(m\) suggestions, there are \(m\) robustness constraints. Observing the objective, robustness is a hard constraint, whilst diversity can be regarded as regularisation. \(P\) can be seen as a scaling factor for \(G\) which helps to guarantee that high expected gain is only possible alongside high plausibility; simply adding them would miss this property.
### Properties of the Framework
Effective Solution Space. We discuss the set of meaningful solutions here; the result validates the re-formulation [i.e., Equation (11)] of the semifactual framework [i.e., Equation (10)]. First, we state the lemma.
**Lemma 3.3**.: _Assume that the limit of the gain function and diversity term are finite. Also, assume that \(\mathcal{A}^{+}\coloneqq\{a\in\mathcal{A}:G(\mathbf{x},a)\geq 0\}\) is non-empty for an individual \(\mathbf{x}\). The semifactual objective \(\mathcal{J}\geq 0\) when \(\forall i=1,\ldots,m,a_{i}\in\mathcal{A}^{+}:H(\mathbf{x},a_{i})\geq 0\), otherwise \(\mathcal{J}=-\infty\)._
See Section A.1 for the proof. In summary, the set of actions able to provide a positive payoff — the effective solution space for \(\mathbf{x}\) — is \(\{a\in\mathcal{A}:H(\mathbf{x},a)\geq 0,\,G(\mathbf{x},a)>0\}\). Hence, repeated suggestions will be produced when the number of actions in this space is smaller than the required \(m\); otherwise, the solution will provide more versatile options. If this solution space is empty, there are no suggestions that achieve an effective semifactual (i.e., one with positive gain). However, similar situations exist for counterfactuals, which can also be impossible to generate under a similar set of actionable constraints.
## 4 Implementation Details
We now introduce our methods to solve Equation (11), henceforth called Semifactual-recourse GENeration (S-GEN), for both causal and non-causal domains. In the following paragraphs, we use \(\hat{G}\) to denote an empirical approximation of \(G\), and likewise for \(P\), \(H\), \(R\), and \(\mathcal{J}\).
### Causal Case
Assuming the presence of a differentiable classifier \(h(\cdot)\) and SCM \(\mathcal{M}\) (recall the latter guarantees plausibility), let \(\Omega_{i}(\mathbf{x})=\{\Pr(\boldsymbol{\theta}):\boldsymbol{\theta}\in \mathcal{B}_{s}(\mathbf{x},a_{i})\}\) be the probability distribution over the \(\epsilon\)-neighborhood of \(\mathbb{SF}(\mathbf{x},a_{i};\mathcal{M})\). Also, let \(\mathbf{B}_{i}\) represent a finite subset of \(\mathcal{B}_{s}(\mathbf{x},a_{i})\) sampled according to \(\Omega_{i}(\mathbf{x})\). Our objective is:
\[\max_{a_{1},\ldots,a_{m}}\;\min_{\lambda_{1},\ldots,\lambda_{m}}\quad\frac{1}{m}\sum_{i=1}^{m}\Big[-\lambda_{i}\mathcal{L}\big(h(\mathbb{SF}(\mathbf{x},a_{i};\mathcal{M})),h(\mathbf{x})\big)-\frac{\lambda_{i}}{|\mathbf{B}_{i}|}\sum_{\boldsymbol{\theta}_{i}\in\mathbf{B}_{i}}\mathcal{L}\big(h(\boldsymbol{\theta}_{i}),h(\mathbf{x})\big)+\hat{P}(\mathbf{x},a_{i})\hat{G}(\mathbf{x},a_{i})\Big]+\gamma\hat{R}(\{\mathbb{SF}(\mathbf{x},a_{i};\mathcal{M})\}_{i=1}^{m})\]
\[\text{s.t.}\quad\forall i=1,\ldots,m:a_{i}\in\mathcal{A},\ \lambda_{i}>0 \tag{12}\]
where \(\mathcal{L}\) is the binary cross entropy loss. For robustness, we used Monte Carlo (MC) sampling within an \(\epsilon\)-robust hypersphere, and if either the instance or the sampling returns a _negative outcome_ under \(h(\cdot)\), we use the prior optimisation step as the solution. For diversity, \(m\) is set to the number of actionable feature sets, and a solution is obtained for each. We utilise the causal recourse approach of Karimi et al. [26] for solving the maximin. The actionable bounds are clipped at each iteration, and \(\lambda\) is iteratively decreased to put more emphasis on gain over time (see Algorithm 2).
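The sketch below illustrates one possible primal update for a single action under the objective in Equation (12): finite-difference ascent on the plausibility-weighted gain minus the \(\lambda\)-weighted cross-entropy term, followed by clipping to the actionable bounds. It is a simplified sketch under assumptions, not the exact procedure of Algorithm 2; the SCM mapping, classifier, payoff function, bounds, and the schedule by which \(\lambda\) is decreased are all placeholders.

```python
import numpy as np

def sgen_causal_step(x, a, scm_sf, h, payoff, lam, lr=0.05, bounds=(-1.0, 1.0), eps=1e-3):
    """One illustrative update of an action a: ascend payoff(x, SF(x,a;M)) - lam * BCE(h(SF), 1)
    by finite differences, then clip a to the actionable bounds."""
    def objective(a_):
        theta = scm_sf(x, a_)                           # SCM-processed semifactual
        bce = -np.log(np.clip(h(theta), 1e-6, 1.0))     # target is the positive class, h(x) = 1
        return payoff(x, theta) - lam * bce
    grad = np.array([(objective(a + eps * e) - objective(a - eps * e)) / (2 * eps)
                     for e in np.eye(a.size)])
    return np.clip(a + lr * grad, *bounds)

# Toy usage (every function below is an illustrative stand-in):
scm_sf = lambda x, a: x + a
h = lambda t: 1.0 / (1.0 + np.exp(-t.sum()))
payoff = lambda x, t: float((t - x).sum())
a = sgen_causal_step(np.zeros(2), np.zeros(2), scm_sf, h, payoff, lam=1.0)
```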
### Non-Causal Case
For the non-causal case, we use a genetic algorithm [53; 58] which only assumes a binary predictive model \(h(\cdot)\). This approach follows the standard design for genetic algorithms, with some minor alterations specifically for semifactual generation; see Appendix D for the pseudocode. Next, we present the fitness function which optimises our objective.
#### 4.2.1 Fitness Function
For gain, the average distance between an individual \(\mathbf{x}\) and each semifactual \(\mathbb{SF}(\mathbf{x},a)\) is measured as \(\hat{G}(\mathbf{x},a)=\|\mathbb{SF}(\mathbf{x},a)-\mathbf{x}\|_{2}\). For robustness, we relax it to two constraints: \(H_{p}\) is the probabilistic robustness for the neighbor points where the generated semifactuals for a query are randomly perturbed using MC simulation to make sure the surrounding neighborhood is robust, and \(H_{a}\) the absolute robustness for the individual \(\mathbf{x}\) (more detail in Section A.2). For the first constraint, a score of \(\hat{H}_{p}(\mathbf{x},a)=\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}\left\{h( \mathbf{x})=h(\boldsymbol{\theta}_{i})\right\}\) where \(\boldsymbol{\theta}\sim\Pr(\mathcal{B}_{s}(\mathbf{x},a))\), is returned. For
the second constraint, a score of \(\hat{H}_{a}(\mathbf{x},a)=\mathbbm{1}\{h(\mathbf{x})=h\circ\mathbb{SF}(\mathbf{x},a)\}\) is returned. Hence, the solution is rewarded for (1) the neighborhood samples, and (2) the semifactuals themselves being classified as \(h(\mathbf{x})\). For plausibility, we follow prior work and directly use the training data [29]. Specifically, considering the training data set \(\mathcal{D}\), we define the notion of plausibility using the distance of each generated semifactual to the nearest training data point. As the term must be maximised, we use a function which is monotonically decreasing with respect to this distance, \(P(\mathbf{x},a)\approx\hat{P}_{\mathcal{D}}(\mathbf{x},a)=\exp\{1/(\min_{\boldsymbol{\theta}\in\mathcal{D}}\|\mathbb{SF}(\mathbf{x},a)-\boldsymbol{\theta}\|_{2}^{2}+\gamma_{p})\}\), where \(\gamma_{p}\) accounts for the case when a perfect match to the semifactual exists in the training data (and the division would otherwise be undefined), and \(\hat{P}_{\mathcal{D}}(\mathbf{x},a)\) is an empirical approximation (based on \(\mathcal{D}\)) of the plausibility. Lastly, for diversity [39; 61], we take the mean distance between all \(m\) generated semifactuals, \(\hat{R}(\{\mathbb{SF}(\mathbf{x},a_{i})\}_{i=1}^{m})\), which precisely follows Equation (9). This objective collapses to 0 when \(m=1\).
Certain objectives need to be weighted individually based on the problem. For example, explanations which can be acted upon immediately perhaps don't need robustness. Notably, the multiplier \(\lambda\) in Equation (11) is split into \(\lambda_{p}\) and \(\lambda_{a}\), for \(H_{p}\) and \(H_{a}\) respectively. In this work, we treat them as hyperparameters. They are used alongside \(\gamma\) to balance the objectives; since \(\lambda\) and \(\gamma\) are selected as hyperparameters, the min operator over them is dropped. Finally, the objective (fitness) function is defined as:
\[\max_{a_{1},\ldots,a_{m}\in\mathcal{A}^{+}}\frac{1}{m}\sum_{i=1}^{m}\hat{P}_{\mathcal{D}}(\mathbf{x},a_{i})\hat{G}(\mathbf{x},a_{i})+\lambda_{p}\hat{H}_{p}(\mathbf{x},a_{i})+\lambda_{a}\hat{H}_{a}(\mathbf{x},a_{i})+\gamma\hat{R}(\{\mathbb{SF}(\mathbf{x},a_{i})\}_{i=1}^{m})\,.\]
We selected the hyperparameters via a grid search (see Appendix C). Crucially, we also weight the fitness function output by \(\hat{H}_{p}(\mathbf{x},a)\) to encourage solutions with more semifactuals (see Algorithm 1).
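Putting the non-causal pieces together, a candidate set of actions can be scored as sketched below. The quantities mirror the terms defined above (distance gain, distance-to-training-data plausibility, MC and absolute robustness, pairwise diversity), but the additive form of the actions, the assumed vectorised classifier interface, and the default weights are illustrative simplifications rather than the exact fitness used in Algorithm 1.

```python
import numpy as np

def fitness(x, actions, h, train_data, lam_p=1.0, lam_a=1.0, gamma=1.0,
            gamma_p=1e-3, eps=0.1, n_mc=100, rng=None):
    """Empirical fitness for the non-causal case. h is assumed to map a 2-D array of
    feature rows to an array of class labels; actions are treated as additive offsets."""
    rng = np.random.default_rng() if rng is None else rng
    thetas = [x + a for a in actions]                 # stand-in for SF(x, a)
    label_x = h(x[None, :])[0]
    score = 0.0
    for theta in thetas:
        gain = np.linalg.norm(theta - x)                                   # G-hat
        nearest = min(np.linalg.norm(theta - d) for d in train_data)
        plaus = np.exp(1.0 / (nearest ** 2 + gamma_p))                     # P-hat_D
        noise = rng.uniform(-eps, eps, size=(n_mc, x.size))
        h_p = float(np.mean(h(theta + noise) == label_x))                  # probabilistic robustness
        h_a = float(h(theta[None, :])[0] == label_x)                       # absolute robustness
        score += plaus * gain + lam_p * h_p + lam_a * h_a
    m = len(thetas)
    div = 0.0 if m == 1 else 2.0 / (m * (m - 1)) * sum(
        np.linalg.norm(thetas[i] - thetas[j]) for i in range(m) for j in range(i + 1, m))
    return score / m + gamma * div
```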
## 5 Experiments & Results
Here we test S-GEN in both causal and non-causal settings. We show the effectiveness of our method in optimising a user's positive outcome compared to baselines, and we open-source our code (see Appendix E). The actionability constraints are detailed in Appendix B. Baselines were modified so that they could be compared appropriately; most importantly, we stopped counterfactual techniques before they crossed a decision boundary (thus generating semifactuals) and modified semifactual techniques to work on tabular data. Appendix G details the peripheral modifications.
In the non-causal setting, we consider three datasets: Loan Application [33], German Credit [21], and BCSC [11]. All categorical variables are one-hot encoded. Three models were used (a decision tree, logistic regression, and naive Bayes), each with 30 random test data point explanation samples obtained by varying the random seed. Note that because the range varied on each dataset, the results were normalised and then averaged, but Appendix F details each individual dataset for completeness. For baselines, we modify three techniques: DiCE by Mothilal et al. [39] (henceforth DiCE*), PIECE by Kenny & Keane [29] (henceforth PIECE*), and Diverse Semifactual Explanations of Reject by Artelt & Hammer [1] (henceforth DSER*). Plausibility is measured as the distance between a generated semifactual(s) and the nearest training example; thus, the smaller the better. Robustness is measured by MC sampling \(n=100\) single feature perturbations of each semifactual \(\boldsymbol{\theta}_{i}\), predicting their class, and returning a float between 0-1 of the success rate as described in Section 4.2.1.
In the causal setting, the Adult [31] and COMPAS [5] datasets are considered. The SCMs from Nabi & Shpitser [41] were used, with the structural equations from Dominguez et al. [18]. All categorical features are treated as real-valued. We use the pre-trained MLP classifiers from Dominguez et al. [18] and take 30 averaged samples from 5 random seeds. As baselines we modify the techniques of Karimi et al. [26] [henceforth Karimi et al. (2021)*] and Dominguez et al. [18] [henceforth Dominguez et al. (2022)*]; the latter optimises with robustness in mind. We optimise the relevant techniques to be robust in an \(\epsilon=0.1\) hypersphere, and the C&W adversarial attack by Carlini & Wagner [12] measures robustness by checking whether the nearest adversarial attack lies outside this radius.
For all tests, the main metric of concern is gain, that is, the mean distance between a query and its generated semifactual(s); the larger this number, the better. Diversity is also measured for all tests as the mean distance between all \(m\) generated semifactuals for an individual \(\mathbf{x}\); again, the higher the number, the better. To be in line with prior art, the \(L_{2}\) norm is used in non-causal tests [1], and the \(L_{1}\) norm in causal tests [18]. Note that for causal tests the SCM guarantees plausibility, so this metric is not reported.
### Non-Causal Results
Our purpose here is to show that current methods are insufficient to meet the basic requirements for semifactual explanation discussed in Section 3.4. Specifically, a technique needs to optimise gain, while remaining plausible, robust, and offering diverse explanations.
Observing the average normalised results across all datasets (note that robustness was not normalised since it is already in the 0-1 range), Figure 2 shows that S-GEN performed the best on all metrics for all values of \(m\) (1-10). The results demonstrate that traditional counterfactual approaches (DiCE*) are not suitable for achieving optimal gain, due to their focus on minimising cost. Moreover, methods built specifically for semifactual generation (i.e., DSER* & PIECE*) that _do_ actually maximise gain somewhat still fail to match the results of S-GEN. This shows that S-GEN is superior to existing semifactual methods (and popular counterfactual approaches appropriately modified) for maximising a user's gain in positive outcomes. Moreover, it does so while maintaining superior plausibility, robustness, and diversity in all tests.
### Causal Results
We evaluate our algorithm in a causal setting where the SCMs and structural equations are known. The primary purpose of this test is to demonstrate that the semantics of semifactual "even if" thinking is better captured in a causal setting due to dependencies being taken into account when calculating a person's gain. With regard to diversity, we fix \(m\) to the maximum number of feature sets available from the actionable features (so only one \(m\) value is tested).
Figures 3(a) and 3(b) show the initial gain achieved by a person after taking a certain action (i.e., the _Action Gain_), and how this gain transforms after considering the causal relationship between features (i.e., the _Causal Gain_). Firstly, the total gain achieved by S-GEN is much larger than that of the baselines in both datasets, consistent with our non-causal tests. More importantly however, the change in gain a person achieves after considering the causal relations on the Adult dataset is significantly higher in both significance testing and effect size (\(0.055\pm 0.001\) v. \(0.063\pm 0.001\); t-test \(p<0.02\); Cohen's \(d=2.24\)), showing it is beneficial to consider causality when calculating a person's gain. The diversity results also put S-GEN first (S-GEN \(=0.84\pm 0.09\) v. Karimi \(=0.43\pm 0.03\) v. Dominguez \(=0.34\pm 0.03\)). In robustness, both S-GEN and Dominguez et al. (2022)* did reasonably well (S-GEN \(=87\%\) success v. Dominguez \(=54\%\) success), but Karimi et al. (2021)* did not (\(7.2\%\) success), likely because the latter was not designed for this.
## 6 User Evaluation
The primary motivation behind this work is the hypothesis that semifactual explanation would be preferred by users over counterfactuals in positive outcome settings. To test this assumption, we design the first user test in XAI directly comparing the two. Specifically, we show users three materials in which a person has a bank loan accepted, and three in which they don't. Users were then shown both explanation types for each material, and asked to rate on a scale from 1-5 how useful each were. So, the study was a within-subjects design, and the condition was the explanation type. Note that although we are studying the effect of the explanation type on loan acceptance, the
Figure 2: Results: The ability of S-GEN to create semifactuals is compared to DiCE*, DSER*, and PIECE*. Overall, S-GEN does the best, achieving significantly better results to all baselines in all tests. Note we normalised all results before averaging because each dataset has different scaling. Standard error bars are shown.
loan rejection scenarios were also included to balance people's view of the problem setting, and as attention checks to verify that users were engaging with the materials and varying their scores accordingly. For analysis, each user's scores for counterfactuals and semifactuals were averaged in both loan acceptance and rejection materials into four decimal scores per user, thus allowing us to analyse the discrete Likert scores with t-tests [28]. As is a popular approach [22], we don't explicitly define what "useful" means to users, but rather let them use their own natural interpretation; as the returned results were reasonably consistent across individuals, they appear to have converged on a common interpretation of this word. The null hypothesis is that people will not find the two explanation types significantly different in loan acceptance. The alternative is that people will find semifactuals significantly more useful in loan acceptance.
A power analysis [13] of two dependent means with an effect size \(dz=0.8\), alpha \(\alpha=0.05\), and power (\(1-\beta\) err prob) \(=0.9\) indicated that a sample of 15 was appropriate for t-tests. Users were gathered from Prolific.com: 8 males, 7 females, aged 18+, native English speakers, and from the U.S. People were paid $12/hr, which totalled $35. The semifactuals were generated with S-GEN, and the counterfactuals with DiCE [39]; notably, the latter are equivalent to the _positive counterfactuals_ of McGrath et al. [36] for explaining loan acceptance situations. The study obtained IRB approval from MIT.
All users engaged and changed their ratings significantly depending on whether a loan was accepted or rejected, so none were excluded. Figure 3(c) shows users find semifactuals significantly more useful in loan acceptance (S-GEN=3.60\(\pm\)0.27 v. DiCE=2.33\(\pm\)0.34; \(p<.005\)) compared to rejections, where counterfactuals are preferred (S-GEN=2.6\(\pm\)0.32 v. DiCE=4.53\(\pm\)0.17; \(p<.0001\)). Hence we reject the null hypothesis, lending credible evidence that semifactuals are more useful for explaining positive outcomes.
## 7 Discussion
Although much XAI work has explored how to explain positive outcomes, to the best of our knowledge, no consideration has been given to explaining how to _optimise_ them. Here, we have taken the novel step of exploring this, and showed how semifactuals are especially suited for the purpose. This required building on prior work in semifactuals by (1) introducing the concept of _Gain_, (2) re-framing them in a causal setting, and (3) conducting their first user test in XAI. Perhaps the most notable limitation of our work is that, although we have shown people perceive semifactuals as more useful in positive outcomes, we have not demonstrated this quantitatively, notably because of the difficulties of acquiring an appropriate user base alongside the ethical considerations of such a study. Moreover, a causal formulation of semifactuals requires an SCM, which is not always realistic, but we have provided a non-causal algorithm for these situations. In future work, it would be interesting to formalise the utility of semifactuals for optimising positive outcomes in other domains such as robotics, which likely requires other considerations.
Figure 3: Causal Experiment & User Study Results: (a/b) show the gain achieved by all methods both before and after considering the causal dependencies. Firstly, note that S-GEN achieves significantly more gain than the alternatively proposed approaches. Most importantly however, (a) shows there is significantly more gain achieved on the Adult data by S-GEN after taking causal dependencies into account, showing the importance of a causal formalisation. (c) Shows the user study results, where people perceive semifactual explanation as being significantly more useful than counterfactuals in the positive outcome of having a loan accepted. Standard error bars are shown.
## Acknowledgements
The authors would like to thank Neil J. Hurley, alongside Mark T. Keane and Ruth M.J. Byrne who both inspired early consideration of the ideas in this paper. The authors would also like to thank MIT for their support in this project. This research wasn't directly supported by any grants or funding. We hope readers find the ideas interesting and useful for application.
|
2303.11195
|
Photon induced near-field electron microscopy from nanostructured
metallic films and membranes
|
We investigate - both experimentally and theoretically - the inelastic
interaction between fast electrons and the electromagnetic field scattered by
metallic apertures and nanostructures on dielectric membranes using photon
induced near-field electron microscopy. The experiments - performed in a high
brightness ultrafast transmission electron microscope - on gold apertures on
silicon nitride membranes reveal strong modulations of the electron-light
coupling strength. We demonstrate that this effect results from the combined
action of the electric field scattered by the aperture edges and the reflection
and transmission of the incident wave by the dielectric membrane. Moreover,
when a nanostructure is added inside the metallic aperture, the new scattered
field interferes with the previous contributions, thus imprinting the optical
response of the nanostructure in additional modulations of the electron-light
coupling strength. Using systematic electrodynamics simulations based on the
Green dyadic method, we quantitatively analyze these different contributions to
the electron-light coupling and propose further applications.
|
Sophie Meuret, Hugo Lourenço-Martins, Sébastien Weber, Florent Houdellier, Arnaud Arbouet
|
2023-03-20T15:23:38Z
|
http://arxiv.org/abs/2303.11195v1
|
# Photon induced near-field electron microscopy from nanostructured metallic films and membranes
###### Abstract
We investigate - both experimentally and theoretically - the inelastic interaction between fast electrons and the electromagnetic field scattered by metallic apertures and nanostructures on dielectric membranes using photon induced near-field electron microscopy. The experiments - performed in a high brightness ultrafast transmission electron microscope - on gold apertures on silicon nitride membranes reveal strong modulations of the electron-light coupling strength. We demonstrate that this effect results from the combined action of the electric field scattered by the aperture edges and the reflection and transmission of the incident wave by the dielectric membrane. Moreover, when a nanostructure is added inside the metallic aperture, the new scattered field interferes with the previous contributions, thus imprinting the optical response of the nanostructure in additional modulations of the electron-light coupling strength. Using systematic electrodynamics simulations based on the Green dyadic method, we quantitatively analyze these different contributions to the electron-light coupling and propose further applications.
## I Introduction
Ultrafast Transmission Electron Microscopes (UTEM) combining the femtosecond temporal resolution of ultrafast optical spectroscopies and the nanometric spatial resolution of electron microscopy have opened up many new possibilities to investigate light-matter interaction at unique spatio-temporal scales [1; 2], such as the efficient probing of nano-optical excitations [3; 4; 5] or the coherent control of free electron wavefunctions [6; 7; 8; 9]. Recently, the combined use of tailored illumination and inelastic electron-light interaction has been proposed to correct the spherical aberration of electron microscopes [10]. In a first proof-of-principle experiment in this direction, the electron-light coupling mediated by a dielectric membrane illuminated by a tilted plane wave was exploited to imprint different transverse intensity profiles on the electron wave function [11].
The interaction of a tilted plane wave with a dielectric membrane or metallic mirror can also be used in so-called holographic PINEM experiments. While conventional PINEM experiments give access to the optical near-field intensity along the electron trajectory, holographic PINEM experiments detect the interference of the studied optical excitation with a reference wave generated e.g. by a reflection of the incident wave by a planar interface. It has been shown that such interference between the plasmon field excited at a metal/dielectric interface and the reflection from the sample can imprint the phase of the plasmon field in the electron/near-field coupling constant extracted from the PINEM signal [12]. Holographic PINEM experiments have also been performed in a different geometry involving two sequential inelastic interactions induced by two distinct samples placed at different locations along the electron beam trajectory [6]. Even though the presence of a membrane in electron spectroscopy experiments is unavoidable, it has been shown that it is possible to minimize its influence on the electron-light coupling by choosing the incidence angle so that the contributions from the incident and reflected electric fields almost completely cancel each other. This is, however, not always possible, in particular when the space available in the objective lens of the microscope is very limited or when short focal distance focusing optics are used on the sample. In these latter cases, a detailed knowledge of the contribution from the membrane to the inelastic signal is required prior to any demanding PINEM experiment.
In this study, we investigate both experimentally and theoretically the inelastic interaction between fast electrons and the electromagnetic field scattered by metallic apertures and nanostructures fabricated on a dielectric membrane (see Figure 1). Using Photon Induced Near-field Electron Microscopy, we map the inelastic interaction probability and analyze the combined influence, on the electron-light coupling, of the scattering by the aperture edges or nanostructures and of the electric field reflected or transmitted by the dielectric membrane illuminated by a tilted plane wave. We have performed two sets of experiments on apertures and nano-antennas fabricated in a gold film deposited on a silicon nitride membrane: first we have investigated the spatial distribution of the inelastic signal when the fast electrons travel through simple apertures of different shapes engraved in a 50 nm gold film with a focused ion beam (see SI), before considering the case in which a gold nanostructure stands in the middle of the aperture. The results of the PINEM experiments are analyzed using electrodynamical simulations based on the Green dyadic Method.
## II Inelastic electron-light interactions in a perturbed metallic films
PINEM works on a so-called pump-probe scheme: A first laser pulse (pump) excites the optical near-field around a nanostructure which is then probed by a subsequent electron (probe) pulse. During its transit in the optical near-field, the travelling electron can emit or absorb photons, thus leading to a modification of its energy. This inelastic interaction yields a characteristic electron energy spectrum composed of a series of peaks reflecting the discrete nature of the photon exchange. The magnitude of these peaks only depends on the so-called electron-light coupling strength \(g\) which is proportional to the Fourier transform of the electric field component along the electron trajectory [3; 5; 13]:
\[g=\frac{e}{2\hbar\omega}\int dz\;E_{z}(z)e^{-\imath\omega z/v} \tag{1}\]
After interaction, the exit wavefunction of the electron, initially having an energy \(E_{0}\), is a superposition of wavelets of different kinetic energies \(E_{n}=E_{0}+n\hbar\omega\). The amplitude of the different components is a function of the electron-light coupling constant \(g\).
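Equation (1) can be evaluated numerically for any computed field profile and, in the standard PINEM description of a monochromatic field characterised by the single parameter \(g\) (neglecting pulse envelopes and electron dispersion), the population of the \(n\)-th photon sideband is \(J_{n}(2|g|)^{2}\). The sketch below uses an illustrative Gaussian near-field profile; the field amplitude, decay length and wavelength are assumptions, not values from the experiment.

```python
import numpy as np
from scipy.special import jv
from scipy.constants import e, hbar, c

lam = 800e-9                      # illustrative laser wavelength
omega = 2 * np.pi * c / lam
v = 0.63 * c                      # electron speed (roughly the value at 150 keV)

# Illustrative field profile along the electron trajectory (amplitude and width assumed)
z = np.linspace(-2e-6, 2e-6, 4001)
Ez = 1e7 * np.exp(-(z / 200e-9) ** 2)                                  # V/m

phase = np.exp(-1j * omega * z / v)
g = e / (2 * hbar * omega) * np.sum(Ez * phase) * (z[1] - z[0])        # Eq. (1)

# Sideband populations in the standard PINEM model: P_n = J_n(2|g|)^2
n = np.arange(-5, 6)
P = jv(n, 2 * np.abs(g)) ** 2
print(abs(g), P.sum())            # the populations sum to ~1 when enough orders are kept
```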
The PINEM experiments have been performed on the high-brightness ultrafast TEM developed at CEMES-CNRS. The latter is a customized 200 kV cold-field emission Hitachi High-Technology HF2000 [14]. The electron gun has been modified so that femtosecond laser pulses can be focused onto the tungsten nanotip and trigger the emission of femtosecond electron pulses [15]. The high brightness of the femtosecond electron source provides sub-400 fs electron pulses that can be focused in spots as small as 1 nm on the sample. The electron microscope has been modified to allow optical excitation of the sample inside the objective lens. A high numerical aperture parabolic mirror with an XYZ translation stage has been added between the objective lens pole pieces, yielding a tilt angle of 35\({}^{\circ}\) between the electron probe and the optical pump focused on the sample (see Figure 1). Figure 1-b shows an example of the nanostructures investigated by PINEM in this study. These structures are apertures and nano-antennas fabricated from a 40 nm thick gold film evaporated on a 50 nm thick Si\({}_{3}\)N\({}_{4}\) membrane (see Methods for more details). At each point of the map, an electron energy spectrum such as the ones shown in Figure 1-b is acquired with a Gatan PEELS 666, with a typical integration time between 300 and 500 ms/pixel. The collection of energy spectra is then post-processed to extract the electron-light coupling constant at each point of the map (see Methods for more details about PINEM theory and data processing). In the experiments reported in this study, the electron beam is accelerated to 150 keV. The use of a 30 \(\mu\)m STEM aperture allows the electron probe to be focused into a spot of a few nm [14; 16]. The geometry of the experiments is sketched in Figure 1. The electron is travelling along the (Oz) axis, perpendicular to the (OXY) plane of the membrane. The position of the electron beam on the membrane is identified by the
Figure 1: (Color Online) a) Photon-Induced Near-field Electron Microscopy. The excitation laser pulse is focused on the sample by a parabolic mirror (incidence angle \(\theta_{i}=35^{\circ}\)). The electron and laser pulses are synchronized by a delay line (not shown) and the energy spectrum of the electron pulse is analyzed by an electron spectrometer after interaction with the optical pulse. Inset: Bright field image of the sample recorded with a continuous electron beam. b) Electron energy spectra (solid line) and fit (dashed lines) recorded at different positions along the arrow. Right Top : Bright field image recorded at the same time as the electron energy spectrum. Right bottom: Map of the electron-light coupling constant (g) extracted from a fit of the electron energy spectra acquired at each pixel.
coordinates x and y. The (Ox) axis lies in the plane of incidence whereas the (Oy) axis is perpendicular to the latter.
In a first set of experiments, we study the electron energy exchanges when the fast electrons travel through apertures of different shapes. The spatial variations of the electron-light coupling constant \(g\) extracted from the PINEM maps acquired on square and circular apertures are shown in Figure 2. Clear modulations of \(g\) are visible in the apertures.
The origin of these modulations can be traced back to the definition of the coupling constant \(g\) (Eq. (1)) that governs the electron-light interaction probability. In vacuum, the momentum mismatch between free-space light and the moving electron leads to a vanishing \(g\) and therefore no interaction. We provide in supplementary information additional data acquired on similar apertures without the silicon nitride membrane, confirming the absence of detectable inelastic signal at the center of the aperture. The presence of a membrane on the electron path leads to a non-vanishing integral and therefore to inelastic interaction probabilities that depend on the wavelength, intensity and angle of incidence of the incident wave as well as the membrane dielectric constant [17]. Taking into account the multiple reflections at the two vacuum/membrane interfaces, \(g_{mem}\) can be written as:
\[g_{mem}=\frac{\imath\,e}{2\hbar\omega}e^{\imath k_{i,x}.x}f(\omega,\theta_{i},n_{m},v) \tag{2}\]
in which \(f\) is a complex function of the angular frequency of light \(\omega\), the electron speed \(v\), the membrane refractive index \(n_{m}\) and thickness \(d\) and the incidence angle \(\theta_{i}\). The complete expression of \(g_{mem}\) taking into account the multiple reflections at the two vacuum/membrane interfaces is given in supplementary information. It is clear from equation (2) that the modulus of \(g_{mem}\) on a simple membrane is not expected to display any spatial modulation.
The spatial modulations of the electron-light coupling strength visible in Figure 2 may arise from the combined influence of (i) the contrast in reflectivities between the metal and the dielectric surface and (ii) the scattering from the film edge. First, the existence of a separation between two regions of different reflectivities in the sample plane makes the moving charge interact with the electric field reflected either by the gold surface or by the silicon nitride membrane, depending on the distance between the electron and the membrane. The transition between the two cases does not occur at the same value of \(z\): it depends on the distance of the electron beam to the aperture edge. The expression of the electron-light coupling strength resulting solely from the difference in reflectivities, predicted by a simple model in the geometrical approximation, can be found in the SI. This model predicts a spatial modulation of the electron-light coupling strength in poor agreement with the experiment. The origin of the discrepancy lies in the fact that a crude geometrical approach neglects the important contribution of the wave scattered by the film edge. For p-polarized illumination, the scattering by the film edge yields an electric field having a z-component comparable to that of the incident wave along the electron trajectory, and therefore a significant contribution to the electron-light coupling. The electric field scattered by the metallic film edge predicted by the Sommerfeld model is shown in Figure 3-b [18].
To take into account the different contributions discussed above, we have performed electrodynamical simulations using the Green Dyadic Method (GDM) [19; 20]. We have used the pyGDM open-source python toolkit [21]. pyGDM relies on the concept of a generalized propagator and allows one to perform electrodynamical simulations giving access to a large variety of near-field or far-field optical properties of individual nanostructures under either optical or electronic excitation [22]. More details about the GDM simulations are provided in the
Figure 2: (Color Online) Map of the electron-light coupling strength on an apertured gold film deposited on a silicon nitride membrane. Bright field image of respectively a square (top) and a circle (bottom) aperture fabricated in a 40 nm thick gold film deposited on a 50 nm thick Si\({}_{3}\)N\({}_{4}\) membrane. Experiment : Map of the electron-light coupling strength \(g\) extracted from numerical fits of the electron energy spectra for the square (top) and circle (bottom) aperture. Simulation: Electron-light coupling constant computed from electrodynamical simulations based on the Green Dyadic Method. Top : Overlay on top of the experimental data, a comparison of the simulation (cyan line) and experiment (dark blue line), the gray dashed line represent the bright field intensity profil). Bottom : 2D simulation map in the case of the circle aperture
supplementary information. The influence of the scattering by the metallic film edge and of the difference in reflectivities between the gold film and the silicon nitride membrane has first been simulated using 2D GDM simulations, considering a silicon nitride membrane half covered by a gold film. We have plotted in Figure 2-c the electron-light coupling strength computed using 2D-GDM across a 1100 nm gap made in a gold film deposited on a silicon nitride membrane. The results of the 2D-GDM calculations are in good agreement with the profile extracted from the experimental data acquired on the square aperture. We show in Figure 3-c the total electric field on the sample computed using 2D-GDM. A detailed study of the different contributions to the spatial variations of the electron-light coupling constant in the aperture, provided in the supplementary information, shows that scattering by the aperture edges and the reflection/transmission by the dielectric membrane contribute in comparable proportions to the modulation of the inelastic interaction strength. The case of more complex in-plane shapes requires full 3D electrodynamical simulations. Figure 2-f shows the results of 3D-GDM calculations performed on the circular aperture. Again, a good agreement is obtained between the experiment and the electrodynamical calculations.
In a second set of experiments, we have studied the electron-light coupling strength on apertured metallic films in which a nanostructure has been added on the silicon nitride membrane inside the square apertures. As shown in Figure 4-a, the sample consists of square apertures with a bowtie antenna made of two equilateral gold prisms with an edge length \(e=200\) nm located at the center of the aperture. The exact same geometry has been fabricated several times with varying orientations with respect to the optical excitation. Figure 4 shows the bright field images and maps of the electron-light coupling strength extracted from the electron energy spectra acquired at each position. Away from the bowtie antenna, the spatial distribution of the inelastic interaction strength is very similar to the case of the empty aperture. Closer to the nano-antenna, the interaction strength shows complex spatial variations, with both regions in which the presence of the nano-antenna reinforces the coupling strength and regions in which the latter is diminished with respect to the case of the empty aperture. To analyze these observations, we have performed 3D-GDM simulations considering either a gold bowtie in vacuum or a gold bowtie on a silicon nitride membrane. The results are displayed in Figure 4-e and f. When the gold bowtie is in vacuum, the electron-light coupling vanishes away from the antenna: the momentum mismatch between the fast particle and free-space light prevents their coupling outside of the near-field zone. In the near-field region, the spectrum of the optical near-field includes large wavevector components that couple efficiently with the moving electron. When the orientation of the particle with respect to the plane of incidence is changed, the regions where the electron couples efficiently to the optical near-field are also modified, following the topography of the optical near-field of the nano-antenna. In the case of a nano-antenna on a membrane, the situation is more complex. The electron-light coupling does not vanish away from the antenna, as the dielectric contrast at the surface of the substrate mediates the coupling between the incident wave and the moving charge. The measured electron-light coupling then results from the combined influence of the membrane and the gold nano-antenna. Neglecting the mutual influence of the membrane and nano-antenna, the electron-light coupling can be written as:
\[g_{tot}=\frac{e}{2\hbar\omega}\int dz\left[E_{z}^{mem}(z)+E_{z}^{ant}(z)\right] \,e^{-\imath\omega z/v} \tag{3}\]
Equation (3) shows that the regions of increased or decreased electron-light coupling, clearly visible in the experimental results of Figure 4
Figure 3: (Color Online) a) Sketch of the experiment showing the beams reflected and transmitted by the membrane. b) Component along the electron trajectory of the electric field scattered by a semi-infinite half plane as predicted by the Sommerfeld model. c) Total electric field on the sample computed from 2D electrodynamical simulations based on the Green Dyadic Method.
b-d, originate from the interference between the optical response of the membrane and that of the antenna. In addition to the electric field of the nanostructure itself, the use of a p-polarized tilted illumination in our PINEM experiments yields electric fields reflected/transmitted by the membrane or scattered by the aperture edges that have a non-vanishing component along the electron trajectory and efficiently couple with the moving charge. These electric fields can interfere with the optical field scattered by the illuminated nano-objects. The experimental and theoretical results of Figure 4 reveal these interferences in the near-field of gold nano-antennas, but a similar interference effect should be visible in the far-field of a nano-scatterer.
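A minimal toy model of the interference expressed by Equation (3): the membrane contribution carries the in-plane phase factor \(e^{\imath k_{i,x}x}\) of Equation (2) and a constant modulus, while the antenna term is modelled as a damped scattered wave. The amplitudes, decay length and wavelength below are assumptions chosen only to show how the two complex contributions produce spatial fringes in \(|g_{tot}|\).

```python
import numpy as np

lam = 800e-9
theta_i = np.deg2rad(35.0)
k_ix = 2 * np.pi / lam * np.sin(theta_i)     # in-plane wavevector of the tilted illumination

x = np.linspace(-2e-6, 2e-6, 2001)           # electron-beam position across the structure

g_mem = 0.5 * np.exp(1j * k_ix * x)          # membrane term: constant modulus, linear phase
r = np.abs(x) + 50e-9                        # distance to a scatterer placed at x = 0
g_ant = 0.3 * np.exp(1j * 2 * np.pi / lam * r) * np.exp(-r / 500e-9)   # toy scattered wave

g_tot = g_mem + g_ant                        # Eq. (3): the contributions add before |.|
print(np.abs(g_tot).min(), np.abs(g_tot).max())   # |g_tot| oscillates with position
```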
To address this point, we simulated PINEM experiments on individual nano-scatterers deposited on a membrane and illuminated by a tilted plane wave (\(\theta_{i}=35^{\circ}\)). The nano-scatterers were chosen tall enough so that the tilted illumination yields an out-of-plane dipole capable of radiating electric fields with z-components of magnitude large enough to interfere efficiently with the incident wave. Figure 5-a displays the results in the case of a single gold nanodisc. The electron-light coupling strength displays spatial modulations, reminiscent of the Doppler effect, arising from the interference between the tilted incident plane wave and the secondary wave scattered by the excited metallic nano-object. The interferences obtained here between the scattering from the nanodiscs and the reflection from the dielectric substrate are connected to the results of [12], where interferences between the surface plasmon polariton field propagating away from an optically excited nano-hole and the reflection from the sample were reported. Contrary to the case of Figure 4, these interferences appear in the far-field zone, the presence of the dielectric contrast at the membrane surface allowing the coupling between the electron and the electromagnetic field. As illustrated in Figure 5-b, when more nanostructures are considered, the spatial modulation of the electron-light coupling results from the interference between (i) the incident wave, (ii) the waves reflected/transmitted by the substrate and (iii) the different waves scattered by the nanostructures. In the case of PINEM performed using a normally incident illumination, the spatial distribution of the electron-light coupling closely resembles the topography of the amplitude of the z-component of the total electric field. The case of a tilted illumination is associated with a significant contribution from the substrate to the electron-light coupling that yields more complex modulations of the inelastic interaction probability. Our results show that experiments performed in this configuration demand careful electrodynamical simulations to take into account the different contributions to the electromagnetic field probed by the moving electron.
## IV Conclusion
In conclusion, we have studied both experimentally and theoretically the electron-light coupling in apertures and nanostructures fabricated on a dielectric membrane. Our results show that the scattering from the aperture edges, as well as the electric field reflected or transmitted by the dielectric membrane illuminated by a tilted plane wave, contribute significantly to the modulation of the electron-light coupling strength measured in PINEM. This contribution from the membrane interferes with the electric field scattered by nanostructures fabricated on the membrane and alters the electron-light coupling both in the near-field and in the far-field region. Whereas the analysis of the experimental results in this configuration requires a careful comparison with electrodynamical simulations, these interference effects could be exploited to map the phase of the electric fields scattered by complex nanostructures.
Figure 4: (Color Online) a) Bright field image of the studied sample. b-d) PINEM maps of the electron-light coupling constant \(|g|\) acquired on 3 different apertures having different orientations with respect to the illumination. Maps of the electron-light coupling constant \(|g|\) computed using 3D-GDM simulations on a gold bow-tie (200 nm edge length, 160 nm gap) placed in vacuum (e) or on a silicon nitride membrane (f).
## Acknowledgements
This project has been funded in part by the European Union's Horizon 2020 research and innovation program under Grant Agreement Nos. 823717 (ESTEEM3). This project has been funded in part by the ANR under Grant Agreement No. ANR-19-CE30-0008 ECHOMELO and Grant Agreement No. ANR-14-CE26-0013 FemtoTEM. This work was supported by Programme Investissements d'Avenir under the Program ANR-11-IDEX-0002-02, reference ANR-10-LABX-0037-NEXT (MUSE grant). All authors declare no competing interest.
|
2307.03066
|
Kemperman's inequality and Freiman's lemma via few translates
|
Let $G$ be a connected compact group equipped with the normalised Haar
measure $\mu$. Our first result shows that given $\alpha, \beta>0$, there is a
constant $c = c(\alpha,\beta)>0$ such that for any compact sets $A,B\subseteq
G$ with $ \alpha\mu(B)\geq\mu(A)\geq \mu(B) $ and $ \mu(A)+\mu(B)\leq 1-\beta$,
there exist $b_1,\dots b_c\in B$ such that \[ \mu(A\cdot \{b_1,\dots,b_c\})\geq
\mu(A)+\mu(B).\] A special case of this, that is, when $G=\mathbb{T}^d$,
confirms a recent conjecture of Bollob\'as, Leader and Tiba.
We also prove a quantitatively stronger version of such a result in the
discrete setting of $\mathbb{R}^d$. Thus, given $d \in \mathbb{N}$, we show
that there exists $c = c(d) >0$ such that for any finite, non-empty set $A
\subseteq \mathbb{R}^d$ which is not contained in a translate of a hyperplane,
one can find $a_1, \dots, a_c \in A$ satisfying \[ |A+ \{a_1, \dots, a_c\}|
\geq (d+1)|A| - O_d(1). \] The main term here is optimal and recovers the
bounds given by Freiman's lemma up to the $O_d(1)$ error term.
|
Yifan Jing, Akshat Mudgal
|
2023-07-06T15:34:44Z
|
http://arxiv.org/abs/2307.03066v2
|
# Kemperman's inequality and Freiman's lemma via few translates
###### Abstract.
Let \(G\) be a connected compact group equipped with the normalised Haar measure \(\mu\). Our first result shows that given \(\alpha,\beta>0\), there is a constant \(c=c(\alpha,\beta)>0\) such that for any compact sets \(A,B\subseteq G\) with
\[\alpha\mu(B)\geq\mu(A)\geq\mu(B)\ \ \text{and}\ \ \mu(A)+\mu(B)\leq 1-\beta,\]
there exist \(b_{1},\dots b_{c}\in B\) such that
\[\mu(A\cdot\{b_{1},\dots,b_{c}\})\geq\mu(A)+\mu(B).\]
A special case of this, that is, when \(G=\mathbb{T}^{d}\), confirms a recent conjecture of Bollobas, Leader and Tiba.
We also prove a quantitatively stronger version of such a result in the discrete setting of \(\mathbb{R}^{d}\). Thus, given \(d\in\mathbb{N}\), we show that there exists \(c=c(d)>0\) such that for any finite, non-empty set \(A\subseteq\mathbb{R}^{d}\) which is not contained in a translate of a hyperplane, one can find \(a_{1},\dots,a_{c}\in A\) satisfying
\[|A+\{a_{1},\dots,a_{c}\}|\geq(d+1)|A|-O_{d}(1).\]
The main term here is optimal and recovers the bounds given by Freiman's lemma up to the \(O_{d}(1)\) error term.
Key words and phrases: Kemperman's inequality, Freiman's lemma, inverse theorems. YJ and AM are supported by Ben Green's Simons Investigator Grant, ID 376201.
When \(G\) is abelian, this is also known as Kneser's inequality [16]. Moving now from the continuous case to the discrete setting, we note that analysis of finite sumsets in \(\mathbb{R}^{d}\) arose from work of Freiman [9] and has played an important role in additive combinatorics, especially due to its connections to Freiman's theorem and the sum-product problem, see, for instance, [5, 11]. In particular, denoting \(\dim(A)\) to be the dimension of the affine span of \(A\) for any finite, non-empty \(A\subseteq\mathbb{R}^{d}\), the well-known Freiman's lemma (see [9, Section 1.14] or [24, Lemma 5.13]) implies that whenever \(\dim(A)=d\), then
\[|A+A|\geq(d+1)|A|-d(d+1)/2. \tag{1.3}\]
Both the aforementioned inequalities can be seen to be sharp, for example, one can observe that (1.2) is optimal by considering the case when \(G=\mathbb{T}\) is the torus and \(A,B\) are sufficiently small intervals in \(\mathbb{T}\). Similarly, (1.3) is sharp as evinced by the case when \(A\subseteq\mathbb{R}^{d}\) satisfies
\[A=\{0,e_{1},\ldots,e_{d-1}\}\times\{1,2,\ldots,N\}, \tag{1.4}\]
where \(\{e_{1},\ldots,e_{d}\}\) form the canonical basis of \(\mathbb{R}^{d}\).
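For concreteness, here is a short verification (routine, and not spelled out in the original) that the set in (1.4) attains equality in (1.3). Writing \(S=\{0,e_{1},\ldots,e_{d-1}\}\), so that \(A=S\times\{1,\ldots,N\}\) and \(|A|=dN\), we have
\[A+A=(S+S)\times\{2,\ldots,2N\},\qquad|S+S|=1+(d-1)+\frac{d(d-1)}{2}=\frac{d(d+1)}{2},\]
and therefore
\[|A+A|=\frac{d(d+1)}{2}\,(2N-1)=(d+1)|A|-\frac{d(d+1)}{2}.\]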
In a recent paper, Bollobas, Leader, and Tiba [1] considered a different perspective towards (1.1), wherein, they showed that whenever \(A,B\subseteq\mathbb{Z}\) are finite sets with \(|A|\geq|B|\geq 1\), then there exist \(b_{1},b_{2},b_{3}\in B\) such that
\[|A+\{b_{1},b_{2},b_{3}\}|\geq|A|+|B|-1. \tag{1.5}\]
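As a toy illustration of (1.5) (our example, not one from [1]): take \(A=B=\{1,\ldots,N\}\subseteq\mathbb{Z}\). Then already two translates suffice, since choosing \(b_{1}=1\) and \(b_{2}=N\) gives
\[|A+\{1,N\}|=|\{2,\ldots,N+1\}\cup\{N+1,\ldots,2N\}|=2N-1=|A|+|B|-1;\]
the content of (1.5) is that three translates always suffice for arbitrary finite sets of integers.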
They proved some similar results when \(A,B\) are subsets of \(\mathbb{F}_{p}\) and \(\mathbb{T}\), with various restrictions on the sizes of \(A,B\). They further conjectured that an analogous phenomenon should hold in the higher dimensional torus \(\mathbb{T}^{d}\), that is, writing \(G\) to be \(\mathbb{T}^{d}\), there exists some \(c>0\) such that for any compact, non-empty sets \(A,B\subseteq G\) with \(\mu(A)=\mu(B)=1/3\), one can find \(b_{1},\ldots,b_{c}\in B\) such that
\[\mu(A\cdot\{b_{1},\ldots,b_{c}\})\geq\mu(A)+\mu(B).\]
Here, since \(G=\mathbb{T}^{d}\), the group operation \(\cdot\) denotes the additive group operation in \(\mathbb{T}^{d}\).
Our first result confirms a much more general version of their conjecture.
**Theorem 1.1**.: _Given \(\alpha,\beta>0\), there exists some constant \(c=c(\alpha,\beta)>0\) such that the following holds. Let \(G\) be a connected compact group and let \(A,B\subseteq G\) be compact sets with_
\[\alpha\mu(B)\geq\mu(A)\geq\mu(B)\ \ \text{and}\ \ \mu(A)+\mu(B)\leq 1-\beta.\]
_Then there exist \(b_{1},\ldots,b_{c}\in B\) such that_
\[\mu(A\cdot\{b_{1},\ldots,b_{c}\})\geq\mu(A)+\mu(B).\]
Hence, we prove their conjecture for all connected, compact groups, including the cases when \(G\) is possibly nonabelian. Moreover, our constant \(c\) is independent of \(G\). One way to interpret our result is that given suitable sets \(A,B\subseteq G\), one can recover the lower bound (1.2) for \(\mu(A\cdot B)\) by just considering few translates \(A\cdot b_{1},\ldots,A\cdot b_{c}\) of \(A\). For instance, a conclusion akin to that of Theorem 1.1 holds for any \(K\)-approximate group \(A\) of \(G\), that is, symmetric sets \(A\subseteq G\) containing the identity for which \(A\cdot A\subseteq X\cdot A\) for some \(X\subseteq G\) with \(|X|\leq K\). Indeed in our setting, we may apply (1.2) to deduce that
\[2\mu(A)\leq\mu(A\cdot A)\leq\mu(A\cdot\{x_{1},\ldots,x_{c}\}),\]
where \(\{x_{1},\ldots,x_{c}\}=X^{-1}\subseteq G\). On the other hand, a celebrated result of Breuillard, Green, and Tao [3] shows that approximate groups are highly structured objects that essentially only come from nilpotent groups. Despite the rarity of such objects, Theorem 1.1 suggests that
one can in fact recover the above inequality for any compact set \(A\subseteq G\) with \(\mu(A)\leq 1-\beta\) and \(c=c(\beta)\).
In the discrete setting, it is worth noting that the main result of [1] already implies (1.5) for finite, non-empty sets \(A,B\subseteq\mathbb{R}^{d}\) by considering Freiman isomorphisms from \(A\cup B\) to sets of integers. On the other hand, noting Freiman's lemma, it is natural to ask whether one can also recover bounds akin to (1.3) by just considering \(O_{d}(1)\) translates of \(A\). This is precisely the content of our next result.
**Theorem 1.2**.: _Given \(d\in\mathbb{N}\), there exists some constant \(c=c(d)>0\) such that for every finite, non-empty set \(A\subseteq\mathbb{R}^{d}\) with \(\dim(A)=d\), there exist \(a_{1},\ldots,a_{c}\in A\) satisfying_
\[|A+\{a_{1},\ldots,a_{c}\}|\geq(d+1)|A|-5(d+1)^{3}.\]
The main term in the above lower bound matches the main term in (1.3) provided by Freiman's lemma, and as before, can be seen to be sharp by considering the sets represented in (1.4). A similar setup has been analysed in the recent work of Fox, Luo, Pham and Zhou [8], wherein, the authors showed that for every \(d,\varepsilon>0\), there exist constants \(C=C(d,\varepsilon)\geq 1\) and \(c=c(d,\varepsilon)\geq 1\) such that any finite set \(A\subseteq\mathbb{R}^{d}\), which is not contained in \(C\) translates of any hyperplane, contains elements \(a_{1},\ldots,a_{c}\) satisfying
\[|A+\{a_{1},\ldots,a_{c}\}|\geq(2^{d}-\varepsilon)|A|.\]
Thus, while they obtain a stronger lower bound for \(|A+\{a_{1},\ldots,a_{c}\}|/|A|\), they must restrict to relatively less structured sets \(A\), that is, sets \(A\) which can not be covered by \(C\) translates of any hyperplane. Moreover, the quantitative dependence of \(C\) on \(d,\varepsilon\) is slightly weak, since this arises from an application of the Freiman-Bilu theorem (see [11]). In comparison, Theorem 1.2 provides lower bounds for \(|A+\{a_{1},\ldots,a_{c}\}|/|A|\) that grow linearly with \(d\) but hold for any \(d\)-dimensional subset \(A\subseteq\mathbb{R}^{d}\), essentially allowing us to obtain optimal estimates when \(C=1\).
We now turn to a related philosophy of considering inverse results for small sumsets. This forms a large collection of results in additive combinatorics, with the central theorem being Freiman's inverse theorem, which suggests that sets \(A\subseteq\mathbb{Z}\) with \(|A+A|\leq K|A|\) must be contained efficiently in very additively structured sets known as generalised arithmetic progressions. Moreover, obtaining quantitatively optimal versions of this result as \(K\) becomes large is a key open problem in the area, see [20]. On the other hand, when \(K\) is very small, say \(K<3\), then we have very sharp characterisations. This is the content of Freiman's \(3k-4\) theorem (see [24, Theorem 5.11]) which implies that any finite set \(A\subseteq\mathbb{Z}\) with \(|A+A|\leq 3|A|-4\) is contained in an arithmetic progression \(P\) of length \(|A+A|-|A|+1\). Such types of results have been extended to a variety of settings including the case of finite fields and \(\mathbb{Z}^{d}\), see [4, 22] and the references therein.
More recently, it was shown in [2] that one can obtain an analogue of Freiman's \(3k-4\) theorem for sumsets akin to (1.5), that is, instead of assuming upper bounds for \(|A+A|\) with \(A\subseteq\mathbb{Z}\), one can assume that \(|A+A^{\prime}|\leq(2+\varepsilon)|A|\) for every \(A^{\prime}\subseteq A\) with \(|A^{\prime}|=4\) and with \(\varepsilon\) being sufficiently small, whereupon, one may deduce that \(A\) is contained in an arithmetic progression \(P\) of size \((1+\varepsilon+O(\varepsilon^{2}))|A|\). In our paper, we are also able to prove these type of inverse theorems in the settings of Theorems 1.1 and 1.2, and we present the first of these below.
**Theorem 1.3**.: _Given \(\varepsilon,\alpha,\beta>0\), there exist constants \(c,\eta\) depending only on \(\varepsilon,\alpha,\beta\), such that the following holds. Let \(G\) be a connected compact group, let \(A,B\subseteq G\) be compact sets such that \(\mu(A)+\mu(B)<1-\beta\), and \(\alpha^{-1}\mu(B)\leq\mu(A)\leq\alpha\mu(B)\). Moreover, suppose that for every \(b_{1},\ldots,b_{c}\in B\), we have_
\[\mu(A\cdot\{b_{1},\ldots,b_{c}\})\leq\mu(A)+\mu(B)+\eta\min\{\mu(A),\mu(B)\}.\]
_Then there is a surjective group homomorphism \(\chi:G\to\mathbb{T}\) and two compact intervals \(I,J\subseteq\mathbb{T}\), with \(\lambda\) being the normalised Lebesgue measure on \(\mathbb{T}\), such that_
\[\lambda(I)\leq(1+\varepsilon)\mu(A)\ \ \text{and}\ \ \lambda(J)\leq(1+ \varepsilon)\mu(B)\ \ \text{and}\ \ A\subseteq\chi^{-1}(I)\ \ \text{and}\ \ B\subseteq\chi^{-1}(J).\]
Here, if one replaces the assumption that \(\mu(A\cdot\{b_{1},\ldots,b_{c}\})\) is small from Theorem 1.3 by the hypothesis that the entire product set \(\mu(A\cdot B)\) is small, then the corresponding inverse theorem was obtained by the first author and Tran in [14], and when \(G\) is abelian, it was first proven by Tao [23], see also [6] for a quantitatively better bound. This asserts that sets with doubling close to \(2\) in connected compact groups are dominated by a one-dimensional torus, and, in particular, when \(G\) is compact semisimple, the doubling constant of any small subset should be away from \(2\). Theorem 1.3 is a strengthening of this phenomenon, and we have the following immediate corollary.
**Corollary 1.4**.: _There are absolute constants \(c,\eta>0\) such that the following holds. Let \(G\) be a compact semisimple Lie group, and let \(A\subseteq G\) satisfy \(\mu(A)\leq 1/3\). Then there exist \(a_{1},\ldots,a_{c}\in A\) such that_
\[\mu(A\cdot\{a_{1},\ldots,a_{c}\})>(2+\eta)\mu(A).\]
Returning to the discrete setting, Stanchescu [22] showed that whenever a large set \(A\subseteq\mathbb{Z}^{d}\) with \(\dim(A)=d\) has its sumset close in size to the lower bound in (1.3), then \(A\) is contained in a union of \(d\) parallel lines, that is, \(A\subseteq T+l=\cup_{t\in T}(t+l)\), where \(T\subseteq\mathbb{Z}^{d}\) is a set satisfying \(|T|\leq d\) and \(l\) is some one dimensional subspace, consequently making progress towards a problem raised by Freiman [10]. An asymmetric version of the above conclusion for sets \(A,B\subseteq\mathbb{R}^{d}\) may be derived by combining ideas from [17] and [18], see, for instance, the proof of Lemma 5.1. Both these results seem to capture the extremality of the example presented in (1.4). Using our methods, we are able to obtain an analogous conclusion under a weaker hypothesis, that is, instead of assuming upper bounds for the entire sumset \(A+A\), we operate under the assumption that for any \(A^{\prime}\subseteq A\) with \(|A^{\prime}|\ll_{d}1\), the sumset \(A+A^{\prime}\) is close in size to the estimates provided by Theorem 1.2. We record this inverse result below.
**Theorem 1.5**.: _Given \(d\in\mathbb{N}\), there exists some constant \(c=c(d)>0\) such that the following holds true. Let \(A\subseteq\mathbb{R}^{d}\) be a finite, non-empty set with \(\dim(A)=d\), such that for any \(a_{1},\ldots,a_{c}\in A\), we have_
\[|A+\{a_{1},\ldots,a_{c}\}|\leq(d+1+1/16)|A|.\]
_Then either \(|A|\ll_{d}1\) or \(A\subseteq T+l\), where \(l\) is a one dimensional subspace of \(\mathbb{R}^{d}\) and \(T\subseteq\mathbb{R}^{d}\) satisfies \(|T|\leq(d+1)^{2}\)._
We point out that we have not chosen to optimise the constant \(1/16\) in the above result, and this can be quantitatively improved, potentially at the cost of slightly increasing the upper bound for \(|T|\). It would also be interesting to show a variant of the above result where one obtains \(|T|\leq d\), akin to the results in [18, 22] as well as Lemma 5.1 in §5.
We now provide a brief outline of the paper. We utilise §2 to record various preliminary definitions and results that we will require throughout our paper. These include a variety of inverse and structural results from additive combinatorics along with some standard lemmata like the Plunnecke-Ruzsa inequality. We employ §3 to present the proofs of Theorems 1.1 and 1.3. The key lemma in that section is Lemma 3.1, which deals with the case when \(G\) has a torus quotient and the sets are close to \(1\)-dimensional Bohr sets. The idea behind the proofs of Theorems 1.1 and 1.3 is that either we are in a position to apply Lemma 3.1, or a random selection argument works. Our main aim in §4 will be to prove Lemma 4.1, which can be interpreted as a variant of Theorem 1.2 in the case when our set \(A\subseteq\mathbb{R}^{d}\) can be covered by few translates of a line. This will require a combination of combinatorial geometric and additive combinatorial methods. Moreover, Lemma 4.1 will then naturally combine with Theorem 1.5 to deliver the proof of Theorem 1.2. Finally, in §5, we record the proof of Theorem 1.5.
**Notation.** In this paper, we use Vinogradov notation, that is, we write \(X\gg_{z}Y\), or equivalently \(Y\ll_{z}X\), to mean \(|X|\geq C_{z}|Y|\) where \(C_{z}\) is some positive constant depending on the parameter \(z\). Moreover, we write \(X=O_{z}(Y)\) to mean \(X\ll_{z}Y\). Given a group \(G\) and a set \(A\subseteq G\), we use \(\mathbb{1}_{A}\) to denote the indicator function of the set \(A\), that is, \(\mathbb{1}_{A}(g)=1\) when \(g\in A\), and \(\mathbb{1}_{A}(g)=0\) when \(g\in G\setminus A\).
**Acknowledgements.** We would like to thank Marcelo Campos and Zach Hunter for useful comments. We are also grateful to Zach for help in improving the presentation of our paper.
## 2. Preliminaries
A locally compact group \(G\) is a group equipped with a locally compact and Hausdorff topology on its underlying set such that the group multiplication and inversion maps are continuous. We say that a measure \(\mu\) on the collection of Borel subsets of \(G\) is a _left Haar measure_ if it satisfies the following properties:
1. (nonzero) \(\mu(X)>0\) for all open \(X\subseteq G\);
2. (left-invariant) \(\mu(X)=\mu(a\cdot X)\) for all \(a\in G\) and all measurable sets \(X\subseteq G\);
3. (inner regular) when \(X\) is open, \(\mu(X)=\sup\mu(K)\) with \(K\) ranging over compact subsets of \(X\);
4. (outer regular) when \(X\) is Borel, \(\mu(X)=\inf\mu(U)\) and \(U\) ranging over open subsets of \(G\) containing \(X\);
5. (compactly finite) \(\mu\) takes finite measure on compact subsets of \(G\).
When \(G\) is a locally compact topological group, a famous theorem of Haar asserts that \(G\) has a unique (up to a constant factor) left-invariant Haar measure, denoted by \(\mu\). When \(G\) is compact, \(\mu\) is also right-invariant (that is, for a measurable \(X\) we also have \(\mu(X\cdot g)=\mu(X)\) for all \(g\in G\)). We say \(\mu\) is _normalised_ if \(\mu(G)=1\).
In our proof of Theorem 1.1, it will sometimes be more convenient to study the _popular product set_ defined as follows. Given \(0\leq t\leq\min\{\mu(A),\mu(B)\}\), we denote the popular product set
\[A\cdot_{t}B=\{x\in G\mid\mathbb{1}_{A}\ast\mathbb{1}_{B}(x)\geq t\},\]
where, given any \(f,g:G\to\mathbb{R}\), we define the convolution function \(f\ast g:G\to\mathbb{R}\) as
\[f\ast g(x)=\int_{G}f(y)g(y^{-1}x)\,\mathrm{d}\mu(y)\]
for every \(x\in G\). Note that when \(G\) is compact, we also have
\[f*g(x)=\int_{G}f(xy^{-1})g(y)\,\mathrm{d}\mu(y)\]
for every \(x\in G\), since \(\mu\) is bi-invariant. Noting these definitions, one can further see that
\[\lim_{t\to 0}A\cdot_{t}B=\operatorname{supp}(\mathbb{1}_{A}*\mathbb{1}_{B}) \subseteq A\cdot B.\]
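As a rough finite analogue of these definitions (an illustration only; the paper works with compact groups and Haar measure), one may take \(G=\mathbb{Z}/n\mathbb{Z}\) with the normalised counting measure and compute the convolution and the popular sumset directly; the parameters below are arbitrary choices.

```python
# Toy finite analogue of the popular product set: G = Z/nZ with mu(S) = |S|/n.
# conv[x] = (1/n) * sum_y 1_A(y) 1_B(x - y) is the normalised convolution, and the
# popular sumset A ._t B collects the points x with conv[x] >= t.
import numpy as np

n = 101
A = np.zeros(n); A[:30] = 1          # indicator of A = {0,...,29}
B = np.zeros(n); B[:20] = 1          # indicator of B = {0,...,19}

conv = np.real(np.fft.ifft(np.fft.fft(A) * np.fft.fft(B))) / n   # circular convolution / n
t = 0.05
popular = np.where(conv >= t)[0]     # the popular sumset A ._t B
print(len(popular) / n, (A.sum() + B.sum()) / n)   # mu(A ._t B) versus mu(A) + mu(B)
```

As \(t\to 0\) the popular sumset fills out the full sumset \(A+B\), in line with the limit displayed above.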
We now state the well-known quotient integral formula, see, for example, [7, Theorem 2.49].
**Lemma 2.1**.: _Let \(G\) be a locally compact group and \(H\leq G\) be a closed normal subgroup. Let \(\mu_{G}\) and \(\mu_{H}\) be left invariant Haar measures on \(G\) and \(H\) respectively. Then there is a unique left invariant Haar measure \(\mu_{G/H}\) on \(G/H\) such that for every compactly supported continuous function \(f:G\to\mathbb{C}\),_
\[\int_{G}f(g)\,\mathrm{d}\mu_{G}(g)=\int_{G/H}\int_{H}f(xh)\,\mathrm{d}\mu_{H} (h)\,\mathrm{d}\mu_{G/H}(xH).\]
A locally compact group \(G\) is called _connected_ if it is connected as a topological space. In particular, it does not have any proper open subgroups. Recall that open subgroups are closed, and closed subgroups of finite index are open; hence all proper closed subgroups of a connected group have measure \(0\).
We now recall Kemperman's inequality as presented in (1.2), which can be seen as an extension of Kneser's inequality [16], which, in turn, is a generalisation of the Cauchy-Davenport inequality. We remark that the latter can be further generalised to all locally compact groups, see [13]. Moreover, one can prove various inverse theorems for when the lower bound in such inequalities is close to being sharp. In our proof of Theorem 1.1, we will make use of one such inverse theorem which can be derived from the work of the first author and Tran in [14]; a generalised version of which is also stated in the forthcoming work [12]. The abelian case of this theorem (with a weaker quantitative bound, in particular, with \(\eta=o(1)\)) was first obtained by Tao [23], with a further result by Christ-Iliopoulou [6] which dispensed a sharp exponent bound.
**Theorem 2.2**.: _Given \(\varepsilon,\alpha>0\), there is \(\eta>0\) such that the following hold. Let \(G\) be a connected compact group equipped with the normalised Haar measure \(\mu\) and let \(A,B\subseteq G\) be compact sets such that \(\alpha\mu(B)\geq\mu(A)\geq\mu(B)\). If_
\[\mu(A\cdot_{\eta\mu(B)}B)\leq\mu(A)+\mu(B)+\eta\mu(B)<1,\]
_then there is a continuous surjective homomorphism \(\chi:G\to\mathbb{T}\), and two compact intervals \(I,J\subseteq\mathbb{T}\) such that with \(\lambda\) the normalised Lebesgue measure on \(\mathbb{T}\) we have_
\[\lambda(I)\leq(1+\varepsilon)\mu(A),\qquad\lambda(J)\leq(1+\varepsilon)\mu(B),\]
_and \(A\subseteq\chi^{-1}(I)\), \(B\subseteq\chi^{-1}(J)\)._
We now move to the discrete setting, where the ambient group is \(\mathbb{R}^{d}\) and \(A\subseteq\mathbb{R}^{d}\) is a finite set. Our first preliminary lemma in this setting will be the following inverse theorem for sumsets in higher dimensions from [17].
**Lemma 2.3**.: _Given \(d\in\mathbb{N}\) and \(K\geq 1\) and a finite set \(A\subseteq\mathbb{R}^{d}\) with \(|A+A|\leq K|A|\), there exist parallel lines \(l_{1},\ldots,l_{r}\) in \(\mathbb{R}^{d}\) and a constant \(0<\sigma\leq 1/2\) depending only on \(K\), such that_
\[|A\cap l_{1}|\geq\ldots\geq|A\cap l_{r}|\geq|A\cap l_{1}|^{1/2}\gg|A|^{\sigma}\]
_and_
\[|A\setminus(l_{1}\cup\cdots\cup l_{r})|\ll K|A|^{1-\sigma}.\]
Thus if \(A\subseteq\mathbb{R}^{d}\) has a small sumset, then one can efficiently cover \(A\) with translates of a one dimensional subspace. We will combine this with the following asymmetric variant of Freiman's lemma from the second author's work in [18].
**Lemma 2.4**.: _Let \(d\geq 2\) be an integer, let \(A,B\subseteq\mathbb{R}^{d}\) be finite sets such that \(|A|\geq|B|\) and \(\dim(A)=d\). Suppose that \(l_{1},\ldots,l_{r},m_{1},\ldots,m_{q}\) are parallel lines such that_
\[A\subseteq l_{1}\cup\cdots\cup l_{r}\ \ \text{and}\ \ B\subseteq m_{1}\cup\cdots\cup m _{q},\]
_with \(|A\cap l_{i}|,|B\cap m_{j}|\geq 1\) for every \(1\leq i\leq r\) and \(1\leq j\leq q.\) Then we have that_
\[|A+B|\geq|A|+\Big{(}d+1-\frac{1}{r-d+2}-\frac{1}{q-c+2}\Big{)}|B|-(d-1)(r+q),\]
_where \(c=d\) when \(\dim(B)=d\) and \(c=\dim(B)\) when \(\dim(B)<d\)._
Next, we will require the following modified version of [1, Theorem 8'] for our proof of Theorem 1.5.
**Lemma 2.5**.: _For all \(K\) and \(\varepsilon>0\), there exists \(c=c(K,\varepsilon)>0\) such that the following holds true. Given a finite subset \(A\) of some abelian group \(G\), there exists a set \(A^{*}\subseteq A\) with \(|A^{*}|\geq(1-\varepsilon)|A|\) such that if we select \(a_{1},\ldots,a_{c}\) uniformly at random from \(A^{*}\) then_
\[\mathbb{E}_{a_{1},\ldots,a_{c}\in A^{*}}|A^{*}+\{a_{1},\ldots,a_{c}\}|\geq\min \{(1-\varepsilon)|A^{*}+A^{*}|,K|A^{*}|\}.\]
This may be obtained by applying the following corollary of [21, Theorem 1.1] in the proof of [1, Theorem 8'], instead of the original version of [21, Theorem 1.1].
**Lemma 2.6**.: _Given \(\varepsilon>0\) and \(K\geq 1\), the following is true for all \(\delta>0\) sufficiently small in terms of \(\varepsilon,K\). Let \(A\) be a finite subset of some abelian group \(G\) and let \(\Gamma\subseteq A\times A\), with \(|\Gamma|\geq(1-\delta)|A|^{2}\). Writing \(S=\{a+b:(a,b)\in\Gamma\}\), suppose that \(|S|\leq K|A|\). Then there exists \(A^{\prime\prime}\subseteq A\) such that_
\[|A^{\prime\prime}|\geq(1-\varepsilon)|A|\ \ \text{and}\ \ |A^{\prime\prime}+A^{ \prime\prime}|\leq|S|+\varepsilon|A|.\]
Lemma 2.6 follows in a straightforward manner from [21, Theorem 1.1] by setting \(A^{\prime\prime}=A^{\prime}\cap B^{\prime}\) in the conclusion of [21, Theorem 1.1] and rescaling \(\varepsilon\) appropriately.
We now record a standard result in additive combinatorics known as the Plunnecke-Ruzsa inequality [24, Corollary 6.29], which, in the situation when \(|A+B|\leq K|B|\), allows us to efficiently bound many-fold sumsets of \(A\).
**Lemma 2.7**.: _Let \(A,B\) be finite, non-empty subsets of some abelian group \(G\) satisfying \(|A+B|\leq K|B|\). Then for every \(k\in\mathbb{N}\), we have that_
\[\big{|}\{a_{1}+\cdots+a_{k}:a_{1},\ldots a_{k}\in A\}\big{|}\leq K^{k}|B|.\]
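For instance, if \(|A+B|\leq K|B|\) with \(|A|=|B|\), then the case \(k=2\) gives \(|A+A|\leq K^{2}|A|\); this is exactly how the lemma is invoked in the proof of Lemma 5.1 below, with \(K=d+8/7\).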
We end this section by recalling that (1.5), when combined with the theory of Freiman isomorphisms, implies that for any finite \(A,B\subseteq\mathbb{R}^{d}\) with \(|A|\geq|B|\), one may find elements \(b_{1},b_{2},b_{3}\in B\) such that
\[|A+\{b_{1},b_{2},b_{3}\}|\geq|A|+|B|-1. \tag{2.1}\]
## 3. Proof of Theorems 1.1 and 1.3
We begin this section by proving the following lemma that can be seen to provide a stronger version of the conclusion from Theorem 1.1, but in the more specific setting where \(A,B\subseteq G\) can be mapped surjectively to large subsets of some intervals in \(\mathbb{T}\).
**Lemma 3.1**.: _Let \(G\) be a connected compact group with \(\mu\) being the normalised Haar measure on \(G\), and let \(A,B\) be two compact sets in \(G\) with \(\mu(A)\geq\mu(B)\). Suppose that \(\chi:G\to\mathbb{T}\) is a surjective compact group homomorphism and \(I,J\) are two compact intervals in \(\mathbb{T}\) with_
\[\lambda(I)+\lambda(J)<1,\text{ and }A\subseteq\chi^{-1}(I)\text{ and }B \subseteq\chi^{-1}(J),\]
_where \(\lambda\) is the normalised Lebesgue measure on \(\mathbb{T}\). Then there exist \(b_{1},b_{2},b_{3}\in B\) such that_
\[\mu(A\cdot\{b_{1},b_{2},b_{3}\})\geq\mu(A)+\mu(B).\]
Proof.: By replacing \(A\) and \(B\) with \(a^{-1}\cdot A\) and \(B\cdot b^{-1}\) for some \(a\in A\) and \(b\in B\) respectively, we may assume that \(0\) is the left end point for both \(I\) and \(J\). By the compactness assumption on \(A\) and \(B\) we may further assume that
\[\chi^{-1}(0)\cap A\neq\varnothing,\quad\chi^{-1}(0)\cap B\neq\varnothing, \quad\chi^{-1}(\lambda(I))\cap A\neq\varnothing,\quad\chi^{-1}(\lambda(J))\cap B \neq\varnothing.\]
Note that as \(G\) is compact and hence unimodular, such translations would not affect the measure of the product set \(A\cdot B\).
Let us now consider the following short exact sequence
\[1\to H:=\ker\chi\to G\xrightarrow{\chi}\mathbb{T}\to 1,\]
and note that \(H\) is a connected compact group, write \(\mu_{H}\) to denote the normalised Haar measure on \(H\). As \(\lambda(I)+\lambda(J)<1\), observe that the natural embedding \(\phi:\mathbb{T}\to\mathbb{R}\) with \(\phi(\mathbb{T})=[0,1)\) preserves the measure of the sumset
\[I+J=\{i+j:i\in I,j\in J\}.\]
For clarity of exposition, we note that here, and throughout the proof of Lemma 3.1, we will use \(+\) to denote the additive operation in the abelian group \(\mathbb{T}\). Next, defining \(\psi=\phi\circ\chi\), which maps \(G\) to \(\mathbb{R}\), we will abuse some notation and write \(I\), \(J\) (instead of \(\phi(I)\),\(\phi(J)\)) for the compact intervals in \(\mathbb{R}\). Note that we still have
\[\psi^{-1}(0)\cap A\neq\varnothing,\quad\psi^{-1}(0)\cap B\neq\varnothing, \quad\psi^{-1}(\lambda(I))\cap A\neq\varnothing,\quad\psi^{-1}(\lambda(J))\cap B \neq\varnothing,\]
where we now view \(\lambda\) as its pushforward in \(\mathbb{R}\). Let
\[b_{1}\in\psi^{-1}(0)\cap B\ \text{ and }\ b_{2}\in\psi^{-1}(\lambda(J))\cap B,\]
and let
\[X=A\cdot\{b_{1},b_{2}\}=(A\cdot b_{1})\cup(A\cdot b_{2}).\]
We may assume that \(\lambda(I)\geq\lambda(J)\), because otherwise we would have
\[\mu(X)\geq 2\mu(A)\geq\mu(A)+\mu(B),\]
in which case, we would be done. For every element \(x\) in \(\mathbb{R}\) and for every set \(S\subseteq G\), let us consider the fiber function with respect to \(A\) defined as
\[f_{A}(x)=\mu_{H}(([\psi^{-1}(x)]^{-1}\cdot A)\cap H)\quad\text{for every $x\in\psi(A)$},\]
where \([xH]\) denotes the representative element that lives on the coset \(xH\). Moreover, we write \(f_{A}(x)=0\) for every \(x\in\mathbb{R}\setminus\psi(A)\). We define the fiber functions for \(B\) and \(X\) in the same way. We further set
\[\pi(X):=\int_{0}^{\lambda(J)}\max_{y\in x+\lambda(J)\mathbb{Z}}f_{X}(y)\, \mathrm{d}\lambda.\]
This can be viewed as the size of some sort of a "maximum projection" of \(X\) onto \([0,\lambda(J))\). It is worth pointing out that in the above definition while we are considering, for every \(x\in[0,\lambda(J))\), the maximum over all \(y\in x+\lambda(J)\mathbb{Z}\), in practice, this maximum only considers finitely many values of \(y\) since the function \(f_{X}\) is only supported on the set \([0,\lambda(J)+\lambda(I)]\).
Our first aim is to show that
\[\mu(X)\geq\mu(A)+\pi(X). \tag{3.1}\]
We proceed by observing that
\[f_{X}(x)=f_{A}(x)\text{ for $x\in[0,\lambda(J))$ \ and \ $f_{X}(x)=f_{A}(x-\lambda(J))$ for all $x>\lambda(I)$},\]
and so, we may apply Lemma 2.1 to discern that
\[\mu(X)=\int_{\lambda(I)-\lambda(J)}^{\lambda(I)}f_{A}(x)\,\mathrm{d}\lambda(x )+\int_{0}^{\lambda(I)}f_{X}(x)\,\mathrm{d}\lambda(x),\]
which, in turn, gives us
\[\mu(X)-\mu(A)=\int_{0}^{\lambda(I)}(f_{X}(x)-f_{A}(x-\lambda(J)))\,\mathrm{d} \lambda(x).\]
Note that in the above expression, \(f_{A}(x-\lambda(J))=0\) for every \(x\in[0,\lambda(J))\). Splitting the integral from \([0,\lambda(I)]\) over periods of length \(\lambda(J)\), we may now deduce that
\[\mu(X)-\mu(A)=\int_{0}^{\lambda(J)}\sum_{k=0}^{\infty}(f_{X}(x+k\lambda(J))-f _{A}(x+(k-1)\lambda(J)))\,\mathrm{d}\lambda(x). \tag{3.2}\]
While for expository purposes we are taking the sum above over all \(k\in\mathbb{N}\cup\{0\}\), we remark that as before, it is, in practice, a finite sum since for all \(k>2\lceil\lambda(I)/\lambda(J)\rceil\) we have
\[f_{X}(x+k\lambda(J))=f_{A}(x+(k-1)\lambda(J))=0\]
as \(X,A\subseteq[0,\lambda(I)+\lambda(J)]\). Now, fixing \(x_{0}\in[0,\lambda(J))\), we let \(k_{0}=k_{0}(x_{0})\in\mathbb{N}\cup\{0\}\) satisfy
\[f_{X}(x_{0}+k_{0}\lambda(J))=\max_{y\in x_{0}+\lambda(J)\mathbb{Z}}f_{X}(y).\]
Since
\[f_{X}(x)\geq\max\{f_{A}(x),f_{A}(x-\lambda(J))\}\]
for every \(x\in\mathbb{R}\) and \(f_{A}(x-\lambda(J))=0\) for every \(x<\lambda(J)\), we see that
\[\sum_{k=0}^{\infty}(f_{X}(x_{0}+k\lambda(J))-f_{A}(x_{0}+(k-1)\lambda(J)))\geq\sum_{k=0}^{k_{0}}(f_{X}(x_{0}+k\lambda(J))-f_{A}(x_{0}+(k-1)\lambda(J)))\] \[=f_{X}(x_{0}+k_{0}\lambda(J))+\sum_{k=0}^{k_{0}-1}(f_{X}(x_{0}+k\lambda(J))-f_{A}(x_{0}+k\lambda(J)))\] \[\geq f_{X}(x_{0}+k_{0}\lambda(J)).\]
Integrating the above for all \(x_{0}\in[0,\lambda(J))\) and noting (3.2), we obtain the claimed estimate
\[\mu(X)-\mu(A)\geq\int_{0}^{\lambda(J)}\max_{y\in x+\lambda(J)\mathbb{Z}}f_{X}(y)\,\mathrm{d}\lambda(x)=\pi(X).\]
In the rest of the proof we will assume that \(\pi(X)<\mu(B)\) as otherwise we are done by applying (3.1). Note that for every \(a\in A\), we have
\[\mu((a\cdot B)\setminus X) \geq\max\{\mu(a\cdot B)-\mu(X\cap\psi^{-1}([\psi(a),\psi(a)+ \lambda(J)])),0\}\] \[\geq\max\{\mu(B)-\pi(X),0\}.\]
By the assumption that \(\pi(X)<\mu(B)\), we have, for every \(a\in A\), the inequality
\[\mu((a\cdot B)\setminus X)\geq\mu(B)-\pi(X).\]
Let us now choose \(b_{3}\in B\) uniformly at random with respect to \(\mu\). Then by Fubini's theorem and the above inequality, we find that
\[\mathbb{E}(\mu((A\cdot b_{3})\setminus X)) =\frac{1}{\mu(B)}\int_{G}\mathbb{1}_{A}(a)\int_{G}\mathbb{1}_{B}( b)\mathbb{1}_{G\setminus X}(a\cdot b)\,\mathrm{d}\mu(b)\,\mathrm{d}\mu(a)\] \[=\frac{1}{\mu(B)}\int_{G}\mathbb{1}_{A}(a)\mu((a\cdot B)\setminus X )\,\mathrm{d}\mu(a)\] \[\geq\frac{\mu(A)(\mu(B)-\pi(X))}{\mu(B)}.\]
Therefore there exists \(b_{3}\in B\) such that
\[\mu((A\cdot b_{3})\setminus X)\geq\frac{\mu(A)(\mu(B)-\pi(X))}{\mu(B)}\geq\mu (B)-\pi(X),\]
where we have used the hypothesis that \(\mu(B)\leq\mu(A)\). Combining this with (3.1), we get that
\[\mu(A\cdot\{b_{1},b_{2},b_{3}\})=\mu(X)+\mu((A\cdot b_{3})\setminus X)\geq\mu (A)+\mu(B),\]
which concludes the proof of Lemma 3.1.
With this in hand, we will now essentially divide our proof of Theorem 1.1 into two cases, the first being the setting when the set of popular products \(A\cdot_{\delta}B\), for some appropriate choice of \(\delta\), is somewhat large, wherein, probabilistic methods suffice to dispense the desired result. The second case is when the set of popular products is small. Here, we will first apply an inverse theorem to show that our sets \(A,B\) can be mapped surjectively to large subsets of some intervals in \(\mathbb{T}\), whereupon, we will apply Lemma 3.1 to deduce the claimed estimate. We now present the deduction of our main result in the first of the above two cases.
**Lemma 3.2**.: _Let \(G\) be a connected, compact group, let \(A,B\subseteq G\) be compact sets and let \(0<\delta<\min\{\mu(A),\mu(B)\}\) be a real number. Then for every \(c\geq(\mu(B)/\delta)^{2}\), there exist \(b_{1},\ldots,b_{c}\in B\) such that_
\[\mu(A\cdot\{b_{1},\ldots,b_{c}\})\geq(1-O(\exp(-c^{1/2})))\mu(A\cdot_{\delta}B).\]
Proof.: We choose a set with \(c\) elements \(B_{c}:=\{b_{1},\ldots,b_{c}\}\) from \(B\) uniformly at random. Note that an element \(g\) is in \(A\cdot B_{c}\) if and only if at least one of \(b_{i}\) in \(b_{1},\ldots,b_{c}\) satisfies that \(b_{i}^{-1}\in g^{-1}A\). Thus, the probability that \(g\in A\cdot B_{c}\) is
\[1-\left(1-\frac{\mu(B\cap A^{-1}g)}{\mu(B)}\right)^{c}=1-\left(1-\frac{1_{A}* 1_{B}(g)}{\mu(B)}\right)^{c},\]
and so, whenever \(g\in A\cdot_{\delta}B\), then the probability that \(g\in A\cdot B_{c}\) is at least \(1-(1-\delta/\mu(B))^{c}\). Since \(c\geq(\mu(B)/\delta)^{2}\), we may deduce that
\[\left(1-\frac{\delta}{\mu(B)}\right)^{c}\leq\exp\left(-\frac{c\delta}{\mu(B)} \right)\ll\exp(-c^{1/2}),\]
with the first inequality above following from the fact that for every \(x\in(0,1)\), one has \(1-x\leq\exp(-x)\). Thus by Markov's inequality, there exists a choice of \(\{b_{1},b_{2},\ldots,b_{c}\}\) such that
\[\mu((A\cdot_{\delta}B)\setminus(A\cdot\{b_{1},b_{2},\ldots,b_{c}\}))\ll\exp(- c^{1/2})\mu(A\cdot_{\delta}B).\]
This, in turn, implies that
\[\mu(A\cdot\{b_{1},b_{2},\ldots,b_{c}\})\geq(1-O(\exp(-c^{1/2})))\mu(A\cdot_{ \delta}B),\]
as desired.
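A quick numerical illustration of this random-selection step in a finite analogue (our toy setup, not part of the proof): for intervals in \(\mathbb{Z}/n\mathbb{Z}\), a few dozen random translates of \(A\) by elements of \(B\) already cover a set whose size is close to \(|A|+|B|\); all parameters below are illustrative assumptions.

```python
# Toy check of the random-selection argument of Lemma 3.2 in the finite analogue
# G = Z/nZ: translates A + b_i, for a few random b_i drawn from B, already cover
# nearly |A| + |B| elements.
import numpy as np

rng = np.random.default_rng(0)
n = 1009
A = np.arange(0, 300)                        # A = {0,...,299}
B = np.arange(0, 200)                        # B = {0,...,199}
c = 50                                       # number of random translates
picks = rng.choice(B, size=c, replace=True)
covered = np.unique((A[:, None] + picks[None, :]) % n)
print(len(covered), len(A) + len(B))         # close to |A| + |B| = 500
```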
We are now ready to prove Theorem 1.1.
Proof of Theorem 1.1.: Given \(\varepsilon>0\), let \(\eta=\eta(\varepsilon,\alpha)>0\) be as in Theorem 2.2. Let us first consider the case when
\[\mu(A\cdot_{\eta\mu(B)}B)\leq\mu(A)+\mu(B)+\eta\mu(B).\]
As \(\mu(A)+\mu(B)\leq 1-\beta\) for some \(\beta>0\), by letting \(\varepsilon\) be sufficiently small we may assume that the right-hand side of the above inequality is smaller than \(1\). Thus by Theorem 2.2, there is a continuous surjective group homomorphism \(\chi:G\to\mathbb{T}\), and two compact intervals \(I,J\subseteq\mathbb{T}\), so that with \(\lambda\) the normalised Lebesgue measure on \(\mathbb{T}\),
\[\lambda(I)\leq(1+\varepsilon)\mu(A),\qquad\lambda(J)\leq(1+\varepsilon)\mu(B),\]
and \(A\subseteq\chi^{-1}(I)\), \(B\subseteq\chi^{-1}(J)\). By choosing a sufficiently small \(\varepsilon\), we may assume \(\lambda(I)+\lambda(J)<1\). Then by Lemma 3.1, there are \(b_{1},b_{2},b_{3}\in B\) such that
\[\mu(A\cdot\{b_{1},b_{2},b_{3}\})\geq\mu(A)+\mu(B).\]
It remains to consider the case when
\[\mu(A\cdot_{\eta\mu(B)}B)>\mu(A)+\mu(B)+\eta\mu(B).\]
By Lemma 3.2, for every \(c\geq\eta^{-2}\) there exist \(b_{1},\ldots,b_{c}\) from \(B\) such that
\[\mu(A\cdot\{b_{1},\ldots,b_{c}\}) \geq(1-O(\exp(-c^{1/2})))\mu(A\cdot_{\eta\mu(B)}B)\] \[>(1-O(\exp(-c^{1/2})))(\mu(A)+\mu(B)+\eta\mu(B))\] \[\geq\mu(A)+\mu(B),\]
where the latter inequality follows by choosing \(c\) to be sufficiently large compared to \(\alpha\) so as to ensure that
\[\frac{\eta\mu(B)}{2}\geq\frac{\eta\mu(A)}{2\alpha}\gg\exp(-c^{1/2})\mu(A).\]
This finishes the proof of Theorem 1.1.
We conclude this section by noting that our inverse result Theorem 1.3 follows immediately from a combination of Theorem 2.2 and Lemma 3.2.
Proof of Theorem 1.3.: We will proceed by showing that the contra-positive statement holds true. Thus, given \(\varepsilon,\beta>0\), suppose that the conclusion of Theorem 1.3 does not hold. Applying Theorem 2.2, we may find \(\eta^{\prime}>0\) such that
\[\mu(A\cdot_{\eta^{\prime}\mu(B)}B)>\mu(A)+\mu(B)+\eta^{\prime}\min\{\mu(A),\mu (B)\}.\]
Now by Lemma 3.2, for every \(c\geq(\eta^{\prime})^{-2}\), there exist \(b_{1},\ldots,b_{c}\) in \(B\) such that
\[\mu(A\cdot\{b_{1},\ldots,b_{c}\})>(1-O(\exp(-c^{1/2})))(\mu(A)+\mu(B)+\eta^{\prime}\min\{\mu(A),\mu(B)\}).\]
As \(\mu(B)\geq\alpha^{-1}\mu(A)\), we may now set \(\eta=\eta^{\prime}/2\) and note that whenever \(c\) is large enough, we will get
\[\mu(A\cdot\{b_{1},\ldots,b_{c}\})>\mu(A)+\mu(B)+\eta\min\{\mu(A),\mu(B)\}.\]
This finishes the proof of Theorem 1.3.
## 4. Proof of Theorem 1.2
In this section, we will present the proof of Theorem 1.2 and we begin by recording the following weaker version of Theorem 1.2 where we allow the parameter \(c=c(d)\) to also depend on the number of translates of a one dimensional subspace that we would require to cover \(A\).
**Lemma 4.1**.: _For every \(d,r\in\mathbb{N}\) with \(d\leq r\), there exists \(s\in\mathbb{N}\) with \(s\leq 3dr^{2}\) such that the following holds true. Let \(A\) be a finite, non-empty subset of \(\mathbb{Z}^{d}\) such that \(\dim(A)=d\) and \(A\subseteq l_{1}\cup\cdots\cup l_{r}\), where \(l_{1},\ldots,l_{r}\) are parallel lines such that_
\[|A\cap l_{i}|\geq 2\]
_for each \(1\leq i\leq r\). Then there exist \(a_{1},\ldots,a_{s}\in A\) such that_
\[|A+\{a_{1},\ldots,a_{s}\}|\geq(d+1)|A|-3dr.\]
We note that Lemma 4.1 and Theorem 1.5 combine together to deliver Theorem 1.2 in a straightforward manner.
Proof of Theorem 1.2.: Let \(A\subseteq\mathbb{R}^{d}\) be a finite, non-empty set with \(\dim(A)=d\). Applying Theorem 1.5, we can either find \(a_{1},\ldots,a_{c}\in A\) such that
\[|A+\{a_{1},\ldots,a_{c}\}|\geq(d+1+1/16)|A|,\]
or we have that \(|A|\ll_{d}1\) or \(A\) is contained in at most \((d+1)^{2}\) translates of some line \(l\). We are done in the first case, and so, suppose that \(|A|\ll_{d}1\). In this case, we may take \(\{a_{1},\ldots,a_{c}\}=A\) and apply (1.3) to deduce that
\[|A+\{a_{1},\ldots,a_{c}\}|=|A+A|\geq(d+1)|A|-d(d+1)/2,\]
which is better than the desired bound. Thus, we now consider the final case where \(A\) is contained in at most \((d+1)^{2}\) translates of some line \(l\). Here, we may remove at most \(2(d+1)^{2}\) many elements from \(A\) to further assume that \(A\) is covered by translates \(l_{1},\ldots,l_{r}\) of \(l\) with \(r\leq(d+1)^{2}\), such that \(|A\cap l_{i}|\geq 2\) for every \(1\leq i\leq r\). In this setting, we may now apply Lemma 4.1 to obtain the desired bound in a straightforward manner.
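Before turning to Lemma 4.1, here is a quick brute-force sanity check (ours, not part of the argument) of the bound of Theorem 1.2 on the extremal configuration (1.4) in the plane; the four translates below are chosen by hand for \(d=2\) and small \(N\).

```python
# Brute-force check of Theorem 1.2 on the example (1.4) with d = 2:
# A = {0, e_1} x {1,...,N}. Four hand-picked translates already give
# |A + A'| = 3|A| - 3, matching the main term (d+1)|A| up to O_d(1).
N = 50
A = [(i, j) for i in (0, 1) for j in range(1, N + 1)]
A_prime = [(0, 1), (0, N), (1, 1), (1, N)]
sumset = {(a[0] + b[0], a[1] + b[1]) for a in A for b in A_prime}
print(len(A), len(sumset), 3 * len(A) - 3)   # 100 297 297
```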
Thus, our aim for the rest of this section is to prove Lemma 4.1.
Proof of Lemma 4.1.: We will prove this lemma by induction, and thus, note that when \(d=1\), this follows from (2.1). We may now assume that \(d\geq 2\), in which case, we denote \(H\) to be the \((d-1)\)-dimensional subspace orthogonal to \(l_{1}\) and we let \(\pi:\mathbb{R}^{d}\to H\) be the natural projection map. We write \(p_{i}=A\cap l_{i}\) and \(x_{i}=\pi(p_{i})\) for every \(1\leq i\leq r\) as well as \(x^{\pi}=\pi^{-1}(x)\cap A\) for every \(x\in H\). This allows us to define \(X=\pi(A)=\{x_{1},\ldots,x_{r}\}\). Note that since \(\dim(A)=d\), we have that \(\dim(X)=d-1\). Let \(\mathcal{C}\) be the convex hull of \(X\) and without loss of generality, let \(x_{r}\in X\) be an extreme point of \(X\), that is, let \(x_{r}\) be a vertex on the convex hull \(\mathcal{C}\) of X. We now consider the set \(X^{\prime}=X\setminus\{x_{r}\}\) and its preimage \(A^{\prime}=\{a\in A:\pi(a)\in X^{\prime}\}\).
Noting that \(d-2\leq\dim(X^{\prime})\leq d-1\), we divide our proof into two cases, the first being when \(\dim(X^{\prime})=d-2\). In this case, since \(\dim(X)=d-1\), we see that \(x_{r}\) does not lie in the affine span of \(X^{\prime}\), whence, the sets
\[\{2x_{r}\},X^{\prime}+\{x_{r}\}\ \ \text{and}\ \ X^{\prime}+X^{\prime}\]
are pairwise disjoint. This, in turn, implies that the sets
\[x_{r}^{\pi}+x_{r}^{\pi},x_{r}^{\pi}+x_{1}^{\pi},\ldots,x_{r}^{\pi}+x_{r-1}^{ \pi},A^{\prime}+A^{\prime}\]
are pairwise disjoint. We may now apply the inductive hypothesis for \(A^{\prime}\) to deduce that there exist \(a_{1},\ldots,a_{s^{\prime}}\in A^{\prime}\), with \(s^{\prime}\leq 3(d-1)(r-1)^{2}\), such that
\[|A^{\prime}+\{a_{1},\ldots,a_{s^{\prime}}\}|\geq d|A^{\prime}|-3(d-1)(r-1). \tag{4.1}\]
Let \(I_{1}=\{1\leq i\leq r:|x_{r}^{\pi}|\geq|x_{i}^{\pi}|\}\) and let \(I_{2}=\{1,\ldots,r\}\setminus I_{1}\). Given \(i\in I_{1}\), we can apply (2.1) for the sumset \(x_{r}^{\pi}+x_{i}^{\pi}\) to obtain elements \(a_{i,1},\ldots,a_{i,3}\in x_{i}^{\pi}\) such that
\[|x_{r}^{\pi}+\{a_{i,1},\ldots,a_{i,3}\}|\geq|x_{r}^{\pi}|+|x_{i}^{\pi}|-1.\]
Similarly, for every \(i\in I_{2}\), we can apply (2.1) for the sumset \(x_{r}^{\pi}+x_{i}^{\pi}\) to obtain elements \(a_{i,1},\ldots,a_{i,3}\in x_{r}^{\pi}\) such that
\[|x_{i}^{\pi}+\{a_{i,1},\ldots,a_{i,3}\}|\geq|x_{r}^{\pi}|+|x_{i}^{\pi}|-1.\]
Putting these two together along with (4.1), we obtain a set
\[B=\{a_{1,1},\ldots,a_{r,3},a_{1},\ldots,a_{s^{\prime}}\}\subseteq A\]
with \(|B|\leq s^{\prime}+3r\) such that
\[|A+B| \geq|A^{\prime}+\{a_{1},\ldots,a_{s^{\prime}}\}|+\sum_{i\in I_{1}}|x _{r}^{\pi}+\{a_{i,1},\ldots,a_{i,3}\}|+\sum_{i\in I_{2}}|x_{i}^{\pi}+\{a_{i,1}, \ldots,a_{i,3}\}|\] \[\geq d|A^{\prime}|-3(d-1)(r-1)+\sum_{1\leq i\leq r}(|x_{r}^{\pi}| +|x_{i}^{\pi}|-1)\] \[\geq(d+1)|A^{\prime}|+(r+1)|x_{r}^{\pi}|-r-3(d-1)(r-1)\] \[\geq(d+1)|A|-3dr.\]
Moreover, by the inductive hypothesis, we have
\[|B|\leq s^{\prime}+3r\leq 3(d-1)(r-1)^{2}+3r\leq 3dr^{2},\]
and consequently, we obtain the required conclusion in this case.
We now consider the case when \(\dim(X^{\prime})=d-1\). In this case, we analyse the convex hull \(\mathcal{C}^{\prime}\) of \(X^{\prime}\). Here, since \(\dim(X^{\prime})=d-1\) and since \(x_{r}\) is an extreme point of \(\mathcal{C}\), we find elements \(y_{1},\ldots,y_{d-1}\in X^{\prime}\) such that
\[\{x_{r},(x_{r}+y_{1})/2,\ldots,(x_{r}+y_{d-1})/2\}\cap\mathcal{C}^{\prime}=\varnothing.\]
This implies that the sumsets
\[x_{r}^{\pi}+x_{r}^{\pi},x_{r}^{\pi}+y_{1}^{\pi},\ldots,x_{r}^{\pi}+y_{d-1}^{ \pi},A^{\prime}+A^{\prime}\]
are pairwise disjoint. We see that \(\dim(A^{\prime})=d\) because \(\dim(X^{\prime})=d-1\), and that \(A^{\prime}\) is contained in \(r-1\) parallel lines. Thus, we may apply the inductive hypothesis for \(A^{\prime}\) to obtain \(a_{1},\ldots,a_{s^{\prime}}\in A^{\prime}\), with \(s^{\prime}\leq 3d(r-1)^{2}\), such that
\[|A^{\prime}+\{a_{1},\ldots,a_{s^{\prime}}\}|\geq(d+1)|A^{\prime}|-3d(r-1).\]
Moreover, for every \(1\leq i\leq d-1\), we fix an element \(b_{i}\in y_{i}^{\pi}\). Furthermore, we apply (2.1) for the set \(x_{r}^{\pi}+x_{r}^{\pi}\) to obtain elements \(b_{d},b_{d+1},b_{d+2}\in x_{r}^{\pi}\) satisfying
\[|x_{r}^{\pi}+\{b_{d},b_{d+1},b_{d+2}\}|\geq 2|x_{r}^{\pi}|-1.\]
As in the previous case, we see that the set \(B=\{a_{1},\ldots,a_{s^{\prime}},b_{1},\ldots,b_{d+2}\}\subseteq A\) satisfies
\[|A+B| \geq|A^{\prime}+\{a_{1},\ldots,a_{s^{\prime}}\}|+|x_{r}^{\pi}+\{b_{d},b_{d+1},b_{d+2}\}|+\sum_{1\leq i\leq d-1}|x_{r}^{\pi}+\{b_{i}\}|\] \[\geq(d+1)|A^{\prime}|-3d(r-1)+2|x_{r}^{\pi}|-1+(d-1)|x_{r}^{\pi}|\] \[\geq(d+1)|A|-3dr.\]
Moreover, by the inductive hypothesis, we have that
\[|B|\leq s^{\prime}+d+2\leq 3d(r-1)^{2}+3d\leq 3dr^{2},\]
and so, we finish our proof of Lemma 4.1.
## 5. Proof of Theorem 1.5
Our goal in this section is to present the proof of Theorem 1.5. We begin by noting the following asymmetric generalisation of Freiman's lemma which was given by Ruzsa [19]. Thus, whenever \(A,B\subseteq\mathbb{R}^{d}\) are finite sets with \(\dim(A)=d\) and \(|A|\geq|B|\), then
\[|A+B|\geq|A|+d|B|-d(d+1)/2.\]
Our first objective in this section is to show that in a slightly more specific version of this setting, whenever \(A+B\) is close to the above lower bound, then both \(A\) and \(B\) can be efficiently covered by translates of a one dimensional subspace, which, in turn, implies that both \(A\) and \(B\) have dense subsets lying on translates of the same line.
**Lemma 5.1**.: _Let \(A,B\subseteq\mathbb{R}^{d}\) be finite sets with \(\dim(A)=d\) and \(|A|=|B|\) and_
\[|A+B|\leq|A|+(d+1/7)|B|-O_{d}(1).\]
_Then either \(|A|\ll_{d}1\) or there exists some line \(l\) and some \(x,y\in\mathbb{R}^{d}\) such that_
\[|A\cap(x+l)|\geq|A|/d\ \ \text{and}\ \ |B\cap(y+l)|\geq|B|/d.\]
Proof.: This is true trivially when \(A,B\subseteq\mathbb{R}\), whence, we may assume that \(d\geq 2\). Applying Lemma 2.7 with \(k=2\), we may deduce that
\[|A+A|\leq(d+8/7)^{2}|A|.\]
We may now apply Lemma 2.3 to obtain parallel lines \(l_{1},\dots,l_{r}\), with \(r\ll_{d}|A|^{1-\sigma}\) for some \(\sigma\in(0,1/2]\) that only depends on \(d\), such that \(A\subseteq l_{1}\cup\dots\cup l_{r}\) and
\[|l_{1}\cap A|\gg_{d}|A|^{\sigma}.\]
We now cover \(B\) with lines that are parallel to \(l_{1}\), and so, let \(q\in\mathbb{N}\) be the minimal natural number such that \(B\subseteq m_{1}\cup\dots\cup m_{q}\), where \(m_{1},\dots,m_{q}\) are lines parallel to \(l_{1}\). Note that \(q\ll_{d}|A|^{1-\sigma}\), since
\[(d+8/7)|A|\geq|A+B|\geq\sum_{i=1}^{q}|l_{1}+m_{i}|\geq q|l_{1}|\gg_{d}q|A|^{ \sigma}.\]
This allows us to apply Lemma 2.4, whence, we have
\[|A+B|\geq|A|+\Big{(}d+1-\frac{1}{r-d+2}-\frac{1}{q-c+2}\Big{)}|B|-(d-1)(r+q),\]
where \(c=\min\{d,\dim(B)\}\). Here, since \(r,q\ll_{d}|A|^{1-\sigma}\), we may combine this with the hypothesis of Lemma 5.1 to see that
\[\Big{(}\ \frac{6}{7}-\frac{1}{r-d+2}-\frac{1}{q-c+2}\ \Big{)}|B|\ll_{d}r+q\ll_{d}|A|^{1- \sigma}.\]
Note that if \(r>d\) or \(q>c\), then
\[1/(r-d+2)+1/(q-c+2)\leq 1/2+1/3=5/6,\]
which combines with the above inequality to give us \(|A|=|B|\ll_{d}1\). On the other hand, when \(r=d\) and \(q=c\), then we must have some \(1\leq i\leq r\) and \(1\leq j\leq q\) such that
\[|A\cap l_{i}|\geq|A|/r=|A|/d\ \ \text{and}\ \ |B\cap m_{j}|\geq|B|/q=|B|/c\geq|B|/d.\]
Thus, we conclude our proof of Lemma 5.1.
With this in hand, we will now proceed with our proof of Theorem 1.5.
Proof of Theorem 1.5.: Our aim is to show that given a finite, non-empty set \(A\subseteq\mathbb{R}^{d}\) such that \(\dim(A)=d\), we either have \(|A|\ll_{d}1\) or \(A\) is covered by at most \((d+1)^{2}\) translates of some line or we can find elements \(a_{1},\ldots,a_{c}\in A\) such that
\[|A+\{a_{1},\ldots,a_{c}\}|\geq(d+1+1/16)|A|.\]
Setting \(K=(d+1+1/10)\) and \(\varepsilon=100^{-d^{2}}\), we apply Lemma 2.5 to obtain \(c=c(d)>0\) and a set \(A^{*}\subseteq A\) such that \(|A^{*}|\geq(1-100^{-d^{2}})|A|\) and such that if we select \(a_{1},\ldots,a_{c}\in A^{*}\) uniformly at random from \(A^{*}\), then
\[\mathbb{E}_{a_{1},\ldots,a_{c}\in A^{*}}|A^{*}+\{a_{1},\ldots,a_{c}\}|\geq\min \{(1-100^{-d^{2}})|A^{*}+A^{*}|,(d+1+1/10)|A^{*}|\}.\]
Here, if
\[(1-100^{-d^{2}})|A^{*}+A^{*}|\geq(d+1+1/10)|A^{*}|, \tag{5.1}\]
then we have that
\[\mathbb{E}_{a_{1},\ldots,a_{c}\in A^{*}}|A^{*}+\{a_{1},\ldots,a_{ c}\}| \geq(1-100^{-d^{2}})(d+1+1/10)|A|\] \[\geq(d+1+1/12)|A|,\]
whence, we are done. Thus, we may assume that (5.1) does not hold true, in which case, we obtain elements \(a_{1},\ldots,a_{c}\in A^{*}\) such that
\[|A^{*}+\{a_{1},\ldots,a_{c}\}|\geq(1-100^{-d^{2}})|A^{*}+A^{*}|. \tag{5.2}\]
We now suppose that \(\dim(A^{*})=m\) for some \(1\leq m\leq d\). We first consider the subcase when
\[|A^{*}+A^{*}|\geq(m+1+1/10)|A^{*}|-m(m+1)/2. \tag{5.3}\]
Here, since \(\dim(A)=d\), we see that there exist elements \(b_{m+1},\ldots,b_{d}\in A\setminus A^{*}\) which are linearly independent and are not contained in the affine span of \(A^{*}\). Thus, we may now set \(A^{\prime}=\{a_{1},\ldots,a_{c}\}\cup\{b_{m+1},\ldots,b_{d}\}\) to see that
\[|A+A^{\prime}| \geq|A^{*}+A^{\prime}|=|A^{*}+\{a_{1},\ldots,a_{c}\}|+\sum_{i=m+ 1}^{d}|A^{*}+\{b_{i}\}|\] \[\geq(1-100^{-d^{2}})|A^{*}+A^{*}|+(d-m)|A^{*}|\] \[\geq(1-100^{-d^{2}})(m+1+1/10)|A^{*}|+(d-m)|A^{*}|-m(m+1)/2\] \[\geq(d+1+1/15)|A^{*}|-m(m+1)/2\] \[\geq(d+1+2/31)|A|-d(d+1),\]
where the second and third inequalities utilise (5.2) and (5.3) respectively. Thus we either have that
\[|A+A^{\prime}|\geq(d+1+1/16)|A|\]
or \(|A|\ll_{d}1\), and so, we are done in this subcase.
Noting the above, we may now assume that (5.3) does not hold true. After applying suitable affine transformations, we may employ Lemma 5.1 to show that either \(|A|\ll_{m}1\) or there exists some line \(l\) in \(\mathbb{R}^{d}\) such that
\[\big{|}A\cap l\big{|}\geq|A^{*}\cap l|\geq|A^{*}|/m\geq|A|(1-100^{-d^{2}}) \big{/}d. \tag{5.4}\]
Since \(m\leq d\) we are done in the setting when \(|A|\ll_{m}1\), and so, we may assume that some \(l\) exists such that (5.4) holds. We now cover \(A\) with translates of the line \(l\), and so, let \(l_{1},\ldots,l_{r}\) be lines parallel to \(l\) such that
\[A\subseteq l_{1}\cup\cdots\cup l_{r},\]
and \(r\) is minimal. We may assume that \(r\geq(d+1)^{2}+1\), since otherwise, we are done. Moreover, writing \(p_{i}=A\cap l_{i}\) for every \(1\leq i\leq r\), we may assume that
\[|p_{1}|\geq\cdots\geq|p_{r}|,\]
and so,
\[|p_{1}|\geq|A\cap l|\geq|A|(1-100^{-d^{2}})/d.\]
Writing \(r_{0}=(d+1)^{2}+1\), observe that the sumsets
\[p_{1}+p_{1},\ldots,p_{1}+p_{r_{0}}\]
are pairwise disjoint, and so, we may apply (2.1) for each such sumset to procure elements \(a_{i,1},a_{i,2},a_{i,3}\in p_{i}\), for every \(1\leq i\leq r_{0}\), such that
\[|A+\{a_{1,1},\ldots,a_{r_{0},3}\}| \geq\sum_{i=1}^{r_{0}}|p_{1}+\{a_{i,1},\ldots,a_{i,3}\}|\] \[\geq\sum_{i=1}^{r_{0}}(|p_{1}|+|p_{i}|-1)\geq r_{0}|p_{1}|\] \[>(d+1)^{2}|A|(1-100^{-d^{2}})/d\] \[\geq(d+2)|A|,\]
which implies the desired bound and, consequently, concludes the proof of Theorem 1.5.
|
2305.02501
|
Optimal boundary control for the Cahn-Hilliard-Navier-Stokes Equations
|
In this work, we study an optimal boundary control problem for a Cahn -
Hilliard -Navier-Stokes (CHNS) system in a two dimensional bounded domain. The
CHNS system consists of a Navier-Stokes equation governing the fluid velocity
field coupled with a convective Cahn - Hilliard equation for the relative
concentration of the fluids. An optimal control problem is formulated as the
minimization of a cost functional subject to the controlled CHNS system where
the control acts on the boundary of the Navier-Stokes equations. We first prove
that there exists an optimal boundary control. Then we establish that the
control-to-state operator is Frechet differentiable and derive first-order
necessary optimality conditions in terms of a variational inequality involving
the adjoint system.
|
Manika Bag, Tania Biswas, Sheetal Dharmatti
|
2023-05-04T02:15:05Z
|
http://arxiv.org/abs/2305.02501v3
|
# Optimal boundary control for a Cahn-Hilliard-Navier-Stokes equations
###### Abstract.
In this work, we study an optimal boundary control problem for a Cahn-Hilliard-Navier-Stokes (CHNS) system in a two-dimensional bounded domain. The CHNS system consists of a Navier-Stokes equation governing the fluid velocity field coupled with a convective Cahn-Hilliard equation for the relative concentration of the fluids. An optimal control problem is formulated as the minimization of a cost functional subject to the controlled CHNS system where the control acts on the boundary of the Navier-Stokes equations. We first prove that there exists an optimal boundary control. Then we establish that the control-to-state operator is Frechet differentiable and derive first-order necessary optimality conditions in terms of a variational inequality involving the adjoint system.
\({}^{1}\)School of Mathematics, Indian Institute of Science Education and Research, Thiruvananthapuram (IISER-TVM), Maruthamala PO, Vithura, Thiruvananthapuram, Kerala, 695 551, INDIA
_e-mail:_ [email protected]
\({}^{*}\)Corresponding author.
**Acknowledgments**: Manika Bag would like to thank the Indian Institute of Science Education and Research, Thiruvananthapuram, for providing financial support and the stimulating environment for the research. The work of Tania Biswas was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT)(NO.2020R1A4A1018190). The work of Sheetal Dharmatti is supported by SERB grant SERB-CRG/2021/008278 of the Government of India.
problems related to the CHNS system. Mathematically, boundary control problems are harder to deal with, specifically obtaining the optimality conditions, as higher regularity of the solution is often required. Also, the mathematical analysis becomes more challenging since the CHNS system is a highly nonlinear coupled system. To the best of our knowledge, most of the analytical work related to optimal control problems for the CHNS system is devoted to the case of distributed control. This work is the first contribution to the analytic study of the boundary control problem for the CHNS system.
The system under consideration in this work is the following controlled CHNS system where control acts on the boundary of the Navier-Stokes equation as a time-dependent Dirichlet boundary condition:
\[\left\{\begin{aligned} \varphi_{t}+\mathbf{u}\cdot\nabla\varphi&=\Delta\mu,\ \ \text{in}\ \Omega\times(0,T),\\ \mu&=-\Delta\varphi+F^{\prime}(\varphi),\\ \mathbf{u}_{t}-\nu\Delta\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}+\nabla\pi&=\mu\nabla\varphi,\ \ \text{in}\ \Omega\times(0,T),\\ \operatorname{div}\mathbf{u}&=0,\ \ \text{in}\ \Omega\times(0,T),\\ \frac{\partial\varphi}{\partial\mathbf{n}}&=0,\ \ \frac{\partial\mu}{\partial\mathbf{n}}=0,\ \ \text{on}\ \Sigma,\\ \mathbf{u}&=\mathbf{h},\ \ \text{on}\ \Sigma,\\ \mathbf{u}(0)&=\mathbf{u}_{0},\ \ \varphi(0)=\varphi_{0},\ \ \text{in}\ \Omega.\end{aligned}\right. \tag{1.1}\]
Here \(\mathbf{u}(\mathbf{x},t)\) is the average velocity of the fluid, and \(\varphi(\mathbf{x},t)\) is the relative concentration of the fluid. Also, \(\Omega\) is a bounded domain in \(\mathbb{R}^{2},\) with a sufficiently smooth boundary \(\partial\Omega\). The density is taken as matched density, i.e., constant density, which is equal to 1. Moreover, \(\mu\) denotes the chemical potential, \(\pi\) denotes the pressure, \(\nu\) denotes the viscosity, \(F\) is a double-well potential, and \(\mathbf{h}\) is the control acting on the boundary of the domain. Furthermore, \(\mu\) is the first variation of the Helmholtz free energy functional
\[\mathcal{E}(\varphi):=\int_{\Omega}\biggl{(}\frac{1}{2}|\nabla \varphi|^{2}+\mathrm{F}(\varphi(x))\biggr{)}\,\mathrm{d}x, \tag{1.2}\]
where \(\mathrm{F}\) is a double-well potential of the regular type. A typical example of regular \(\mathrm{F}\) is
\[\mathrm{F}(s)=(s^{2}-1)^{2},\ s\in\mathbb{R}. \tag{1.3}\]
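For completeness, here is the standard (and here only sketched) computation showing that \(\mu\) in (1.1) is the first variation of (1.2): for a smooth variation \(\psi\), integrating by parts and using the boundary condition \(\frac{\partial\varphi}{\partial\mathbf{n}}=0\) from (1.1),
\[\frac{\mathrm{d}}{\mathrm{d}\epsilon}\,\mathcal{E}(\varphi+\epsilon\psi)\Big{|}_{\epsilon=0}=\int_{\Omega}\bigl(\nabla\varphi\cdot\nabla\psi+\mathrm{F}^{\prime}(\varphi)\psi\bigr)\,\mathrm{d}x=\int_{\Omega}\bigl(-\Delta\varphi+\mathrm{F}^{\prime}(\varphi)\bigr)\psi\,\mathrm{d}x,\]
so that \(\mu=\frac{\delta\mathcal{E}}{\delta\varphi}=-\Delta\varphi+\mathrm{F}^{\prime}(\varphi)\), which is precisely the second equation of (1.1).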
We now discuss some of the works available in the literature for the solvability of system (1.1) when \(\mathbf{h}=0.\) In [8], the authors established the existence and uniqueness of weak solutions and strong solutions in 2 and 3 dimensions in the case of regular potential (for strong solutions, global in 2D and local in time in 3D). The authors have proved the existence and uniqueness results for the case of singular potential in [1, 22]. In [20], the authors have studied the asymptotic behavior, where they have proved the existence of a global and an exponential attractor. For the system (1.1), when the time-dependent Dirichlet boundary condition for the Navier-Stokes equation is considered, in [5], we have proved the existence and uniqueness of weak solutions and the existence of strong solutions. Moreover, we refer to [2, 3, 4, 19] and the references therein for more general models considering nonconstant viscosity, general density, thermodynamically consistent models, compressible fluids, moving contact lines, etc.
In this work, our aim is to study the optimal boundary control problem associated with the system (1.1). Let us now briefly mention some related literature on optimal control problems. Optimal control problems for Cahn-Hilliard equations have been studied by several authors in [10, 39, 11, 28, 40] and the references therein, for both distributed and boundary controls. Distributed optimal control problems for the Navier-Stokes equations have been considered in [23, 24, 14, 9], to name a few, whereas boundary optimal control problems for the Navier-Stokes equations have been studied in [25, 16, 17, 18]. In [16], the 2D Navier-Stokes equations on an unbounded domain are considered, with the control acting on the boundary. In the seminal paper by the same authors [18], optimal boundary control for the Navier-Stokes system in 3D has been studied. Turning to optimal control problems for the CHNS system, there are a few analytical works available in the literature that we would like to mention. The robust control problem for the local CHNS system is investigated in [35], and the optimal control problem with state constraints is considered in [34]. For the nonlocal CHNS system, we mention [6, 7, 13, 12] for regular and singular potentials, respectively. In [6], the authors investigated an enstrophy minimization problem and also a data assimilation type problem in which the control acts as the initial data. In all the other works mentioned above, a distributed optimal control problem has been studied in terms of minimizing a standard tracking-type cost functional. Regarding numerical studies, optimal control problems for the semi-discrete CHNS system in various settings, such as distributed and boundary control, nonsmooth Landau-Ginzburg energies, and non-matched fluid densities, are studied in [27, 29, 30]. These works considered the local Cahn-Hilliard-Navier-Stokes equations for their numerical studies.
In this work, we consider the control as Dirichlet boundary data for \(\mathbf{u}\) in the system (1.1). Now with the required well-posedness result of (1.1) in hand [see [5] for details], the road is paved for studying optimal control problems associated with the system (1.1). We restrict ourselves to dimension 2 as the regularity of the solution required to study optimal control problems is available in dimension 2 only. The control problem under investigation in this paper reads as follows:
Let us consider the quadratic cost functional
\[\mathcal{J}(\mathbf{u},\varphi,\mathbf{h})=\frac{1}{2}\int_{0}^{ T}\|\mathbf{u}-\mathbf{u}_{Q}\|^{2}+ \frac{1}{2}\int_{0}^{T}\|\varphi-\varphi_{Q}\|^{2}+\frac{1}{2}\| \mathbf{u}(T)-\mathbf{u}_{\Omega}\|^{2}_{\mathrm{L}^{2}(\Omega)}\] \[+\frac{1}{2}\|\varphi(T)-\varphi_{\Omega}\|^{2}_{\mathrm{L}^{2}( \Omega)}+\frac{1}{2}\int_{0}^{T}\|\mathbf{h}\|^{2}_{\mathrm{L}^{2}(\partial \Omega)}. \tag{1.4}\]
So the optimal control problem is formulated as follows:
\[(\mathbf{OCP})\qquad\min_{\mathbf{h}\in\mathcal{U}_{ad}}\mathcal{J}(\mathbf{ u},\varphi,\mathbf{h}),\]
subject to (1.1). Here \(\mathbf{u}_{Q},\,\varphi_{Q},\,\mathbf{u}_{\Omega},\,\varphi_{\Omega}\) are the target functions, with \(\mathbf{u}_{Q}\in\mathrm{L}^{2}(0,T;\mathbb{L}^{2}_{div}(\Omega)),\;\varphi_{Q}\in\mathrm{L}^{2}(\Omega\times(0,T)),\,\mathbf{u}_{\Omega}\in\mathbb{L}^{2}_{div}(\Omega),\,\varphi_{\Omega}\in\mathrm{L}^{2}(\Omega)\). The motivation for the problem \((\mathbf{OCP})\) is to find the best control \(\mathbf{h}\) from the set of admissible controls such that the corresponding solution of (1.1) is as close as possible to the target state. The last term in (1.4) represents the cost of the control effort needed to reach the desired state. Throughout this paper, we assume that \(T\) is a given finite final time and set
\[Q=\Omega\times(0,T),\quad\Sigma=\partial\Omega\times(0,T).\]
To give an overview of this paper: in the state problem (1.1), \(\mathbf{h}\) acts as a boundary control, which is assumed to belong to a suitable closed, convex, bounded subset \(\mathcal{U}_{ad}\) of a suitable Banach space \(\mathcal{U}\) [to be specified later, see (3.1), (3.4)]. In (1.4), \((\mathbf{u},\varphi)\) is the strong solution of the state problem (1.1) corresponding to the time-dependent Dirichlet boundary data \(\mathbf{u}|_{\partial\Omega}=\mathbf{h}\). The main results of this paper are summarized as follows:
1. We establish the existence of an optimal boundary control for the problem \((\mathbf{OCP}).\) [See Theorem (3.2)]
2. We show that the control to state operator \(\mathcal{S}\) is Frechet differentiable between suitable Banach spaces. [See Theorem (4.3)]
3. We derive the adjoint system corresponding to state problem (1.1) and establish the well-posedness of the adjoint system. [See Theorem (5.3)]
4. Finally, we derive the first-order necessary optimality condition in terms of a variational inequality involving adjoint states. [See Theorem (5.7)]
The plan of the paper is as follows: in the next section, we give some preliminaries about the function spaces and operators used in the sequel. We also recall the well-posedness results required in this work and give a brief sketch of the proof that the strong solution depends continuously on the boundary data. This result is crucial for proving the existence of an optimal control, which is tackled in Section 3. Section 4 is devoted to the linearized system, which arises naturally when one wants to establish the differentiability of the control to state operator. Finally, in Section 5, we characterize the optimal control with the help of the adjoint system and establish the well-posedness of the adjoint system.
## 2. Preliminaries
### Functional Setup
Let \(\Omega\) be a bounded subset of \(\mathbb{R}^{2}\) with sufficiently smooth boundary \(\partial\Omega\). We introduce the functional spaces that will be useful in the paper.
\[\mathbb{G}_{\text{div}} :=\Big{\{}\mathbf{u}\in\mathrm{L}^{2}(\Omega;\mathbb{R}^{2}):\; \text{div}\;\mathbf{u}=0,\;\mathbf{u}\cdot\mathbf{n}\big{|}_{\partial\Omega}=0 \Big{\}},\] \[\mathbb{V}_{\text{div}} :=\Big{\{}\mathbf{u}\in\mathrm{H}^{1}_{0}(\Omega;\mathbb{R}^{2}): \;\text{div}\;\mathbf{u}=0\Big{\}},\] \[\mathbb{L}^{2}_{\text{div}} :=\Big{\{}\mathbf{u}\in\mathrm{L}^{2}(\Omega;\mathbb{R}^{2}):\; \text{div}\;\mathbf{u}=0\Big{\}},\] \[\mathbb{H}^{s}_{\text{div}} :=\Big{\{}\mathbf{u}\in\mathrm{H}^{s}(\Omega;\mathbb{R}^{2}):\; \text{div}\;\mathbf{u}=0\Big{\}},\] \[\mathrm{L}^{2}(\Omega) :=\mathrm{L}^{2}(\Omega;\mathbb{R}),\] \[\mathrm{H}^{s}(\Omega) :=\mathrm{H}^{s}(\Omega;\mathbb{R}).\]
The above spaces are defined for \(s>0\). With the usual convention, the dual space of \(\mathrm{H}^{s}(\Omega)\) is denoted by \(\mathrm{H}^{-s}(\Omega)\). Let us denote by \(\|\cdot\|\) and \((\cdot,\cdot)\) the norm and the scalar product, respectively, on \(\mathbb{L}^{2}_{\text{div}}\) and \(\mathbb{G}_{\text{div}}\). The duality pairing between any Hilbert space \(\mathbb{X}\) and its dual \(\mathbb{X}^{\prime}\) will be denoted by \(\langle\cdot,\cdot\rangle\). The space \(\mathbb{V}_{\text{div}}\) is endowed with the scalar product
\[(\mathbf{u},\mathbf{v})_{\mathbb{V}_{\text{div}}}=(\nabla\mathbf{u},\nabla \mathbf{v})=2(\mathrm{D}\mathbf{u},\mathrm{D}\mathbf{v}),\;\text{ for all }\;\mathbf{u},\mathbf{v}\in\mathbb{V}_{\text{div}}.\]
The norm on \(\mathbb{V}_{\text{div}}\) is given by \(\|\mathbf{u}\|^{2}_{\mathbb{V}_{\text{div}}}:=\int_{\Omega}|\nabla\mathbf{u} (x)|^{2}\mathrm{d}x=\|\nabla\mathbf{u}\|^{2}\). Since \(\Omega\) is bounded, the embedding of \(\mathbb{V}_{\text{div}}\subset\mathbb{G}_{\text{div}}\equiv\mathbb{G}^{\prime }_{\text{div}}\subset\mathbb{V}^{\prime}_{\text{div}}\) is compact.
### Linear and Nonlinear Operators
Let us define the Stokes operator \(\mathbf{A}:\mathrm{D}(\mathbf{A})\subset\mathbb{G}_{\text{div}}\to\mathbb{G}_{\text{div}}\) by
\[\mathbf{A}=-\mathrm{P}\Delta,\;\mathrm{D}(\mathbf{A})=\mathbb{H}^{2}(\Omega) \cap\mathbb{V}_{\text{div}},\]
where \(\mathrm{P}:\mathbb{L}^{2}(\Omega)\to\mathbb{G}_{\text{div}}\) is the _Helmholtz-Hodge orthogonal projection_. Note also that we have
\[\langle\mathbf{A}\mathbf{u},\mathbf{v}\rangle=(\mathbf{u},\mathbf{v})_{ \mathbb{V}_{\text{div}}}=(\nabla\mathbf{u},\nabla\mathbf{v}),\text{ for all }\mathbf{u}\in\mathrm{D}(\mathbf{A}),\mathbf{v}\in\mathbb{V}_{\text{div}}.\]
Let \(\mathbf{u},\,\mathbf{v}\) be two vector-valued functions. Then
\[\nabla(\mathbf{u}\cdot\mathbf{v})=(\nabla\mathbf{u})\cdot\mathbf{v}+(\nabla \mathbf{v})\cdot\mathbf{u}=\mathbf{v}^{T}(\nabla\mathbf{u})+\mathbf{u}^{T}( \nabla\mathbf{v}).\]
**Assumption 2.1**.: _We take the following assumptions on \(F\):_
1. _There exist_ \(C_{1}>0\)_,_ \(C_{2}>0\) _such that_ \(\mathrm{F}^{\prime\prime}(s)\leq C_{1}|s|^{p-1}+C_{2}\)_, for all_ \(s\in\mathbb{R}\)_,_ \(1\leq p<\infty\) _and a.e.,_ \(x\in\Omega\)_._
2. \(\mathrm{F}\in\mathrm{C}^{2}(\mathbb{R})\) _and there exists_ \(C_{3}>0\) _such that_ \(\mathrm{F}^{\prime\prime}(s)\geq-C_{3}\) _for all_ \(s\in\mathbb{R}\)_, a.e.,_ \(x\in\Omega\)_._
3. _There exist_ \(C^{\prime}_{3}>0\)_,_ \(C_{4}\geq 0\) _and_ \(r\in(1,2]\) _such that_ \(|\mathrm{F}^{\prime}(s)|^{r}\leq C^{\prime}_{3}|\mathrm{F}(s)|+C_{4}\)_, for all_ \(s\in\mathbb{R}\)_._
4. \(\mathrm{F}\in\mathrm{C}^{3}(\mathbb{R})\) _and there exists_ \(C_{5}>0\)_,_ \(|\mathrm{F}^{\prime\prime\prime}(s)|\leq C_{5}(1+|s|^{q})\) _for all_ \(s\in\mathbb{R}\) _where_ \(q<+\infty\)_._
5. \(F(\varphi_{0})\in L^{1}(\Omega)\)_._
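For instance, for the prototypical quartic potential (1.3) one has
\[\mathrm{F}^{\prime}(s)=4s(s^{2}-1),\qquad\mathrm{F}^{\prime\prime}(s)=12s^{2}-4,\qquad\mathrm{F}^{\prime\prime\prime}(s)=24s,\]
so that condition (1) holds with \(p=3\), condition (2) holds with \(C_{3}=4\), and condition (4) holds with \(q=1\). Condition (3) holds with \(r=\frac{4}{3}\in(1,2]\): for \(|s|\geq 2\) we have \(|s|^{2}\leq 2|s^{2}-1|\), hence \(|\mathrm{F}^{\prime}(s)|^{4/3}\leq C(s^{2}-1)^{2}=C\,\mathrm{F}(s)\), while for \(|s|\leq 2\) both sides are bounded. Finally, condition (5) is satisfied whenever \(\varphi_{0}\in\mathrm{H}^{1}(\Omega)\), since \(\mathrm{H}^{1}(\Omega)\hookrightarrow\mathrm{L}^{4}(\Omega)\) in two dimensions.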
### Well-Posedness of Stokes Equations
Let us consider the following Stokes' problem
\[\left\{\begin{aligned} -\Delta\mathbf{u}_{e}+\nabla\pi& =0,\;\text{ in }Q,\\ div\;\mathbf{u}_{e}&=0,\;\text{ in }Q,\\ \mathbf{u}_{e}&=\mathbf{h},\;\text{ on }\partial \Omega\times(0,T).\end{aligned}\right. \tag{2.1}\]
**Theorem 2.2**.: _Suppose that \(\mathbf{h}\) satisfies the conditions_
\[\left\{\begin{aligned} \mathbf{h}&\in\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{3}{2}}(\partial\Omega))\cap\mathrm{L}^{\infty}(0,T;\mathbb{H}^{\frac{1}{2}}(\partial\Omega)),\\ \partial_{t}\mathbf{h}&\in\mathrm{L}^{2}(0,T;\mathbb{H}^{-\frac{1}{2}}(\partial\Omega)).\end{aligned}\right. \tag{2.2}\]
_Then equation (2.1) admits a unique weak solution_
\[\mathbf{u}_{e}\in\mathrm{H}^{1}(0,T;\mathbb{L}^{2}_{\text{div}}(\Omega))\cap \mathrm{L}^{\infty}(0,T;\mathbb{H}^{1}_{\text{div}}(\Omega))\cap\mathrm{L}^{2}( 0,T;\mathbb{H}^{2}_{\text{div}}(\Omega)),\]
_such that_
\[\int_{0}^{T}\|\mathbf{u}_{e}(t)\|_{\mathbb{H}^{2}_{div}}^{2}\,dt \leq c\int_{0}^{T}\|\mathbf{h}(t)\|_{\mathbb{H}^{\frac{3}{2}}(\partial\Omega)}^{2}\,dt, \tag{2.3}\] \[\int_{0}^{T}\|\partial_{t}\mathbf{u}_{e}(t)\|_{\mathbb{L}^{2}_{div}}^{2}\,dt \leq c\int_{0}^{T}\|\partial_{t}\mathbf{h}(t)\|_{\mathbb{H}^{-\frac{1}{2}}(\partial\Omega)}^{2}\,dt. \tag{2.4}\]
_Moreover if \(\mathbf{h}\) satisfies_
\[\left\{\begin{aligned} &\mathbf{h}\in\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{5}{2}}(\partial\Omega))\cap\mathrm{L}^{\infty}(0,T;\mathbb{H}^{\frac{3}{2}}(\partial\Omega)),\\ &\partial_{t}\mathbf{h}\in\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{1}{2}}(\partial\Omega))\end{aligned}\right. \tag{2.5}\]
_then Stokes' equations (2.1) admits a strong solution_
\[\mathbf{u}_{e}\in\mathrm{H}^{1}(0,T;\mathbb{H}^{1}_{div})\cap\mathrm{L}^{2}( 0,T;\mathbb{H}^{3}_{div}), \tag{2.6}\]
_such that_
\[\int_{0}^{T}\|\mathbf{u}_{e}(t)\|_{\mathbb{H}^{3}_{div}}^{2}\,dt \leq c\int_{0}^{T}\|\mathbf{h}(t)\|_{\mathbb{H}^{\frac{5}{2}}(\partial\Omega)}^{2}\,dt,\] \[\int_{0}^{T}\|\partial_{t}\mathbf{u}_{e}(t)\|_{\mathbb{H}^{1}_{div}}^{2}\,dt \leq c\int_{0}^{T}\|\partial_{t}\mathbf{h}(t)\|_{\mathbb{H}^{\frac{1}{2}}(\partial\Omega)}^{2}\,dt.\]
Proof.: The proof of the above theorem can be found in [32, 33].
### Well-posedness of state problem
In this section, we will state the results on well-posedness and existence of strong solution of the system (1.1).
**Definition 2.3**.: _Let \(\mathbf{u}_{0}\in\mathbb{L}^{2}_{div}(\Omega),\,\varphi_{0}\in\mathrm{H}^{1}(\Omega)\) and \(T>0\) be given. Let \(F\) satisfy the Assumption 2.1 and let \(\mathbf{h}\) satisfy (2.2). Let us also assume that \(\mathbf{u}_{e}\) solves (2.1). A pair \((\mathbf{u},\varphi)\) is said to be a weak solution of the system (1.1) if \(\mathbf{u}=\overline{\mathbf{u}}+\mathbf{u}_{e}\) and_
* \((\overline{\mathbf{u}},\varphi)\) _satisfies_ \[\left\{\begin{aligned} &\overline{\mathbf{u}}\in L^{\infty}(0,T; \mathbb{G}_{div}(\Omega))\cap L^{2}(0,T;\mathbb{V}_{div}(\Omega))\\ &\overline{\mathbf{u}}_{t}\in L^{2}(0,T;\mathbb{V}^{\prime}_{div}( \Omega))\\ &\varphi\in L^{\infty}(0,T;\mathrm{H}^{1}(\Omega))\cap L^{2}(0,T; \mathrm{H}^{2}(\Omega))\\ &\varphi_{t}\in L^{2}(0,T;(\mathrm{H}^{1}(\Omega))^{\prime})\\ &\mu\in L^{2}(0,T;\mathrm{H}^{1}(\Omega)).\end{aligned}\right.\] (2.7)
* _for every_ \(\mathbf{v}\in\mathbb{V}_{div}\)_, every_ \(\psi\in\mathrm{H}^{1}(\Omega)\) _and for a.e._ \(t\in(0,T)\) _we have_ \[\langle\partial_{t}\overline{\mathbf{u}}(t),\mathbf{v}\rangle+\nu(\nabla\overline{\mathbf{u}}(t),\nabla\mathbf{v})+((\overline{\mathbf{u}}(t)+\mathbf{u}_{e}(t))\cdot\nabla(\overline{\mathbf{u}}(t)+\mathbf{u}_{e}(t)),\mathbf{v})=(\mu\nabla\varphi,\mathbf{v})-\langle\partial_{t}\mathbf{u}_{e},\mathbf{v}\rangle, \tag{2.8}\] \[\langle\partial_{t}\varphi(t),\psi\rangle+\big{(}(\overline{\mathbf{u}}(t)+\mathbf{u}_{e}(t))\cdot\nabla\varphi(t),\psi\big{)}=-(\nabla\mu(t),\nabla\psi). \tag{2.9}\]
**Theorem 2.4**.: _Let \(\mathbf{u}_{0}\in\mathbb{L}^{2}_{div}(\Omega)\) and \(\varphi_{0}\in\mathrm{H}^{1}(\Omega)\). Let \(F\) satisfy the Assumption 2.1 and let \(\mathbf{h}\) satisfy (2.2). Moreover, let us assume the compatibility condition_
\[\left.\mathbf{u}_{0}\right|_{\partial\Omega}=\mathbf{h}\big{|}_{t=0}. \tag{2.10}\]
_Then, for a given \(T>0\), there exists a unique pair \((\overline{\mathbf{u}},\varphi)\) which satisfies (2.8) and (2.9)._
**Theorem 2.5**.: _Let the assumptions of Theorem 2.4 hold. Then there exists a unique weak solution \((\mathbf{u},\varphi)\) to the system (1.1), where \(\mathbf{u}=\overline{\mathbf{u}}+\mathbf{u}_{e}\) and \(\overline{\mathbf{u}}\) is as in Theorem 2.4. Moreover, \((\mathbf{u},\varphi)\) satisfies_
\[\mathbf{u}\in\mathrm{L}^{\infty}(0,T;\mathbb{L}^{2}_{div})\cap \mathrm{L}^{2}(0,T;\mathbb{H}^{1}_{div}),\] \[\varphi\in\mathrm{L}^{\infty}(0,T;\mathrm{H}^{1})\cap\mathrm{L}^{2} (0,T;\mathrm{H}^{2}),\] \[\mathbf{u}_{t}\in\mathrm{L}^{2}(0,T;\mathbb{H}^{1}_{div}{}^{ \prime}),\] \[\varphi_{t}\in\mathrm{L}^{2}(0,T;(\mathrm{H}^{1})^{\prime}),\] \[\mu\in\mathrm{L}^{2}(0,T;\mathrm{H}^{1}).\]
**Proposition 2.6**.: _Let \((\mathbf{u}_{1},\varphi_{1})\) and \((\mathbf{u}_{2},\varphi_{2})\) be two pairs of weak solutions of the system (1.1) with boundary data \(\mathbf{h}_{1},\,\mathbf{h}_{2}\) and initial data \((\mathbf{u}_{10},\varphi_{10}),\,(\mathbf{u}_{20},\varphi_{20})\), respectively. Define \(\mathbf{u}:=\overline{\mathbf{u}}_{1}-\overline{\mathbf{u}}_{2}\) and \(\varphi:=\varphi_{1}-\varphi_{2}\), where \(\overline{\mathbf{u}}_{i}=\mathbf{u}_{i}-\mathbf{u}_{ei}\) and \(\mathbf{u}_{ei}\) is the lifting obtained by solving (2.1) with boundary data \(\mathbf{h}_{i}\), \(i=1,2\). Then there exists a constant \(C>0\) such that_
\[\begin{split}\|\mathbf{u}\|_{\mathrm{L}^{\infty}(0,T,\mathbb{G}_{div})\cap\mathrm{L}^{2}(0,T,\mathbb{V}_{div})}+\|\varphi\|_{\mathrm{L}^{\infty}(0,T,\mathrm{H}^{1})\cap\mathrm{L}^{2}(0,T,\mathrm{H}^{2})}+\|\nabla\mu\|_{\mathrm{L}^{2}(0,T;\mathrm{L}^{2})}\\ \leq C\big{(}\|\mathbf{h}\|_{\mathrm{L}^{\infty}(0,T,\mathbb{H}^{\frac{1}{2}}(\partial\Omega))\cap\mathrm{L}^{2}(0,T,\mathbb{H}^{\frac{3}{2}}(\partial\Omega))}+\|\partial_{t}\mathbf{h}\|_{\mathrm{L}^{2}(0,T,\mathbb{H}^{-\frac{1}{2}}(\partial\Omega))}\big{)},\end{split} \tag{2.11}\]
_where \(\mu=\mu_{1}-\mu_{2}\), \(\mathbf{h}=\mathbf{h}_{1}-\mathbf{h}_{2}\), and \(C\) depends on \(\|\mathbf{u}_{i0}\|,\|\varphi_{i0}\|,\ i=1,2\,\)._
Proof.: Note that in [5] section 5, we have proved
\[\begin{split}\|\overline{\mathbf{u}}\|_{\mathrm{L}^{\infty}(0,T,\mathbb{G}_{div})\cap\mathrm{L}^{2}(0,T,\mathbb{V}_{div})}+\|\varphi\|_{ \mathrm{L}^{\infty}(0,T,\mathrm{H}^{1})\cap\mathrm{L}^{2}(0,T,\mathrm{H}^{2} )}\leq C\big{(}\|\mathbf{h}\|_{\mathrm{L}^{\infty}(0,T,\mathbb{H}^{\frac{1}{2} }(\partial\Omega))\cap\mathrm{L}^{2}(0,T,\mathbb{H}^{\frac{3}{2}}(\partial \Omega))}\\ +\|\partial_{t}\mathbf{h}\|_{\mathrm{L}^{2}(0,T,\mathbb{H}^{- \frac{1}{2}}(\partial\Omega))}\big{)}.\end{split}\]
But similar calculations and observations will lead to (2.11).
**Theorem 2.7** ([5], Theorem 6.1).: _Let \(F\) satisfy the Assumption 2.1 and let \(\mathbf{h}\) satisfy (2.5). For a given pair \((\mathbf{u}_{0},\varphi_{0})\in\mathbb{H}^{1}_{div}\times\mathrm{H}^{2}\), there exists a unique pair \((\mathbf{u},\varphi)\) which is the weak solution of the system (1.1) and also satisfies_
\[\begin{split}\mathbf{u}\in\mathrm{L}^{\infty}(0,T;\mathbb{H}^{1} _{div})\cap\mathrm{L}^{2}(0,T;\mathbb{H}^{2}_{div})\\ \varphi\in\mathrm{L}^{\infty}(0,T;\mathrm{H}^{2})\cap\mathrm{L}^{2 }(0,T;\mathrm{H}^{3}\cap\mathrm{H}^{4}).\end{split}\]
**Remark 2.8**.: _In [5], the authors did not estimate the time derivatives \(\mathbf{u}_{t},\,\varphi_{t}\). One can easily show that_
\[\mathbf{u}_{t}\in\mathrm{L}^{2}(0,T;\mathbb{L}^{2}_{div}(\Omega)),\quad\varphi _{t}\in\mathrm{L}^{2}(0,T;\mathrm{H}^{1}(\Omega)).\]
In the next subsection, we discuss the continuous dependence of the strong solution on the boundary data. We give only a brief sketch of the proof, since it follows as in [5, Section 5] with slight modifications in the estimates.
### Continuous dependence of strong solution
Let \((\mathbf{u}_{1},\varphi_{1})\) and \((\mathbf{u}_{2},\varphi_{2})\) be two weak solutions of the system with non-homogeneous boundary data \(\mathbf{h}_{1}\) and \(\mathbf{h}_{2}\) and initial conditions \((\mathbf{u}_{i0},\varphi_{i0})\), \(i=1,2\), respectively. Denote the differences \(\mathbf{u}=\overline{\mathbf{u}}_{1}-\overline{\mathbf{u}}_{2}\), \(\varphi=\varphi_{1}-\varphi_{2}\), where \(\overline{\mathbf{u}}_{i}=\mathbf{u}_{i}-\mathbf{u}_{ei}\) and \(\mathbf{u}_{ei}\) is the solution of (2.1) corresponding to the boundary data \(\mathbf{h}_{i}\), \(i=1,2\). Note that \(\mathbf{u}_{e}:=\mathbf{u}_{e_{1}}-\mathbf{u}_{e_{2}}\) satisfies the equation (2.1) with the boundary data \(\mathbf{h}=\mathbf{h}_{1}-\mathbf{h}_{2}\). Then \((\mathbf{u},\varphi)\) satisfies:
\[\begin{cases}\varphi_{t}+\mathbf{u}\cdot\nabla\varphi_{1}+\mathbf{u}_{2}\cdot \nabla\varphi+\mathbf{u}_{e}\cdot\nabla\varphi_{1}+\mathbf{u}_{e_{2}}\cdot \nabla\varphi=\Delta\tilde{\mu},\ \ \text{in}\ \Omega\times(0,T),\\ \tilde{\mu}=-\Delta\varphi+F^{\prime}(\varphi_{2})-F^{\prime}(\varphi_{1}),\ \ \text{in}\ \Omega \times(0,T),\\ \mathbf{u}_{t}-\nu\Delta\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}_{e_{1}}+( \mathbf{u}_{e_{1}}\cdot\nabla)\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}_{1 }+(\mathbf{u}_{2}\cdot\nabla)\mathbf{u}_{e}+(\mathbf{u}_{e}\cdot\nabla)\mathbf{ u}_{2}\\ \qquad+(\mathbf{u}_{e}\cdot\nabla)\mathbf{u}_{e_{1}}+(\mathbf{u}_{e_{2}} \cdot\nabla)\mathbf{u}_{e}+\nabla\tilde{\pi}=\tilde{\mu}\nabla\varphi_{1}+\mu_ {2}\nabla\varphi-\partial_{t}\mathbf{u}_{e},\ \ \text{in}\ \Omega\times(0,T),\\ div\ \mathbf{u}=0,\ \ \text{in}\ \Omega\times(0,T),\\ \frac{\partial\varphi}{\partial\mathbf{n}}=0,\ \frac{\partial\tilde{\mu}}{ \partial\mathbf{n}}=0,\ \ \text{on}\ \Sigma,\\ \mathbf{u}=0,\ \ \text{on}\ \Sigma,\\ \mathbf{u}(x,0)=\mathbf{u}_{0},\,\varphi(x,0)=\varphi_{0},\,\text{in}\ \Omega.\end{cases} \tag{2.12}\]
We have the following continuous dependence result for the strong solution of (1.1):
**Proposition 2.9**.: _Let \(\mathbf{h}_{i}\in\mathcal{U}\) and let \((\varphi_{i},\mathbf{u}_{i})\) be the strong solution of the system (1.1) with boundary data \(\mathbf{h}_{i}\) and initial data \((\varphi_{i0},\mathbf{u}_{i0})\), for \(i=1,2\). Let \(\mathbf{h}=\mathbf{h}_{1}-\mathbf{h}_{2}\). Then there exists a constant \(C>0\) such that_
\[\|\mathbf{u}\|_{\mathrm{L}^{\infty}(0,T;\mathbb{H}^{1}_{div}(\Omega))\cap\mathrm{L}^{2}(0,T;\mathbb{H}^{2}_{div}(\Omega))}+\|\varphi\|_{\mathrm{L}^{\infty}(0,T;\mathrm{H}^{2}(\Omega))\cap\mathrm{L}^{2}(0,T;\mathrm{H}^{4}(\Omega))}\leq C\|\mathbf{h}\|_{\mathcal{U}}, \tag{2.13}\]
_where the normed linear space \(\mathcal{U}\) is defined in (3.1), with the norm \(\|\cdot\|_{\mathcal{U}}\) given in (3.2)._
Proof.: We multiply the equation (2.12)\({}_{3}\) by \(\mathbf{A}\mathbf{u}\) and get the following
\[\frac{1}{2}\frac{d}{dt}\|\nabla\mathbf{u}\|^{2}+\nu\|\mathbf{A} \mathbf{u}\|^{2}=(\,(\mathbf{u}\cdot\nabla)\mathbf{u}_{e_{1}},\mathbf{A} \mathbf{u}\,)+(\,(\mathbf{u}_{e_{1}}\cdot\nabla)\mathbf{u},\mathbf{A}\mathbf{u} \,)+(\,(\mathbf{u}\cdot\nabla)\mathbf{u}_{1},\mathbf{A}\mathbf{u}\,)\] \[\qquad+(\,(\mathbf{u}_{2}\cdot\nabla)\mathbf{u}_{e},\mathbf{A} \mathbf{u}\,)+(\,(\mathbf{u}_{e}\cdot\nabla)\mathbf{u}_{2},\mathbf{A}\mathbf{u }\,)+(\,(\mathbf{u}_{e}\cdot\nabla)\mathbf{u}_{e_{1}},\mathbf{A}\mathbf{u}\,)+ (\,(\mathbf{u}_{e_{2}}\cdot\nabla)\mathbf{u}_{e},\mathbf{A}\mathbf{u}\,)\] \[\qquad+(\,\nabla\tilde{\pi},\mathbf{A}\mathbf{u}\,)+(\,\tilde{\mu }\nabla\varphi_{1},\mathbf{A}\mathbf{u}\,)+(\,\mu_{2}\nabla\varphi,\mathbf{A} \mathbf{u}\,)+(\,\partial_{t}\mathbf{u}_{e},\mathbf{A}\mathbf{u}\,) \tag{2.14}\]
Similarly multiplying by \(\Delta^{2}\varphi\) to the equation (2.12)\({}_{1}\) we get
\[\frac{1}{2}\frac{d}{dt}\|\Delta\varphi\|^{2}+\|\Delta^{2}\varphi \|^{2}= -(\mathbf{u}\cdot\nabla\varphi_{1},\Delta^{2}\varphi)-(\mathbf{u} _{2}\cdot\nabla\varphi,\Delta^{2}\varphi)-(\mathbf{u}_{e}\cdot\nabla\varphi_{ 1},\Delta^{2}\varphi)\] \[-(\mathbf{u}_{e_{2}}\cdot\nabla\varphi,\Delta^{2}\varphi)+( \Delta\tilde{\mu},\Delta^{2}\varphi) \tag{2.15}\]
Now, estimating each term on the right-hand side of (2.14) using the Sobolev, Hölder, and Young inequalities, and keeping in mind the regularity of \((\varphi_{i},\mathbf{u}_{i})\) and \(\mathbf{u}_{ei}\), we obtain
\[\frac{1}{2}\frac{d}{dt}\|\nabla\mathbf{u}\|^{2}+\nu\|\mathbf{A} \mathbf{u}\|^{2}\leq \frac{\nu}{2}\|\mathbf{A}\mathbf{u}\|^{2}+C\Big{[}2\|\nabla \mathbf{u}_{2}\|^{2}+\|\mathbf{h}_{1}\|_{\mathbb{H}^{\frac{3}{2}}}^{2}+\| \mathbf{h}_{2}\|_{\mathbb{H}^{\frac{3}{2}}}^{2}+\|\varphi_{1}\|_{\mathbb{H}^{ 3}}^{2}+\frac{5}{\nu}\|\mu_{2}\|_{\mathbb{H}^{2}}^{2}\Big{]}\|\mathbf{h}\|_{ \mathcal{U}}^{2}\] \[+C\big{[}\|\mathbf{h}_{1}\|_{\mathbb{H}^{\frac{3}{2}}}^{2}+\| \mathbf{u}_{1}\|_{\mathbb{H}^{\frac{3}{2}}_{div}}^{2}\big{]}\|\nabla\mathbf{u} \|^{2}. \tag{2.16}\]
Similarly, estimating the right-hand side of (2.15), we get
\[\frac{1}{2}\frac{d}{dt}\|\Delta\varphi\|^{2}+\|\Delta^{2}\varphi \|^{2}\leq \frac{1}{2}\|\Delta^{2}\varphi\|^{2}+C\Big{[}3\|u_{2}\|_{\mathbb{H}^ {2}}^{2}+\|\nabla\varphi_{1}\|^{2}+3\|\mathbf{h}_{2}\|_{\mathbb{H}^{\frac{3}{2 }}}^{2}\Big{]}\|\mathbf{h}\|_{\mathcal{U}}^{2}+C\|\Delta\varphi\|^{2}. \tag{2.17}\]
Adding the inequalities (2.16) and (2.17), integrating the resulting inequality from \(0\) to \(t\), and applying Gronwall's lemma, we get
\[\sup\nolimits_{t\in[0,T)}\big{(}\|\nabla\mathbf{u}(t)\|^{2}+\| \Delta\varphi(t)\|^{2}\big{)}+\int_{0}^{t}(\|\mathbf{A}\mathbf{u}(t)\|^{2}+\| \Delta^{2}\varphi(t)\|^{2})dt\leq C\|\mathbf{h}\|_{\mathcal{U}}^{2} \tag{2.18}\]
for all \(t\in[0,T)\), where \(C\) is a positive constant depending on \(\|(\varphi_{i},\mathbf{u}_{i})\|_{V},\|\mathbf{h}_{i}\|_{\mathcal{U}},\|\varphi_{i0}\|_{\mathbb{H}^{2}},\|\mathbf{u}_{i0}\|_{\mathbb{H}^{1}_{div}}\), \(i=1,2\), which immediately gives (2.13).
## 3. Optimal Control
We introduce the space
\[\mathcal{U}:=\{\mathbf{h}(\mathbf{x},t):\mathbf{h}\in\mathrm{L}^{\infty}(0,T;\mathbb{H}^{\frac{3}{2}}(\partial\Omega))\cap\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{5}{2}}(\partial\Omega)),\,\partial_{t}\mathbf{h}\in\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{1}{2}}(\partial\Omega))\}, \tag{3.1}\]
with the norm given by
\[\|\mathbf{h}\|_{\mathcal{U}}=\|\mathbf{h}\|_{\mathrm{L}^{\infty}(0,T;\mathbb{H} ^{\frac{3}{2}}(\partial\Omega))}+\|\mathbf{h}\|_{\mathrm{L}^{2}(0,T;\mathbb{H} ^{\frac{5}{2}}(\partial\Omega))}+\|\partial_{t}\mathbf{h}\|_{\mathrm{L}^{2}(0,T; \mathbb{H}^{\frac{1}{2}}(\partial\Omega))}. \tag{3.2}\]
**Remark 3.1**.: _If \(\partial\Omega\) is sufficiently smooth boundary of \(\Omega\) then from [36] we have the following embedding_
\[\mathrm{L}^{\infty}(0,T;\mathbb{H}^{\frac{3}{2}}(\partial\Omega))\cap\mathrm{L} ^{2}(0,T;\mathbb{H}^{\frac{5}{2}}(\partial\Omega))\hookrightarrow C([0,T]; \mathbb{H}^{\frac{1}{2}+\delta})\hookrightarrow C([0,T]\times\partial\Omega), \tag{3.3}\]
_for any \(0<\delta\leq 1\). Therefore the compatibility condition \(\mathbf{h}(\mathbf{x},0)=\mathbf{u}_{0}|_{\partial\Omega}\) is well-defined and also satisfied for every solution, \(\mathbf{u}\) of (1.1)._
For a sufficiently large positive constant \(L\), we define the set of admissible boundary control space as follows:
\[\mathcal{U}_{ad}:=\{\mathbf{h}\in\mathcal{U}\,|\,\|\mathbf{h}\|_{\mathrm{L}^{\infty}(0,T;\mathbb{H}^{\frac{3}{2}}(\partial\Omega))\cap\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{5}{2}}(\partial\Omega))}\leq L,\,\|\partial_{t}\mathbf{h}\|_{\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{1}{2}}(\partial\Omega))}\leq L\}. \tag{3.4}\]
Let us define
\[\mathcal{W}:= \big{[}C([0,T];\mathrm{L}^{2}_{div}(\Omega))\cap\mathrm{L}^{2}(0,T ;\mathbb{H}^{1}_{div}(\Omega))\big{]}\] \[\times\big{[}C([0,T],\mathrm{H}^{1}(\Omega))\cap\mathrm{L}^{2}(0,T ;\mathbb{H}^{2}(\Omega))\big{]}, \tag{3.5}\] \[\mathcal{V}:= \big{[}C([0,T];\mathbb{H}^{1}_{div}(\Omega))\cap\mathrm{L}^{2}(0,T ;\mathbb{H}^{2}_{div}(\Omega))\cap\mathrm{H}^{1}(0,T;\mathrm{L}^{2}_{div}(\Omega)) \big{]}\] \[\times\big{[}C([0,T],\mathrm{H}^{2}(\Omega))\cap\mathrm{L}^{2}(0,T ;\mathrm{H}^{4}(\Omega))\cap\mathrm{H}^{1}(0,T;\mathrm{H}^{1}(\Omega))\big{]}, \tag{3.6}\]
which denote the spaces of global weak solutions and strong solutions of the system (1.1), respectively.
Now let us define the control to state operator
\[\mathcal{S}:\mathcal{U}\to\mathcal{V}\ \ \text{by}\ \ \mathbf{h}\to(\mathbf{u}, \varphi),\]
where \((\mathbf{u},\varphi)\) is the strong solution of (1.1) corresponding to boundary control \(\mathbf{h}\) and with initial data \((\mathbf{u}_{0},\varphi_{0})\in\mathbb{H}^{1}_{div}\times\mathrm{H}^{2}\).
Therefore we can now define a reformulated cost functional
\[\tilde{\mathcal{J}}:\mathcal{U}\to[0,\infty)\ \ \text{by}\ \ \tilde{\mathcal{J}}( \mathbf{h}):=\mathcal{J}(\mathbf{u},\varphi,\mathbf{h})=\mathcal{J}(\mathcal{S }(\mathbf{h}),\mathbf{h}). \tag{3.7}\]
In the following theorem, we prove that the optimal control problem \((\mathbf{OCP})\) admits a solution, which gives the existence of an optimal control.
**Theorem 3.2**.: _(Existence of an optimal control) Let the initial data \((\mathbf{u}_{0},\varphi_{0})\in\mathbb{H}^{1}_{div}\times\mathrm{H}^{2}\) be given and let \(F\) satisfy the Assumption 2.1. Let the target functions \(\mathbf{u}_{Q}\in\mathrm{L}^{2}(0,T;\mathbb{L}^{2}_{div}(\Omega)),\)\(\varphi_{Q}\in\mathrm{L}^{2}(\Omega\times(0,T)),\)\(\mathbf{u}_{\Omega}\in\mathbb{L}^{2}_{div}(\Omega),\)\(\varphi_{\Omega}\in\mathrm{L}^{2}(\Omega)\) be given. Then there exists a control \(\mathbf{h}^{*}\in\mathcal{U}_{ad}\) such that_
\[\mathcal{J}(\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})=\text{min}_{\mathbf{h} \in\mathcal{U}_{ad}}\mathcal{J}(\mathbf{u},\varphi,\mathbf{h}),\]
where \((\mathbf{u}^{*},\varphi^{*})\) is the unique solution of the state problem (1.1) corresponding to boundary control \(\mathbf{h}^{*}\).
Proof.: Let \(j=\inf_{\mathbf{h}\in\mathcal{U}_{ad}}\mathcal{J}(\mathbf{u},\varphi,\mathbf{h})\). Since \(\mathcal{J}(\mathbf{u},\varphi,\mathbf{h})\geq 0\) for all \(\mathbf{h}\in\mathcal{U}_{ad}\), we have \(j\geq 0\). Then there exists a minimizing sequence \(\mathbf{h}_{n}\in\mathcal{U}_{ad}\) such that
\[\text{lim}\ _{n\to\infty}\mathcal{J}(\mathbf{u}_{n},\varphi_{n},\mathbf{h}_{n})=j\]
where \(\mathcal{S}(\mathbf{h}_{n})=(\mathbf{u}_{n},\varphi_{n})\). Since \(\{\mathbf{h}_{n}\}\) is bounded, there exists a subsequence, still denoted by \(\{\mathbf{h}_{n}\}\), such that
\[\mathbf{h}_{n} \to\mathbf{h}^{*}\ \ \text{weakly in}\ \mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{5}{2}}(\partial\Omega)), \tag{3.8}\] \[\mathbf{h}_{n} \to\mathbf{h}^{*}\ \text{weak}^{*}\ \text{in}\ \mathrm{L}^{\infty}(0,T;\mathbb{H}^{\frac{3}{2}}(\partial\Omega)), \tag{3.9}\] \[\partial_{t}\mathbf{h}_{n} \to\partial_{t}\mathbf{h}^{*}\ \ \text{weakly in}\ \mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{1}{2}}(\partial\Omega)), \tag{3.10}\]
for some \(\mathbf{h}^{*}\in\mathcal{U}\). Now we show that \(\mathbf{h}^{*}\in\mathcal{U}_{ad}\); for this we only need to verify the bounds in (3.4). Recall that if \(\mathbf{h}_{n}\to\mathbf{h}^{*}\) weakly in a normed space, then \(\|\mathbf{h}^{*}\|\leq\liminf_{n\to\infty}\|\mathbf{h}_{n}\|\). Therefore, from (3.8) and (3.10) we have
\[\|\mathbf{h}^{*}\|_{\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{5}{2}}( \partial\Omega))} \leq L,\] \[\|\partial_{t}\mathbf{h}^{*}\|_{\mathrm{L}^{2}(0,T;\mathbb{H}^{ \frac{1}{2}}(\partial\Omega))} \leq L.\]
Again using the fact that any norm in a Banach space is weak* - lower semicontinuous, we have
\[\|\mathbf{h}^{*}\|_{\mathrm{L}^{\infty}(0,T;\mathbb{H}^{\frac{3}{2}}(\partial \Omega))}\leq L.\]
Now, using the continuous dependence estimate for strong solutions (Proposition 2.9) together with the boundedness of \(\mathcal{U}_{ad}\), the sequence \((\mathbf{u}_{n},\varphi_{n})\) is bounded in \(\mathcal{V}\), so there exists a subsequence, still denoted by \((\mathbf{u}_{n},\varphi_{n})\), such that
\[\mathbf{u}_{n} \to\mathbf{u}^{*}\ \ \text{weak}^{*}\ \text{in}\ \mathrm{L}^{\infty}(0,T;\mathbb{H}^{1}_{div}),\] \[\mathbf{u}_{n} \to\mathbf{u}^{*}\ \ \text{weakly in}\ \mathrm{L}^{2}(0,T;\mathbb{H}^{2}_{div}),\] \[\mathbf{u}_{n} \to\mathbf{u}^{*}\ \ \text{strongly in}\ \mathrm{L}^{2}(0,T;\mathbb{L}^{2}_{div}),\] \[\varphi_{n} \to\varphi^{*}\ \ \text{weak}^{*}\ \text{in}\ \mathrm{L}^{\infty}(0,T;\mathrm{H}^{2}),\] \[\varphi_{n} \to\varphi^{*}\ \ \text{weakly in}\ \mathrm{L}^{2}(0,T;\mathrm{H}^{4}),\] \[\varphi_{n} \to\varphi^{*}\ \ \text{strongly in}\ \mathrm{L}^{2}(0,T;\mathrm{H}^{1}).\]
Using the above convergence results for \((\mathbf{u}_{n},\varphi_{n})\), we can pass to the limit in the weak formulation of the system (1.1), and the limit \((\mathbf{u}^{*},\varphi^{*})\) satisfies the weak formulation of (1.1) with initial condition \((\mathbf{u}_{0},\varphi_{0})\in\mathbb{H}^{1}_{div}\times\mathrm{H}^{2}\) and boundary condition \(\mathbf{u}^{*}=\mathbf{h}^{*}\) on \(\Sigma\). Since \((\mathbf{u}^{*},\varphi^{*})\in\mathcal{V}\) and \(\mathbf{h}^{*}\in\mathcal{U}\), it follows that \(\mathcal{S}(\mathbf{h}^{*})=(\mathbf{u}^{*},\varphi^{*})\).
Since \(\mathcal{J}\) is weakly lower semi-continuous in \(\mathcal{V}\times\mathcal{U}\), we have
\[\mathcal{J}(\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\leq\text{lim}\ \text{inf}_{n\to\infty}\mathcal{J}(\mathbf{u}_{n},\varphi_{n},\mathbf{h}_{n}).\]
Moreover, by the choice of the minimizing sequence,
\[\liminf_{n\to\infty}\mathcal{J}(\mathbf{u}_{n},\varphi_{n},\mathbf{h}_{n})=\lim_{n\to\infty}\mathcal{J}(\mathbf{u}_{n},\varphi_{n},\mathbf{h}_{n})=j.\]
Therefore, \(j\leq\mathcal{J}(\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\leq\liminf_{n\to\infty}\mathcal{J}(\mathbf{u}_{n},\varphi_{n},\mathbf{h}_{n})=j\).
Hence we conclude that,
\[\mathcal{J}(\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})=\text{min}_{\mathbf{h} \in\mathcal{U}_{ad}}\mathcal{J}(\mathbf{u},\varphi,\mathbf{h}),\]
which yields that \((\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\) is a solution of the optimal control problem \((\mathbf{OCP})\).
The control \(\mathbf{h}^{*}\) obtained in Theorem 3.2 is called an optimal control, and the corresponding triplet \((\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\) is called an optimal solution.
## 4. Linearized System
In this section, we derive the linearized system and establish existence and uniqueness results for it. We then show that the Frechet derivative of the control to state operator is given by the solution of the linearized system.
Now, for a fixed control \(\hat{\mathbf{h}}\in\mathcal{U}\), let \(\mathcal{S}(\hat{\mathbf{h}})=(\hat{\mathbf{u}},\hat{\varphi})\in\mathcal{V}\) be the strong solution of the system (1.1) corresponding to the boundary data \(\hat{\mathbf{h}}\), and let a control \(\eta\in\mathcal{U}-\{\hat{\mathbf{h}}\}\) be given. To prove the Frechet differentiability of the control to state operator, we consider the following system, obtained by linearizing the state problem (1.1) around \((\hat{\mathbf{u}},\hat{\varphi})\). In this process we write \(\mathbf{u}=\hat{\mathbf{u}}+\mathbf{w},\,\varphi=\hat{\varphi}+\psi,\,\pi=\hat{\pi}+\overline{\pi}\). Substituting these expressions and dropping the terms of higher than first order in \((\mathbf{w},\psi)\), we obtain the following linearized system:
\[\begin{cases}\mathbf{w}_{t}-\nu\Delta\mathbf{w}+(\hat{\mathbf{u}}\cdot\nabla)\mathbf{w}+(\mathbf{w}\cdot\nabla)\hat{\mathbf{u}}+\nabla\overline{\pi}=-\Delta\hat{\varphi}\nabla\psi-\Delta\psi\nabla\hat{\varphi}\\ \qquad\qquad+F^{\prime}(\hat{\varphi})\nabla\psi+F^{\prime\prime}(\hat{\varphi})\psi\nabla\hat{\varphi},\ \ \text{in}\ Q,\\ \text{div}\,\mathbf{w}=0,\ \ \text{in}\ Q,\\ \psi_{t}+\mathbf{w}\cdot\nabla\hat{\varphi}+\hat{\mathbf{u}}\cdot\nabla\psi=\Delta\mu_{\psi},\ \ \text{in}\ Q,\\ \mu_{\psi}=-\Delta\psi+F^{\prime\prime}(\hat{\varphi})\psi,\ \ \text{in}\ Q,\\ \mathbf{w}=\eta,\ \ \text{on}\ \Sigma,\\ \frac{\partial\psi}{\partial\mathbf{n}}=0,\ \frac{\partial\mu_{\psi}}{\partial\mathbf{n}}=0,\ \ \text{on}\ \Sigma,\\ \mathbf{w}|_{t=0}=0,\ \psi|_{t=0}=0,\ \ \text{in}\ \Omega.\end{cases} \tag{4.1}\]
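To see where the right-hand side of the momentum equation in (4.1) comes from, note that, writing \(\varphi=\hat{\varphi}+\psi\) and expanding the Korteweg force \(\mu\nabla\varphi=(-\Delta\varphi+F^{\prime}(\varphi))\nabla\varphi\) to first order in \(\psi\), the increment with respect to the base state is
\[\hat{\mu}\nabla\psi+\mu_{\psi}\nabla\hat{\varphi}=\big{(}-\Delta\hat{\varphi}+F^{\prime}(\hat{\varphi})\big{)}\nabla\psi+\big{(}-\Delta\psi+F^{\prime\prime}(\hat{\varphi})\psi\big{)}\nabla\hat{\varphi},\]
where \(\hat{\mu}=-\Delta\hat{\varphi}+F^{\prime}(\hat{\varphi})\); this is exactly the right-hand side of the first equation in (4.1), the terms quadratic in \((\mathbf{w},\psi)\) having been dropped.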
### Existence of weak solutions of the linearized system
Now we state the existence result of the linearized system (4.1) and prove it by using a Faedo-Galerkin approximation scheme.
**Theorem 4.1**.: _Let \(F\) satisfy the Assumption 2.1. Then for any \(\eta\in\mathcal{U}-\{\hat{\mathbf{h}}\}\), the system (4.1) admits a unique weak solution \((\mathbf{w},\psi)\) such that_
\[\mathbf{w}\in\mathrm{L}^{\infty}(0,T;\mathbb{L}^{2}_{div}(\Omega)) \cap\mathrm{L}^{2}(0,T;\mathbb{H}^{1}_{div}(\Omega))\cap\mathrm{H}^{1}(0,T; \mathbb{V}^{\prime}_{div}(\Omega)),\] \[\psi\in\mathrm{L}^{\infty}(0,T;\mathrm{H}^{1}(\Omega))\cap\mathrm{ L}^{2}(0,T;\mathrm{H}^{2}(\Omega))\cap\mathrm{H}^{1}(0,T;(\mathrm{H}^{1})^{ \prime}(\Omega)).\]
Proof.: We prove the theorem using the Faedo-Galerkin approximation scheme. Let us consider the families of functions \((\mathbf{u}_{k})\) and \((\gamma_{k})\), the eigenfunctions of the Stokes operator and of the Neumann operator \(-\Delta+I\), respectively. We consider the \(n\)-dimensional subspaces \(\mathrm{U}_{n}:=\langle\mathbf{u}_{1},\cdots,\mathbf{u}_{n}\rangle\) and \(\Psi_{n}:=\langle\gamma_{1},\cdots,\gamma_{n}\rangle\), spanned by the first \(n\) eigenfunctions, and the orthogonal projections onto these spaces, \(P_{n}:=P_{\mathrm{U}_{n}}\) and \(\overline{P}_{n}:=P_{\Psi_{n}}\). We look for functions
\[\mathbf{w}_{n}(t,\mathbf{x})=\overline{\mathbf{w}}_{n}(t,\mathbf{x})+\mathbf{w}_{e}(t,\mathbf{x})=\sum_{i=1}^{n}a_{i}(t)\mathbf{u}_{i}(\mathbf{x})+\mathbf{w}_{e}(t,\mathbf{x}),\] \[\psi_{n}(t,\mathbf{x})=\sum_{i=1}^{n}b_{i}(t)\gamma_{i}(\mathbf{x}),\]
which solves the following approximated problem a.e. in \([0,T]\) and for \(i=1,...,n\)
\[\langle\partial_{t}\overline{\mathbf{w}}_{n},\mathbf{u}_{i}\rangle+\nu(\nabla\overline{\mathbf{w}}_{n},\nabla\mathbf{u}_{i})+((\hat{\mathbf{u}}\cdot\nabla)(\overline{\mathbf{w}}_{n}+\mathbf{w}_{e}),\mathbf{u}_{i})+(((\overline{\mathbf{w}}_{n}+\mathbf{w}_{e})\cdot\nabla)\hat{\mathbf{u}},\mathbf{u}_{i})\] \[=-(\Delta\hat{\varphi}\nabla\psi_{n},\mathbf{u}_{i})-(\Delta\psi_{n}\nabla\hat{\varphi},\mathbf{u}_{i})+(\psi_{n}F^{\prime\prime}(\hat{\varphi})\nabla\hat{\varphi},\mathbf{u}_{i})-\langle\partial_{t}\mathbf{w}_{e},\mathbf{u}_{i}\rangle,\ \text{for all}\ \mathbf{u}_{i}\in\mathrm{U}_{n}, \tag{4.2}\] \[\langle\partial_{t}\psi_{n},\gamma_{i}\rangle+((\overline{\mathbf{w}}_{n}+\mathbf{w}_{e})\cdot\nabla\hat{\varphi},\gamma_{i})+(\hat{\mathbf{u}}\cdot\nabla\psi_{n},\gamma_{i})=(\Delta\mu_{n},\gamma_{i}),\ \text{for all}\ \gamma_{i}\in\Psi_{n}, \tag{4.3}\] \[\overline{\mathbf{w}}_{n}=0,\ \ \frac{\partial\psi_{n}}{\partial\mathbf{n}}=0,\ \text{on}\ \partial\Omega, \tag{4.4}\] \[\overline{\mathbf{w}}_{n}\big{|}_{t=0}=0,\ \psi_{n}\big{|}_{t=0}=0,\ \text{in}\ \Omega, \tag{4.5}\]
where \(\mathbf{w}_{e}\) is the solution of (2.1) with \(\mathbf{w}_{e}(t)=\eta(t)\) on \(\partial\Omega\) for a.e. \(t\in[0,T]\).
This is a Cauchy problem for a system of \(2n\) ordinary differential equations in the \(2n\) unknowns \(a_{i},\,b_{i}\), which can be solved by the Cauchy-Lipschitz theorem. Thus we obtain a unique solution \((\overline{\mathbf{w}}_{n},\psi_{n})\) of the approximated system (4.2)-(4.3). Now we prove some a priori estimates for the approximated solution \((\overline{\mathbf{w}}_{n},\psi_{n})\), independent of \(n\). Taking \(\mathbf{u}_{i}=\overline{\mathbf{w}}_{n}\) in (4.2), \(\gamma_{i}=\psi_{n}-\Delta\psi_{n}\) in (4.3), multiplying the equation (4.1)\({}_{4}\) by \(-\Delta\mu_{n}+\Delta\psi_{n}\), and adding the resulting identities, we get
\[\frac{1}{2}\frac{d}{dt}(\|\overline{\mathbf{w}}_{n}\|^{2}+\|\nabla \psi_{n}\|^{2}+\|\psi_{n}\|^{2})+\nu\|\nabla\overline{\mathbf{w}}_{n}\|^{2}+ \|\Delta\psi_{n}\|^{2}+\|\nabla\mu_{n}\|^{2}\] \[\leq\int_{\Omega}[(\hat{\mathbf{u}}\cdot\nabla(\overline{\mathbf{ w}}_{n}+\mathbf{w}_{e}))\,\overline{\mathbf{w}}_{n}]dx+\int_{\Omega}[( \overline{\mathbf{w}}_{n}+\mathbf{w}_{e})\cdot\nabla\hat{\mathbf{u}})\, \overline{\mathbf{w}}_{n}]dx-\int_{\Omega}\Delta\hat{\varphi}(\nabla\psi_{n} \cdot\overline{\mathbf{w}}_{n})dx\] \[-\int_{\Omega}\Delta\psi_{n}(\nabla\hat{\varphi}\cdot\overline{ \mathbf{w}}_{n})dx+\int_{\omega}\psi_{n}F^{\prime\prime}(\hat{\varphi})( \nabla\hat{\varphi}\cdot\overline{\mathbf{w}}_{n})dx-\int_{\Omega}\partial_{t }\mathbf{w}_{e}\cdot\overline{\mathbf{w}}_{n}dx+\int_{\Omega}((\overline{ \mathbf{w}}_{n}+\mathbf{w}_{e})\cdot\nabla\hat{\varphi})\,\Delta\psi_{n}dx\] \[+\int_{\Omega}(\hat{\mathbf{u}}\cdot\nabla\psi_{n})\,\Delta\psi_{ n}dx-\int_{\Omega}F^{\prime\prime}(\hat{\varphi})\psi_{n}\Delta\mu_{n}dx+\int_{ \Omega}F^{\prime\prime}(\hat{\varphi})\psi_{n}\Delta\psi_{n}dx-\int_{\Omega}(( \overline{\mathbf{w}}_{n}+\mathbf{w}_{e})\cdot\nabla\hat{\varphi})\,\psi_{n}. \tag{4.6}\]
Now we estimate each term on the right-hand side of the above inequality individually. We use the following convention in the rest of the proof: \(C\) denotes a generic constant that may depend on the norms of \((\hat{\mathbf{u}},\hat{\varphi})\) and \(\eta\), but not on \(n\). Using the Hölder, Young, and Agmon inequalities repeatedly gives the following series of estimates:
\[\Big{|}\int_{\Omega}(\hat{\mathbf{u}}\cdot\nabla\mathbf{w}_{e})\,\overline{\mathbf{w}}_{n}\,dx\Big{|} \leq\|\hat{\mathbf{u}}\|_{\mathrm{L}^{4}}\|\nabla\overline{\mathbf{w}}_{n}\|\,\|\mathbf{w}_{e}\|_{\mathrm{L}^{4}}\] \[\leq C\|\hat{\mathbf{u}}\|_{\mathbb{H}^{1}_{div}}^{2}\|\eta\|_{\mathbb{H}^{\frac{3}{2}}}^{2}+\frac{\nu}{8}\|\nabla\overline{\mathbf{w}}_{n}\|^{2} \tag{4.7}\] \[\Big{|}\int_{\Omega}(\overline{\mathbf{w}}_{n}\cdot\nabla\hat{\mathbf{u}})\,\overline{\mathbf{w}}_{n}\,dx\Big{|} \leq\|\overline{\mathbf{w}}_{n}\|\,\|\nabla\hat{\mathbf{u}}\|_{\mathrm{L}^{4}}\|\overline{\mathbf{w}}_{n}\|_{\mathrm{L}^{4}}\] \[\leq C\|\hat{\mathbf{u}}\|_{\mathbb{H}^{2}_{div}}^{2}\|\overline{\mathbf{w}}_{n}\|^{2}+\frac{\nu}{8}\|\nabla\overline{\mathbf{w}}_{n}\|^{2} \tag{4.8}\] \[\Big{|}\int_{\Omega}(\mathbf{w}_{e}\cdot\nabla\hat{\mathbf{u}})\,\overline{\mathbf{w}}_{n}\,dx\Big{|} \leq\|\mathbf{w}_{e}\|_{\mathrm{L}^{4}}\|\nabla\overline{\mathbf{w}}_{n}\|\,\|\hat{\mathbf{u}}\|_{\mathrm{L}^{4}}\] \[\leq C\|\mathbf{w}_{e}\|_{\mathbb{H}^{1}_{div}}^{2}\|\hat{\mathbf{u}}\|_{\mathbb{H}^{1}_{div}}^{2}+\frac{\nu}{8}\|\nabla\overline{\mathbf{w}}_{n}\|^{2}\] \[\leq C\|\eta\|_{\mathbb{H}^{\frac{3}{2}}}^{2}\|\hat{\mathbf{u}}\|_{\mathbb{H}^{1}_{div}}^{2}+\frac{\nu}{8}\|\nabla\overline{\mathbf{w}}_{n}\|^{2} \tag{4.9}\] \[\Big{|}\int_{\Omega}\Delta\hat{\varphi}(\nabla\psi_{n}\cdot\overline{\mathbf{w}}_{n})dx\Big{|} \leq\|\Delta\hat{\varphi}\|_{\mathrm{L}^{\infty}}\|\nabla\psi_{n}\|\|\overline{\mathbf{w}}_{n}\|\] \[\leq\frac{1}{2}\|\Delta\hat{\varphi}\|_{\mathrm{L}^{\infty}}^{2}\|\nabla\psi_{n}\|^{2}+\frac{1}{2}\|\overline{\mathbf{w}}_{n}\|^{2} \tag{4.10}\] \[\Big{|}\int_{\Omega}\Delta\psi_{n}(\nabla\hat{\varphi}\cdot\overline{\mathbf{w}}_{n})\,dx\Big{|} \leq\|\Delta\psi_{n}\|\|\nabla\hat{\varphi}\|_{\mathrm{L}^{\infty}}\|\overline{\mathbf{w}}_{n}\|\] \[\leq\frac{3}{4}\|\nabla\hat{\varphi}\|_{\mathrm{L}^{\infty}}^{2}\|\overline{\mathbf{w}}_{n}\|^{2}+\frac{1}{12}\|\Delta\psi_{n}\|^{2} \tag{4.11}\] \[\Big{|}\int_{\Omega}\psi_{n}F^{\prime\prime}(\hat{\varphi})(\nabla\hat{\varphi}\cdot\overline{\mathbf{w}}_{n})dx\Big{|} =\Big{|}\int_{\Omega}F^{\prime}(\hat{\varphi})\nabla\psi_{n}\cdot\overline{\mathbf{w}}_{n}\,dx\Big{|}\] \[\leq\|F^{\prime}(\hat{\varphi})\|_{\mathrm{L}^{\infty}}\|\nabla\psi_{n}\|\,\|\overline{\mathbf{w}}_{n}\|\]
Moreover, a comparison argument in (4.2)-(4.3) yields uniform bounds on the time derivatives:
\[\|\partial_{t}\overline{\mathbf{w}}_{n}\|_{\mathrm{L}^{2}(0,T;\mathbb{V}^{\prime}_{div})}\leq C\|\eta\|_{\mathcal{U}},\qquad\|\partial_{t}\psi_{n}\|_{\mathrm{L}^{2}(0,T;(\mathrm{H}^{1}(\Omega))^{\prime})}\leq C\|\eta\|_{\mathcal{U}}.\]
Then we can get a pair \((\overline{\mathbf{w}},\psi)\) such that
\[\overline{\mathbf{w}}\in\mathbf{L}^{\infty}(0,T;\mathbb{G}_{div})\cap\mathbf{ L}^{2}(0,T;\mathbb{V}_{div})\cap\mathbf{H}^{1}(0,T;\mathbb{V}^{\prime}_{div}),\]
\[\psi\in\mathrm{L}^{\infty}(0,T;\mathrm{H}^{1}(\Omega))\cap\mathrm{L}^{2}(0,T; \mathrm{H}^{2}(\Omega))\cap\mathrm{H}^{1}(0,T;(\mathrm{H}^{1}(\Omega))^{\prime}).\]
which is the weak limit of \((\overline{\mathbf{w}}_{n},\psi_{n})\). Then we can pass to the limit as \(n\to\infty\) in (4.2)-(4.3) and verify that the pair \((\mathbf{w},\psi)\), with \(\mathbf{w}=\overline{\mathbf{w}}+\mathbf{w}_{e}\), satisfies the weak formulation of (4.1).
Since the system (4.1) is a linear system, uniqueness of weak solutions follows easily.
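As an illustration of the Faedo-Galerkin idea used in the proof above, the following minimal numerical sketch is purely hypothetical: it applies the same construction to a one-dimensional Cahn-Hilliard equation with Neumann boundary conditions and no flow (not the coupled system (1.1)), using the cosine eigenfunctions of the Neumann Laplacian as the Galerkin basis; all parameter values and names are illustrative assumptions.

```python
# Illustrative sketch (assumption: 1D Cahn-Hilliard toy problem, not the CHNS system (1.1)).
# Faedo-Galerkin / spectral scheme for  phi_t = (mu)_xx,  mu = -phi_xx + F'(phi)
# on (0, pi) with Neumann boundary conditions and F(s) = (s^2 - 1)^2.
# The basis gamma_k(x) = cos(k x) consists of eigenfunctions of the Neumann Laplacian,
# mirroring the basis (gamma_k) used in the proof above.
import numpy as np
from scipy.integrate import solve_ivp

n_modes = 32                                     # dimension of the Galerkin space
k = np.arange(n_modes)                           # wave numbers
x = np.linspace(0.0, np.pi, 257)                 # quadrature grid
dx = x[1] - x[0]
w = np.full_like(x, dx); w[0] = w[-1] = dx / 2   # trapezoidal quadrature weights
basis = np.cos(np.outer(k, x))                   # gamma_k sampled on the grid
norms = np.where(k == 0, np.pi, np.pi / 2)       # ||gamma_k||_{L^2}^2

def project(f_vals):
    """L2-orthogonal projection of grid values onto span{gamma_0, ..., gamma_{n-1}}."""
    return (basis * w) @ f_vals / norms

def rhs(t, b):
    """ODE system for the Galerkin coefficients b_k(t) of phi_n."""
    phi = basis.T @ b                            # phi_n evaluated on the grid
    fprime = 4.0 * phi * (phi**2 - 1.0)          # F'(phi) for F(s) = (s^2 - 1)^2
    mu = k**2 * b + project(fprime)              # coefficients of mu_n = -phi_xx + P_n F'(phi_n)
    return -(k**2) * mu                          # phi_t = (mu)_xx  <=>  b_k' = -k^2 mu_k

b0 = project(0.05 * np.cos(x) + 0.02 * np.cos(3 * x))   # smooth initial datum
sol = solve_ivp(rhs, (0.0, 0.5), b0, method="BDF", rtol=1e-6, atol=1e-9)
print("Galerkin coefficients at final time:", sol.y[:5, -1])
```

The stiff fourth-order term \(k^{4}b_{k}\) is handled by an implicit integrator; the role of the a priori estimates in the proof above is precisely to guarantee that the truncated solutions remain bounded uniformly in \(n\), so that a limit can be extracted.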
### Differentiability of Control to State Operator
Consider the control to state operator \(\mathcal{S}:\mathcal{U}\to\mathcal{V}\) with given initial data \((\mathbf{u}_{0},\varphi_{0})\in\mathbb{H}^{1}_{div}\times\mathrm{H}^{2}.\) As \(\mathcal{V}\subseteq\mathcal{W}\), we can consider \(\mathcal{S}\) from \(\mathcal{U}\) to a weaker space \(\mathcal{W}\).
**Definition 4.2**.: _We define \(\mathcal{S}:\mathcal{U}\to\mathcal{W}\) is Frechet differentiable in \(\mathcal{U}\) if for any \(\hat{\mathbf{h}}\in\mathcal{U}\), there exist a linear operator \(\mathcal{S}^{\prime}(\hat{\mathbf{h}}):\mathcal{U}\to\mathcal{W}\) such that_
\[\text{lim}_{\|\eta\|_{\mathcal{U}}\to 0}\frac{\|\mathcal{S}(\hat{\mathbf{h}}+ \eta)-\mathcal{S}(\hat{\mathbf{h}})-\mathcal{S}^{\prime}(\hat{\mathbf{h}})( \eta)\|_{\mathcal{W}}}{\|\eta\|_{\mathcal{U}}}=0 \tag{4.22}\]
_for arbitrarily small perturbations \(\eta\in\mathcal{U}\setminus\{\hat{\mathbf{h}}\}\)._
In the next theorem, we prove the Frechet differentiability of the control to state operator \(\mathcal{S}\).
**Theorem 4.3**.: _Let \(F\in C^{4}(\mathbb{R})\) satisfy the Assumption 2.1, and let \((\mathbf{u}_{0},\varphi_{0})\in\mathbb{H}^{1}_{div}\times\mathrm{H}^{2}\) be the given initial data. Then the control to state operator \(\mathcal{S}:\mathcal{U}\to\mathcal{W}\) is Frechet differentiable. Moreover, for any \(\hat{\mathbf{h}}\in\mathcal{U}\), its Frechet derivative \(\mathcal{S}^{\prime}(\hat{\mathbf{h}})\) is given by_
\[\mathcal{S}^{\prime}(\hat{\mathbf{h}})(\eta)=(\mathbf{w},\psi),\,\forall\eta \in\mathcal{U}\setminus\{\hat{\mathbf{h}}\},\]
_where \((\mathbf{w},\psi)\) is the unique weak solution of the linearized system (4.1) with control \(\eta\), which is linearized around a strong solution \((\hat{\mathbf{u}},\hat{\varphi})\) of the system (1.1) with control \(\hat{\mathbf{h}}\)._
Proof.: Let \(\hat{\mathbf{h}}\) be a given fixed control and \((\hat{\mathbf{u}},\hat{\varphi})=\mathrm{S}(\hat{\mathbf{h}})\) be the strong solution of (1.1) with control \(\hat{\mathbf{h}}\). Let \((\overline{\mathbf{u}},\overline{\varphi})\) be the strong solution of the system (1.1) with control \(\hat{\mathbf{h}}+\eta\). Let \(\mathbf{z}=\overline{\mathbf{u}}-\hat{\mathbf{u}}\), \(\xi=\overline{\varphi}-\hat{\varphi}\). Then \((\mathbf{z},\xi)\) satisfies,
\[\begin{cases}\mathbf{z}_{t}-\nu\Delta\mathbf{z}+\mathbf{z}\cdot\nabla\mathbf{ z}+\hat{\mathbf{u}}\cdot\nabla\mathbf{z}+\mathbf{z}\cdot\nabla\hat{\mathbf{u}}+ \nabla\pi_{\mathbf{z}}=\mu_{\xi}\nabla\hat{\varphi}+\mu_{\xi}\nabla\xi+\mu_{ \hat{\varphi}}\nabla\xi,\,\,\,\text{in}\,\,Q,\\ \xi_{t}+\mathbf{z}\cdot\nabla\xi+\hat{\mathbf{u}}\cdot\nabla\xi+\mathbf{z} \cdot\nabla\hat{\varphi}=\Delta\mu_{\xi},\,\,\,\text{in}\,\,Q,\\ \text{div}\,\,\mathbf{z}=0,\,\,\,\text{in}\,\,Q,\\ \mu_{\xi}=-\Delta\xi+F^{\prime}(\overline{\varphi})-F^{\prime}(\hat{\varphi}), \,\,\,\text{in}\,\,Q,\\ \mathbf{z}|_{\partial\Omega}=\eta,\,\,\,\text{on}\,\,\Sigma,\\ \frac{\partial\xi}{\partial\mathbf{n}}\big{|}_{\partial\Omega}=0=\frac{\partial \mu_{\xi}}{\partial\mathbf{n}}\big{|}_{\partial\Omega}\,\,\,\text{on}\,\, \Sigma,\\ \mathbf{z}(0)=0,\,\xi(0)=0,\,\,\,\text{in}\,\,\Omega.\end{cases} \tag{4.23}\]
where \(\pi_{\mathbf{z}}=\pi_{\overline{\mathbf{u}}}-\pi_{\hat{\mathbf{u}}}\). Now from Proposition 2.6, continuous dependence of weak solutions, we have
\[\|\mathbf{z}\|_{\mathrm{L}^{\infty}(0,T;\mathrm{L}^{2})\cap\mathrm{L}^{2}(0,T; \mathbb{H}^{1}_{div})}^{2}+\|\xi\|_{\mathrm{L}^{\infty}(0,T;\mathrm{H}^{1}) \cap\mathrm{L}^{2}(0,T;\mathrm{H}^{2})}^{2}+\|\nabla\mu_{\xi}\|^{2}\leq M\|\eta \|_{\mathcal{U}}^{2}. \tag{4.24}\]
Let us define \(\mathbf{y}=\mathbf{z}-\mathbf{w},\,\rho=\xi-\psi\), where \((\mathbf{w},\psi)\) is the solution of the linearized system (4.1) with boundary data \(\eta\) for the velocity. Then \((\mathbf{y},\rho)\) satisfies
\[\begin{cases}\mathbf{y}_{t}-\nu\Delta\mathbf{y}+\mathbf{z}\cdot\nabla\mathbf{ z}+\hat{\mathbf{u}}\cdot\nabla\mathbf{y}+\mathbf{y}\cdot\nabla\hat{\mathbf{u}}+ \nabla\pi_{\mathbf{y}}&=\mu_{\rho}\nabla\hat{\varphi}+\mu_{\xi}\nabla\xi+\mu_{ \hat{\varphi}}\nabla\rho,\,\,\,\text{in}\,\,Q,\\ \rho_{t}+\mathbf{y}\cdot\nabla\hat{\varphi}+\hat{\mathbf{u}}\cdot\nabla\rho+ \mathbf{z}\cdot\nabla\xi&=\Delta\mu_{\rho},\,\,\,\text{in}\,\,Q,\\ \mu_{\rho}&=-\Delta\rho+F^{\prime}(\overline{\varphi})-F^{\prime}(\hat{\varphi})-F^ {\prime\prime}(\hat{\varphi})\psi,\,\,\,\text{in}\,\,Q,\\ \text{div}\,\mathbf{y}&=0,\,\,\,\text{in}\,\,Q,\\ \mathbf{y}|_{\partial\Omega}&=0,\,\,\,\text{on}\,\,\Sigma,\\ \frac{\partial\rho}{\partial\mathbf{n}}=0&=\frac{\partial\mu_{\rho}}{\partial \mathbf{n}},\,\,\,\text{on}\,\,\Sigma,\\ \mathbf{y}(0)=0,\,\rho(0)&=0,\,\,\,\text{in}\,\,\Omega.\end{cases} \tag{4.25}\]
We have from Theorem 4.1
\[\|(\mathbf{w},\psi)\|_{\mathcal{W}}\leq C\|\eta\|_{\mathcal{U}}.\]
We aim to show
\[\frac{\|(\mathbf{y},\rho)\|_{\mathcal{W}}}{\|\eta\|_{\mathcal{U}}}\to 0\text{ as }\|\eta\|_{ \mathcal{U}}\to 0,\]
which will directly imply (4.22). To this end, we multiply the equations in (4.25) by suitable test functions, integrate by parts, and estimate each of the terms \(J_{1},\ldots,J_{7}\) appearing in the resulting relation (4.27). In particular, the contributions \(J_{7}^{1},\ldots,J_{7}^{4}\) arising from \(\mu_{\rho}\) are bounded as follows:
\[|J_{7}^{1}| \leq\|F^{\prime\prime\prime}(\theta\overline{\varphi}+(1-\theta)\hat{\varphi})\|_{\mathrm{L}^{\infty}}(\|\nabla\hat{\varphi}\|_{\mathrm{L}^{4}}+\|\nabla\overline{\varphi}\|_{\mathrm{L}^{4}})\|\xi^{2}\|_{\mathrm{L}^{4}}\|\nabla\mu_{\rho}\|\] \[\leq\frac{1}{8}\|\nabla\mu_{\rho}\|^{2}+C(\|\nabla\hat{\varphi}\|_{\mathrm{L}^{4}}^{2}+\|\nabla\overline{\varphi}\|_{\mathrm{L}^{4}}^{2})\|\xi\|_{\mathrm{H}^{1}}^{4},\] \[|J_{7}^{2}| \leq\|F^{\prime\prime\prime}(\theta\overline{\varphi}+(1-\theta)\hat{\varphi})\|_{\mathrm{L}^{\infty}}\|\xi\|_{\mathrm{L}^{4}}\|\nabla\xi\|_{\mathrm{L}^{4}}\|\nabla\mu_{\rho}\|\] \[\leq\frac{1}{8}\|\nabla\mu_{\rho}\|^{2}+C\|\xi\|_{\mathrm{L}^{4}}^{2}\|\nabla\xi\|_{\mathrm{L}^{4}}^{2},\] \[|J_{7}^{3}| \leq\|F^{\prime\prime\prime}(\hat{\varphi})\|_{\mathrm{L}^{\infty}}\|\nabla\hat{\varphi}\|_{\mathrm{L}^{4}}\|\rho\|_{\mathrm{L}^{4}}\|\nabla\mu_{\rho}\|\] \[\leq C\|\nabla\hat{\varphi}\|_{\mathrm{L}^{4}}^{2}\|\rho\|_{\mathrm{H}^{1}}^{2}+\frac{1}{8}\|\nabla\mu_{\rho}\|^{2},\] \[|J_{7}^{4}| \leq\|F^{\prime\prime}(\hat{\varphi})\|_{\mathrm{L}^{\infty}}\|\nabla\rho\|\|\nabla\mu_{\rho}\|\] \[\leq C\|\nabla\rho\|^{2}+\frac{1}{8}\|\nabla\mu_{\rho}\|^{2}.\]
Substituting \(J_{1}\) to \(J_{7}\) in (4.27) we get
\[\frac{1}{2}\frac{d}{dt}\|\rho\|_{\mathrm{H}^{1}}^{2}+\|\nabla\mu_{\rho}\|^{2}+\frac{1}{2}\|\Delta\rho\|^{2}\leq\Big{(}\frac{1}{2}+C+C\|\hat{\varphi}\|_{\mathrm{H}^{3}}^{2}+\|\hat{\mathbf{u}}\|_{\mathrm{H}^{2}}^{2}\Big{)}\|\rho\|_{\mathrm{H}^{1}}^{2}+C(\frac{1}{2}+\|\hat{\varphi}\|_{\mathrm{H}^{3}}^{2})\|\mathbf{y}\|^{2}\] \[+\frac{1}{2}\|\nabla\mu_{\rho}\|^{2}+\Big{(}\frac{1}{2}\|\mathbf{z}\|_{\mathrm{H}^{1}}^{2}\|\nabla\xi\|^{2}+C\|\mathbf{z}\|_{\mathrm{H}^{1}}^{2}\|\xi\|_{\mathrm{H}^{2}}^{2}+C\|\xi\|_{\mathrm{L}^{4}}^{4}+C\|\xi\|_{\mathrm{L}^{4}}^{2}\|\nabla\xi\|_{\mathrm{L}^{4}}^{2}\] \[+C(\|\nabla\hat{\varphi}\|_{\mathrm{L}^{4}}^{2}+\|\nabla\overline{\varphi}\|_{\mathrm{L}^{4}}^{2})\|\xi\|_{\mathrm{H}^{1}}^{4}\Big{)}\] \[\leq \Big{(}\frac{1}{2}+C+C\|\hat{\varphi}\|_{\mathrm{H}^{3}}^{2}+\|\hat{\mathbf{u}}\|_{\mathrm{H}^{2}}^{2}\Big{)}\|\rho\|_{\mathrm{H}^{1}}^{2}+C(\frac{1}{2}+\|\hat{\varphi}\|_{\mathrm{H}^{3}}^{2})\|\mathbf{y}\|^{2}+\frac{1}{2}\|\nabla\mu_{\rho}\|^{2}\]
\[\|(\mathbf{y},\rho)\|_{\mathcal{W}}^{2}\leq C_{T}\|\eta\|_{\mathcal{U}}^{4}, \tag{4.33}\]
where \(C_{T}\) depends on \(M,M_{1},M_{2},\Omega,T\). Thus
\[\lim_{\|\eta\|_{\mathcal{U}}\to 0}\frac{\|\mathcal{S}(\hat{\mathbf{h}}+\eta)- \mathcal{S}(\hat{\mathbf{h}})-\mathcal{S}^{\prime}(\hat{\mathbf{h}})(\eta)\|_{ \mathcal{W}}}{\|\eta\|_{\mathcal{U}}}=\lim_{\|\eta\|_{\mathcal{U}}\to 0}\frac{\|( \mathbf{y},\rho)\|_{\mathcal{W}}}{\|\eta\|_{\mathcal{U}}}\leq C_{T}\|\eta\|_{ \mathcal{U}}\to 0\text{ as }\|\eta\|_{\mathcal{U}}\to 0.\]
This completes the proof.
## 5. The First Order Necessary Optimality Condition
Having shown that the control to state operator \(\mathcal{S}\) is Frechet differentiable, the next step is to derive the first-order necessary optimality condition. In this section, we establish the first-order necessary condition of optimality satisfied by an optimal solution, expressed in terms of the solution of the linearized system (4.1).
**Theorem 5.1**.: _Let the Assumption 2.1 be satisfied and \((\mathbf{u}_{0},\varphi_{0})\in\mathbb{V}_{div}\times\mathrm{H}^{2}\). Also, suppose that \((\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\) is an optimal triplet, where \(\mathbf{h}^{*}\in\mathcal{U}_{ad}\) and \(S(\mathbf{h}^{*})=(\mathbf{u}^{*},\varphi^{*})\). For \(\mathbf{h}\in\mathcal{U}_{ad}\), let \((\mathbf{w},\psi)\) be the unique weak solution of the linearized problem (4.1) with boundary data \(\mathbf{h}-\mathbf{h}^{*}\). Then the following variational inequality holds:_
\[\int_{Q}(\mathbf{u}^{*}-\mathbf{u}_{Q})\cdot\mathbf{w}\,dxdt+\int_{Q}(\varphi^{*}-\varphi_{Q})\,\psi\,dxdt+\int_{\Omega}(\mathbf{u}^{*}(T)-\mathbf{u}_{\Omega})\cdot\mathbf{w}(T)\,dx\] \[+\int_{\Omega}(\varphi^{*}(T)-\varphi_{\Omega})\,\psi(T)\,dx+\int_{\Sigma}\mathbf{h}^{*}\cdot(\mathbf{h}-\mathbf{h}^{*})\,dSdt\geq 0. \tag{5.1}\]
Proof.: Since \(\mathcal{U}_{ad}\) is a nonempty convex subset of \(\mathcal{U}\), from Lemma 2.21 of [38] the reformulated cost functional \(\tilde{\mathcal{J}}\) satisfies
\[\tilde{\mathcal{J}}_{\mathbf{h}}^{\prime}(\mathbf{u}^{*},\varphi^{*},\mathbf{ h}^{*})(\mathbf{h}-\mathbf{h}^{*})\geq 0\quad\forall\ \mathbf{h}\in\mathcal{U}_{ad}, \tag{5.2}\]
where \(\tilde{\mathcal{J}}_{\mathbf{h}}^{\prime}(\mathbf{u},\varphi,\mathbf{h})\) denotes the Gateaux derivative of \(\tilde{\mathcal{J}}\) with respect to \(\mathbf{h}\). Now we will determine \(\tilde{\mathcal{J}}_{\mathbf{h}}^{\prime}(\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\). We have from (3.7) that \(\tilde{\mathcal{J}}(\mathbf{h})=\mathcal{J}(\mathcal{S}(\mathbf{h}),\mathbf{h})\), where \(\mathcal{S}(\mathbf{h})=(\mathbf{u},\varphi)\) is the unique strong solution of (1.1) corresponding to the control \(\mathbf{h}\). Since \(\mathcal{J}\) is a quadratic functional and \(\mathcal{S}\) is Frechet differentiable, by the chain rule we can write
\[\tilde{\mathcal{J}}_{\mathbf{h}}^{\prime}(\mathbf{u},\varphi,\mathbf{h})= \tilde{\mathcal{J}}_{\mathbf{h}}^{\prime}(S(\mathbf{h}),\mathbf{h})=\tilde{ \mathcal{J}}_{S(\mathbf{h})}^{\prime}(S(\mathbf{h}),\mathbf{h})\circ S^{ \prime}(\mathbf{h})+\tilde{\mathcal{J}}_{\mathbf{h}}^{\prime}(S(\mathbf{h}), \mathbf{h}). \tag{5.3}\]
In the above equation (5.3), for a fixed \(\mathbf{h}\in\mathcal{U}\), the Gateaux derivatives of \(\tilde{\mathcal{J}}(S(\mathbf{h}),\mathbf{h})\) with respect to \(S(\mathbf{h})=(\mathbf{u},\varphi)\) and with respect to \(\mathbf{h}\) are denoted by \(\tilde{\mathcal{J}}^{\prime}_{S(\mathbf{h})}\) and \(\tilde{\mathcal{J}}^{\prime}_{\mathbf{h}}\), respectively. Now, the Gateaux derivative \(\tilde{\mathcal{J}}^{\prime}_{S(\mathbf{h})}\) at \((\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\) in the direction \((\mathbf{y}_{1},y_{2})\) is given by
\[\tilde{\mathcal{J}}^{\prime}_{S(\mathbf{h})}(S(\mathbf{h}^{*}),\mathbf{h}^{*})(\mathbf{y}_{1},y_{2})= \int_{Q}(\mathbf{u}^{*}-\mathbf{u}_{Q})\cdot\mathbf{y}_{1}\,dxdt+\int_{Q}(\varphi^{*}-\varphi_{Q})y_{2}\,dxdt\] \[+\int_{\Omega}(\mathbf{u}^{*}(T)-\mathbf{u}_{\Omega})\cdot\mathbf{y}_{1}(T)\,dx+\int_{\Omega}(\varphi^{*}(T)-\varphi_{\Omega})y_{2}(T)\,dx, \tag{5.4}\]
for any \((\mathbf{y}_{1},y_{2})\in\mathcal{W}\). Similarly, we calculate the Gateaux derivative of \(\tilde{\mathcal{J}}_{\mathbf{h}}\) at \((\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\) in the direction of \(\mathbf{g}\) as
\[\tilde{\mathcal{J}}^{\prime}_{\mathbf{h}}(S(\mathbf{h}^{*}),\mathbf{h}^{*})(\mathbf{g})=\int_{\Sigma}\mathbf{h}^{*}\cdot\mathbf{g}\,dSdt, \tag{5.5}\]
for any \(\mathbf{g}\in\mathcal{U}\). Also, from Theorem 4.3 we get that
\[\mathcal{S}^{\prime}(\mathbf{h}^{*})(\mathbf{h}-\mathbf{h}^{*})=(\mathbf{w}, \psi). \tag{5.6}\]
Now using (5.4), (5.5), (5.6) in (5.3) we obtain,
\[\tilde{\mathcal{J}}^{\prime}_{\mathbf{h}}(\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})(\mathbf{h}-\mathbf{h}^{*})= \int_{Q}(\mathbf{u}^{*}-\mathbf{u}_{Q})\cdot\mathbf{w}dxdt+\int_{Q}(\varphi^{*}-\varphi_{Q})\cdot\psi dxdt+\int_{\Omega}(\mathbf{u}^{*}(T)-\mathbf{u}_{\Omega})\cdot\mathbf{w}(T)dx\] \[+\int_{\Omega}(\varphi^{*}(T)-\varphi_{\Omega})\cdot\psi(T)\,dx+\int_{\Sigma}\mathbf{h}^{*}\cdot(\mathbf{h}-\mathbf{h}^{*})dSdt.\]
Therefore, we can conclude (5.1) from (5.2).
### First Order Necessary Optimality Condition Via Adjoint System
In this subsection, we simplify the optimality condition (5.1) and write it in terms of the optimal solution and the adjoint variables. This optimality system can serve as the basis for computing approximations to optimal solutions numerically. We therefore derive the adjoint system corresponding to (1.1) using the Lagrange multiplier method; the adjoint variables act as Lagrange multipliers corresponding to the state variables of (1.1).
It is well known that the necessary optimality conditions satisfied by the optimal control can be derived from the exact Lagrange multiplier method, also known as the Karush-Kuhn-Tucker (KKT) theory for optimization problems in Banach spaces. This method is described for various elliptic and parabolic problems in [38, 15]. Its application to nonlinear PDEs is delicate, because it requires careful matching of the operators, functionals, and spaces involved. Despite the technical difficulty, KKT theory has been applied to optimal boundary control problems for the Navier-Stokes equations in [16, 18, 17], but it is not straightforward to use for the CHNS system (1.1), which is a highly nonlinear coupled PDE system. We therefore use a formal Lagrange multiplier method [see Section 2.10 in [38]] to derive the adjoint system, then prove the well-posedness of the adjoint system, and finally establish the necessary condition of optimality for our optimal control problem \((\mathbf{OCP})\).
For this purpose, we formally introduce the Lagrange functional for the control problem \((\mathbf{OCP})\) as follows:
\[\mathcal{L}((\mathbf{u},\varphi), \mathbf{h},(\mathbf{p},\zeta,\hat{P},\mathbf{p}_{1},\zeta_{1}))\] \[:=\mathcal{J}(\mathbf{u},\varphi,\mathbf{h})-\int_{Q}[\mathbf{u} _{t}-\nu\Delta\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}+\nabla\pi-(-\Delta \varphi+F^{\prime}(\varphi))\nabla\varphi]\cdot\mathbf{p}\,dxdt\] \[-\int_{Q}[\varphi_{t}+\mathbf{u}\cdot\nabla\varphi-\Delta(-\Delta \varphi+F^{\prime}(\varphi))]\cdot\zeta\,dxdt-\int_{Q}(div\ \mathbf{u})\hat{P}\,dxdt\] \[-\int_{\Sigma}(\mathbf{u}-\mathbf{h})\cdot\mathbf{p}_{1}\,dSdt -\int_{\Sigma}\frac{\partial\varphi}{\partial\mathbf{n}}\zeta_{1}\,dSdt \tag{5.7}\]
for any \(\mathbf{h}\in\mathcal{U}_{ad}\) and \((\mathbf{u},\varphi)=\mathcal{S}(\mathbf{h})\). Here \(\zeta,\mathbf{p},\hat{P},\zeta_{1},\mathbf{p}_{1}\) are the Lagrange multipliers corresponding to the five state constraints in (1.1), respectively.
Let \((\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\) be the optimal solution to the problem \((\mathbf{OCP})\) such that \((\mathbf{u}^{*},\varphi^{*})=\mathcal{S}(\mathbf{h}^{*})\). Then by using
the Lagrange principle, we conclude that \((\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\) together with Lagrange multipliers \(\mathbf{p},\zeta,\hat{P},\mathbf{p}_{1},\zeta_{1}\), satisfies the first order optimality condition associated with the optimization problem related to the Lagrange functional \(\mathcal{L}\), defined as follows:
\[\min_{\mathbf{h}\in\mathcal{U}_{ad}}\mathcal{L}((\mathbf{u},\varphi),\! \mathbf{h},(\mathbf{p},\zeta,\hat{P},\mathbf{p}_{1},\zeta_{1})).\]
Now, since \((\mathbf{u},\varphi)\) has become formally unconstrained, the Frechet derivative of \(\mathcal{L}\) with respect to \((\mathbf{u},\varphi)\) will vanish at the optimal point \((\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\), which implies
\[\mathcal{L}^{\prime}_{(\mathbf{u},\varphi)}((\mathbf{u}^{*},\varphi^{*}), \mathbf{h}^{*},(\mathbf{p},\zeta,\hat{P},\mathbf{p}_{1},\zeta_{1}))(\mathbf{u }_{1},u_{2})=0 \tag{5.8}\]
for all smooth functions \((\mathbf{u}_{1},u_{2})\) such that
\[\mathbf{u}_{1}(0,.)=0,\ u_{2}(0,.)=0\ \ \text{in}\ \Omega.\]
Furthermore, from the Lagrange principle, the constraints on \(\mathbf{h}\) in (3.4) give the following variational inequality
\[\mathcal{L}^{\prime}_{\mathbf{h}}((\mathbf{u}^{*},\varphi^{*}),\mathbf{h}^{*},(\mathbf{p},\zeta,\hat{P},\mathbf{p}_{1},\zeta_{1}))(\mathbf{h}-\mathbf{h}^{ *})\geq 0 \tag{5.9}\]
for all \(\mathbf{h}\in\mathcal{U}_{ad}\).
Next, we determine the Lagrange multipliers \(\mathbf{p},\zeta,\hat{P},\mathbf{p}_{1},\zeta_{1}\) from (5.8); these are the adjoint states corresponding to the state equations (1.1). For this purpose, we formally compute (5.8), perform integration by parts, and collect the terms corresponding to \((\mathbf{u}_{1},u_{2},\pi)\) to derive the following linear system satisfied by \((\mathbf{p},\zeta,\hat{P})\):
\[\left\{\begin{aligned} &-\partial_{t}\mathbf{p}-\nu\Delta \mathbf{p}+(\mathbf{u}^{*}\cdot\nabla)\mathbf{p}-(\mathbf{p}\cdot\nabla^{T}) \mathbf{u}^{*}+\zeta\nabla\varphi^{*}-\nabla\hat{P}=\mathbf{u}^{*}-\mathbf{u} _{Q},\quad\text{ in }Q,\\ &-\partial_{t}\zeta-\mathbf{u}^{*}\cdot\nabla\zeta-\mathbf{p} \cdot\nabla(\Delta\varphi^{*})+\text{div}((\nabla\mathbf{p})\cdot\nabla \varphi^{*})+\text{div}((\nabla^{T}(\nabla\varphi^{*}))\cdot\mathbf{p})\\ &+\Delta^{2}\zeta-F^{\prime\prime}(\varphi^{*})\Delta\zeta= \varphi^{*}-\varphi_{Q},\quad\text{ in }Q,\\ &\text{div}\,\mathbf{p}=0,\quad\text{ in }Q,\\ &\mathbf{p}=0,\quad\text{ on }\Sigma,\\ &\frac{\partial\zeta}{\partial\mathbf{n}}=0=\frac{\partial( \Delta\zeta)}{\partial\mathbf{n}},\quad\text{ on }\Sigma,\\ &\mathbf{p}(T)=\mathbf{u}^{*}(T)-\mathbf{u}_{\Omega},\,\zeta(T)= \varphi^{*}(T)-\varphi_{\Omega},\quad\text{ in }\Omega.\end{aligned}\right. \tag{5.10}\]
Furthermore, on the boundary \(\Sigma\), the two Lagrange multipliers \(\mathbf{p}_{1},\ \zeta_{1}\) can be uniquely determined in terms of \((\mathbf{p},\zeta,\hat{P})\), and they satisfy the following equations:
\[\mathbf{p}_{1}+\hat{P}\mathbf{n}+\frac{\partial\mathbf{p}}{ \partial\mathbf{n}}=0,\quad\text{ on }\Sigma \tag{5.11}\] \[\frac{\partial\zeta_{1}}{\partial\mathbf{n}}+[(\nabla\mathbf{p}) \cdot\nabla\varphi^{*}+(\nabla^{T}(\nabla\varphi^{*}))\cdot\mathbf{p}] \mathbf{n}=0,\quad\text{ on }\Sigma. \tag{5.12}\]
We call the linear PDE system (5.10) the adjoint system corresponding to (1.1).
**Remark 5.2**.: _The expression of the Lagrange functional \(\mathcal{L}\) in (5.7) is not yet well defined, because we only have the regularity \((\mathbf{u},\varphi)\in\mathcal{V}\) for the control \(\mathbf{h}\in\mathcal{U}\), and we do not yet know the regularity of the Lagrange multipliers \(\zeta,\mathbf{p},\hat{P},\zeta_{1},\mathbf{p}_{1}\). Thus the Lagrange multiplier method presented in this section has the sole purpose of identifying the correct form of the adjoint system._
Now we establish the following existence result of the adjoint system (5.10).
**Theorem 5.3**.: _Let \((\mathbf{u}^{*},\varphi^{*})\in\mathcal{V}\) and assumption (2.1) on \(F\) be satisfied. Also, assume \(\mathbf{u}_{\Omega}\in\mathrm{L}^{2}_{div}(\Omega),\,\varphi_{\Omega}\in\mathrm{ H}^{1}(\Omega)\). Then the linear problem (5.10) has a unique solution \((\mathbf{p},\zeta)\) such that_
\[\mathbf{p}\in \mathrm{L}^{\infty}(0,T;\mathbb{G}_{div})\cap\mathrm{L}^{2}(0,T;\mathbb{V}_{div})\cap\mathrm{H}^{1}(0,T;\mathbb{V}^{\prime}_{div}),\] \[\zeta\in \mathrm{L}^{\infty}(0,T;\mathrm{H}^{1})\cap\mathrm{L}^{2}(0,T;\mathrm{H}^{2}\cap\mathrm{H}^{3})\cap\mathrm{H}^{1}(0,T;(\mathrm{H}^{1})^{\prime}).\]
Proof.: The proof follows from an argument similar to that of Theorem 4.1, using the Faedo-Galerkin method. For the sake of simplicity, we omit the approximation scheme and carry out the a priori estimates only.
Multiplying equation (5.10)\({}_{1}\) by \(\mathbf{p}\) and equation (5.10)\({}_{2}\) by \(\zeta-\Delta\zeta\), we get
\[-\frac{1}{2}\frac{d}{dt}\|\mathbf{p}\|^{2}+\nu\|\nabla\mathbf{p}\| ^{2} =((\mathbf{p}\cdot\nabla^{T})\mathbf{u}^{*},\mathbf{p})+(\zeta \nabla\varphi^{*},\mathbf{p})+(\mathbf{u}^{*}-\mathbf{u}_{Q},\mathbf{p})\] \[:=\sum_{k=1}^{3}I_{k} \tag{5.13}\] \[-\frac{1}{2}\frac{d}{dt}\|\zeta\|_{\mathrm{H}^{1}}^{2}+\|\Delta \zeta\|^{2}+\|\nabla(\Delta\zeta)\|^{2} =(\mathbf{p}\cdot\nabla(\Delta\varphi^{*}),\zeta)-(\text{div}(( \nabla\mathbf{p})\cdot\nabla\varphi^{*}),\zeta)\] \[-(\text{div}((\nabla^{T}(\nabla\varphi^{*}))\cdot\mathbf{p}), \zeta)+(F^{\prime\prime}(\varphi^{*})\Delta\zeta,\zeta)\] \[+(\varphi^{*}-\varphi_{Q},\zeta)-(\mathbf{u}^{*}\cdot\nabla \zeta,\Delta\zeta)+(\mathbf{p}\cdot\nabla(\Delta\varphi^{*}),\Delta\zeta)\] \[+(\text{div}((\nabla\mathbf{p})\cdot\nabla\varphi^{*}),\Delta \zeta)+(\text{div}((\nabla^{T}(\nabla\varphi^{*}))\cdot\mathbf{p}),\Delta\zeta)\] \[-(F^{\prime\prime}(\varphi^{*})\Delta\zeta,\Delta\zeta)-(\varphi^ {*}-\varphi_{Q},\Delta\zeta)\] \[:=\sum_{i=1}^{11}J_{i} \tag{5.14}\]
Now we estimate each \(I_{i}\) and \(J_{i}\) one by one using the Poincaré, Young and Sobolev inequalities. We estimate \(I_{1}\) using the Ladyzhenskaya and Young inequalities as
\[|I_{1}| \leq\|\mathbf{p}\|_{\mathrm{L}^{4}}^{2}\|\nabla\mathbf{u}^{*}\|\] \[\leq\sqrt{2}\|\mathbf{p}\|\|\nabla\mathbf{p}\|\|\nabla\mathbf{u}^ {*}\|\] \[\leq\frac{\nu}{12}\|\nabla\mathbf{p}\|^{2}+\frac{6}{\nu}\|\mathbf{ p}\|^{2}\|\nabla\mathbf{u}^{*}\|^{2}.\]
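For the reader's convenience we recall the Ladyzhenskaya inequality used in the last step; the constant \(\sqrt{2}\) corresponds to the two-dimensional case, which the above estimate implicitly assumes:
\[\|v\|_{\mathrm{L}^{4}}^{2}\leq\sqrt{2}\,\|v\|\,\|\nabla v\|\qquad\text{for all }v\in\mathrm{H}^{1}_{0}(\Omega).\]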
Similarly, we can estimate \(I_{2}\) as
\[|I_{2}| \leq\|\nabla\zeta\|\|\varphi^{*}\|_{\mathrm{L}^{4}}\|\mathbf{p}\| _{\mathrm{L}^{4}}\] \[\leq\frac{C}{4}\|\nabla\zeta\|^{2}+\frac{1}{4C}\|\varphi^{*}\|_{ \mathrm{L}^{4}}^{2}\|\mathbf{p}\|\|\nabla\mathbf{p}\|\] \[\leq\frac{C}{4}\|\nabla\zeta\|^{2}+\frac{\nu}{12}\|\nabla\mathbf{ p}\|^{2}+\frac{3}{16C\nu}\|\varphi^{*}\|_{\mathrm{L}^{4}}^{4}\|\mathbf{p}\|^{2},\]
where \(C\) is a generic constant.
\[|I_{3}|\leq\frac{3}{4\nu}\|\mathbf{u}^{*}-\mathbf{u}_{Q}\|^{2}+\frac{\nu}{12} \|\nabla\mathbf{p}\|^{2}\]
Combining the estimates of \(I_{1}\), \(I_{2}\) and \(I_{3}\) in (5.13), we get
\[-\frac{1}{2}\frac{d}{dt}\|\mathbf{p}\|^{2}+\frac{3\nu}{4}\|\nabla\mathbf{p}\| ^{2}\leq\frac{3}{4\nu}\|\mathbf{u}^{*}-\mathbf{u}_{Q}\|^{2}+\frac{C}{4}\| \nabla\zeta\|^{2}+\Big{(}\frac{6}{\nu}\|\nabla\mathbf{u}^{*}\|^{2}+\frac{3}{16 C\nu}\|\varphi^{*}\|_{\mathrm{L}^{4}}^{4}\Big{)}\|\mathbf{p}\|^{2}. \tag{5.15}\]
Now we will estimate \(J_{i}\).
\[|\mathrm{J}_{1}| \leq\|\mathbf{p}\|\|\nabla\zeta\|\|\Delta\varphi^{*}\|_{\mathrm{L}^{\infty}}\] \[\leq\frac{1}{4C}\|\varphi^{*}\|_{\mathrm{H}^{2}}^{2}\|\mathbf{p}\|^{2}+\frac{C}{4}\|\nabla\zeta\|^{2},\]
where we have used Agmon's inequality and Young's inequality. Similarly, \(J_{2}\) can be estimated as
\[|J_{2}| =|\big{(}(\nabla\mathbf{p})\cdot\nabla\varphi^{*},\nabla\zeta\big{)}|\] \[\leq\frac{\nu}{12}\|\nabla\mathbf{p}\|^{2}+\frac{6}{\nu}\|\varphi^ {*}\|_{\mathrm{H}^{3}}^{2}\|\nabla\zeta\|^{2}.\]
Similarly,
\[|J_{3}| =|\big{(}(\nabla(\nabla\varphi^{*}))\cdot\mathbf{p},\nabla\zeta\big{)}|\] \[\leq\|\mathbf{p}\|\|\nabla\zeta\|\|\nabla(\nabla\varphi^{*})\|_{\mathrm{L}^{\infty}}\leq\frac{1}{4C}\|\varphi^{*}\|_{\mathrm{H}^{4}}^{2}\|\mathbf{p}\|^{2}+\frac{C}{4}\|\nabla\zeta\|^{2},\]
where in the last step we used the Sobolev embedding \(\mathrm{H}^{2}\hookrightarrow\mathrm{L}^{\infty}\) together with Young's inequality. Furthermore,
\[|J_{4}| \leq\|F^{\prime\prime}(\varphi^{*})\|_{\mathrm{L}^{\infty}}\|\Delta\zeta\|\|\zeta\|\] \[\leq C\|F^{\prime\prime}(\varphi^{*})\|_{\mathrm{L}^{\infty}}^{2}\|\zeta\|^{2}+\frac{1}{8}\|\Delta\zeta\|^{2},\] \[|J_{5}| \leq\frac{1}{2}\|\varphi^{*}-\varphi_{Q}\|^{2}+\frac{1}{2}\|\zeta\|^{2},\] \[|J_{6}| \leq C\|\mathbf{u}^{*}\|_{\mathrm{H}^{2}}^{2}\|\nabla\zeta\|^{2}+\frac{1}{8}\|\Delta\zeta\|^{2},\] \[|J_{7}| =|(\mathbf{p}\cdot\nabla(\Delta\zeta),\Delta\varphi^{*})|\] \[\leq\frac{1}{12}\|\nabla(\Delta\zeta)\|^{2}+\frac{3}{4}\|\Delta\varphi^{*}\|_{\mathrm{L}^{\infty}}^{2}\|\mathbf{p}\|^{2},\] \[|J_{8}| =|(\text{div}((\nabla\mathbf{p})\cdot\nabla\varphi^{*}),\Delta\zeta)|\] \[\leq\|\mathbf{p}\|_{\mathrm{L}^{4}}\|\nabla(\Delta\varphi^{*})\|_{\mathrm{L}^{4}}\|\nabla(\Delta\zeta)\|\] \[\leq\|\mathbf{p}\|^{\frac{1}{2}}\|\nabla\mathbf{p}\|^{\frac{1}{2}}\|\varphi^{*}\|_{\mathrm{H}^{4}}^{2}\|\nabla(\Delta\zeta)\|\] \[\leq C\|\varphi^{*}\|_{\mathrm{H}^{4}}^{2}\|\mathbf{p}\|^{2}+\frac{\nu}{12}\|\nabla\mathbf{p}\|^{2}+\frac{1}{12}\|\nabla(\Delta\zeta)\|^{2},\] \[|J_{9}| =|\big{(}(\nabla^{T}(\nabla\varphi^{*}))\cdot\mathbf{p},\nabla(\Delta\zeta)\big{)}|\] \[\leq\frac{3}{4}\|\varphi^{*}\|_{\mathrm{H}^{4}}^{2}\|\mathbf{p}\|^{2}+\frac{1}{12}\|\nabla(\Delta\zeta)\|^{2},\] \[|J_{10}| \leq(C+1)\|\varphi^{*}\|_{\mathrm{H}^{3}}^{2}\|\nabla\zeta\|^{2}+\frac{1}{5}\|\Delta\zeta\|^{2}+\frac{1}{12}\|\nabla(\Delta\zeta)\|^{2},\] \[|J_{11}| \leq\frac{1}{2}\|\varphi^{*}-\varphi_{Q}\|^{2}+\frac{1}{8}\|\Delta\zeta\|^{2}.\]
Taking the estimates of \(J_{1}\) to \(J_{11}\) into account, we write (5.14) as
\[-\frac{1}{2}\frac{d}{dt}\|\zeta\|_{\mathrm{H}^{1}}^{2}+\frac{1}{ 2}\|\Delta\zeta\|^{2}+\frac{2}{3}\|\nabla(\Delta\zeta)\|^{2} \leq\Big{(}\frac{1}{4C}\|\varphi^{*}\|_{\mathrm{H}^{2}}^{2}+( \frac{1}{4C}+\frac{5}{2}+C)\|\varphi^{*}\|_{\mathrm{H}^{4}}^{2}+2\|\varphi^{* }\|_{\mathrm{H}^{3}}^{2}\Big{)}\|\mathbf{p}\|^{2}\] \[+\Big{(}\frac{6}{\nu}+\frac{C+1}{4}+C\|\mathrm{F}^{\prime\prime}( \varphi^{*})\|_{\mathrm{L}^{\infty}}^{2}+C\|\mathbf{u}^{*}\|_{\mathrm{H}^{2}} ^{2}\Big{)}\|\zeta\|_{\mathrm{H}^{1}}^{2}\] \[+\frac{3}{4\nu}\|\mathbf{u}^{*}-\mathbf{u}_{Q}\|^{2}+\frac{3}{8} \|\varphi^{*}-\varphi_{Q}\|^{2} \tag{5.16}\]
Adding the inequalities (5.15) and (5.16) we get
\[-\frac{1}{2}\frac{d}{dt}(\|\mathbf{p}\|^{2} +\|\zeta\|_{\mathrm{H}^{1}}^{2})+\frac{3\nu}{4}\|\nabla\mathbf{p} \|^{2}+\frac{1}{2}\|\Delta\zeta\|^{2}+\frac{2}{3}\|\nabla(\Delta\zeta)\|^{2} \leq\frac{3}{4\nu}\|\mathbf{u}^{*}-\mathbf{u}_{Q}\|^{2}+\frac{1}{2}\|\varphi^ {*}-\varphi_{Q}\|^{2}\] \[+\Big{[}\frac{6}{\nu}\|\nabla\mathbf{u}^{*}\|^{2}+\frac{3}{16C\nu }\|\varphi^{*}\|_{\mathrm{L}^{4}}^{4}+\frac{1}{4C}\|\varphi^{*}\|_{\mathrm{H}^ {2}}^{2}+C\|\varphi^{*}\|_{\mathrm{H}^{4}}^{2}+2\|\varphi^{*}\|_{\mathrm{H}^{ 3}}^{2}\Big{]}\|\mathbf{p}\|^{2}\] \[+C\Big{[}1+\|\mathrm{F}^{\prime\prime}(\varphi^{*})\|_{\mathrm{L} ^{\infty}}^{2}+\|\mathbf{u}^{*}\|_{\mathrm{H}^{2}}^{2}\Big{]}\|\zeta\|_{ \mathrm{H}^{1}}^{2}. \tag{5.17}\]
Integrating the inequality (5.17) from \(t\) to \(T\) yields
\[\frac{1}{2}(\|\mathbf{p}(t)\|^{2}+\|\zeta(t)\|_{\mathrm{H}^{1}}^{ 2})+\frac{3\nu}{4}\int_{t}^{T}\|\nabla\mathbf{p}(s)\|^{2}ds+(\frac{1}{2}-C_{3} )\int_{t}^{T}\|\Delta\zeta(s)\|^{2}ds+\frac{3}{4}\int_{t}^{T}\|\nabla(\Delta \zeta(s))\|^{2}ds\] \[\leq\big{[}\|\mathbf{p}(T)\|^{2}+\|\zeta(T)\|^{2}\big{]}+\frac{3}{ 4\nu}\int_{t}^{T}\|\mathbf{u}^{*}(s)-\mathbf{u}_{Q}\|^{2}ds+\frac{1}{2}\int_{t }^{T}\|\varphi^{*}(s)-\varphi_{Q}\|^{2}ds\] \[+\int_{t}^{T}\Big{[}\frac{6}{\nu}\|\nabla\mathbf{u}^{*}(s)\|^{2}+ \frac{3}{16C\nu}\|\varphi^{*}(s)\|_{\mathrm{L}^{4}}^{4}+\frac{1}{4C}\|\varphi^{* }(s)\|_{\mathrm{H}^{2}}^{2}+C\|\varphi^{*}(s)\|_{\mathrm{H}^{4}}^{2}+2\| \varphi^{*}(s)\|_{\mathrm{H}^{3}}^{2}\Big{]}\|\mathbf{p}(s)\|^{2}ds\] \[+C\int_{t}^{T}\Big{[}1+\|\mathrm{F}^{\prime\prime}(\varphi^{*}(s)) \|_{\mathrm{L}^{\infty}}^{2}+\|\mathbf{u}^{*}(s)\|_{\mathrm{H}^{2}}^{2}\Big{]} \|\zeta(s)\|_{\mathrm{H}^{1}}^{2}ds, \tag{5.18}\]
for all \(t\in[0,T]\). Applying Gronwall's lemma to the inequality (5.18) yields
\[\frac{1}{2}(\|{\bf p}(t)\|^{2}+\|\zeta(t)\|_{\rm H^{1}}^{2})\] \[\quad\leq\Big{[}\|{\bf p}(T)\|^{2}+\|\zeta(T)\|_{\rm H^{1}}^{2}+ \frac{3}{4\nu}\int_{0}^{T}\|{\bf u}^{*}(s)-{\bf u}_{Q}\|^{2}ds+\frac{1}{2}\int_ {0}^{T}\|\varphi^{*}(s)-\varphi_{Q}\|^{2}ds\Big{]}\] \[\quad\quad\times exp\Big{[}T+\int_{0}^{T}\|{\rm F}^{\prime\prime} (\varphi^{*}(s))\|_{\rm L^{\infty}}^{2}ds+\int_{0}^{T}\|{\bf u}^{*}(s)\|_{\rm H ^{2}}^{2}ds\Big{]}\] \[\quad\quad\times exp\Big{[}\int_{0}^{T}\!\frac{6}{\nu}\|\nabla{ \bf u}^{*}(s)\|^{2}ds+\frac{3}{16C\nu}\int_{0}^{T}\|\varphi^{*}(s)\|_{\rm L^{ 4}}^{4}ds+\frac{1}{4C}\int_{0}^{T}\|\varphi^{*}(s)\|_{\rm H^{2}}^{2}ds\] \[\quad\quad\quad\quad\quad+C\int_{0}^{T}\|\varphi^{*}(s)\|_{\rm H ^{4}}^{2}ds+2\int_{0}^{T}\|\varphi^{*}(s)\|_{\rm H^{3}}^{2}ds\Big{]}. \tag{5.19}\]
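In passing from (5.18) to (5.19) we used the backward (reversed-in-time) integral form of Gronwall's lemma: if \(y\geq 0\) satisfies \(y(t)\leq A+\int_{t}^{T}k(s)\,y(s)\,ds\) for all \(t\in[0,T]\), with \(A\geq 0\) and \(k\geq 0\) integrable, then
\[y(t)\leq A\,\exp\Big{(}\int_{t}^{T}k(s)\,ds\Big{)},\qquad t\in[0,T].\]
It is applied with \(y(t)=\frac{1}{2}\big{(}\|\mathbf{p}(t)\|^{2}+\|\zeta(t)\|_{\mathrm{H}^{1}}^{2}\big{)}\), with \(A\) given by the data terms, and with \(k\) built from the bracketed coefficient functions multiplying \(\|\mathbf{p}\|^{2}\) and \(\|\zeta\|_{\mathrm{H}^{1}}^{2}\) in (5.18).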
Since \(({\bf u}^{*},\varphi^{*})\) is the strong solution of the nonlinear system (1.1), the right-hand side of (5.19) is finite. Therefore, using (5.18) and (5.19), we conclude that \({\bf p}\in{\rm L^{\infty}}(0,T;\mathbb{G}_{div})\cap{\rm L^{2}}(0,T;\mathbb{V}_{div})\) and \(\zeta\in{\rm L^{\infty}}(0,T;\mathrm{H}^{1})\cap{\rm L^{2}}(0,T;\mathrm{H}^{2}\cap\mathrm{H}^{3}).\) With this regularity of \({\bf p}\) and \(\zeta\), using the equations (5.10)\({}_{1}\) and (5.10)\({}_{2}\), we get uniform estimates on the time derivatives \({\bf p}_{t}\), \(\zeta_{t}\), so that \({\bf p}_{t}\in{\rm L^{2}}(0,T;\mathbb{V}_{div}^{\prime})\), \(\zeta_{t}\in{\rm L^{2}}(0,T;(\mathrm{H}^{1})^{\prime}).\) Hence we get a weak solution \(({\bf p},\zeta)\) as claimed. The uniqueness of weak solutions follows from the linearity of the system. Since \(({\bf p},\zeta)\) is the unique solution of the adjoint system, as in the Navier-Stokes equations (see [37]), we can determine \(\hat{P}\in{\rm L^{2}}(0,T;\mathbb{L}_{0}^{2}(\Omega))\), where \(\mathbb{L}_{0}^{2}(\Omega)=\{g\in{\rm L^{2}}(\Omega):\int_{\Omega}g(x)dx=0\}\).
**Remark 5.4**.: _Note that if \({\bf u}_{\Omega}\in\mathbb{H}_{div}^{1}\), then the solution \({\bf p}\) of the adjoint system satisfies \({\bf p}\in{\rm L^{\infty}}(0,T;\mathbb{V}_{div})\cap{\rm L^{2}}(0,T;\mathbb{H}^{2})\). This can be shown by multiplying the equation (5.10)\({}_{1}\) by \({\bf A}{\bf p}\) and proceeding as in Theorem 5.3._
**Corollary 5.5**.: _Let \({\bf u}_{\Omega}\in\mathbb{H}_{div}^{1}\) and all the assumptions of Theorem 5.3 hold. Then the Lagrange multipliers \(({\bf p}_{1},\zeta_{1})\) are uniquely determined by the equations (5.11) and (5.12) such that_
\[{\bf p}_{1}\in{\rm L^{2}}(0,T;\mathbb{H}^{-\frac{1}{2}}(\partial\Omega)),\zeta _{1}\in{\rm L^{2}}(0,T;\mathbb{H}^{\frac{1}{2}}(\partial\Omega)). \tag{5.20}\]
Proof.: Since \(({\bf p},\zeta,\hat{P})\) is the unique solution of the adjoint system (5.10), we can determine \(({\bf p}_{1},\zeta_{1})\) from the equations (5.11) and (5.12) uniquely on the boundary \(\Sigma\). We only need to show (5.20). From the Trace theorem we have \(\frac{\partial{\bf p}}{\partial{\bf n}}\in{\rm L^{2}}(0,T;\mathbb{H}^{-\frac{1}{2}}(\partial\Omega))\) and \(\hat{P}{\bf n}\in{\rm L^{2}}(0,T;\mathbb{H}^{-\frac{1}{2}}(\partial\Omega))\). Therefore from (5.11), we have \({\bf p}_{1}\in{\rm L^{2}}(0,T;\mathbb{H}^{-\frac{1}{2}}(\partial\Omega))\). Moreover, using the Gagliardo-Nirenberg interpolation inequality we obtain
\[\|{\bf p}\cdot\nabla\varphi^{*}\|_{\rm H^{1}}^{2} =\Big{(}\sum_{|\alpha|\leq 1}\int_{\Omega}|D^{\alpha}({\bf p}\cdot \nabla\varphi^{*})|^{2}\Big{)}\] \[\leq\sum_{|\alpha|\leq 1}\Big{(}\int_{\Omega}|D^{\alpha}{\bf p} \cdot\nabla\varphi^{*}|^{2}+\int_{\Omega}|{\bf p}\cdot D^{\alpha}(\nabla \varphi^{*})|^{2}\Big{)}\] \[\leq\|{\bf p}\|_{\rm W^{1,4}}^{\frac{1}{2}}\|{\bf\nabla}\varphi^{ *}\|_{\rm L^{4}}^{2}+\|{\bf p}\|_{\rm L^{4}}^{2}\|\varphi^{*}\|_{\rm W^{2,4}}^{2}\] \[\leq C\|{\bf p}\|_{\rm H^{2}}^{\frac{3}{2}}\|{\bf p}\|_{\rm L^{2}} ^{\frac{1}{2}}\|\nabla\varphi^{*}\|_{\rm L^{4}}+C\|\nabla{\bf p}\|^{2}\| \varphi^{*}\|_{\rm H^{3}}^{\frac{5}{3}}\|\varphi^{*}\|_{\rm H^{1}}^{\frac{1}{ 2}}.\]
As a consequence, using the fact \(({\bf u}^{*},\varphi^{*})\in\mathcal{V}\) we obtain
\[{\bf p}\cdot\nabla\varphi^{*}\in{\rm L^{2}}(0,T;\mathbb{H}^{1}(\Omega)).\]
Therefore, \((\nabla{\bf p})\cdot\nabla\varphi^{*}+(\nabla^{T}(\nabla\varphi^{*}))\cdot{\bf p}=\nabla({\bf p}\cdot\nabla\varphi^{*})\in{\rm L^{2}}(0,T;\mathbb{L}^{2}(\Omega))\), which implies
\[[(\nabla{\bf p})\cdot\nabla\varphi^{*}+(\nabla^{T}(\nabla\varphi^{*}))\cdot{\bf p }]{\bf n}\in{\rm L^{2}}(0,T;\mathbb{H}^{-\frac{1}{2}}(\partial\Omega)).\]
Therefore, it follows from (5.12) that
\[\frac{\partial\zeta_{1}}{\partial{\bf n}}\in{\rm L^{2}}(0,T;\mathbb{H}^{-\frac{ 1}{2}}(\partial\Omega)),\]
and hence from Trace theorem,
\[\zeta_{1}\in{\rm L^{2}}(0,T;\mathbb{H}^{\frac{1}{2}}(\partial\Omega)).\]
**Remark 5.6**.: _Using the Theorem 5.3 and Corollary 5.5, we can now conclude that the Lagrange functional \(\mathcal{L}\) defined in (5.7) is well-defined._
In the following theorem, we derive the first-order necessary optimality condition in terms of the optimal solution of \((\mathbf{OCP})\) and the adjoint variables \(\mathbf{p},\zeta\).
**Theorem 5.7**.: _Let Assumption 2.1 hold and \((\mathbf{u}_{0},\varphi_{0})\in\mathbb{V}_{div}\times\mathrm{H}^{2}\). In addition, let \((\mathbf{u}^{*},\varphi^{*},\mathbf{h}^{*})\) be the optimal triplet, i.e., \(\mathbf{h}^{*}\in\mathcal{U}_{ad}\) is the optimal boundary control of the problem \((\mathbf{OCP})\) and \((\mathbf{u}^{*},\varphi^{*})\) is the strong solution of (1.1) corresponding to \(\mathbf{h}^{*}\). Let \((\mathbf{p},\zeta)\) be the solution of the adjoint system (5.10). Then for any \(\mathbf{h}\in\mathcal{U}_{ad}\), the following variational inequality holds:_
\[\int_{\Sigma}\mathbf{h}^{*}(\mathbf{h}-\mathbf{h}^{*})\,dSdt-\int_{\Sigma} \hat{P}\mathbf{n}\cdot(\mathbf{h}-\mathbf{h}^{*})\,dSdt-\int_{\Sigma}\frac{ \partial\mathbf{p}}{\partial\mathbf{n}}\cdot(\mathbf{h}-\mathbf{h}^{*})\,dSdt \geq 0. \tag{5.21}\]
Proof.: We have from (5.9) that
\[\mathcal{L}^{\prime}_{\mathbf{h}}((\mathbf{u}^{*},\varphi^{*}),\mathbf{h}^{*},(\mathbf{p},\zeta,\hat{P},\mathbf{p}_{1},\zeta_{1}))(\mathbf{h}-\mathbf{h}^{* })\geq 0 \tag{5.22}\]
for all \(\mathbf{h}\in\mathcal{U}_{ad}\). A direct computation of (5.22) leads to the following inequality
\[\int_{\Sigma}(\mathbf{h}^{*}+\mathbf{p}_{1})(\mathbf{h}-\mathbf{h}^{*})\,dSdt \geq 0. \tag{5.23}\]
Now substituting the value of \(\mathbf{p}_{1}\) from (5.11) in (5.23) we get
\[\int_{\Sigma}\mathbf{h}^{*}(\mathbf{h}-\mathbf{h}^{*})\,dSdt-\int_{\Sigma} \hat{P}\mathbf{n}\cdot(\mathbf{h}-\mathbf{h}^{*})\,dSdt-\int_{\Sigma}\frac{ \partial\mathbf{p}}{\partial\mathbf{n}}\cdot(\mathbf{h}-\mathbf{h}^{*})\,dSdt \geq 0,\quad\mathbf{h}\in\mathcal{U}_{ad}.\]
That completes the proof.
**Remark 5.8**.: _We can also prove the optimality condition (5.21) in Theorem 5.7 from equation (5.1) in Theorem 5.1. For this, we take \((\mathbf{w},\psi)\) to be the unique weak solution of the linearised system (4.1) corresponding to the boundary data \(\eta=\mathbf{h}-\mathbf{h}^{*}\), for any \(\mathbf{h}\in\mathcal{U}_{ad}\). Then from the optimality condition (5.8), by taking \((\mathbf{u}_{1},u_{2})\) as the linearised solution \((\mathbf{w},\psi)\) and using the fact that \(\mathbf{w}|_{\Sigma}=\mathbf{h}-\mathbf{h}^{*}\), \(\frac{\partial\psi}{\partial n}|_{\Sigma}=0\), we can derive_
\[\int_{Q}(\mathbf{u}^{*}-\mathbf{u}_{Q})\cdot\mathbf{w}\,dxdt+ \int_{Q}(\varphi^{*}-\varphi_{Q})\cdot\psi\,dxdt+\int_{\Omega}(\mathbf{u}^{*} (T)-\mathbf{u}_{\Omega})\cdot\mathbf{w}(T)\,dx\] \[+\int_{\Omega}(\varphi^{*}(T)-\varphi_{\Omega})\cdot\psi(T)\,dx =\int_{\Sigma}\mathbf{p}_{1}\cdot(\mathbf{h}-\mathbf{h}^{*})\,dSdt,\quad \forall\mathbf{h}\in\mathcal{U}_{ad}.\]
_Now from the variational inequality (5.1), (5.21) follows easily._
Finally, using the variational inequality (5.21), we can interpret the optimal boundary control in terms of the adjoint variables.
**Corollary 5.9**.: _Let \(\mathbf{h}^{*}\in\mathcal{U}_{ad}\) be an optimal boundary control associated with \((\mathbf{OCP})\). Then \(\mathbf{h}^{*}\) and the adjoint variables \((\mathbf{p},\zeta)\) satisfy the projection formula_
\[\mathbf{h}^{*}=\mathcal{P}_{\mathcal{U}_{ad}}(-\hat{P}\mathbf{n}-\frac{ \partial\mathbf{p}}{\partial\mathbf{n}}), \tag{5.24}\]
_where \(\mathcal{P}_{\mathcal{U}_{ad}}\) is the orthogonal projector from \(\mathrm{L}^{2}(\Sigma)\) onto \(\mathcal{U}_{ad}\)._
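The variational inequality (5.21) and the projection formula (5.24) also indicate a simple numerical strategy for approximating \(\mathbf{h}^{*}\): a projected-gradient iteration in which one solves the state system (1.1), then the adjoint system (5.10) backward in time, assembles the reduced gradient from the boundary traces appearing in (5.21), and projects the update back onto \(\mathcal{U}_{ad}\). The sketch below is only an illustration of this structure and is not part of the analysis: `solve_state` and `solve_adjoint` are hypothetical placeholders for a CHNS solver and an adjoint solver, \(\mathcal{U}_{ad}\) is taken, for concreteness, to be a pointwise box constraint on a discretized boundary, and the step size is a tuning parameter.

```python
import numpy as np

def project_box(h, lo=-1.0, hi=1.0):
    # Pointwise orthogonal projection onto a box-constrained admissible set;
    # one concrete (hypothetical) realisation of P_{U_ad} on a discretized Sigma.
    return np.clip(h, lo, hi)

def projected_gradient(h0, solve_state, solve_adjoint, dS_dt,
                       step=0.1, max_iter=50, tol=1e-8):
    """Projected-gradient iteration for (OCP); purely illustrative.

    solve_state(h)        -> (u, phi)          hypothetical CHNS solver
    solve_adjoint(u, phi) -> (dp_dn, P_hat_n)  hypothetical adjoint solver returning
                             the traces dp/dn and P_hat*n on the boundary Sigma
    dS_dt                 : quadrature weights on the lateral boundary Sigma
    """
    h = np.asarray(h0, dtype=float).copy()
    for _ in range(max_iter):
        u, phi = solve_state(h)
        dp_dn, P_hat_n = solve_adjoint(u, phi)
        # Reduced gradient read off from the integrand of (5.21);
        # the sign of the adjoint terms follows that variational inequality.
        grad = h - P_hat_n - dp_dn
        h_new = project_box(h - step * grad)
        if np.sqrt(np.sum(dS_dt * (h_new - h) ** 2)) < tol:
            h = h_new
            break
        h = h_new
    return h
```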
|
2308.12936
|
On the regularity problem for parabolic operators and the role of
half-time derivative
|
In this paper we present the following result on regularity of solutions of
the second order parabolic equation $\partial_t u - \mbox{div} (A \nabla
u)+B\cdot \nabla u=0$ on cylindrical domains of the form $\Omega=\mathcal
O\times\mathbb R$ where $\mathcal O\subset\mathbb R^n$ is a $1$-sided chord arc
domain (it satisfies both corkscrew and Harnack chain conditions) and has
uniformly $n-1$ rectifiable boundary. Let $u$ be a solution of such PDE in
$\Omega$ and the non-tangential maximal function of its gradient in spatial
directions $\tilde{N}(\nabla u)$ belongs to $L^p(\partial\Omega)$ for some
$p>1$. Furthermore, assume that for $u|_{\partial\Omega}=f$ we have that
$D^{1/2}_tf\in L^p(\partial\Omega)$. Then both $\tilde{N}(D^{1/2}_t u)$ and
$\tilde{N}(D^{1/2}_tH_t u)$ also belong to $L^p(\partial\Omega)$, where
$D^{1/2}_t$ and $H_t$ are the half-derivative and the Hilbert transform in the
time variable, respectively. We expect this result will spur new developments
in the study of solvability of the $L^p$ parabolic Regularity problem as thanks
to it it is now possible to formulate the parabolic Regularity problem on a
large class of time-varying domains.
|
Martin Dindoš
|
2023-08-24T17:25:08Z
|
http://arxiv.org/abs/2308.12936v2
|
# On the regularity problem for parabolic operators and the role of half-time derivative
###### Abstract.
In this paper we present the following result on regularity of solutions of the second order parabolic equation \(\partial_{t}u-\operatorname{div}(A\nabla u)+B\cdot\nabla u=0\) on cylindrical domains of the form \(\Omega=\mathcal{O}\times\mathbb{R}\) where \(\mathcal{O}\subset\mathbb{R}^{n}\) is a uniform domain (it satisfies both corkscrew and Harnack chain conditions) and has uniformly \(n-1\) rectifiable boundary. Let \(u\) be a solution of such PDE in \(\Omega\) and the non-tangential maximal function of its gradient in spatial directions \(\tilde{N}(\nabla u)\) belongs to \(L^{p}(\partial\Omega)\) for some \(p>1\). Furthermore, assume that for \(u|_{\partial\Omega}=f\) we have that \(D_{t}^{1/2}f\in L^{p}(\partial\Omega)\). Then both \(\tilde{N}(D_{t}^{1/2}u)\) and \(\tilde{N}(D_{t}^{1/2}H_{t}u)\) also belong to \(L^{p}(\partial\Omega)\), where \(D_{t}^{1/2}\) and \(H_{t}\) are the half-derivative and the Hilbert transform in the time variable, respectively. We expect this result will spur new developments in the study of solvability of the \(L^{p}\) parabolic Regularity problem as thanks to it it is now possible to formulate the parabolic Regularity problem on a large class of time-varying domains.
Key words and phrases: parabolic PDEs, boundary value problems, half-derivative, non-tangential maximal function. 2020 Mathematics Subject Classification: 35K10, 35K20, 35K40, 35K51
## 1. Introduction
The study of the \(L^{p}\) boundary value problems for elliptic and parabolic PDEs has a long and interesting history. Let \(\Omega\subset\mathbb{R}^{n}\times\mathbb{R}\) be a space-time domain and consider the parabolic differential equation on \(\Omega\) of the form
\[\begin{cases}\partial_{t}u-\operatorname{div}(A\nabla u)+B\cdot\nabla u=0& \text{in }\Omega,\\ u=f&\text{on }\partial\Omega,\end{cases} \tag{1.1}\]
where \(A=[a_{ij}(X,t)]\) is an \(n\times n\) matrix satisfying the uniform ellipticity condition with \(X\in\mathbb{R}^{n}\), \(t\in\mathbb{R}\) and \((X,t)\in\Omega\). That is, there exist positive constants \(\lambda\) and \(\Lambda=\|A\|_{L^{\infty}}\) such that
\[\lambda|\xi|^{2}\leq\sum_{i,j}a_{ij}(X,t)\xi_{i}\xi_{j} \tag{1.2}\]
for almost every \((X,t)\in\Omega\) and all \(\xi\in\mathbb{R}^{n}\). We shall assume that \(|B(X,t)|\lesssim\delta^{-1}(X,t)\), where \(\delta(X,t)\) is the parabolic distance of the point \((X,t)\) to the boundary \(\partial\Omega\).
The elliptic analogue of this problem is that \(\Omega\subset\mathbb{R}^{n}\) and the corresponding PDE is \(\operatorname{div}(A\nabla u)+B\cdot\nabla u=0\) in \(\Omega\).
The most classical boundary value problem is the \(L^{p}\) Dirichlet problem (\(1<p<\infty\)) where for any given boundary data \(f\in L^{p}(\partial\Omega)\) we want to find a solution \(u:\Omega\to\mathbb{R}\)
of (1.1) such that its non-tangential maximal function \(N(u)\) belongs to \(L^{p}(\partial\Omega)\) and the estimate
\[\|N(u)\|_{L^{p}(\partial\Omega)}\leq C\|f\|_{L^{p}(\partial\Omega)} \tag{1.3}\]
holds for some \(C>0\) depending only on the PDE, the domain and \(p\). The estimate (1.3) makes perfect sense in both elliptic and parabolic settings. The non-tangential maximal function \(N\) is defined using non-tangential approach regions \(\Gamma(\cdot)\) with vertices on \(\partial\Omega\). Very minimal regularity of the boundary of the domain \(\Omega\) is required to define these. In the elliptic setting it suffices to have \(\Omega\) satisfying the corkscrew condition and an \(n-1\)-Ahlfors regular boundary (in order to define a measure on \(\partial\Omega\)). See [3, 14, 37] and many others.
There has been substantial recent progress in considering the parabolic \(L^{p}\) Dirichlet problem on domains with minimal regularity. In [27, 28] the notion of parabolic uniform rectifiability was introduced, and subsequently [4] has shown that bounded solutions of the heat equation \(\partial_{t}u-\Delta u=0\) on \(\Omega\) satisfy the usual Carleson measure estimates.
To set this into a wider context, in a series of works J. Lewis and his collaborators [21, 22, 23, 26, 33, 34], showed that the "good" parabolic graphs for parabolic singular integrals and parabolic potential theory are regular \(Lip(1,1/2)\) graphs, that is, graphs which are \(Lip(1,1/2)\) (in space-time coordinates) and which possess extra regularity in time in the sense that a (non-local) half-order time-derivative of the defining function of the graph belongs to parabolic BMO space. Papers such as [10, 12, 13, 25] show solvability of the parabolic Dirichlet problem on domains of this type under very mild assumptions on coefficients of the parabolic PDE. Recently, [5] has shown that these conditions on the domain are both necessary and sufficient on graph-like domains. The concept of parabolic uniform rectifiability is a generalisation of this concept to non-graph domains.
The \(L^{p}\) Regularity problem is again a Dirichlet problem but with more regular boundary data. The elliptic problem has seen substantial recent development, with solvability established on Lipschitz domains under a small Carleson condition on the coefficients ([15, 16, 17]) and under a large Carleson condition ([11, 37]). The latter paper actually considers domains more general than Lipschitz, namely domains that satisfy the corkscrew condition and have uniformly \(n-1\)-rectifiable boundary. For the case of coefficients independent of one (transversal) direction see [29]. The formulation of what is the \(L^{p}\) Regularity problem is non-controversial in the elliptic case. Given boundary data \(f\) having one derivative in \(L^{p}(\partial\Omega)\) we seek a solution \(u\) such that we have one extra derivative on the left-hand side of (1.3), that is
\[\|\tilde{N}(\nabla u)\|_{L^{p}(\partial\Omega)}\leq C\|\nabla_{T}f\|_{L^{p}( \partial\Omega)}. \tag{1.4}\]
Here \(\nabla_{T}f\) is the tangential derivative of \(f\) (which is a well defined object on Lipschitz domains). When working on domains as in [37] we replace the right-hand side by the norm in Hajlasz-Sobolev space \(\dot{M}^{1,p}(\partial\Omega)\) (see below for definition).
In order to properly formulate the Regularity problem for parabolic PDEs we first note that, due to the natural parabolic scaling, \(\nabla u\) scales like the half-order time derivative \(D_{t}^{1/2}u\), since the second derivative \(\nabla^{2}u\) behaves like \(\partial_{t}u\). However, \(D_{t}^{1/2}u\) is a non-local object, which requires some thought on how to deal with it and what conditions to impose.
The initial formulation of the Regularity problem is due to Brown [6, 7] who formulated it on Lipschitz cylinders, that is on domains of the form \(\Omega=\mathcal{O}\times[0,T]\) where \(\mathcal{O}\subset\mathbb{R}^{n}\) is a bounded Lipschitz domain and time variable runs over a bounded interval \([0,T]\). Subsequent authors such as Mitrea [35], Nystrom [40] or Castro-Rodriguez-Lopez-Staubach [8] have dealt with this issue in a similar way.
What Brown has done was to consider solutions on \(\Omega\) with initial data \(u=0\) at \(t=0\) and naturally extending \(u\) by zero for all \(t<0\). Then a definition of half-derivative using fractional integrals \(I_{\sigma}\) for functions \(f\in C^{\infty}(-\infty,T)\) is used to give meaning to \(D_{t}^{1/2}f\) and \(D_{t}^{1/2}u\).
A variant of this approach (which we present here) is to consider the parabolic PDE on the infinite cylinder \(\Omega=\mathcal{O}\times\mathbb{R}\). This can always be achieved by extending the coefficients, setting \(A(X,t)=A(X,T)\) for \(t>T\) and \(A(X,t)=A(X,0)\) for \(t<0\). Since the parabolic PDE has a defined direction of time, the solution on the infinite cylinder \(\Omega=\mathcal{O}\times\mathbb{R}\) with boundary data \(u\big{|}_{\partial\Omega}=0\) for \(t<0\) will coincide with the solution on the finite cylinder \(\mathcal{O}\times[0,T]\) with zero initial data.
Having a well-defined half-derivative the authors [7, 8, 35, 40] then define the parabolic Regularity problem by asking for
\[\|\tilde{N}(\nabla u)\|_{L^{p}(\partial\Omega)}+\|\tilde{N}(D_{t}^{1/2}u)\|_{ L^{p}(\partial\Omega)}\leq C(\|\nabla_{T}f\|_{L^{p}(\partial\Omega)}+\|D_{t}^{1/2}f \|_{L^{p}(\partial\Omega)}). \tag{1.5}\]
Some authors also ask for control of \(\|\tilde{N}(D_{t}^{1/2}H_{t}u)\|_{L^{p}}\), where \(H_{t}\) is the one-dimensional Hilbert transform in the \(t\)-variable.
This approach is reasonable if the domain is not time-varying, i.e. of the form \(\mathcal{O}\times(t_{0},t_{1})\), but it runs immediately into an obvious issue when this is no longer true, as any reasonable definition of the half-derivative requires \(u(X,t)\) to be defined for all \(t\in\mathbb{R}\) or at least on a half-line.
As an illustration, consider the simplest possible case of a domain of the form \(\Omega=\{(x,x_{n},t):\,x\in\mathbb{R}^{n-1},\,x_{n}>\psi(x,t),\,t\in\mathbb{R}\}\), for some continuous function \(\psi:\mathbb{R}^{n-1}\times\mathbb{R}\to\mathbb{R}\). Unless \(\psi(x,t)=\psi(x)\), i.e. the function is time-independent the issue with defining half-derivative will arise (at least for points near the boundary).
Recalling the works of J. Lewis and his collaborators mentioned above, under the condition
\[\|\nabla\psi\|_{L^{\infty}}<\infty\qquad\text{and}\qquad D_{t}^{1/2}\psi\in BMO (\mathbb{R}^{n-1}\times\mathbb{R}). \tag{1.6}\]
there is a natural change of variables in the parabolic setting (certainly motivated by the analogous change of variables in the elliptic setting, as in for example [39]) \(\rho:\Omega\to U:=\mathbb{R}_{+}^{n}\times\mathbb{R}\), where \(U\) is a domain that is constant in time. By transferring the parabolic PDE from \(\Omega\) to \(U\) we are now in the previous situation where the half time-derivative can be defined without an issue. The map \(\rho\) is a bijection that also preserves ellipticity and has comparable non-tangential cones and norms derived from them. However, we pay a certain price, namely that the new PDE on \(U\) has a drift (first-order term) of the form \(B\cdot\nabla u\). For more details see for example [25]. This is also the reason why our PDE (1.1) does include the drift term \(B\cdot\nabla u\), so that the new PDE on \(U\) falls into the framework considered here.
We therefore ask in this paper the following key question:
_Is the presence of the term \(\|\tilde{N}(D_{t}^{1/2}u)\|_{L^{p}(\partial\Omega)}\) and perhaps also_
\[\|\tilde{N}(D_{t}^{1/2}H_{t}u)\|_{L^{p}(\partial\Omega)}\]
_needed in (1.5)?_
If not, then asking for control only of \(\|\tilde{N}(\nabla u)\|_{L^{p}(\partial\Omega)}\) opens significant new avenues of research. It might be possible to consider domains \(\Omega\) that satisfy parabolic uniform rectifiability as in [27, 28] but are not necessarily graph or locally graph domains.
Our main result is that this is indeed the case. That is, if \(\Omega=\mathcal{O}\times\mathbb{R}\), so that we can define the operators \(D_{t}^{1/2}\) and \(H_{t}\), and \(u:\Omega\to\mathbb{R}\) is a solution, then bounds for \(\|\tilde{N}(D_{t}^{1/2}u)\|_{L^{p}(\partial\Omega)}\) and \(\|\tilde{N}(D_{t}^{1/2}H_{t}u)\|_{L^{p}(\partial\Omega)}\) follow from bounds for \(\|\tilde{N}(\nabla u)\|_{L^{p}(\partial\Omega)}\) and \(\|D_{t}^{1/2}f\|_{L^{p}(\partial\Omega)}\).
**Theorem 1.1**.: _Fix \(1<p<\infty\) and consider \(\Omega=\mathcal{O}\times\mathbb{R}\) such that \(\mathcal{O}\subset\mathbb{R}^{n}\) is a uniform domain and has uniformly \(n-1\)-rectifiable boundary. Suppose that \(u:\Omega\to\mathbb{R}\) is a function that solves1_
Footnote 1: \(u\) needs to be a reinforced weak solution (defined below) on a set \(\Omega\).
\[\begin{cases}Lu&=\partial_{t}u-\operatorname{div}(A\nabla u)+B\cdot\nabla u=0 \quad\text{ in }\Omega\\ u|_{\partial\Omega}&=f\quad\text{ on }\partial\Omega.\end{cases} \tag{1.7}\]
_Here \(A:\Omega\to M_{n\times n}(\mathbb{R})\) is a bounded uniformly elliptic matrix-valued function and \(B:\Omega\to\mathbb{R}^{n}\) is a vector such that \(|B|\lesssim\delta(X,t)^{-1}\). The solution \(u\) is understood to attain its boundary data \(f\) in the sense of non-tangential limits almost everywhere._
_Then there exists a constant \(C(L,\Omega,p)>0\) (independent of \(u\)) such that if \(\tilde{N}(\nabla u)\) and \(D_{t}^{1/2}f\) belong to \(L^{p}(\partial\Omega)\) we have the estimate:_
\[\|\tilde{N}(D_{t}^{1/2}u)\|_{L^{p}(\partial\Omega)}+\|\tilde{N}(D_{t}^{1/2}H_ {t}u)\|_{L^{p}(\partial\Omega)}\leq C(\|\tilde{N}(\nabla u)\|_{L^{p}}+\|D_{t }^{1/2}f\|_{L^{p}}). \tag{1.8}\]
_Remark 1._ Although not stated here this result also applies to parabolic systems. There are only two places in our argument where the PDE is actually used and for any parabolic system the argument given here also applies.
_Remark 2._ In the definition above we are very liberal about the notion of a solution. Clearly, assuming some \(L^{p}\) norm of \(\tilde{N}(\nabla u)\) implies that in the interior of \(\Omega\) we have a well-defined gradient \(\nabla u\in L^{2}_{loc}(\Omega)\). We further need that the half-derivative \(D_{t}^{1/2}\) can be moved from \(\partial_{t}\) onto a test function. Such solutions (called reinforced weak solutions) can be constructed using a variety of techniques, for example via the "hidden coercivity method" as in [1, 2] or by the usual construction of the parabolic measure. The class \(\dot{\mathsf{E}}_{\mathrm{loc}}(\Omega)\) is the right one so that objects such as \(\tilde{N}(D_{t}^{1/2}u)\), \(\tilde{N}(D_{t}^{1/2}H_{t}u)\) can be defined; without them the left-hand side of (1.8) does not make any sense.
Theorem 1.1 leads us to give the following definition of the \(L^{p}\) Regularity problem.
**Definition 1.1**.: _Let \(\Omega\subset\mathbb{R}^{n}\times\mathbb{R}\) be space-time domain that is parabolically uniformly rectifiable and consider again the parabolic PDE problem (1.7). Fix \(1<p<\infty\) and assume that for \(f:\partial\Omega\to\mathbb{R}\) we have a well-defined notion of one spatial derivative, for example using the Hajlasz-Sobolev norm:_
\[|f(X,t)-f(Y,t)|\leq|X-Y|\big{(}g(X,t)+g(Y,t)\big{)}, \tag{1.9}\]
_for a.e. \(t\in\mathbb{R}\) and all \((X,t)\) and \((Y,t)\) on \(\partial\Omega\), except perhaps a set of surface measure zero (on \(\partial\Omega\)). Here \(g\in L^{p}(\partial\Omega)\). Similarly, assume a well-defined notion of fractional half-regularity of \(f\) in the \(t\)-variable and denote the corresponding space of functions by \(\dot{\mathcal{X}}_{1,1/2}^{p}\). Assume that \(\dot{\mathcal{X}}_{1,1/2}^{p}\) coincides with the usual space \(\dot{L}_{1,1/2}^{p}(\partial\Omega)\) when \(\Omega=\mathbb{R}_{+}^{n}\times\mathbb{R}\)._
_We say that the \(L^{p}\) Regularity problem for the parabolic PDE problem (1.7) is solvable if there exists a linear functional \(T:f\mapsto u\) such that \(u:\Omega\to\mathbb{R}\) solves the equation (1.7), attains on \(\partial\Omega\) datum \(f\) non-tangentially almost everywhere and for some \(C=C(L,\Omega,p)>0\) we have:_
\[\|\widetilde{N}(\nabla u)\|_{L^{p}(\partial\Omega)}\leq C\|f\|_{\dot{ \mathcal{X}}_{1,1/2}^{p}(\partial\Omega)}. \tag{1.10}\]
**Corollary 1.2**.: _By Theorem 1.1 the above notion of \(L^{p}\) solvability of the Regularity problem coincides with the definition of the Regularity problem as given previously in [6, 7, 8, 35, 40] on Lipschitz cylinders \(\Omega=\mathcal{O}\times\mathbb{R}\) or on time-varying domains \(\Omega=\{(x,x_{n},t):\,x\in\mathbb{R}^{n-1},\,x_{n}>\psi(x,t),\,t\in\mathbb{R}\}\) for \(\psi\) satisfying the usual condition (1.6)._
## 2. Definitions
### Parabolic Sobolev Space on \(\partial\Omega\)
When considering the appropriate function space for our boundary data we want it to have the same homogeneity as the PDE. From now on we assume our domain is of the form \(\Omega=\mathcal{O}\times\mathbb{R}\) as we want to consider the usual \(D_{t}^{1/2}\) that requires integration over all \(t\in\mathbb{R}\). We start with the case when \(\mathcal{O}=\mathbb{R}_{+}^{n}\).
As a rule of thumb, one derivative in time behaves like two derivatives in space, so if we impose data with one derivative in the spatial variables, the correct order of the time derivative should be \(1/2\). This problem has been studied previously in [23, 24, 40], who have followed [18] in defining the homogeneous parabolic Sobolev space \(\dot{L}_{1,1/2}^{p}\) in the following way. We start with the simplest case when \(\partial\Omega=\mathbb{R}^{n}\).
**Definition 2.1**.: _The homogeneous parabolic Sobolev space \(\dot{L}_{1,1/2}^{p}(\mathbb{R}^{n})\), for \(1<p<\infty\), is defined to consist of an equivalence class of functions \(f\) with distributional derivatives satisfying \(\|f\|_{\dot{L}_{1,1/2}^{p}(\mathbb{R}^{n})}<\infty\), where_
\[\|f\|_{\dot{L}_{1,1/2}^{p}(\mathbb{R}^{n})}=\|\mathbb{D}f\|_{L^{p}(\mathbb{R} ^{n})} \tag{2.1}\]
_and_
\[(\mathbb{D}f)\widehat{\ }(\xi,\tau):=\|(\xi,\tau)\|\widehat{f}(\xi,\tau). \tag{2.2}\]
_Here \(\|(\xi,\tau)\|\) on \(\mathbb{R}^{n-1}\times\mathbb{R}\) is defined as the unique positive solution \(\rho\) to the following equation_
\[\frac{|\xi|^{2}}{\rho^{2}}+\frac{\tau^{2}}{\rho^{4}}=1. \tag{2.3}\]
_One can easily show that \(\|(\xi,\tau)\|\sim|\xi|+|\tau|^{1/2}\) and that this norm scales correctly according to the parabolic nature of the PDE._
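For completeness, equation (2.3) can be solved explicitly: multiplying it by \(\rho^{4}\) gives a quadratic equation in \(\rho^{2}\), namely \(\rho^{4}-|\xi|^{2}\rho^{2}-\tau^{2}=0\), whose unique positive root is
\[\|(\xi,\tau)\|^{2}=\rho^{2}=\frac{|\xi|^{2}+\sqrt{|\xi|^{4}+4\tau^{2}}}{2}.\]
From this formula both the equivalence \(\|(\xi,\tau)\|\sim|\xi|+|\tau|^{1/2}\) and the parabolic scaling \(\|(\lambda\xi,\lambda^{2}\tau)\|=\lambda\|(\xi,\tau)\|\) are immediate.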
_Remark_.: In the definition above we consider \((x,t)\in\mathbb{R}^{n}=\mathbb{R}^{n-1}\times\mathbb{R}\).
In addition, following [20], we define a parabolic half-order time derivative by
\[(\mathbb{D}_{n}f)\widehat{\ }(\xi,\tau):=\frac{\tau}{\|(\xi,\tau)\|}\widehat{f}( \xi,\tau). \tag{2.4}\]
If \(0<\alpha\leq 2\), then for \(g\in C_{c}^{\infty}(\mathbb{R})\) the _one-dimensional fractional differentiation operators_\(D_{\alpha}\) are defined by
\[(D_{\alpha}g)\widehat{\ }(\tau):=|\tau|^{\alpha}\widehat{g}(\tau). \tag{2.5}\]
It is also well known that if \(0<\alpha<1\) then
\[D_{\alpha}g(s)=c\int_{\mathbb{R}}\frac{g(s)-g(\tau)}{|s-\tau|^{1+\alpha}}\, \mathrm{d}\tau \tag{2.6}\]
whenever \(s\in\mathbb{R}\). If \(h(x,t)\in C_{c}^{\infty}(\mathbb{R}^{n})\) then by \(D_{\alpha}^{t}h:\mathbb{R}^{n}\to\mathbb{R}\) we mean the function \(D_{\alpha}h(x,\cdot)\) defined a.e. for each fixed \(x\in\mathbb{R}^{n-1}\). We now establish connections between \(\mathbb{D}\), \(\mathbb{D}_{n}\) and \(D_{1/2}^{t}\). By [9, 19, 20] we have that
\[\|\mathbb{D}f\|_{L^{p}(\mathbb{R}^{n})}\sim\|\mathbb{D}_{n}f\|_{L^{p}(\mathbb{ R}^{n})}+\|\nabla f\|_{L^{p}(\mathbb{R}^{n})}\sim\|D_{1/2}^{t}f\|_{L^{p}( \mathbb{R}^{n})}+\|\nabla f\|_{L^{p}(\mathbb{R}^{n})}, \tag{2.7}\]
for all \(1<p<\infty\). Here \(\nabla\) denotes the usual gradient in the variables \(x\in\mathbb{R}^{n-1}\).
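Since \(D_{1/2}\) is the Fourier multiplier with symbol \(|\tau|^{1/2}\) (see (2.5)), it can be evaluated numerically on a smooth, rapidly decaying sample in a few lines. The sketch below, based on the discrete Fourier transform and ignoring periodization error, is included only to illustrate the multiplier definition; it plays no role in the arguments of the paper.

```python
import numpy as np

def half_time_derivative(g, dt):
    """Approximate D_{1/2} g via the Fourier multiplier |tau|^{1/2} of (2.5).
    g is sampled on a uniform grid with spacing dt; periodization error is ignored."""
    n = g.size
    tau = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)   # angular frequency variable
    return np.real(np.fft.ifft(np.abs(tau) ** 0.5 * np.fft.fft(g)))

# Example: the half-order time derivative of a Gaussian.
t = np.linspace(-20.0, 20.0, 4096)
g = np.exp(-t ** 2)
Dg = half_time_derivative(g, t[1] - t[0])
```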
Similarly, if \(\Omega=\mathcal{O}\times\mathbb{R}\), where \(\mathcal{O}\subset\mathbb{R}^{n}\) is a bounded or unbounded Lipschitz domain, we can define both homogeneous and inhomogeneous versions of the space \(\dot{L}_{1,1/2}^{p}(\partial\Omega)\), \(L_{1,1/2}^{p}(\partial\Omega)\) via partition of unity on \(\partial\mathcal{O}\) and projection of local Lipschitz graphs describing \(\partial\mathcal{O}\times\mathbb{R}\) onto \(\mathbb{R}^{n}\). We omit the details, see [9] for more.
Before considering more general \(\mathcal{O}\) let us recall some definitions.
**Definition 2.2** (**Corkscrew condition)**.: _[_30_]__. A domain \(\mathcal{O}\subset\mathbb{R}^{n}\) satisfies the Corkscrew condition if for some uniform constant \(c>0\) and for every surface ball \(\Delta:=\Delta(q,r),\) with \(q\in\partial\mathcal{O}\) and \(0<r<\operatorname{diam}(\partial\mathcal{O})\), there is a ball \(B(X_{\Delta},cr)\subset B(q,r)\cap\mathcal{O}\). The point \(X_{\Delta}\subset\mathcal{O}\) is called a corkscrew point relative to \(\Delta,\) (or, relative to \(B\)). We note that we may allow \(r<C\operatorname{diam}(\partial\mathcal{O})\) for any fixed \(C\), simply by adjusting the constant \(c.\)_
**Definition 2.3** (**Harnack Chain condition)**.: _[_30_]__. Let \(\delta(X)\) denote the distance of \(X\in\mathcal{O}\) to \(\partial\mathcal{O}\). A domain \(\mathcal{O}\) satisfies the Harnack Chain condition if there is a uniform constant \(C\) such that for every \(\rho>0,\)\(\Lambda\geq 1\), and every pair of points \(X,X^{\prime}\in\mathcal{O}\) with \(\delta(X),\)\(\delta(X^{\prime})\geq\rho\) and \(|X-X^{\prime}|<\Lambda\,\rho\), there is a chain of open balls \(B_{1},\ldots,B_{N}\subset\mathcal{O}\), \(N\leq C(\Lambda)\), with \(X\in B_{1},\)\(X^{\prime}\in B_{N},\)\(B_{k}\cap B_{k+1}\neq\emptyset\) and \(C^{-1}\operatorname{diam}(B_{k})\leq\operatorname{dist}(B_{k},\partial\mathcal{O})\leq C\operatorname{diam}(B_{k}).\) The chain of balls is called a Harnack Chain._
**Definition 2.4** (**Uniform domains)**.: _If \(\mathcal{O}\) satisfies both the Corkscrew and Harnack Chain conditions, then \(\mathcal{O}\) is a uniform domain._
We omit the precise definition of a uniformly rectifiable \(n-1\)-dimensional boundary as the details are unimportant for the rest of our exposition. It essentially means that \(\partial\mathcal{O}\) contains "large pieces" of \(n-1\)-dimensional Lipschitz graphs in a certain uniform way.
Thus with \(\mathcal{O}\) as stated, we may consider the Hajlasz-Sobolev space \(M^{1,p}(\partial O)\) consisting of functions \(f\in L^{p}_{loc}(\partial\mathcal{O})\) such that for some \(g\in L^{p}(\partial\mathcal{O})\)
\[|f(x)-f(y)|\leq|x-y|[g(x)+g(y)]\qquad\text{for almost every $x,y\in\partial \mathcal{O}$}. \tag{2.8}\]
The norm is given as the infimum of \(L^{p}\) norms of all such \(g\). Given this for a space-time function \(f:\partial\mathcal{O}\times\mathbb{R}\to\mathbb{R}\) we define \(M^{1,p}_{x}(\partial\mathcal{O}\times\mathbb{R})\) to consist of functions \(f\) such that for some \(g\in L^{p}(\partial\mathcal{O}\times\mathbb{R})\) and a.e. \(t\in\mathbb{R}\) we have
\[|f(x,t)-f(y,t)|\leq|x-y|[g(x,t)+g(y,t)]\qquad\text{for almost every $x,y\in \partial\mathcal{O}$}, \tag{2.9}\]
with \(\|f\|_{M^{1,p}_{x}}:=\inf\|g\|_{L^{p}}\) for all such function \(g\). We shall then define
\[\dot{L}^{p}_{1,1/2}(\partial\mathcal{O}\times\mathbb{R})=\{f\in L^{p}_{loc}( \partial\mathcal{O}\times\mathbb{R}):\,\|f\|_{M^{1,p}_{x}}+\|D^{1/2}_{t}f\|_{ L^{p}}<\infty\}.\]
By Corollary 1.2 of [32] we have \(\dot{F}^{1}_{p,2}=\dot{M}^{1,p}\), where \(\dot{F}^{s}_{p,q}\) denotes the usual range of homogeneous Triebel-Lizorkin spaces, and hence this definition coincides with the definition of \(\dot{L}^{p}_{1,1/2}\) given previously. Thus we have our candidate space for Definition 1.1.
### Reinforced weak solutions
We recall the paper [2] that neatly presents the concept of reinforced weak solutions for the parabolic problem we are interested in.
If \(\mathcal{O}\) is an open subset of \(\mathbb{R}^{n}\), we let \(H^{1}(\mathcal{O})=W^{1,2}(\mathcal{O})\) be the standard Sobolev space of real valued functions \(v\) defined on \(\mathcal{O}\), such that \(v\) and \(\nabla v\) are in \(L^{2}(\mathcal{O};\mathbb{R})\) and \(L^{2}(\mathcal{O};\mathbb{R}^{n})\), respectively. A subscripted 'loc' will indicate that these conditions hold locally.
We shall say that \(u\) is a _reinforced weak solution_ of \(\partial_{t}u-\mathrm{div}(A\nabla u)=0\) on \(\Omega=\mathcal{O}\times\mathbb{R}\) if
\[u\in\dot{\mathsf{E}}_{\mathrm{loc}}(\Omega):=H^{1/2}_{\mathrm{loc}}(\mathbb{R };L^{2}_{\mathrm{loc}}(\mathcal{O}))\cap L^{2}_{\mathrm{loc}}(\mathbb{R};W^{1, 2}_{\mathrm{loc}}(\mathcal{O}))\]
and if for all \(\phi,\psi\in C^{\infty}_{0}(\Omega)\),
\[\iint_{\Omega}\left[A\nabla u\cdot\nabla(\phi\psi)+H_{t}D^{1/2}_{t}(u\psi) \cdot D^{1/2}_{t}\phi+H_{t}D^{1/2}_{t}(u\phi)\cdot D^{1/2}_{t}\psi\right]\ \mathrm{d}X\,\mathrm{d}t=0. \tag{2.10}\]
Here, \(D^{1/2}_{t}\) is the half-order derivative and \(H_{t}\) the Hilbert transform with respect to the \(t\) variable, designed in such a way that \(\partial_{t}=D^{1/2}_{t}H_{t}D^{1/2}_{t}\). The space \(\dot{H}^{1/2}(\mathbb{R})\) is the homogeneous Sobolev space of order \(1/2\) (it is the completion of \(C^{\infty}_{0}(\mathbb{R})\) for the norm \(\|D^{1/2}_{t}(\cdot)\|_{2}\) and, modulo constants, it embeds into the space \(\mathcal{S}^{\prime}(\mathbb{R})/\mathbb{C}\) of tempered distributions modulo constants). By \(H^{1/2}_{\mathrm{loc}}(\mathbb{R})\) we mean functions \(u\) such that \(u\phi\in\dot{H}^{1/2}(\mathbb{R})\) for all \(\phi\in C^{\infty}_{0}(\Omega)\). Here we have departed slightly from [2] as in their definition of \(\dot{\mathsf{E}}_{\mathrm{loc}}(\Omega)\) the space \(H^{1/2}_{\mathrm{loc}}\) is replaced by \(\dot{H}^{1/2}\) and \(\psi\equiv 1\). For such \(u\) (2.10) simplifies to
\[\iint_{\Omega}\left[A\nabla u\cdot\nabla\phi+H_{t}D^{1/2}_{t}u\cdot D^{1/2}_{ t}\phi\right]\ \mathrm{d}X\,\mathrm{d}t=0.\]
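The factorisation \(\partial_{t}=D^{1/2}_{t}H_{t}D^{1/2}_{t}\) mentioned above can be verified on the Fourier side; with the sign convention \((H_{t}g)\widehat{\ }(\tau)=i\,\operatorname{sgn}(\tau)\,\widehat{g}(\tau)\) (the opposite convention merely changes \(H_{t}\) by a sign) we have
\[\big{(}D^{1/2}_{t}H_{t}D^{1/2}_{t}g\big{)}\widehat{\ }(\tau)=|\tau|^{1/2}\cdot i\operatorname{sgn}(\tau)\cdot|\tau|^{1/2}\,\widehat{g}(\tau)=i\tau\,\widehat{g}(\tau)=(\partial_{t}g)\widehat{\ }(\tau).\]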
Clearly any \(u\in\dot{H}^{1/2}(\mathbb{R})\) also belongs to \(H^{1/2}_{\rm loc}(\mathbb{R})\) (see Proposition 3.1), and hence our notion of a reinforced weak solution is somewhat weaker than in [2]. This can be seen by taking a sequence of functions \(\psi_{n}\in C^{\infty}_{0}(\Omega)\) for which \(\psi_{n}\to 1\) as \(n\to\infty\).
Our definition has the advantage that by taking a cut-off of the function \(u\) we might potentially improve the decay of \(D^{1/2}_{t}u\) at infinity. In particular, this matters when considering \(u\big{|}_{\partial\Omega}\in L^{p}_{1,1/2}(\partial\Omega)\) for \(p>2\), as such \(u\) might not decay fast enough to have \(u\in\dot{H}^{1/2}(\mathbb{R};L^{2}_{\rm loc}(\mathcal{O}))\).
At this point we remark that for any \(u\in\dot{H}^{1/2}(\mathbb{R})\) and \(\phi,\psi\in C^{\infty}_{0}(\mathbb{R})\) the formula
\[\int_{\mathbb{R}}\left[H_{t}D^{1/2}_{t}(u\psi)\cdot D^{1/2}_{t}\phi+H_{t}D^{1/ 2}_{t}(u\phi)\cdot D^{1/2}_{t}\psi\right]\,{\rm d}t=-\int_{\mathbb{R}}u\cdot \partial_{t}(\phi\psi)\,{\rm d}t\]
holds, where on the right-hand side we use the duality form between \(\dot{H}^{1/2}(\mathbb{R})\) and its dual \(\dot{H}^{-1/2}(\mathbb{R})\) extending the complex inner product of \(L^{2}(\mathbb{R})\). It follows that a reinforced weak solution is a weak solution in the usual sense on \(\Omega\) as it satisfies \(u\in L^{2}_{\rm loc}(\mathbb{R};W^{1,2}_{\rm loc}(\mathcal{O}))\) and for all \(\phi\in C^{\infty}_{0}(\Omega)\),
\[\iint_{\Omega}A\nabla u\cdot\nabla\phi\,{\rm d}X\,{\rm d}t-\iint_{\Omega}u \cdot\partial_{t}\phi\,{\rm d}X\,{\rm d}t=0.\]
We get this by taking \(\psi=1\) on the set where \(\phi\) is supported. This implies \(\partial_{t}u\in L^{2}_{\rm loc}(\mathbb{R};W^{-1,2}_{\rm loc}(\mathcal{O}))\). Conversely, any weak solution \(u\) in \(H^{1/2}_{\rm loc}(\mathbb{R};L^{2}_{\rm loc}(\mathcal{O}))\) is a reinforced weak solution.
### Non-tangential approach regions and Non-tangential maximal function
Our notion of solvability of the Dirichlet problem requires a few more definitions. In the first place, we need to introduce a slightly non-standard notion of a nontangential approach region - such regions are typically referred to as "cones" when the domain is Lipschitz regular, and "corkscrew regions" for more general domains that satisfy the corkscrew condition given above. We first define the non-tangential approach region on the set \(\mathcal{O}\).
In the following, the parameter \(a\) is positive and will be referred to as the "aperture". A standard corkscrew region associated with a boundary point \(q\in\partial\mathcal{O}\) is defined ([30]) to be
\[\gamma_{a}(q)=\{X\in\mathcal{O}:|X-q|<(1+a)\delta(X)\}\]
for some \(a>0\), and nontangential maximal functions and square functions are defined in the literature with respect to these regions. Here \(\delta(X)=dist(X,\partial\mathcal{O})\) is the Euclidean distance of a point \(X\) to the boundary. Here and further below we follow the convention that lower case letters (such as \(q\)) denote points on \(\partial\mathcal{O}\), while upper case letters (such as \(X,Y\)) denote points inside \(\mathcal{O}\).
We modify this definition in order to achieve a certain geometric property which may not hold for the \(\gamma_{a}(q)\) in general. If the domain \(\mathcal{O}\) is Lipschitz we may stop here.
**Definition 2.5**.: _For \(Y\in\mathcal{O}\), let \(S_{a}(Y):=\{q\in\partial\mathcal{O}:Y\in\gamma_{a}(q)\}.\) Set_
\[\tilde{S}_{a}(Y):=\bigcup_{q\in S_{a}(Y)}\{q^{\prime}\in\partial\mathcal{O}:|q -q^{\prime}|<a\delta(Y)\}.\]
_Define_
\[\widetilde{\gamma}_{a}(q):=\{Y\in\mathcal{O}:q\in\tilde{S}_{a}(Y)\}.\]
It was shown in [14] that these "novel" corkscrew regions have the property that for any \(q\in\partial\mathcal{O}\), \(\gamma_{a}(q)\subset\widetilde{\gamma}_{a}(q)\subset\gamma_{2a}(q)\). Thus our \(\widetilde{\gamma}_{a}(q)\) is sandwiched in between two standard corkscrew regions and is thus itself a corkscrew region.
We take advantage of the product structure of our domain \(\Omega\) and for any point \((q,\tau)\in\partial\Omega=\partial\mathcal{O}\times\mathbb{R}\) and define the non-tangential cones \(\Gamma_{a}(q,\tau)\subset\Omega\) by simply taking
\[\Gamma_{a}(q,\tau)=\{(X,t)\in\Omega:X\in\widetilde{\gamma}_{a}(q)\text{ and }|t-\tau|<\delta(X)^{2}\}.\]
This is equivalent to more standard non-tangential parabolic cones defined as
\[\widetilde{\Gamma}_{a}(q,\tau)=\{(X,t)\in\Omega:d((X,t),(q,\tau))<(1+a)\delta( X,t)\},\]
where \(d((X,t),(q,\tau))\) is the parabolic distance function
\[d((X,t),(q,\tau))=\left(|X-q|^{2}+|t-\tau|\right)^{1/2},\]
and \(\delta(X,t)\) is the parabolic distance to the boundary
\[\delta(X,t)=\inf_{(q,\tau)\in\partial\Omega}d((X,t),(q,\tau)).\]
In our case, thanks to the fact that the domain is an infinite cylinder, we always have \(\delta(X,t)=\delta(X)\). It is a simple exercise to show that for any \(a>0\) there exists \(b=b(a)>0\) such that
\[\widetilde{\Gamma}_{1/b}(q,\tau)\subset\Gamma_{a}(q,\tau)\subset\widetilde{ \Gamma}_{b}(q,\tau),\]
for all boundary points \((q,\tau)\). Typically we suppress the subscript \(a\) and consider it fixed and only write \(\Gamma(\cdot)\). The \(L^{p}\) norms of the non-tangential maximal function defined below are comparable for different values of apertures \(a\).
**Definition 2.6**.: _For \(\Omega\) as above, the non-tangential maximal function \(\tilde{N}_{p,a}\) or just \(\tilde{N}_{p}\) is defined using \(L^{p}\) averages over interior parabolic balls in the domain \(\Omega\). Specifically, given \(w\in L^{p}_{loc}(\Omega)\) we set for \((q,\tau)\in\partial\Omega\):_
\[\tilde{N}_{p,a}(w)(q,\tau):=\sup_{(X,t)\in\Gamma_{a}(q,\tau)}w_{p}(X,t), \tag{2.11}\]
_where, at each \((X,t)\in\Omega\),_
\[w_{p}(X,t):=\left(\int\!\!\!\!\!\!\int_{B_{\delta(X,t)/2}(X,t)}|w(Y,s)|^{p}\, dY\,ds\right)^{1/p}. \tag{2.12}\]
_When we omit the subscript \(p\) we always understand that \(p=2\), hence \(\tilde{N}=\tilde{N}_{2}=\tilde{N}_{2,a}\) for some fixed \(a>0\). If we consider a different value than \(p=2\) it will be explicitly stated. We note that the radius \(\delta(X,t)/2\) of the parabolic ball in (2.12) is chosen for convenience only; choosing for example \(\delta(X,t)/8\) leads to a different non-tangential maximal function, but with \(L^{p}\) norm comparable to the original one, as can be seen by considering level sets._
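To make Definition 2.6 concrete, the brute-force sketch below evaluates a discretized version of \(\tilde{N}=\tilde{N}_{2}\) in the simplest model geometry \(\Omega=(0,\infty)\times\mathbb{R}\) (one spatial variable, boundary parametrised by \(\tau\)), using the comparable parabolic-distance cones \(\widetilde{\Gamma}_{a}\). It is only an illustration of (2.11)-(2.12): grid truncation, boundary effects and efficiency are ignored, and all names are ad hoc.

```python
import numpy as np

def ntmf_halfspace(w, x, t, tau_grid, a=1.0):
    """Brute-force sketch of the L^2-averaged non-tangential maximal function
    (2.11)-(2.12) in the model case Omega = (0, infty) x R (one spatial
    variable x_n = x; the boundary {0} x R is parametrised by tau).

    w        : samples of a function on Omega, shape (len(x), len(t))
    x, t     : 1d grids (x > 0) for the spatial and the time variable
    tau_grid : boundary time points at which N~(w) is evaluated
    a        : aperture of the parabolic cones
    """
    w = np.asarray(w, dtype=float)
    X, T = np.meshgrid(x, t, indexing="ij")
    delta = X                                    # distance to the boundary {x = 0}
    # Discrete L^2 averages over parabolic balls |y-x| < delta/2, |s-t| < delta^2/4.
    w2 = np.empty_like(w)
    for i in range(len(x)):
        for j in range(len(t)):
            ball = (np.abs(X - X[i, j]) < delta[i, j] / 2) & \
                   (np.abs(T - T[i, j]) < delta[i, j] ** 2 / 4)
            w2[i, j] = np.sqrt(np.mean(w[ball] ** 2))
    # Supremum of w2 over the parabolic-distance cone with vertex (0, tau).
    out = np.zeros(len(tau_grid))
    for k, tau in enumerate(tau_grid):
        cone = np.sqrt(X ** 2 + np.abs(T - tau)) < (1 + a) * delta
        if cone.any():
            out[k] = w2[cone].max()
    return out
```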
The regions \(\Gamma_{a}(p,\tau)\) have the following property inherited from \(\gamma_{2a}(p)\): for any pair of points \((X,t),(X^{\prime},t^{\prime})\) in \(\Gamma_{a}(p,\tau)\), there is a Harnack chain of balls connecting them - see Definition 2.3. The centers of the balls in this Harnack chain will be contained in a corkscrew region \(\Gamma_{a^{\prime}}(p,\tau)\) of slightly larger aperture, where \(a^{\prime}\) depends only on the geometric constants in the definition of the domain \(\mathcal{O}\).
## 3. Proof of Theorem 1.1
Proof.: We postpone the proof of the fully general case and first look at \(\Omega=\mathbb{R}_{+}^{n}\times\mathbb{R}\). Clearly, when \(\Omega\) is a Lipschitz domain the matter can be reduced to this case by considering local projections of \(\mathcal{O}\times\mathbb{R}\), which give us control over a truncated version of the non-tangential maximal function. This is the heart of the matter anyway, as considering regularity away from the boundary is easier.
We will address the case when \(\mathcal{O}\) is a uniform domain with uniformly rectifiable \((n-1)\)-dimensional boundary at the end, by highlighting the key differences this more general case requires.
Fix a boundary point \((p,\tau)\in\mathbb{R}^{n}\) and consider the nontangential cone \(\Gamma(p,\tau)\) emanating from it. Pick an arbitrary \((X,t)\in\Gamma(p,\tau)\subset\mathbb{R}_{+}^{n}\times\mathbb{R}=\Omega\). Recall that
\[\widetilde{N}(D_{t}^{1/2}u)(p,\tau)=\sup_{(X,t)\in\Gamma(p,\tau)}w(X,t),\, \text{where: }w(X,t)=\left(\fint_{Q(X,t)}|D_{t}^{1/2}u(Y,s)|^{2}dY\,ds\right)^{1/2}.\]
Here we denote by \(Q(X,t)\) the parabolic region \(Q(X,t)=\{(Y,s):|X-Y|<\delta(X,t)/4\,\&\,|s-t|<\delta^{2}(X,t)/16\}\). We note that \(\delta(X,t)=x_{n}\) is the distance of the point to the boundary. Here the shape of the region surrounding the point \((X,t)\) does not matter as long as it contains and is contained in parabolic balls centered at \((X,t)\) of diameter \(\approx\delta(X,t)\). The choice of \(\delta(X,t)/4\) and \(\delta(X,t)^{2}/16\) is just for convenience, so that the enlargement \(3Q(X,t)\subset\subset\Omega\).
We find a smooth cutoff function \(\varphi\in C_{0}^{\infty}(\mathbb{R})\) such that \(\varphi(t^{\prime})=1\) for \(|t-t^{\prime}|<x_{n}^{2}/8\) and \(\varphi(t^{\prime})=0\) for \(|t-t^{\prime}|>3x_{n}^{2}/8\). Clearly we may also assume that \(\|\partial_{t}\varphi\|_{L^{\infty}}\lesssim x_{n}^{-2}\). Consider the projection \(\pi:\Omega\to\partial\Omega\) defined via
\[\pi:(X,t)=(x,x_{n},t)\mapsto(x,0,t).\]
Let us also denote by \(\Delta\) the projection of \(Q\) onto the boundary, that is \(\Delta=\Delta(X,t)=\{(y,0,s):\exists(y,y_{n},s)\in Q(X,t)\}=\pi(Q(X,t))\). Sets like \(2\Delta(X,t)\), \(3\Delta(X,t)\) are defined analogously. We write
\[D_{t}^{1/2}u=D_{t}^{1/2}((u-\fint_{\Delta}f)\varphi)+D_{t}^{1/2} ((u-f)(1-\varphi))\] \[+D_{t}^{1/2}((\fint_{\Delta}f-f)\varphi)+D_{t}^{1/2}(f). \tag{3.1}\]
Here we used a shortcut by denoting \(f=u\circ\pi\) which makes sense since \(u\big{|}_{\partial\Omega}=f\).
Consider, for \((Y,s)\in Q(X,t)\), the term \(D_{t}^{1/2}u(Y,s)\) as a sum of the four terms above in (3.1). We first work on the third term. We state the claim as a proposition as it will be useful again.
**Proposition 3.1**.: _Let \(f\in\dot{L}_{1,1/2}^{p}(\mathbb{R}^{n})\) and \(\varphi\) be a \(C_{0}^{\infty}(\mathbb{R}^{n})\) cutoff function such that \(\varphi=1\) on \(2\Delta\) and \(\varphi=0\) outside \(3\Delta\), where \(\Delta\subset\mathbb{R}^{n}\) is a parabolic boundary ball of size
\(r\) in spatial and \(r^{2}\) in the time variable. Let us also assume that \(\|\partial_{t}\varphi\|_{L^{\infty}}\lesssim r^{-2}\) and \(\|\nabla_{x}\varphi\|_{L^{\infty}}\lesssim r^{-1}\). Fix any \(m>1\). For any \((x,t)\in\Delta\) and \((x^{\prime},t^{\prime})\in m\Delta\) we have_
\[|D_{t}^{1/2}((f-\fint_{\Delta}f)\varphi)(x,t)-\varphi(x,t)D_{t}^{1/2}(f)(x,t)|\lesssim M_{t}(M(h))(x^{\prime},t^{\prime})+M(|\nabla f|)(x^{\prime},t^{\prime}), \tag{3.2}\]
_for some \(L^{p}\) function \(h\) with norm \(\lesssim\|D_{t}^{1/2}f\|_{L^{p}}\). Furthermore we have that \(\|(f-\fint_{\Delta}f)\varphi\|_{\dot{L}^{p}_{1,1/2}(\mathbb{R}^{n})}\lesssim\|f\|_{\dot{L}^{p}_{1,1/2}(\mathbb{R}^{n})}\)._
Proof.: Since \(f\in\dot{L}^{p}_{1,1/2}(\mathbb{R}^{n})\) we have that for almost every \(x\in\mathbb{R}^{n-1}\) the function \(g(\cdot)=f(x,\cdot)\) belongs to \(\dot{L}^{p}_{1/2}(\mathbb{R})\), where the space is defined as the closure of \(C_{0}^{\infty}(\mathbb{R})\) functions with respect to the norm \(\|D_{t}^{1/2}g\|_{L^{p}(\mathbb{R})}\).
Let us recall that in the setting of Triebel-Lizorkin spaces this space is the homogeneous space \(\dot{F}^{1/2}_{p,2}(\mathbb{R})\) (c.f. section 1.4 of [36]). Furthermore since \(\dot{F}^{1/2}_{p,2}\subset\dot{F}^{1/2}_{p,\infty}\) we have the following thanks to Corollary 1.2 of [32]:
The space \(\dot{F}^{1/2}_{p,2}(\mathbb{R})\) is contained in the space \(\dot{F}^{1/2}_{p,\infty}(\mathbb{R})=\dot{M}^{1/2,p}(\mathbb{R})\) where \(\dot{M}^{1/2,p}(\mathbb{R})\) is again the Hajlasz-Sobolev space. If for \(g\in L^{p}_{loc}(\mathbb{R})\) there exists \(h\in L^{p}(\mathbb{R})\) such that
\[|g(t)-g(s)|\leq|t-s|^{1/2}[h(t)+h(s)]\qquad\text{for almost every $t,s\in\mathbb{R}$}, \tag{3.3}\]
then
\[\dot{M}^{1/2,p}(\mathbb{R})=\{g\in L^{p}_{loc}(\mathbb{R}):\text{ there exists }h\in L^{p}(\mathbb{R})\text{ such that (3.3) holds}\}. \tag{3.4}\]
the numerator \(|\varphi(x,t)-\varphi(x,s)|\leq 1\). This allows us to conclude that when \((x,t)\notin 8\Delta\) then the second term will be bounded by
\[\left|\int_{\mathbb{R}}\frac{\varphi(x,t)-\varphi(x,s)}{|t-s|^{3/2}}\Big(f(x,s)-\fint_{\Delta}f\Big)\,ds\right|\lesssim\operatorname{dist}((x,t),4\Delta)^{-3/2}\int_{\{s:\,(x,s)\in 3\Delta\}}\Big|f(x,s)-\fint_{\Delta}f\Big|\,ds. \tag{3.7}\]
Here in the last line we used the boundedness of the maximal function and the fact that we deal with a truncated version of the maximal function, so we only need to enlarge our domain of integration. This together with (3.6) implies that
\[\int_{8\Delta}[D_{t}^{1/2}((f-\fint_{\Delta}f)\varphi)]^{p}\,dx\,dt\lesssim\int_{32\Delta}[|D_{t}^{1/2}f|^{p}+h^{p}+|\nabla f|^{p}]dx\,dt.\]
Next we estimate the same integral on \(\mathbb{R}^{n}\setminus 8\Delta\). We only need to consider integrating over \((x,t)\) for which there is \(s\in\mathbb{R}\) such that \((x,s)\in 8\Delta\). Let us call such \(s=s_{0}\). Let us call \(B\) the set of such points \(x\). It follows that
\[\int_{\mathbb{R}^{n}\setminus 8\Delta}[D_{t}^{1/2}((f-\fint_{\Delta}f)\varphi)]^{p}\,dx\,dt=\int_{(B\times\mathbb{R})\setminus 8\Delta}[D_{t}^{1/2}((f-\fint_{\Delta}f)\varphi)]^{p}\,dx\,dt.\]
Recalling (3.7) and (3.6) we see that we only need to deal with (3.7). We further split the integral above into regions such that the distance of points \((x,t)\) to \(4\Delta\) is approximately \(2^{i}r^{2}\). Hence
\[\int_{\mathbb{R}^{n}\setminus 8\Delta}[D_{t}^{1/2}((f-\fint_{\Delta}f)\varphi)]^{p}\,dx\,dt\] \[\lesssim\sum_{i=2}^{\infty}\int_{B\times\{t\in\mathbb{R}:\,\operatorname{dist}((x,t),4\Delta)\approx 2^{i}r^{2}\}}2^{-3ip/2}\,r^{-p}\,[M_{t<4r^{2}}(|f-\fint_{\Delta}f|)]^{p}(x,s_{0})\,dx\,dt\] \[\qquad\lesssim\sum_{i=2}^{\infty}2^{i(1-3p/2)}\int_{32\Delta}[h^{p}+|\nabla f|^{p}]dx\,dt\lesssim\int_{32\Delta}[h^{p}+|\nabla f|^{p}]dx\,dt.\]
This shows that \(D_{t}^{1/2}\) of our function belongs to \(L^{p}\). For the spatial gradient the argument is easier as \(\nabla\) is local. We clearly have
\[\nabla((f-\fint_{\Delta}f)\varphi)=(\nabla f)\varphi+(f-\fint_{\Delta}f)\nabla\varphi.\]
The first term is fine, and in the second \(|\nabla\varphi|\lesssim r^{-1}\). It follows that we need to estimate \(r^{-1}\|f-\fint_{\Delta}f\|_{L^{p}(3\Delta)}\), given that \(3\Delta\) contains the support of \(\varphi\). The estimate is very similar to the one above, yielding again \((\int_{32\Delta}[h^{p}+|\nabla f|^{p}]dx\,dt)^{1/p}\). From this our claim follows.
Let's now focus on (3.2). Here we restrict ourselves to points \((x,t)\in\Delta\) and, because of how we defined \(\varphi\), we can only have \(\varphi(x,t)-\varphi(x,s)\neq 0\) for \(|s-t|\approx r^{2}\). Hence in the sum in (3.8) only the first few terms will be non-vanishing. It follows that in the calculation below (3.8) and (3.10) the maximal function we take there might be centred at an arbitrary point \((x^{\prime},t^{\prime})\) of some enlargement of \(\Delta\), say \(m\Delta\) for a fixed \(m>1\). Only the implied constants will change with an increase of \(m\). Thus we get that
\[|D_{t}^{1/2}((f-\fint_{\Delta}f)\varphi)(x,t)-\varphi(x,t)D_{t}^{1/2}(f)(x,t)| \lesssim M_{t}(M(h))(x^{\prime},t^{\prime})+M(|\nabla f|)(x^{\prime},t^{ \prime}),\]
as claimed.
We return to the proof of Theorem 1.1. By combining (3.2) and (3.1) we see that for any point \((Y,s)\in Q(X,t)\) we have
\[D_{t}^{1/2}u(Y,s)=D_{t}^{1/2}((u-\fint_{\Delta}f)\varphi)(Y,s)+D_{t}^{1/2}((u-f )(1-\varphi))(Y,s)+r(Y,s), \tag{3.12}\]
where \(r\) is a function such that
\[|r(Y,s)|\lesssim M_{t}(M(h))(P,\tau)+M(|\nabla f|)(P,\tau), \tag{3.13}\]
for a fixed function \(h\in L^{p}\) that depends on \(f\). Recall that \((p,\tau)\) is the vertex of the nontangential cone to which \((X,t)\) belongs. It follows from the geometry of the nontangential cones that there exists some fixed number \(m\gg 1\) (whose size depends on the aperture of our nontangential cone \(\Gamma(p,\tau)\)) such that \((p,\tau)\in m(\pi(Q(X,t)))=m\Delta\), where \(m\Delta\) is the usual enlargement of the boundary ball \(\Delta\) by a factor of \(m\). Thus by (3.2) we have the estimate (3.13).
With the decomposition (3.12) we have an estimate of \(w(X,t)\) defined above by
\[w(X,t)\lesssim w_{1}(X,t)+w_{2}(X,t)+w_{3}(X,t), \tag{3.14}\]
where \(w_{i}\) is defined as the \(L^{2}\) average of the \(i\)-th term of the right-hand side of (3.12). Hence for \(w_{3}\) we have
\[w_{3}(X,t)=\left(\fint\hskip-7.0pt\int_{Q(X,t)}r(Y,s)^{2}dY\,ds\right)^{1/2} \lesssim M_{t}(M(h))(P,\tau)+M(|\nabla f|)(P,\tau). \tag{3.15}\]
Next we consider \(w_{2}\). For \((Y,s)\in Q(X,t)\) we have that
\[D_{t}^{1/2}((u-f)(1-\varphi))(Y,s)=-c\int_{s^{\prime}\in\mathbb{R}}\frac{(u-f )(Y,s^{\prime})(1-\varphi(Y,s^{\prime})))}{|s-s^{\prime}|^{3/2}}ds^{\prime}.\]
Recall that the support of \(1-\varphi\) is outside \(2Q\) and hence \(|s-s^{\prime}|\geq x_{n}^{2}\approx y_{n}^{2}\) on support \(1-\varphi\). Thus we don't need to worry about the singularity of this integral at zero. It follows:
\[|D_{t}^{1/2}((u-f)(1-\varphi))|(Y,s)\lesssim\int_{|s-s^{\prime}|\geq x_{n}^{2} }\frac{|u-f|(Y,s^{\prime})}{|s-s^{\prime}|^{3/2}}ds^{\prime}\]
\[=\sum_{i=0}^{\infty}2^{-i/2}\fint_{|s-s^{\prime}|\approx 2^{i}x_{n}^{2}}x_{n}^{-1}|u-f|(Y,s^{\prime})ds^{\prime}.\]
Recall, that we understand here \(f\) as \(u\circ\pi\). By using the fundamental theorem of calculus we have
\[|u-f|(Y,s^{\prime})=\int_{0}^{z_{n}}|\partial_{n}u|(y,z_{n},s^{\prime})dz_{n}, \tag{3.16}\]
and therefore
\[|D_{t}^{1/2}((u-f)(1-\varphi))|(Y,s)\lesssim\sum_{i=0}^{\infty}2^{-i/2}\fint \hskip-7.0pt\int_{[0,y_{n}]\times|s-s^{\prime}|\approx 2^{i}x_{n}^{2}}| \partial_{n}u|(y,z_{n},s^{\prime})dz_{n}\,ds^{\prime}.\]
We square this and integrate over \(Q(X,t)\). To deal with the sum we split the term \(2^{-i/2}\) into two parts, so as to retain decay on both factors when applying the Cauchy-Schwarz inequality. That is, with the integral playing the role of the term \(b_{i}\) in the calculation
\[\left(\sum_{i}2^{-i/2}b_{i}\right)^{2}=\left(\sum_{i}2^{-i/4}2^{-i/4}b_{i}\right)^{2}\leq\Big(\sum_{i}2^{-i/2}\Big)\sum_{i}2^{-i/2}|b_{i}|^{2}.\]
Hence
\[\fint_{Q(X,t)}|D_{t}^{1/2}((u\!-\!f)(1\!-\!\varphi))|^{2}dY\,ds\lesssim\sum_ {i=0}^{\infty}2^{-i/2}\left(\fint_{\{|x-y|<2x_{n}\}\times[0,3x_{n}]\times|s-t |\approx 2^{i}x_{n}^{2}}|\partial_{n}u|\right)^{2}.\]
The integral over the region of integration can be estimated using the non-tangential maximal function of \(\partial_{n}u\). Clearly we may use the \(L^{1}\) version of it and we get that
\[\int\hskip-10.0pt\int_{\{|x-y|<2x_{n}\}\times[0,3x_{n}]\times|s-t|\approx 2^{i}x _{n}^{2}}|\partial_{n}u|dY\,ds\lesssim\int\hskip-10.0pt\int_{\{|x-y|<2x_{n}\} \times|s-t|\approx 2^{i}x_{n}^{2}}\tilde{N}_{1}(\partial_{n}u)\,dy\,ds.\]
Recall, however, that we have \(\tilde{N}_{1}\lesssim\tilde{N}_{2}\) by Hölder's inequality. Hence the last term is further bounded by \(\int_{\{|x-y|<2x_{n}\}\times|s-t|\approx 2^{i}x_{n}^{2}}\tilde{N}_{2}(\partial_{n}u)\,dy\,ds.\) Considering the maximal functions we can then further estimate it by
\[CM_{t}(M(\tilde{N}(\nabla u)))(P,\tau),\]
where \((P,\tau)\) is, as before, the vertex of the non-tangential cone to which the point \((X,t)\) belongs. The first maximal function \(M\) is over the parabolic boundary balls, while \(M_{t}\) is a maximal function in the time variable only. Hence we have for \(w_{2}\) the estimate (after summing over all \(i\)):
\[w_{2}(X,t)\lesssim M_{t}(M(\tilde{N}(\nabla u)))(P,\tau). \tag{3.17}\]
Notice that so far we have not used the fact that \(u\) is a solution of (1.7). We only need it to estimate the local term defining \(w_{1}\). By (3.12) we want to consider averages of \(D_{t}^{1/2}((u-\int\hskip-10.0pt\int_{\Delta}f)\varphi)\) over \(Q(X,t)\). To simplify the matters further we shall look at the averages of \(D_{t}^{1/2}((u-\int\hskip-10.0pt\int_{Q(X,t)}u)\varphi)\) since for the difference of these two terms we have that \(D_{t}^{1/2}((\int\hskip-10.0pt\int_{\Delta}f-\int\hskip-10.0pt\int_{Q(X,t)}u) \varphi)=(\int\hskip-10.0pt\int_{\Delta}f-\int\hskip-10.0pt\int_{Q(X,t)}u)D_{t }^{1/2}(\varphi)\). As \(D_{t}^{1/2}(\varphi)\approx x_{n}^{-1}\) and \(\int\hskip-10.0pt\int_{Q(X,t)}u\) enjoys estimates very similar to those in (3.16) we get that
\[\left(\int\hskip-10.0pt\int_{Q(X,t)}|D_{t}^{1/2}((\int\hskip-10.0pt\int_{\Delta }f-\int\hskip-10.0pt\int_{Q(X,t)}u)\varphi)|^{2}\right)^{1/2}\lesssim M( \tilde{N}(\nabla u))(P,\tau). \tag{3.18}\]
Consider now \(D_{t}^{1/2}((u-\int\hskip-10.0pt\int_{Q(X,t)}u)\varphi)\). Recall that currently \(\varphi\) is a cutoff function only in the \(t\) variable. Consider another function \(\psi\in C_{0}^{\infty}(\mathbb{R}_{+}^{n})\) such that \(|\nabla\psi|\lesssim x_{n}^{-1}\), \(0\leq\psi\leq 1\) and the product function \(\Phi(X,t)=\psi(X)\varphi(t)\) equals to \(1\) on \(2Q(X,t)\) and vanishes outside \(3Q(X,t)\). We clearly have
\[\int\hskip-10.0pt\int_{Q(X,t)}|D_{t}^{1/2}((u-\int\hskip-10.0pt\int_{Q(X,t)}u) \varphi)|^{2}=\int\hskip-10.0pt\int_{Q(X,t)}|D_{t}^{1/2}((u-\int\hskip-10.0pt \int_{Q(X,t)}u)\Phi)|^{2} \tag{3.19}\]
Here we set \(v=u-\fint_{Q(X,t)}u\), so the integrand on the right-hand side of (3.19) is \(|D_{t}^{1/2}(v\Phi)|^{2}\). Since \(v\) differs from \(u\) by a constant, \(v\) solves the same PDE as \(u\), namely that
\[Lv=\partial_{t}v-\operatorname{div}(A\nabla v)+B\cdot\nabla v=0.\]
We multiply above PDE by \(-H_{t}(v\Phi)\Phi\) and integrate over our domain. Here \(H_{t}\) is the Hilbert transform in the time variable. Consider first the term that contains \(v_{t}\). We will have
\[-\int\hskip-10.0pt\int_{\mathbb{R}_{+}^{n}\times\mathbb{R}}(\partial_{t}v)H_{t }(v\Phi)\Phi=-\int\hskip-10.0pt\int_{\mathbb{R}_{+}^{n}\times\mathbb{R}}( \partial_{t}(v\Phi))H_{t}(v\Phi)+\int\hskip-10.0pt\int_{\mathbb{R}_{+}^{n} \times\mathbb{R}}vH_{t}(v\Phi)\partial_{t}\Phi. \tag{3.20}\]
The second term of (3.20) we will treat as an "error term" and we bound it. Recall that \(H_{t}\) is an isometry on \(L^{2}(\mathbb{R})\) and hence
\[\int_{t\in\mathbb{R}}\left|H_{t}(v\Phi)\right|^{2}dt=\int_{t\in\mathbb{R}}\left| (v\Phi)\right|^{2}dt\leq\int_{t\in 3Q(X,t)}\left|v\right|^{2}dt.\]
Hence the second term of (3.20) has a bound (by Cauchy Schwarz and taking into account the support of \(\Phi\)):
\[|E|=\left|\iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}vH_{t}(v\Phi)\partial_{t} \Phi\right|\leq x_{n}^{-2}\iint_{3Q(X,t)}|v|^{2}. \tag{3.21}\]
Here we have used the fact that \(|\partial_{t}\Phi|\lesssim x_{n}^{-2}\). Continuing with the first term on the right-hand side of (3.20) we write \(\partial_{t}=D_{t}^{1/2}H_{t}D_{t}^{1/2}\) and move the half derivative \(D_{t}^{1/2}\) to the other term. This gives us
\[-\iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}(\partial_{t}(v\Phi))H_{t}(v\Phi) =\iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}|D_{t}^{1/2}H_{t}(v\Phi)|^{2}= \iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}|D_{t}^{1/2}(v\Phi)|^{2}. \tag{3.22}\]
This is precisely the term we aim to estimate (see (3.19)). We now consider remaining terms of our integrated PDE for \(v\). We have
\[\iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}|D_{t}^{1/2}(v\Phi)|^{2}=-\iint_{ \mathbb{R}_{+}^{n}\times\mathbb{R}}\operatorname{div}(A\nabla v)H_{t}(v\Phi) \Phi+\iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}(B\cdot\nabla v)H_{t}(v\Phi) \Phi+E.\]
The second term after integrating by parts (and remembering that \(\nabla\) and \(H_{t}\) commute) yields three new terms to consider:
\[\iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}(A\nabla v)H_{t}((\nabla v)\Phi) \Phi+\iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}(A\nabla v)H_{t}(v\nabla(\Phi) )\Phi+\iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}(A\nabla v)H_{t}(v\Phi) \nabla\Phi.\]
By Cauchy-Schwarz (applying it to all three terms) we further obtain bounds by
\[\lesssim\iint_{3Q(X,t)}|\nabla v|^{2}+\iint_{\mathbb{R}_{+}^{n}\times\mathbb{ R}}|H_{t}((\nabla v)\Phi)|^{2}+\iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}|H_{t}(v \nabla(\Phi))|^{2}+x_{n}{}^{-2}\iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}|H_{ t}(v\Phi)|^{2}.\]
Here in the last term we use \(|\nabla\Phi|\lesssim x_{n}^{-1}\). We again use the fact that \(H_{t}\) is an \(L^{2}\) isometry in time, which allows us to remove \(H_{t}\) from each term. Thus after taking into account the set on which \(\Phi\) is supported the line above simplifies to
\[\lesssim\iint_{3Q(X,t)}|\nabla v|^{2}+x_{n}^{-2}\iint_{3Q(X,t)}|v|^{2}.\]
Hence after putting all terms together:
\[\iint_{\mathbb{R}_{+}^{n}\times\mathbb{R}}|D_{t}^{1/2}(v\Phi)|^{2}\lesssim \iint_{3Q(X,t)}|\nabla v|^{2}+x_{n}^{-2}\iint_{3Q(X,t)}|v|^{2}.\]
Thus we have for \(w_{1}(X,t)\) (since \(\nabla v=\nabla u\)) and using (3.18):
\[w_{1}(X,t)\lesssim M(\tilde{N}_{2}(\nabla u))(P,\tau)+x_{n}^{-1}\left(\iint_{3 Q(X,t)}|v|^{2}\right)^{1/2}. \tag{3.23}\]
Here we have again used the fact that \((X,t)\in\Gamma(P,\tau)\) and the definition of \(\tilde{N}_{2}(\nabla u)\). For a fixed time \(t_{0}\) let us consider averages of \(u\) on a fixed time-slice \(t_{0}\), that is
\[u_{av}(t_{0}):=\fint_{Q(X,t)\cap\{t_{0}=const\}}u(Y,t_{0})dY. \tag{3.24}\]
By Poincare inequality in the spatial variables we have that
\[\fint_{3Q(X,t)\cap\{t_{0}=const\}}|u-u_{av}|^{2}(Y,t_{0})\lesssim x_{n}^{2} \iint_{3Q(X,t)\cap\{t_{0}=const\}}|\nabla u|^{2}.\]
Thus by (3.23) we see that
\[w_{1}(X,t)\lesssim M(\tilde{N}(\nabla u))(P,\tau)+x_{n}^{-1}\left(\fint_{|s-t |<9x_{n}^{2}}|u_{av}(s)-\fint_{Q(X,t)}u|^{2}ds\right)^{1/2}. \tag{3.25}\]
Observe that
\[\fint_{Q(X,t)}u=\fint_{|s-t|<x_{n}^{2}}u_{av}(s)\,ds,\]
and hence we really need to understand the difference \(|u_{av}(s)-u_{av}(s^{\prime})|\) for different times \(s,s^{\prime}\) in the interval \((t-9x_{n}^{2},t+9x_{n}^{2})\). Consider
\[\left|\fint_{3Q(X,t)\cap\{t_{0}=const\}}[u(Y,t_{0})-u_{av}(t_{0})]\psi(Y)dY \right|^{2}\]
\[\leq\fint_{3Q(X,t)\cap\{t_{0}=const\}}|u(Y,t_{0})-u_{av}(t_{0})|^{2}\psi^{2}(Y )dY\]
\[\leq\fint_{3Q(X,t)\cap\{t_{0}=const\}}|u(Y,t_{0})-u_{av}(t_{0})|^{2}\,dY \lesssim x_{n}^{2}\fint_{3Q(X,t)\cap\{t_{0}=const\}}|\nabla u|^{2}\,dY.\]
Hence for \(\beta=|3Q(X,t)\cap\{t_{0}=const\}|^{-1}\iint_{\mathbb{R}^{n}}\psi(Y)dY>0\) we have that
\[\left|u_{av}(t_{0})-u_{av}^{\psi}(t_{0})\right|\lesssim x_{n}\left(\fint_{3Q( X,t)\cap\{t_{0}=const\}}|\nabla u|^{2}\,dY\right)^{1/2}. \tag{3.26}\]
Here
\[u_{av}^{\psi}(t_{0})=\beta^{-1}\fint_{3Q(X,t)\cap\{t_{0}=const\}}u(Y,t_{0}) \psi(Y)dY=\frac{1}{\iint\psi\,dY}\iint_{\mathbb{R}^{n}\times\{t_{0}\}}(u\psi) (Y,t_{0})dY.\]
We again use the PDE (for \(u\)) and multiply the equation \(Lu=0\) by \(\psi\) and integrate it over the region \(\mathbb{R}^{n}\times[s,s^{\prime}]\), where we extend the function by zero outside our domain \(\Omega\). Recall that \(\psi\) is a cutoff function in the spatial variables only. We have
\[\int_{\mathbb{R}^{n}\times\{s^{\prime}\}}u\psi\,dY-\iint_{\mathbb{R}^{n}_{+} \times\{s\}}u\psi\,dY=\iint_{\mathbb{R}^{n}\times[s,s^{\prime}]}(\partial_{t} u)\psi\,dY\,dt\]
\[=\iint_{\mathbb{R}^{n}\times[s,s^{\prime}]}[\operatorname{div}(A\nabla u)-B \cdot\nabla u]\psi\,dY\,dt=-\iint_{\mathbb{R}^{n}\times[s,s^{\prime}]}[(A\nabla u )\nabla\psi+B\cdot\nabla u\psi]\,dY\,dt.\]
Since \(|B|\lesssim\delta^{-1}\approx x_{n}^{-1}\) the right-hand side is bounded by \(x_{n}^{-1}\iint_{3Q(X,t)}|\nabla u|\). After dividing by \(\iint_{\mathbb{R}^{n}}\psi(Y)dY\) we get that
\[\left|u_{av}^{\psi}(s)-u_{av}^{\psi}(s^{\prime})\right|\lesssim x_{n}\iint_{3Q (X,t)}|\nabla u|dY\leq x_{n}\left(\iint_{3Q(X,t)}|\nabla u|^{2}\,dY\,dt\right)^{ 1/2}.\]
This combined with (3.26) yields
\[\left(\fint_{|s-t|<9x_{n}^{2}}|u_{av}(s)-\iint_{Q(X,t)}u|^{2}ds\right)^{1/2} \lesssim x_{n}\left(\iint_{3Q(X,t)\cap\{t_{0}=const\}}|\nabla u|^{2}\,dY\right) ^{1/2},\]
which is precisely what is needed for (3.25). Thus
\[w_{1}(X,t)\lesssim M(\tilde{N}(\nabla u))(P,\tau) \tag{3.27}\]
holds. Recall that \(w(X,t)\lesssim w_{1}(X,t)+w_{2}(X,t)+w_{3}(X,t)\) and hence by (3.15), (3.17) and (3.27) we get for \(\tilde{N}(D_{t}^{1/2}u)(P,\tau)=\sup_{(X,t)\in\Gamma(P,\tau)}w(X,t)\):
\[\tilde{N}(D_{t}^{1/2}u)(P,\tau)\lesssim M(\tilde{N}(\nabla u))(P,\tau)\]
\[+M_{t}(M(\tilde{N}(\nabla u)))(P,\tau)+M_{t}(M(h))(P,\tau)+M(|\nabla f|)(P,\tau).\]
Since we assume that \(\|\tilde{N}(\nabla u)\|_{L^{p}}\lesssim\|f\|_{\dot{L}^{p}_{1,1/2}}\), it follows that \(\|h\|_{L^{p}}+\|\nabla f\|_{L^{p}}\lesssim\|f\|_{\dot{L}^{p}_{1,1/2}}\). Thus by boundedness of Hardy-Littlewood maximal functions \(M\) and \(M_{t}\) when \(p>1\) we therefore have for some \(C>0\):
\[\|\tilde{N}(D_{t}^{1/2}u)\|_{L^{p}(\mathbb{R}^{n})}\leq C\|f\|_{\dot{L}^{p}_{1,1/2}(\mathbb{R}^{n})}\]
as desired.
Let us briefly address similar bounds for \(\tilde{N}(D_{t}^{1/2}H_{t}u)\). Because of (3.22) and the bounds we have just established above we have that
\[\left(\iint_{Q(X,t)}|D_{t}^{1/2}H_{t}(v\varphi)|^{2}\right)^{1/2}\lesssim M( \tilde{N}(\nabla u))(P,\tau).\]
Since
\[D_{t}^{1/2}H_{t}u-D_{t}^{1/2}H_{t}(v\varphi)=D_{t}^{1/2}H_{t}((u-f)(1-\varphi)) \tag{3.28}\]
\[+D_{t}^{1/2}H_{t}((\fint_{\Delta}f-f)\varphi)+D_{t}^{1/2}H_{t}(f),\]
we just need to re-analyse the three remaining terms for the new operator \(D_{t}^{1/2}H_{t}\). However, \(D_{t}^{1/2}H_{t}\) and \(D_{t}^{1/2}\) are similar operators; indeed, we have
\[D_{t}^{1/2}H_{t}g(s)=c\int_{\mathbb{R}}\frac{g(s)-g(\tau)}{(s-\tau)|s-\tau|^{ 1/2}}d\tau.\]
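As a brief side remark (not part of the proof, and assuming the normalizations \(\widehat{D_{t}^{1/2}g}(\xi)=|\xi|^{1/2}\hat{g}(\xi)\) and \(\widehat{H_{t}g}(\xi)=i\,\mathrm{sgn}(\xi)\hat{g}(\xi)\); with the opposite sign convention for \(H_{t}\) everything below holds up to a harmless sign), the similarity of the two operators, as well as the identity \(\partial_{t}=D_{t}^{1/2}H_{t}D_{t}^{1/2}\) used earlier, can be read off from the Fourier symbols in the time variable:
\[\widehat{D_{t}^{1/2}H_{t}g}(\xi)=i\,\mathrm{sgn}(\xi)\,|\xi|^{1/2}\,\hat{g}(\xi),\qquad\widehat{D_{t}^{1/2}H_{t}D_{t}^{1/2}g}(\xi)=i\,\mathrm{sgn}(\xi)\,|\xi|\,\hat{g}(\xi)=i\xi\,\hat{g}(\xi)=\widehat{\partial_{t}g}(\xi).\]
In particular \(D_{t}^{1/2}H_{t}\) differs from \(D_{t}^{1/2}\) only by the bounded multiplier \(i\,\mathrm{sgn}(\xi)\), that is, by the Hilbert transform itself, which is consistent with the kernel representation displayed above and with the fact that \(H_{t}\) is an isometry on \(L^{2}\).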
It follows that the argument for the first term on the right-hand side of (3.28) will be identical to the one given for \(D_{t}^{1/2}\). The same is true for the second term. Finally, as \(H_{t}\) is bounded on \(L^{p}\) when \(p>1\), we get that \(\|D_{t}^{1/2}H_{t}(f)\|_{L^{p}}\lesssim\|D_{t}^{1/2}(f)\|_{L^{p}}\) and hence the last term belongs to \(L^{p}\) as needed. It follows that
\[\|\tilde{N}(D_{t}^{1/2}H_{t}u)\|_{L^{p}(\mathbb{R}^{n})}\leq C\|f\|_{\dot{L}^{ p}_{1,1/2}(\mathbb{R}^{n})}\]
as desired.
Now we discuss the key differences between the case \(\mathcal{O}=\mathbb{R}_{+}^{n}\) and the case where \(\mathcal{O}\) is an arbitrary uniform domain with \(n-1\) dimensional uniformly rectifiable boundary. Some steps are identical; for example, after replacing \(x_{n}\) by \(h=\delta(X,t)=\delta(X)\) we see that the estimate (3.27) holds, modulo our initial step where we have claimed (3.18). This step, as well as (3.16), needs a serious rethink, as they rely on the projection \(\pi:\Omega\to\partial\Omega\) which only really makes sense on graph-like domains.
Here we modify the idea of the paper [14]. The whole integration here takes place on a single time slice of the domain \(\Omega\) and hence let us drop for now the variable \(t\) completely.
Consider a function \(u:\mathcal{O}\to\mathbb{R}\). Let us define space-only averages of \(u\), that is
\[u_{av}(X)=\fint_{|X-Z|<\delta(X)/8}u(Z)\,dZ.\]
For a fixed point \(X\in\mathcal{O}\), let \(h=\delta(X)\). Using the fundamental theorem of calculus we clearly see that this average is related to the average defined in (3.24), and we will have for \(u\) on the time slice \(t_{0}\) (slightly abusing the notation):
\[|u_{av}(X)-u_{av}(t_{0})|\lesssim h\fint_{B(X,\delta(X)/4)}|\nabla u|. \tag{3.29}\]
Recall that \(\tilde{S}(X)=\{q\in\partial\Omega:X\in\tilde{\gamma}(q)\}\). By Proposition 3.3 of [14] we have \(\sigma(\tilde{S}(X))\approx h^{n-1}\), where \(\sigma=\mathcal{H}^{n-1}\Big{|}_{\partial\mathcal{O}}\) is the natural \(n-1\)-dimensional Hausdorff measure on \(\partial\mathcal{O}\).
For any \(q\in\tilde{S}(X)\) we estimate the difference \(|u_{av}(X)-f(q)|\). The argument uses both key properties of our domain \(\mathcal{O}\), namely the existence of corkscrew points and the interior Harnack chain condition. We claim that
\[|u_{av}(X)-f(q)|\lesssim A_{\tilde{a}}(\nabla u)(q) \tag{3.30}\]
where
\[A_{\tilde{a}}(\nabla u)(q)=\iint_{\widetilde{\gamma}_{\tilde{a}}^{2h}(q)}|\nabla u|(Z)\,\delta(Z)^{1-n}\,dZ.\]
Here, the parameter \(\tilde{a}\) (the aperture of the corkscrew region \(\widetilde{\gamma}(q)\)) will be determined later, and the superscript \(2h\) means that we truncate the corkscrew region at height \(2h\), that is \(\widetilde{\gamma}^{2h}(q):=\widetilde{\gamma}(q)\cap B(q,2h)\).
Proof of (3.30): Since \(X\in\widetilde{\gamma}_{a}(q)\), it follows that \(X\in\gamma_{2a}(q)\) and so there is a sequence of corkscrew points \(X_{j}\) associated to the point \(q\) at scales \(r_{j}\approx 2^{-j}h\), \(j=0,1,2,\dots\) with \(X_{0}=X\). By the Harnack chain condition, for each \(j\) there is a number \(N\) and a constant \(C>0\) such that there exist \(n(j)\leq N\) balls \(B_{k}^{(j)}\) of radius \(\approx 2^{-j}h\) with \(CB_{k}^{(j)}\subset\mathcal{O}\), \(X_{j-1}\in B_{1}^{(j)}\), \(X_{j}\in B_{n(j)}^{(j)}\), and \(B_{k}^{(j)}\cap B_{k+1}^{(j)}\neq\emptyset\). Therefore we can find another chain of balls with the same properties for a larger but fixed choice of \(N\) so that \(4B_{k}^{(j)}\subset\mathcal{O}\) and \(B_{k+1}^{(j)}\subset 2B_{k}^{(j)}\).
Considering the whole collection of balls \(B_{k}^{(j)}\) for all \(j=0,1,2,\dots\) and \(k=1,\dots,n(j)\leq N\) it follows that we have an infinite chain of balls, the first of which contains \(X=X_{0}\)
converging to the boundary point \(q\), with the property that any pair of consecutive balls in the chain have roughly the same radius and their 4-fold enlargements are contained in \(\mathcal{O}\). We relabel these balls \(B_{j}(X_{j},r_{j})\) with centers \(X_{j}\) and radii \(r_{j}\approx t^{-j}h\) for some \(t<1\) depending on \(N\).
We next claim that, for any \(j=0,1,2,\dots\),
\[\left|\int\hskip-10.0pt\int_{B_{j}}u(Z)dZ-\int\hskip-10.0pt\int_{B_{j+1}}u(Z)dz \right|\lesssim t^{-j}\int\hskip-10.0pt\int_{2B_{j}}|\nabla u|(Z)dZ\approx\int \hskip-10.0pt\int_{2B_{j}}|\nabla u|(Z)\delta(Z)^{1-n}dZ. \tag{3.31}\]
To prove (3.31), define the map \(T(Y)=\frac{r_{j+1}}{r_{j}}(Y-X_{j})+X_{j+1}\) from \(B_{j}\) to \(B_{j+1}\). Then
\[|u(T(Y))-u(Y)|\leq\int_{\ell\in[Y,T(Y)]}|\nabla u(\ell)|\ d\ell \tag{3.32}\]
where \([Y,T(Y)]\) is the line segment from \(Y\) to \(T(Y)\). Averaging \(Y\) over \(B_{j}\), using the triangle inequality, and observing that the collection of lines \([Y,T(Y)]\) is contained in \(2B_{j}\), gives us what we want.
The claim (3.30) results from summing the averages in (3.31) as follows.
Set
\[U_{j}:=\int\hskip-10.0pt\int_{B_{j}}u(Z)dZ-\int\hskip-10.0pt\int_{B_{j+1}}u(Z)dZ\]
Because \(u\) attains value \(f(q)\) on the boundary, the averages \(\int\hskip-10.0pt\int_{B_{j}}u(Z)dZ\) are converging to \(f(q)\) for \(\sigma\)-a.e. \(q\in\partial\Omega\). It follows that
\[|u_{av}(X)-f(q)|\leq\sum_{j=0}^{\infty}|U_{j}|\lesssim\sum_{j=0}^{\infty}\int \hskip-10.0pt\int_{2B_{j}}|\nabla u|(Z)\delta(Z)^{1-n}dZ.\]
We have that each \(X_{j}\in\gamma_{1+2a}(q)\) and hence for some \(\tilde{a}>>2a\) (independent of \(q\)) we will have that the enlarged ball \(2B_{j}\subset\gamma_{\tilde{a}}(q)\). Hence (3.30) follows.
Since (3.30) holds for all \(q\in\tilde{S}(X)\) we integrate the inequality over this set. This gives us
\[\left|\int_{\tilde{S}(X)}[u_{av}(X)-f(q)]d\sigma(q)\right|\lesssim\int_{ \tilde{S}(X)}A_{\tilde{a}}d\sigma=\int_{\tilde{S}(X)}\int\hskip-10.0pt\int_{ \tilde{\gamma}_{\tilde{a}}^{2h}(q)}|\nabla v|(Z)\delta(Z)^{1-n}dZ\,d\sigma(q).\]
Let \(T(X)=\bigcup_{q\in\tilde{S}(X)}\widetilde{\gamma}_{\tilde{a}}^{2h}(q)\). Since for any \(Z\in T(X)\) we have that \(\sigma(\tilde{S}(Z))\lesssim\delta(Z)^{n-1}\), by exchanging the order of integration in the last integral we get that
\[\left|\int_{\tilde{S}(X)}[u_{av}(X)-f(q)]d\sigma(q)\right|\lesssim\int\hskip-10.0pt \int_{T(X)}|\nabla v|(Z)dZ.\]
It follows that
\[|u_{av}(X)-f_{av}|\lesssim h^{1-n}\int\hskip-10.0pt\int_{T(X)}|\nabla v|(Z)dZ, \tag{3.33}\]
where \(f_{av}=\sigma(\tilde{S}(X))^{-1}\int_{\tilde{S}(X)}f\,\mathrm{d}\sigma\). We now use this on the time-slices \(Q(X,t)\cap\{\tau=const\}\) for \(\tau\in(t-h^{2}/16,t+h^{2}/16)\), integrate in \(\tau\) and average. Let \(\Delta\) be the set
\(\tilde{S}(X)\times(t-h^{2}/16,t+h^{2}/16)\) and \(\mu\) the product measure \(\,\mathrm{d}\mu=\,\mathrm{d}\sigma\,\mathrm{d}t\). It follows that
\[\left|\int\!\!\!\!\!\int_{Q(X,t)}u-\mu(\Delta)^{-1}\int_{\Delta}f\right|\lesssim \fint_{t-h^{2}/16}^{t+h^{2}/16}[|u_{av}(X,\tau)-f_{av}(\tau)|+|u_{av}(\tau)-u_{ av}(X,\tau)|]\,\mathrm{d}\tau\]
and after using (3.33) for the first term and (3.29) for the second term we get that
\[\left|\int\!\!\!\!\!\int_{Q(X,t)}u-\mu(\Delta)^{-1}\int_{\Delta}f\right| \lesssim h^{-n-1}\int\!\!\!\!\!\!\int_{T(X)\times(t-h^{2}/16,t+h^{2}/16)}| \nabla v|dZ\,d\tau.\]
We can now use this to derive an analogue of (3.18). Let
\[\tilde{\Delta}:=\{q^{\prime}\in\partial\mathcal{O}:T(X)\cap\tilde{\gamma}(q^{ \prime})\neq\emptyset\}\times(t-h^{2}/16,t+h^{2}/16).\]
This set is an enlargement of \(\Delta\) with comparable surface measure \(\mu\approx h^{n+1}\). The integral on the right-hand side therefore admits (as before) the same estimate, and we obtain
\[\left(\int\!\!\!\!\!\int_{Q(X,t)}|D_{t}^{1/2}((\mu(\Delta)^{-1}\int_{\Delta}f \,\mathrm{d}\mu-\int\!\!\!\!\!\!\int_{Q(X,t)}u)\varphi)|^{2}\right)^{1/2} \lesssim M(\tilde{N}(\nabla u))(P,\tau), \tag{3.34}\]
for all \((P,\tau)\in\Delta\). By a similar technique we can derive also an analogue of (3.17), as we have already done the hard bit when we have obtained (3.33).
It remains to consider the \(w_{3}\) term. As we have just seen \(\mu(\Delta)^{-1}\int_{\Delta}f\,\mathrm{d}\mu\) is the right replacement for \(\fint_{\Delta}f\) we had previously. There is no difference in the calculation until (3.10). The last term of it needs to be estimated differently. By (2.9) for almost every \(\tau\in(t-h^{2}/16,t+h^{2}/16)\) we have for \(\sigma\)-a.e. \(x,y\in\partial\Omega\):
\[|f(x,\tau)-f(y,\tau)|\leq|x-y|[g(x,\tau)+g(y,\tau)]\text{ for some }g\in L^{p}( \Omega)\text{ with norm }\lesssim\|f\|_{M^{1,p}_{x}(\partial\Omega)}.\]
It follows that the estimate we seek is very similar to (3.9) but with the roles of the variables \(x\) and \(t\) swapped. Hence the last term of (3.10) will be bounded by
\[\int_{32\Delta}g^{p}d\sigma\,dt.\]
Recall that in our case \(\Delta=\tilde{S}(X)\times(t-h^{2}/16,t+h^{2}/16)\) and by Definition 2.5 the set \(\tilde{S}(X)\) is a union of boundary balls of radius \(a\delta(X)\). Hence by "\(32\Delta\)" we mean the set
\[32\Delta:=\bigcup_{q\in S(X)}\{q^{\prime}\in\partial\mathcal{O}:|q-q^{\prime}|<32a\delta(X)\}\times(t-8h^{2},t+8h^{2}).\]
The rest of the argument remains the same. Eventually we get for \(w_{3}(X,t)\) the estimate
\[w_{3}(X,t)\lesssim M_{t}(M(h))(P,\tau)+M(g)(P,\tau). \tag{3.35}\]
From this the claim follows.
|
2307.04709
|
Fatal errors and misuse of mathematics in the Hong-Page Theorem and
Landemore's epistemic argument
|
In the pursuit of understanding collective intelligence, the Hong-Page
Theorems have been presented as cornerstones of the interplay between diversity
and ability. However, upon rigorous examination, there seem to be inherent
flaws and misinterpretations within these theorems. H\'el\`ene Landemore's
application of these theorems in her epistemic argument and her political
proposal showcases a rather unsettling misuse of mathematical principles. This
paper critically dissects the Hong-Page Theorems, revealing significant
inconsistencies and oversights, and underscores the indispensable role of
'ability' in group problem-solving contexts. This paper aims not to undermine
the importance of diversity, but rather to highlight the dangers of misusing
mathematical principles and the necessity for a more nuanced comprehension of
mathematical results when applying them to social sciences.
|
Álvaro Romaniega
|
2023-07-10T17:17:22Z
|
http://arxiv.org/abs/2307.04709v2
|
# Fatal errors and misuse of mathematics in the Hong-Page theorem and Landemore's epistemic argument
###### Abstract.
In the pursuit of understanding collective intelligence, the Hong-Page Theorems have been presented as cornerstones of the interplay between diversity and ability. However, upon rigorous examination, there seem to be inherent flaws and misinterpretations within these theorems. Hélène Landemore's application of these theorems in her epistemic argument and her political proposal showcases a rather unsettling misuse of mathematical principles. This paper critically dissects the Hong-Page Theorems, revealing significant inconsistencies and oversights, and underscores the indispensable role of 'ability' in group problem-solving contexts. This paper aims not to undermine the importance of diversity, but rather to highlight the dangers of misusing mathematical principles and the necessity for a more nuanced comprehension of mathematical results when applying them to social sciences.
###### Contents
* 1 The "Diversity Trumps Ability" Theorem
* 1.1 Definitions
* 1.2 Problem assumptions
* 1.3 Problem solver assumptions
* 1.4 Group problem solver assumptions
* 1.5 Trivial corollaries from the assumptions
* 1.6 Other results and simpler proof
* 2 Removing technical hypotheses: Counterexamples
* 2.1 \(V\) is an injection, Assumption 3
* 2.2 Unique best agent, Assumption 8
* 2.3 Clones performance
* 2.4 Selection of clones
* 3 New Hong-Page style theorem: Ability trumps diversity
* 3.1 The new assumptions
* 3.2 The theorem
* 4 The Diversity Prediction Theorem and the Crowds Beat Averages Law
* 4.1 The results.
* 4.2 The asymmetric role of "ability" and "diversity".
* 5 Hong and Page's misuse of mathematics: an obscured trivial theorem
* 5.1 Misusing the mathematics to obscure a trivial fact
* 5.2 Misusing the theorem to answer question it does not
* 5.3 Misusing the prestige of mathematics
* 5.4 A basic mathematical error in advocating for diversity
* 6 Landemore's misuse of mathematics: an invalid and unsound argument for her political proposal
* 6.1 The argument
* 6.2 Basic misunderstanding of the mathematical theorems
In the penultimate section, Section 6, we turn our critique towards Helene Landemore's political proposal. We present her misuse of mathematics and demonstrate a basic misunderstanding of the mathematical theorems. We proceed to analyze the misuse of the hypotheses of the 'Diversity Trumps Ability Theorem' and argue for the vacuousness of the 'Numbers Trump Ability Theorem'.
Finally, Section 7 wraps up the paper, offering a robust summary of the findings and their implications on the ongoing discourse surrounding collective intelligence and the role of diversity and ability within it.
Certain sections of this text, particularly the initial ones, presume a degree of mathematical proficiency (although the mathematics used are not particularly involved). However, Sections 5, 6, and 7 are essentially devoid of mathematics, instead referencing earlier sections and the results derived there. For the mathematically-intensive portions, I have strived to provide a non-technical explanation, typically prefaced with the phrase, "In other words".
A final caveat, this critique should not be taken as a dismissal of the importance of diversity (which I consider one, among others, important epistemic factor) in decision-making, but rather as a call to address the misuse of mathematics in these contexts. It urges us to consider the rigorous and nuanced approach required when applying mathematical theories to sociopolitical constructs. As such, this paper makes a contribution to the ongoing discourse on collective intelligence, fostering a deeper understanding of the mathematical theorems used to achieve a desired conclusion.
## 1. The "Diversity Trumps Ability" Theorem
### Definitions
**Definition 1.1** (Problem).: Let \(V_{\phi}:X\to[0,1]\) be a function and \(X\), for simplicity, finite. The _Problem_ is finding the maximum of \(V_{\phi}\).
**Definition 1.2** (Problem solver).: _A problem solver_ is a function \(\phi:X\to X\). The set of problem solvers is denoted by \(\Phi\). For a probability measure on \(X\) (full support), \(\nu\), the expected value of the performance of each agent is given by
\[\mathbb{E}_{\nu}(V_{\phi}\circ\phi)=\sum_{x\in X}V_{\phi}\left(\phi(x)\right) \nu(x)\,. \tag{1}\]
Let's discuss the intuition behind the problem-solving process. Given an initial state \(x\) from the set \(X\), a problem solver aims to find a solution to the problem by mapping \(x\) to \(\phi(x)\). In other words, the problem solver transforms the input \(x\) into a potential solution, hoping that this transformation will maximize the value of the function \(V_{\phi}\).
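As an illustration (a toy sketch of ours with made-up data, not taken from Hong and Page), the framework is easy to encode: a finite state space, a value function, a problem solver as a map \(X\to X\), and the expected performance (1) as a \(\nu\)-weighted average.

```python
# Illustrative toy instance of the framework; all data below is made up.
X = ["a", "b", "c", "d"]                           # finite state space
V = {"a": 0.25, "b": 0.5, "c": 0.75, "d": 1.0}     # value function, unique maximum at "d"
nu = {x: 1.0 / len(X) for x in X}                  # full-support probability measure on X

phi = {"a": "c", "b": "b", "c": "d", "d": "d"}     # a problem solver: a map X -> X

def expected_performance(phi, V, nu):
    """E_nu(V o phi) = sum_x V(phi(x)) nu(x), cf. equation (1)."""
    return sum(V[phi[x]] * nu[x] for x in nu)

print(expected_performance(phi, V, nu))  # 0.8125 for this toy solver
```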
### Problem assumptions
**Assumption 1** (Unique problem).: \(\forall\phi,\phi^{\prime}\in\Phi,\ V_{\phi}=V_{\phi^{\prime}}=:V\)_._
In other words, any two problem solvers evaluate the problem space in the same way. This assumption simplifies the analysis by ensuring that all problem solvers have a consistent evaluation criterion for the problem1.
Footnote 1: In real settings, like the ones to which people attempt to apply the theorem, this assumption is far from accurate. People have different values, so right from the beginning, the hypotheses are not satisfied. Nevertheless, in this paper, I want to focus on more profound critiques, even though issues like diversity in values are quite significant.
**Assumption 2** (Unique solution).: \(\exists_{=1}x^{*}\ /\ V(x^{*})=1\)_._
In other words, there is exactly one optimal solution to the problem that maximizes the value of the function \(V\). This assumption allows us to focus on finding the unique solution.
**Assumption 3** (Strictly increasing problem).: \(V\) is injective, i.e., if \(V(x)=V(x^{\prime})\), then \(x=x^{\prime}\). That is, we can order \(X\) as \(\{x_{1},\ldots,x_{|X|}\}\) such that
\[V(x_{1})<\ldots<V(x_{|X|})\,.\]
In other words, this assumption implies that the problem has a well-defined ordering of potential solutions. The original article did not state explicitly that the value function \(V\) is one-to-one. This assumption is necessary for the theorem to hold, as Thompson pointed out, [10].
### Problem solver assumptions
**Assumption 4** (Everywhere ability in problem solvers).: \(\forall\;\phi\in\Phi:\;V(\phi(x))\geq V(x)\;\forall\;x\in X\). In particular, \(\phi(x^{*})=x^{*}\,\).
This assumption states that no problem solver ever worsens a state: if a problem solver is applied to a state, the value of the state will never decrease.
**Assumption 5** (No improvement, idempotence).: \(\forall\phi\in\Phi,\;\phi\circ\phi=\phi\,\).
This assumption states that problem solvers are idempotent. In other words, applying a problem solver to a state twice will have the same effect as applying it once.
**Assumption 6** ("Difficulty", imperfect problem solvers).: \(\forall\phi\in\Phi\;\exists x\;/\;\phi(x)\neq x^{*}\,\).
In other words, by hypothesis, for every agent there are instances where they fail to find the optimal solution.
**Assumption 7** ("Diversity", sufficient unstuck problem solvers).: \(\forall x\in X\backslash\{x^{*}\}\;\exists\,\phi\in\Phi\;/\;\phi(x)\neq x\,\).
In other words, this assumption ensures a "diversity" of problem solvers in \(\Phi\), with at least one problem solver capable of making progress from any non-optimal state.
**Assumption 8** (Unique best problem solver).: \(|\arg\max_{\phi\in\Phi}\{\mathbb{E}_{\nu}(V\circ\phi)\}|=1\), i.e., there is only one best-performing agent.
In other words, this assumption states that there is only one problem solver that performs best on average. There is only one problem solver that is the most likely to lead to the optimal state.
### Group problem solver assumptions
**Assumption 9** (In series deliberation).: The agents \(\Phi^{\prime}\coloneqq\{\phi_{1},\ldots,\phi_{N}\}\), when working together to solve the problem starting at \(x\) are equivalent to:
1. First, \(i_{1}\) such that \(x_{1}\coloneqq\phi_{i_{1}^{x}}(x)\neq x_{0}\coloneqq x\).
2. Second, \(i_{2}\) such that \(x_{2}\coloneqq\phi_{i_{2}^{x}}(x_{1})\neq x_{1}\).
3. Inductively, \(i_{j}\) such that \(x_{j}\coloneqq\phi_{i_{j}^{x}}(x_{j-1})\neq x_{j-1}\).
This stops at \(x_{n}\) such that it is a fixed point for all elements of \(\{\phi_{1},\ldots,\phi_{N}\}\) (all the agents are stuck at the same point, unanimity).
There can be multiple sequences arriving at the same point. The fixed point exists as \(x^{*}\) is a fixed point for all elements of \(\Phi\) by assumption. The group performance is tantamount to composition of the functions in a proper way:
\[\phi^{\Phi^{\prime}}(x)\coloneqq\phi_{i_{n}^{x}}\circ\ldots\circ\phi_{i_{1}^{x}}(x)\,.\]
In other words, this assumption states that a group of problem solvers can be thought of as a sequence that takes turns applying the problem solvers in the group to the current state, such that
we approach the optimal value. The group will stop when it reaches a state that is a fixed point for all of the problem solvers in the group, i.e., unanimity.
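To make the dynamics concrete, here is a minimal sketch (ours; the tie-breaking rule of always letting the first listed agent that can move act is one arbitrary choice, since the assumption does not prescribe the order in which agents take turns).

```python
def deliberate(agents, x):
    """In-series deliberation (Assumption 9): at each step some agent that is not stuck
    moves the current state; stop once every agent fixes the state (unanimity).
    Termination relies on Assumptions 3-4: every move strictly increases V and X is finite."""
    while True:
        for phi in agents:              # tie-breaking: first listed agent that can move
            if phi[x] != x:
                x = phi[x]
                break
        else:                           # no agent moved, i.e. x is a common fixed point
            return x

# a tiny made-up illustration on the states a, b, c, d with optimum d
agents = [
    {"a": "b", "b": "b", "c": "c", "d": "d"},
    {"a": "a", "b": "c", "c": "d", "d": "d"},
]
print(deliberate(agents, "a"))  # -> 'd'
```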
**Assumption 10** (Clones).: There exists an infinite amount of identical copies of each agent \(\phi\in\Phi\).
This assumption states that there are an infinite number of problem solvers available. This is necessary for the theorem to hold.
### Trivial corollaries from the assumptions
By construction (i.e., from the assumptions and _not_ the theorem), we have the following corollaries. Note that no profound or even standard mathematical results are needed; we just need the assumptions mentioned above combined with trivial arithmetic and trivial properties of sets.
**Corollary 1.3**.: _All members of \(\Phi\) working together can solve the Problem \(\forall\ x\in X\)._
Proof.: This follows straightforwardly from Assumptions 3, 4, 7, and 9. Indeed,
\[V(x)<V\left(\phi_{i_{1}^{x}}(x)\right)<\ldots<V\left(\phi_{i_{n}^{x}}(x_{n-1}) \right)=1\,,\]
for some \(n\leq|X|\).
**Corollary 1.4**.: \(\exists\ x\in X\) _such that \(\forall N_{1}\in\mathbb{N}\), \(N_{1}\) "clones" of the best performing agent cannot solve the Problem._
Proof.: By Assumptions 5 and 9, \(N_{1}\) "clones" of the best performing agent work in the same way as a single clone alone. By Assumption 6, no agent can solve the problem alone for some \(x\), so the corollary follows straightforwardly.
So, just "rearranging" the assumptions we can trivially (again, no profound mathematics, just basic arithmetic and trivial properties of sets) prove the following version of the "theorem" containing the main conclusion. The advantage of this formulation is that no clones are needed.
**Theorem 1.5** (Basic Hong-Page theorem).: _Given the Assumptions 1-9, all problem solvers working together perform better than the best problem solver (in the sense that there is a state \(x\) such that the best problem solver cannot solve but the whole group can). Note that the best problem solver is included in the first group._
Proof.: This follows straightforwardly from the corollaries (i.e., the assumptions) given above.
Again, this is by construction, based on the way we've formulated our hypotheses. Let us explain the proof in words. By assumption, we have arranged for the best agent not to always solve the problem. On the other hand, by mere assumption, for every state there is an agent that can reach a better state. As the number of states is finite, they will reach the optimum in a finite number of steps. Note that the "diverse group" includes the best agent. The fact that the "diverse group" outperforms the best agent is trivial, as it includes additional agents which, by hypothesis, do not worsen the solution and improve it for some states.
To connect with the original statement, let us formulate the following version.
**Theorem 1.6** (Deterministic Hong-Page's Theorem).: _Given the hypotheses:_
* _"_Ability-diversity" assumptions. The assumptions given above, Assumption_ 1-10_._
* _"Counting" assumptions. We choose two groups. In the first,_ \(N_{1}\) _(large) clones such that_2__\(\Phi\subset\{\phi_{1}^{R},\ldots,\phi_{N_{1}}^{R}\}=:\Phi_{R}\)_, and, in the second, there are_ \(N_{1}\) _clones of the best performing agent selected from a group of_ \(N\)_,_ \(\Phi_{B}\)_._
_Then, the performance of \(\Phi_{R}\) is better than \(\Phi_{B}\) in the sense of (1)._
Proof.: It is trivial by the corollaries given above. Indeed, by the first corollary \(\mathbb{E}_{\nu}(V\circ\phi^{\Phi_{R}})=1\,.\) By the second corollary, \(\mathbb{E}_{\nu}(V\circ\phi^{\Phi_{B}})<1\) as \(\exists_{\geq 1}\,x\) such that \(V(\phi^{\Phi_{B}}(x))<1\).
So, what did Hong and Page prove in their article? Essentially, they demonstrated that assumption \(\mathcal{A}_{2}\) holds almost surely (after defining some probability measures). This is a not very difficult probabilistic claim that has nothing to do with either ability or diversity, which are contained in the assumptions \(\mathcal{A}_{1}\). It is a probabilistic fact that can be shown, regardless of whether the objects considered are diverse agents, incapable problem solvers, balls in a box, or mathematical functions in a Hilbert space. Nevertheless, this is the heart of the proof of their article published in the Proceedings of the National Academy of Sciences. But one might question, given that we have shown that Theorem 1.5 (a trivial restatement of the assumptions) encapsulates all the information regarding diversity and ability, what is the necessity of introducing clones? This is why Thompson says that the theorem "is trivial. It is stated in a way which obscures its meaning. It has no mathematical interest and little content." We can compare the previous version with the original statement:
**Theorem 1.7** ([2]).: _Given the assumption above, let \(\mu\) be a probability distribution over \(\Phi\) with full support. Then, with probability one, a sample path will have the following property: there exist positive integers \(N\) and \(N_{1}\), \(N>N_{1}\), such that the joint performance of the \(N_{1}\) independently drawn problem solvers exceeds the joint performance of the \(N_{1}\) individually best problem solvers among the group of \(N\) agents independently drawn from \(\Phi\) according to \(\mu\)._
As noted by Thompson [10], the theorem as originally stated was false because Assumption 3 was not included. See Section 2.1 for more details. Note that the "\(N_{1}\) individually best problem solvers" are just clones of the best problem solver (unique by assumption), not, for instance, the first and second best according to their expected value (which would perform better). This restriction is imposed by assumption. More precisely, we have a new assumption:
**Assumption 11**.: By hypothesis we have the following.
1. The first group is selected randomly from a pool of clones of the elements in \(\Phi\). The group size \(N_{1}\) can be adjusted as required.
2. Similarly, the second group (the \(N_{1}\) individually best problem solvers) is chosen independently from an identically distributed pool of \(N\) clones of the elements in \(\Phi\). This selection process follows the stipulations that: * the pool size \(N\) can be adjusted as required, * the selection allows for the repetition of the best problem solvers.
### Other results and simpler proof
**Proposition 1.8**.: _Assuming the conditions of Theorem 1.7 with \(N_{1}\) large enough, with probability one,_
1. _the randomly selected group of_ \(N_{1}\) _problem solvers will invariably converge on the correct solution, with unanimity and without any disagreement,_
2. _the "random group" always contains the best-performing agent._
_These facts explain why this group can always outperform the best problem solvers._
Proof.: This is straightforward from Corollary 1.3 and from the fact that, following the first part of Assumption 11, the first group includes a copy of \(\Phi\), \(\mu\)-almost surely. It is also Lemma 1 in [2]. There is unanimity since, for every state \(x\in X\), the group solution is \(x^{*}\), which everyone accepts as a solution, as \(\phi(x^{*})=x^{*}\). The second statement follows from the Strong Law of Large Numbers; see below for more details, Remark 1.9.
For instance, the following \(\Phi:=\{\phi_{1},\phi_{2},\phi_{3}\}\) such that
\begin{tabular}{c|c|c|c|c} & \(V(x)\) & \(\phi_{1}(x)\) & \(\phi_{2}(x)\) & \(\phi_{3}(x)\) \\ \hline \(a\) & \(1/4\) & \(b\) & \(a\) & \(b\) \\ \(b\) & \(1/2\) & \(b\) & \(c\) & \(b\) \\ \(c\) & \(3/4\) & \(d\) & \(c\) & \(c\) \\ \(d\) & \(1\) & \(d\) & \(d\) & \(d\) \\ \end{tabular} satisfies the hypotheses of the theorem, but, if the "random" group does not include the best performing agent\({}^{3}\), \(\phi_{1}\), then it cannot outperform \(\phi_{1}\).
Footnote 3: Assumptions can be made to exclude the best performing agent, while ensuring that there is another agent that performs as the best one does when needed. Consequently, it is no surprise that the theorem still holds. However, this approach is purely ad hoc.
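To make the deliberation dynamics explicit, here is a minimal Python sketch of this example (the agents are taken from the table above; the round-robin order in which agents speak is an assumption made for illustration, since the protocol only requires that deliberation stops at a common local optimum). It shows that the full group always reaches \(d\), while the group without the best agent \(\phi_{1}\) does not.

```python
# Agents from the table above; deliberation is modeled as an assumed round-robin
# relay that stops once the state is a local optimum for every agent in the group.
V = {"a": 0.25, "b": 0.5, "c": 0.75, "d": 1.0}
phi1 = {"a": "b", "b": "b", "c": "d", "d": "d"}  # the best agent
phi2 = {"a": "a", "b": "c", "c": "c", "d": "d"}
phi3 = {"a": "b", "b": "b", "c": "c", "d": "d"}

def group_fixed_point(group, x):
    """Apply agents in turn until no agent can move the current state."""
    while True:
        for phi in group:
            if phi[x] != x:
                x = phi[x]
                break
        else:  # no agent moved the state: common local optimum reached
            return x

for start in "abcd":
    print(start,
          "-> full group:", group_fixed_point([phi1, phi2, phi3], start),
          "| without phi1:", group_fixed_point([phi2, phi3], start))
# The full group ends at d from every start; without phi1 it gets stuck at c
# unless it starts at d, since only phi1 maps c to d.
```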
In fact, a simpler proof of the theorem can be constructed based on this simple fact. This approach also exposes the theorem's triviality given its underlying assumptions.
_Simpler proof of Theorem 1.7_.: First, by hypothesis (Assumptions 6 and 7), \(\exists\;x_{*}\in X,\;\phi^{*},\phi_{*}\in\Phi\) such that the best agent satisfies \(\phi^{*}(x_{*})\neq x^{*}\) and \(V\left(\phi_{*}\left(\phi^{*}(x_{*})\right)\right)>V\left(\phi^{*}(x_{*})\right)\). By hypothesis (Assumptions 9 and 10), \(V\circ\phi^{\left\{\phi^{*},\phi_{*}\right\}}\geq V\circ\phi^{*}\), where the equality is strict for at least one point. Given that \(\nu\) has full support, \(\mathds{E}_{\nu}\left(V\circ\phi^{\left\{\phi^{*},\phi_{*}\right\}}\right)> \mathds{E}_{\nu}\left(V\circ\phi^{*}\right).\)
Second, we introduce the probabilistic selection of clones. By the Strong Law of Large Numbers (SLLN), \(\mu\left(\omega\in\Omega\,:\,\bigcap_{\phi}\left(\lim_{N\to\infty}f^{N}(\phi)= \mu\left(\{\phi\}\right)\right)\right)=1\,,\) where \(f^{N}\left(\phi\right)\) represents the frequency of appearance of \(\phi\) when the size of the group of clones is \(N\). The intersection is finite. For this full-measure set, we define \(N_{\phi}=N_{\phi}(\omega)\) as the integer such that, if \(N\geq N_{\phi}\), then \(f^{N}\left(\phi\right)>\mu\left(\{\phi\}\right)/2\). Following Assumption 11, we take \(N_{1}\coloneqq\max\{N_{\phi^{*}},N_{\phi_{*}},\frac{2}{\mu\left(\{\phi^{*}\} \right)},\frac{2}{\mu\left(\{\phi_{*}\}\right)}\}\) for the first group. By these definitions, at least one copy each of \(\phi_{*}\) and \(\phi^{*}\) is included. For the second group, take \(N\geq\frac{2}{\mu\left(\{\phi^{*}\}\right)}N_{1}\), so that there are more than \(N_{1}\) copies of \(\phi^{*}\) in it. The proof then follows from the first part of this argument.
In other words, the first paragraph corresponds to the part of the theorem where diversity and ability are put into play, which essentially reduces to the following triviality: by assumption, there are two distinct agents - the best agent, and another agent - and a state \(x_{*}\) such that the best agent does not provide the optimal solution for this state. However, the other agent can improve upon the solution of the best agent for this state. This implies that the performance of a group consisting of the best agent and this additional agent surpasses the performance of the best agent alone, at least for some states. For other states, again by assumption, adding an agent does not worsen the situation, thus completing the deterministic clone-free part of the proof. Subsequently, we apply the strong law of large numbers to ensure that, under the setting of Assumption 11, the random group will always contain copies of these two agents, and the best performing agents are all copies of the unique best performing agent.
**Remark 1.9**.: Noting that if \(X\) is finite, then \(\Phi\) must also be finite, we can choose \(N_{1}\) such that almost surely every member appears in the random group. We just need to set:
\[N_{1}\coloneqq\max_{\phi\in\Phi}\{N_{\phi},\frac{2}{\mu\left(\{\phi\}\right)} \}\,.\]
In the original proof by Hong and Page, \(N_{1}\) is set so that every member needed to reach the optimum state \(x^{*}\) appears with probability one. Thus, \(N_{1}\) must be large, which virtually\({}^{4}\) guarantees that at least one copy of each member of \(\Phi\) is included in the "random group". In any case, this \(N_{1}\) as defined is sufficient to ensure the theorem holds true. \(\diamond\)
## 2. Removing technical hypotheses: Counterexamples
The theorem depends critically on certain assumptions that we are going to analyze now. In this section, I will refrain from critiquing certain empirical hypotheses, such as the assumption that agents share the same concept of problem-solving (Assumption 1), or that they can recognize the solution (\(\phi(x^{*})=x^{*}\)). Such critiques largely pertain to the plausibility inherent in every model, and one could always defend these by invoking ideal conditions, much as one might assume frictionless systems in physics. Although these critiques can be adequate, a different critique, following a "Moorean style", will be presented in the following section, where we will revisit some empirical hypotheses (not the ones mentioned above), slightly modifying them to enhance their plausibility, which may lead to contrary conclusions. However, in this section, I wish to focus on certain technical assumptions, often overlooked, that are essential for the theorem to hold. Without these assumptions, the theorem fails. These technical assumptions, by their nature, involve facets of the model (not the underlying reality) that are difficult to verify, hence making it challenging to argue for their plausibility. This raises the question of why we should adopt these hypotheses, rather than others, unless we are trying to reach a particular conclusion.
**Remark 2.1**.: The distinction between empirical and technical assumptions might seem somewhat arbitrary, but it nonetheless serves a useful purpose in our analysis. For instance, assume that we apply the theorem to a jury in a criminal trial. As we will see, the values of \(V\) (apart from \(V(x^{*})=1\), the right option) are important for the theorem to hold; if certain conditions are not met, then the thesis of the theorem fails. However, how could one verify that the hypotheses on \(V\) hold when \(V\) is not empirically observable? Similar remarks apply, even more strongly, to how to model clones and select them for (almost infinite) groups. \(\diamond\)
### \(V\) is an injection, Assumption 3
This was pointed out by Thompson and we reproduce it here with minor modifications. This assumption was not originally in [1, 2], making the theorem false.
Let \(X=\{a,b,c,d\}\). Define \(V(x)\) and three agents \(\phi_{1}\), \(\phi_{2}\) and \(\phi_{3}\) according to the table below:
\begin{tabular}{c|c|c|c|c} & \(V(x)\) & \(\phi_{1}(x)\) & \(\phi_{2}(x)\) & \(\phi_{3}(x)\) \\ \hline \(a\) & \(1/3\) & \(d\) & \(c\) & \(b\) \\ \(b\) & \(2/3\) & \(b\) & \(c\) & \(b\) \\ \(c\) & \(2/3\) & \(c\) & \(c\) & \(b\) \\ \(d\) & \(1\) & \(d\) & \(d\) & \(d\) \\ \end{tabular} The set of agents \(\Phi=\{\phi_{1},\phi_{2},\phi_{3}\}\) satisfies all the hypotheses of the theorem. The agents \(\phi_{1},\phi_{2},\phi_{3}\) have average values \(5/6,9/12,9/12\) respectively, so \(\phi_{1}\) is the "best" agent. Notice that all three agents acting together do not always return the point \(d\), where the maximum of \(V\) occurs. Indeed all three agents acting together work only as well as \(\phi_{1}\) acting alone. Hence in this case, no group of agents can outperform \(\phi_{1}\), or, equivalently, multiple copies of \(\phi_{1}\), hence no \(N\) and \(N_{1}\) exist which satisfy the theorem.
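A short reachability check (a Python sketch with the table above hard-coded; the order of deliberation is irrelevant for this computation) confirms the point: starting from \(b\) or \(c\), no sequence of agent applications ever reaches \(d\).

```python
# Thompson's counterexample: V(b) = V(c), so Assumption 3 (V injective) fails.
phi1 = {"a": "d", "b": "b", "c": "c", "d": "d"}
phi2 = {"a": "c", "b": "c", "c": "c", "d": "d"}
phi3 = {"a": "b", "b": "b", "c": "b", "d": "d"}
agents = [phi1, phi2, phi3]

def reachable(x):
    """All states obtainable from x by any sequence of agent applications."""
    seen, frontier = {x}, [x]
    while frontier:
        y = frontier.pop()
        for phi in agents:
            if phi[y] not in seen:
                seen.add(phi[y])
                frontier.append(phi[y])
    return sorted(seen)

print(reachable("b"))  # ['b', 'c'] -- the optimum d is unreachable
print(reachable("c"))  # ['b', 'c']
print(reachable("a"))  # ['a', 'b', 'c', 'd'] -- but phi1 alone already reaches d from a
```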
**Remark 2.2**.: In real-life applications, the value of \(V\) can be highly uncertain. Therefore, it is sensible to assume that, in the case of two states, \(x,x^{\prime}\in X\), where it is estimated that \(V(x)\approx V(x^{\prime})\), we set \(V(x)=V(x^{\prime})\) for practical purposes. This situation should not be disregarded as uncommon. Nevertheless, as argued in [3], cited by Landemore:
You don't fail to make it to the cashier in a grocery store when you are completely indifferent between buying one more apple or one more orange, nor do deliberators
in a meeting fail to decide on some course of action if two options have precisely equivalent value. Adding a simple tie-breaking rule to the theorem is entirely sufficient to deal with the mathematical hiccup and move forward with the fundamental scientific question at hand.
This argument completely misses the point. The problem is not that we are indifferent between the solutions \(b\) or \(c\), but rather that no one knows the solution if we start at \(b\) or \(c\) (no one moves from these states to \(d\); the group gets stuck at \(b\) or \(c\)). The fact that the value function is "indifferent" implies that the hypotheses (in particular, the "diversity" assumption) are not sufficient to guarantee that \(d\) is reached.
The thesis of the theorem still holds if we replace Assumption 7 with: \(\forall x\in X\backslash\{x^{*}\}\ \exists\ \phi\in\Phi\) such that \(V\left(\phi(x)\right)>V(x)\). However, this adjustment only serves to render the theorem more trivial and misapplies the term diversity. This condition simply implies that for every state, there exists an agent that can strictly improve that state. It is unsurprising that in a finite number of steps, these agents reach the maximum, which, by hypothesis, the best problem solver cannot always attain. Consequently, this adjustment does fix the theorem, but at the cost of making it more trivial and highlighting that what the theorem requires is not "diversity", but the existence of a more "able" problem solver who can improve upon areas where others fall short. \(\diamond\)
### Unique best agent, Assumption 8
To justify this assumption, Hong and Page write:
Let \(\nu\) be the uniform distribution. If the value function \(V\) is one to one, then the uniqueness assumption is satisfied.
This is a mathematical mistake. Let us consider \(X=\{a,b,c,d\}\). Define \(V(x)\) such that \(0<V(a)<V(b)<V(c)<1\), \(V(b)<\frac{1}{2}\left(V(a)+1\right)\), and \(N\) agents \(\phi_{1}\), \(\phi_{2}\) and \(\phi_{i}\), \(i=3,\ldots,N\), according to the table below:
\[\begin{array}{c|c|c|c|c}&V(x)&\phi_{1}(x)&\phi_{2}(x)&\phi_{i}(x)\\ \hline a&V(a)&a&c&\phi_{i}(a)\\ b&V(b)&b&c&\phi_{i}(b)\\ c&V(c)&d&c&\phi_{i}(c)\\ d&1&d&d&d\end{array}\]
The set of agents \(\Phi=\{\phi_{1},\phi_{2},\phi_{i}\}_{i=3}^{N}\) satisfies all the hypotheses of the theorem, and the agents are listed in order of their expected value. If we set
\[V(c)\coloneqq\frac{1}{3}\left(V(a)+V(b)+1\right)\,,\]
then \(\phi_{1},\phi_{2}\) have the same "expected ability" under the uniform measure. Furthermore, now the theorem is false. Indeed,
\[\phi_{1}\circ\phi_{2}(a)=d,\,\phi_{1}\circ\phi_{2}(b)=d,\,\phi_{1}(c)=d,\, \phi_{1}(d)=\phi_{2}(d)=d\,.\]
In this case, no group of agents can outperform \(\{\phi_{1},\phi_{2}\}\), so no \(N\) and \(N_{1}\) exist which satisfy the theorem.
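A hedged numerical sketch (the values \(V(a)=0.1\) and \(V(b)=0.4\) are hypothetical choices satisfying the constraints above) checks both claims: \(\phi_{1}\) and \(\phi_{2}\) are tied in expected ability under the uniform measure, and the pair \(\{\phi_{1},\phi_{2}\}\) already reaches \(d\) from every state, so no larger group can do strictly better.

```python
# Counterexample to the uniqueness of the best agent: two agents tie for best.
Va, Vb = 0.1, 0.4               # hypothetical values with 0 < Va < Vb < (Va + 1) / 2
Vc = (Va + Vb + 1) / 3          # this choice forces the tie in expected ability
V = {"a": Va, "b": Vb, "c": Vc, "d": 1.0}
phi1 = {"a": "a", "b": "b", "c": "d", "d": "d"}
phi2 = {"a": "c", "b": "c", "c": "c", "d": "d"}

def expected(phi):
    return sum(V[phi[x]] for x in V) / len(V)

print(expected(phi1), expected(phi2))   # both ~0.625: no unique best agent
print({x: phi1[phi2[x]] for x in V})    # phi1 after phi2 returns d from every state
```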
**Remark 2.3**.: Here, we have demonstrated an example involving two agents possessing identical "expected abilities". Of course, in real-world applications, there would likely be uncertainty or variability in the value of \(\mathds{E}_{\nu}\left(V\circ\phi\right)\); thus, it would be prudent to consider an interval rather than a single point. In such circumstances, the top-performing agents might comprise multiple individuals with high probability. However, as demonstrated, the theorem may not necessarily hold in these scenarios.
### Clones performance
As we saw, simply by the assumptions, one million Einsteins, Gausses or von Neumanns are the same as just one of them. Indeed, mathematically, by Assumption 10, \(\{\phi,\ldots,\phi\}\) is a well-defined set of problem solvers such that
\[\phi^{\{\phi,\ldots,\phi\}}\stackrel{{\text{Assumption 10}}}{{=}}\phi\circ\ldots\circ\phi \stackrel{{\text{Assumption 10}}}{{=}}\phi.\]
Again, these are just the assumptions they arbitrarily made. But this may not make much sense if we want to apply it to real-life scenarios. More realistic versions could be:
* _Improvement:_ \(V\circ\phi\circ\phi\geq V\circ\phi\) (strict inequality for some points). In other words, if a (competent) agent produced a solution after a certain amount of time, say one hour, it would provide a better answer if it had one million hours, or if a "clone" could pick up where it left off.
* _Work in parallel_\({}^{5}\): \(V\circ\phi^{\{\phi,\phi\}}\geq V\circ\phi\) (strict inequality for some point). In other words, one can imagine that a group of Einsteins would not work sequentially, always producing the same result, but would divide the work, resources, focus, etc., to produce a better answer once they have put all of their findings together. Footnote 5: As a technical note, now \(\{\phi,\phi\}\) should be considered a multiset (the multiplicity distinguishes multisets).
I am not certain about the most appropriate way to model clones, but the authors' approach does not seem plausible. However, this is necessary for the theorem to stand. Otherwise, as \(N_{1}\to\infty\), no group of agents could generally outperform \(\phi,\stackrel{{ N_{1}}}{{\dots}},\phi\); we cannot guarantee the existence of an \(N\) that would satisfy the theorem.
**Remark 2.4**.: Following Jason Brennan's 'magic wand' thought experiment, let's imagine we are confronted with an exceedingly difficult problem to solve, for instance, the Navier-Stokes Millennium Problem. Suppose we have a magic wand at our disposal that can create agents to solve the problem for us. Should we choose Terence Tao, or should we use the magic wand to create 100 Terence Taos working together to solve our problem? According to the assumptions of the Hong-Page Theorem, this magic wand would be useless. \(\diamond\)
**Remark 2.5**.: Regarding the issue of clones, the following is stated in [2]:
* [...] we present a simpler version of our result where X is assumed to be finite. This finite version makes the insight more straightforward, although it comes at the cost of trivializing some intricate assumptions and arguments. For example, the group of the best-performing agents is proven below to be comprised of identical agents. This is an artifact of the finite version. In the general version under reasonable conditions, the group of the best-performing agents can be shown to be similar, not necessarily the same.
However, this explanation is far from accurate. Clones also appear in the less realistic case where \(X\) is not finite. This occurs because we have to take copies from \(\Phi\) and, if \(\phi\) has already appeared, it can appear again. Moreover, the finite version of the model is neither sufficient nor necessary for proving that the group of the best-performing agents is comprised of identical agents.
In a scenario where \(X\) is finite, the best agents could be several different ones. This could be easily demonstrated by following my previous example from Section 2.2 or [1, Assumption 5']. In the version where \(X\) is not finite, according to Assumption 5 of their appendix, \(B(\phi^{*},\delta)\cap\Phi=\{\phi\in\Phi\mid d(\phi,\phi^{*})<\delta\}\) could contain only one agent, namely \(\phi^{*}\). It should also be noted that a finite \(X\) represents a more realistic setup. Typically, rendering things continuous simplifies the analysis, as it allows us to use standard calculus, for example, but this is not the case here. It is less realistic to assume that agents have answers to an infinite set of elements than to a finite set.
### Selection of clones
Similarly, the selection of clones appears to be arbitrary and seems tailored to reach the intended conclusion.
* The choice of two independent groups seems arbitrary. Why not fix \(N\) and, from the same group, select a random subgroup of size \(N_{1}\), as well as the best \(N_{1}\) problem solvers, and then compare? In such a scenario, the theorem might not hold. Indeed, we need \(N\gg N_{1}\) for the Strong Law of Large Numbers (SLLN) argument to apply, since \(\mu\left(\{\phi^{*}\}\right)\) can be very small. However, a random group \(\Phi_{N_{1}}\) of \(N_{1}\) agents might not include all the problem solvers of \(\Phi\), thus we cannot guarantee a probability of one, as the theorem does. That is, for \(N>N_{1}\), there are settings such that \[\mathbb{P}\left(\Phi\subset\Phi_{N_{1}}\right)<1\,.\]
* Permitting repetition is also arbitrary. We could, for instance, select the best problem solvers without allowing repetitions. Recall from Section 2.3 that adding a repeated clone is equivalent to adding nothing. This would prevent the paradoxical result that, by mere hypothesis, choosing the best problem solvers from a group of size \(N\) is more beneficial when the group size is relatively small, i.e., that for choosing the best it is preferable to have fewer options available. However, if we prohibit repetitions, then the theorem does not hold, as the group of best problem solvers will include those of the "random" group (not counting repeated clones), so no \(N\) and \(N_{1}\) exist which satisfy the theorem.
We should note the general approach adopted by Hong and Page. They introduce randomness into their model by employing clones. Subsequently, they invoke the Strong Law of Large Numbers to ensure that the frequency of appearance converges to the original probability \(\mu\), effectively eliminating the randomness that was introduced and obscuring the results. In the next section, we will remove clones.
## 3. New Hong-Page style theorem: Ability trumps diversity
We are going to state and prove a new version of the Hong-Page theorems whose hypotheses are of the same kind and at least as plausible as those of the Hong-Page theorem (or even more plausible, as we will see: for instance, no clones are needed and disagreement is possible). Nevertheless, we will reach the opposite conclusion, "ability trumps diversity". I am not claiming that this theorem has any social content; it simply reflects that it is the assumptions that are doing all the work. The moral would be that if we create two groups from the group in the original theorem - one in which we make the minimal reduction in ability while ensuring full diversity, and another in which we considerably reduce diversity while ensuring ability - the less diverse group would systematically outperform the fully diverse group. In other words, ability trumps diversity.
### The new assumptions
Among a set of agents \(\Phi\), we select two finite groups with different properties. We are going to modify some assumptions, but the others remain the same. First, let us introduce the possibility of disagreement, following Assumption 9, as:
\[\phi_{i_{j+1}^{x}}\left(\phi_{i_{j}^{x}}(x_{j-1})\right)=x_{j-1}\,,\,\,\,\text {with}\,\,\,\phi_{i_{j}^{x}}(x_{j-1})\neq x_{j-1}\text{ and }i_{j}^{x}\neq i_{j+1}^{x}\,.\]
A disagreement is a stopping point. In other words, if there is a cycle such that one agent proposes a new solution and another reverses back that solution, there is a disagreement and that initial solution is given as the group solution. This is a simple model where disagreement is possible.
**Remark 3.1**.: Note that in the original formulation of the Hong-Page Theorem, for any group of any size, even if they might not be able to reach the correct solution, there will be no disagreement in any case, as per Assumption 9. This always leads to unanimity, which is highly unrealistic.
Let also \(\mu_{x}\) be a probability measure such that, if \(x\) is the previous solution, \(\mu_{x}\left(\{i\}\right)\) represents the probability that \(\phi_{i}(x)\) is the next solution in the deliberation chain, see Assumption 9. This measure has full support: no one is silenced. The indices are chosen independently. Once \(x_{0}\) is fixed, this defines a probability measure \(\mathbb{P}\) on the possible paths. That is,
\[\mathbb{P}\left(x_{k}=x^{\prime}\mid x_{k-1}=x\right)=\mu_{x}\left(\left\{i \ \mid\ \phi_{i}\left(x\right)=x^{\prime}\right\}\right)>0\,.\]
#### 3.1.1. The ability group
Let us denote a group by \(\Phi^{\mathrm{A}}=\left\{\phi_{\alpha}\right\}_{\alpha\in A}\) which is selected such that:
* _Ability:_\(V\circ\phi_{\alpha}\geq V\). In other words, this group is chosen to ensure ability in the sense that each agent does not decrease the value of the initial state given. This is Assumption 4, but now is imposed by the selection of the group.
* _Common knowledge:_\(\exists X_{\mathrm{CK}}\subset X\) such that: _Non-diversity set_: \(\forall\alpha,\alpha^{\prime}\in A\) we have \(\phi_{\alpha}|_{X_{\mathrm{CK}}}\equiv\phi_{\alpha^{\prime}}|_{X_{\mathrm{CK}}}\). _Knowledge:_\(\phi_{\alpha}(x_{c})=x^{*}\ \forall x_{c}\in X_{\mathrm{CK}}\ \forall\ \alpha\in A\). In other words, this group selection comes with a selection bias. The agents have a common knowledge that makes them similar; for the set \(X_{\mathrm{CK}}\), they all give the same solution. This is an extension of the second part of Assumption 4; agents are not only able to recognize that \(x^{*}\) is the solution, but they can do the same for other states \(x\in X_{\mathrm{CK}}\). Note that \(x^{*}\in X_{\mathrm{CK}}\).
* _Smaller diversity set_: \(\forall\ x\in X\backslash X_{\mathrm{CK}}\), Assumption 7 holds. In other words, for this group, the original assumption of Hong and Page only holds on the "smaller" set \(X\backslash X_{\mathrm{CK}}\).
* More importantly, we diminish diversity in a second way. For all \(x\) in \(X\backslash X_{\mathrm{CK}}\), there exists exactly one agent, \(\phi^{x}\), who provides a distinct answer, and the set of unique answers could be equidistributed, meaning it is not just one agent always giving the different answer. Formally, \(|\{x\mid\phi^{x}=\phi\}|\leq\frac{|X|}{|\Phi^{\mathrm{A}}|}+1\) for all \(\phi\in\Phi^{\mathrm{A}}\). Therefore, if the ratio is small enough, two agents are quite similar, signifying a lack of diversity, i.e., following [1, Appendix], their distance is relatively small.
**Remark 3.2**.: For the theorem to function, we don't require a large \(X_{\mathrm{CK}}\), but having it large makes the agents less diverse. It could be just \(x^{*}\) as in the original theorem. \(\diamond\)
#### 3.1.2. The diversity group
Let us denote a group by \(\Phi^{\mathrm{D}}=\left\{\phi_{j}\right\}_{j\in\mathcal{J}}\) which is selected such that the maximum diversity is guaranteed. More precisely, there is a unique \(x^{0}\in X\) such that:
* _Full-diversity with ability:_ \(\forall\ x\in X\backslash\{x^{0},x^{*}\}\) there is a set of agents \(\left\{\phi_{j_{k}^{x}}\right\}_{k=1}^{n_{x}}\subset\Phi^{\mathrm{D}}\) such that \(\phi_{j_{k}^{x}}\left(x\right)\neq x\) and such that all the states closer to the solution \(x^{*}\) (that is, all the states that improve on \(x\)), and only those, are the local optimum for some agent.
* _Minimal ability loss_: there is only one agent \(\phi_{j_{0}}\in\Phi^{\mathrm{D}}\) and only one state \(x^{0}\) such that \(V\left(\phi_{j_{0}}(x^{0})\right)<V\left(x^{0}\right)\). Note that this is the _minimal ability that can be lost_.
### The theorem
**Theorem 3.3** (Ability Trumps Diversity).: _Let \(\Phi^{\mathrm{A}}\), \(\Phi^{\mathrm{D}}\) as above with the given assumptions. Then, the ability group outperforms the diversity group._
Proof.: To prove this theorem, we need to compare the performances of the two groups. First, we consider the ability group \(\Phi^{\mathrm{A}}\). Any agent from \(\Phi^{\mathrm{A}}\) does not decrease the value of the given state. Moreover, for any state in the non-diversity set \(X_{\mathrm{CK}}\), all agents in \(\Phi^{\mathrm{A}}\) will return the optimal solution \(x^{*}\). Thus, following the measure \(\mu_{x^{\prime}}\), for any \(x\in X\),
\[V(x)\leq V\left(\phi_{j_{1}^{x}}(x)\right)\leq\ldots\leq V\left(\phi_{j_{n}^{ x}}(x_{n-1})\right)=1\,.\]
This is because \(p_{x^{\prime}}\coloneqq\mu_{x^{\prime}}^{A}\left(\{\alpha\in A\ \mid\ \phi_{\alpha}(x^{\prime})\neq x^{\prime}\}\right)>0\). This holds true even in the worst-case scenario where all agents, except one, are stuck at that point. Thus, being stuck forever has probability zero, by the subadditive property:
\[\prod_{i=1}^{\infty}(1-p_{x^{\prime}})=0\ \Rightarrow\ \mathbb{P}\left(\exists\ n_{0},x^{\prime}\ :\ \phi_{i_{n}^{x}}(x^{\prime})=x^{\prime}\ \forall\ n\geq n_{0}\right)\leq\sum_{n_{0}}\sum_{x^{\prime\prime}}\prod_{i=n_{0}}^{\infty}(1-p_{x^{\prime\prime}})=0\,,\]
where the sum in \(x^{\prime\prime}\) is finite. Thus, with probability one, in a finite number of steps we have strict inequalities reaching \(x^{*}\), which is returned as the solution. Hence, for all \(x\in X\), almost every path starting at \(x\) leads to \(x^{*}\). Therefore,
\[\mathbb{E}_{\mu^{\mathrm{A}},\nu}(V\circ\Phi^{\mathrm{A}})\coloneqq\sum_{x\in X}\nu(x)\,\mathbb{E}_{\mu^{\mathrm{A}}}\left(V\circ\Phi^{\mathrm{A}}(x)\right)=1\,,\]
where \(\Phi^{\mathrm{A}}(x)\coloneqq\phi^{\Phi^{\mathrm{A}}}(x)\) denotes the group solution starting from \(x\).
Now, consider the diversity group \(\Phi^{\mathrm{D}}\). This group is selected to maximize diversity and only allows a minimal ability loss. However, there exists exactly one agent \(\phi_{j_{0}}\) and one state \(x^{0}\) such that \(V\left(\phi_{j_{0}}(x^{0})\right)<V\left(x^{0}\right)\). Let \(x^{(-1)}\coloneqq\phi_{j_{0}}\left(x^{0}\right)\) and let \(x\) be such that \(V(x)\leq V(x^{\prime})\), where \(x^{\prime}\) denotes \(x^{(-1)}\) or \(x^{0}\). As above, by finiteness,
\[\mathbb{P}\left(\exists\ i_{k}^{x}\ \mid\ \phi_{i_{k}^{x}}(x_{k-1})=x^{\prime} \right)>0\,.\]
We have two possibilities:
* If \(x_{k-1}=x^{(-1)}\) and \(V(x)\leq V(x^{(-1)})\), then, a "disagreement cycle" can be completed, \(x_{k-1}=x^{(-1)}\to x^{0}\to x^{(-1)}\), returning \(x^{(-1)}\). This happens with probability \[\sum_{k=1}^{\infty}\mathbb{P}\left(x_{k-1}=x^{(-1)}\right)\mathbb{P}\left(x_{ k}=x^{0}\mid x_{k-1}=x^{(-1)}\right)\mu_{x^{0}}^{\mathrm{D}}\left(j_{0} \right)>0,\] where \(\mathbb{P}\left(x_{k}=x^{0}\mid x_{k-1}=x^{(-1)}\right)\ =\ \mu_{x^{(-1)}}^{ \mathrm{D}}\left(\{j\in J\ \mid\ \phi_{j}\left(x^{(-1)}\right)=x^{0}\}\right)>0\), where we have used the full-diversity assumption.
* Also, if \(x_{k-1}=x^{0}\), a disagreement cycle \(x^{0}\to x^{(-1)}\to x^{0}\) can again be completed, returning \(x^{0}\). Similarly, this happens with probability \[\sum_{k=1}^{\infty}\mathbb{P}\left(x_{k-1}=x^{0}\right)\mu_{x^{0}}^{\mathrm{D}}\left(j_{0}\right)\mathbb{P}\left(x_{k+1}=x^{0}\mid x_{k}=x^{(-1)}\right)>0\,.\] Since \(V(x^{(-1)})<V(x^{0})<1\), it follows that \[\mathbb{E}_{\mu^{\mathrm{D}},\nu}(V\circ\Phi^{\mathrm{D}})\coloneqq\sum_{x\in X}\nu(x)\,\mathbb{E}_{\mu^{\mathrm{D}}}\left(V\circ\Phi^{\mathrm{D}}(x)\right)<1\,.\]
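To make the mechanism of this proof concrete, the following Monte Carlo sketch runs both groups on a four-state toy instance. Everything in it (the states and values, the particular agents, the uniform choice of the next speaker, and the way the disagreement rule is coded) is an assumption made for illustration only; it is not part of the theorem. The ability group always deliberates to the optimum, while the diversity group stops at a disagreement cycle with positive probability, so its expected value stays strictly below 1.

```python
import random

V = {0: 0.25, 1: 0.5, 2: 0.75, 3: 1.0}           # state 3 is the optimum x*
# Assumed ability group: no agent ever decreases V, and all agents send the
# common-knowledge set {2, 3} to the optimum 3.
ability = [{0: 1, 1: 1, 2: 3, 3: 3},
           {0: 0, 1: 2, 2: 3, 3: 3}]
# Assumed diversity group: every improvement of states 0 and 1 is some agent's
# answer, plus one agent j_0 that loses value at the single state x^0 = 2.
diversity = [{0: 1, 1: 2, 2: 3, 3: 3},
             {0: 2, 1: 3, 2: 2, 3: 3},
             {0: 3, 1: 1, 2: 2, 3: 3},
             {0: 0, 1: 1, 2: 1, 3: 3}]           # j_0: maps 2 to x^(-1) = 1

def deliberate(group, x, rng):
    """Random deliberation; stop at a disagreement cycle or a common fixed point."""
    last_state, last_agent = None, None
    while True:
        movers = [i for i, phi in enumerate(group) if phi[x] != x]
        if not movers:
            return x                              # common local optimum
        i = rng.choice(movers)                    # assumed full-support speaker choice
        y = group[i][x]
        if y == last_state and i != last_agent:   # another agent reverses the proposal
            return last_state                     # disagreement: keep the initial solution
        last_state, last_agent, x = x, i, y

rng = random.Random(0)
for name, group in [("ability", ability), ("diversity", diversity)]:
    runs = [V[deliberate(group, rng.choice(list(V)), rng)] for _ in range(20000)]
    print(name, "group, estimated expected value:", round(sum(runs) / len(runs), 3))
# Typical output: the ability group scores 1.0, the diversity group strictly less.
```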
## 4. The Diversity Prediction Theorem and the Crowds Beat Averages Law
### The results
They also present another theorem that will be useful later. First, some definitions. Given a set of individuals labeled as \(i=1,\ldots,n\), we associate to each of them a signal or prediction of some magnitude, which has \(\theta\) as its true value. The squared error of an individual's signal equals the square of the difference between the signal and the true outcome:
\[\mathrm{SE}(s_{i})=(s_{i}-\theta)^{2}\,.\]
The average squared error is given by
\[\text{MSE}(\underline{s})=\frac{1}{n}\sum_{i=1}^{n}(s_{i}-\theta)^{2}\,,\]
with \(\underline{s}\coloneqq(s_{1},s_{2},\dots,s_{n})\). The collective prediction is
\[c=c(\underline{s})=\frac{1}{n}\sum_{i=1}^{n}s_{i}\,.\]
Predictive diversity of the collective is defined as:
\[\hat{\sigma}(\underline{s})=\frac{1}{n}\sum_{i=1}^{n}(s_{i}-c)^{2}\,.\]
This is simply a (biased) estimate of the variance. Two trivial theorems can be deduced. The first is a particular version of the Pythagoras Theorem:
**Theorem 4.1** (Diversity Prediction Theorem).: _The squared error of the collective prediction equals the average squared error minus the predictive diversity:_
\[SE\left(c\left(\underline{s}\right)\right)=MSE(\underline{s})-\hat{\sigma}( \underline{s})\,.\]
Proof.: This is quite standard, but let us give a proof using the (generalized) Pythagoras Theorem. In \(\mathbb{R}^{n}\) we can define the standard Euclidean or \(l^{2}\)-norm. If \(\underline{c}=(c,\dots,c)\) and analogously for \(\underline{\theta}\), then \(\langle\underline{s}-\underline{c}\,,\underline{\theta}-\underline{c}\rangle_ {l^{2}}=0\) so the Pythagoras Theorem gives
\[\|\underline{s}-\underline{\theta}\|_{l^{2}}^{2}=\|\underline{\theta}- \underline{c}\|_{l^{2}}^{2}+\|\underline{s}-\underline{c}\|_{l^{2}}^{2}\,. \tag{2}\] Dividing (2) by \(n\) gives \(\text{MSE}(\underline{s})=SE\left(c\left(\underline{s}\right)\right)+\hat{\sigma}(\underline{s})\), which is the claim.
**Corollary 4.2** (Crowd Beats Averages Law).: _The squared error of the collective's prediction is less than or equal to the averaged squared error of the individuals that make up the crowd._
\[SE\left(c\left(\underline{s}\right)\right)\leq MSE(\underline{s})\,.\]
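A small numerical sketch (with hypothetical signals) verifies the identity and already hints at the warning of the next subsection: making the panel of signals more "diverse" can increase, rather than decrease, the collective error, because the average error moves as well.

```python
def stats(signals, theta):
    n = len(signals)
    c = sum(signals) / n                               # collective prediction
    se = (c - theta) ** 2                              # squared error of the collective
    mse = sum((s - theta) ** 2 for s in signals) / n   # average squared error
    div = sum((s - c) ** 2 for s in signals) / n       # predictive diversity
    assert abs(se - (mse - div)) < 1e-12               # Diversity Prediction Theorem
    return se, mse, div

theta = 0.5
print(stats([0.4, 0.6], theta))   # SE = 0.0,  MSE = 0.01, diversity = 0.01
print(stats([0.5, 1.5], theta))   # SE = 0.25, MSE = 0.5,  diversity = 0.25
# Diversity increased 25-fold, yet the collective squared error also increased.
```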
### The asymmetric role of "ability" and "diversity"
Before we proceed, let's note two simple mathematical observations:
**Error 1**.: \(MSE\) **and \(\hat{\sigma}\) cannot be treated as independent** as both depend on \(\underline{s}\). That is, altering one will generally change the other (it is not fixed), with the effect on the prediction error being, in principle, undetermined.
**Error 2**.: Therefore, it would be a significant mathematical error to consider that, for the prediction error \(SE\) to be small, it is enough to make "diversity" \(\hat{\sigma}\) large.
These observations are mathematically trivial. Also, they can be graphically demonstrated when we consider the case of \(n=2\), which brings us back to the standard Pythagoras theorem, see Figure 1. Knowing either \(\text{MSE}(\underline{s})\) or \(\hat{\sigma}\) alone is not sufficient to determine the value of the prediction error. In fact, according to the Crowd Beats Averages Law, we can see:
\[\text{SE}\left(\text{MSE},\hat{\sigma}\right)\in\left[0,\text{SE}^{\text{max} }(\underline{s})\right]. \tag{3}\]
This bound is sharp, with \(\text{SE}^{\text{max}}:=\text{MSE}\). Since SE is not solely determined by either "ability" or "diversity", the effect of these variables can be studied through the maximum prediction error, i.e., \(\text{SE}^{\text{max}}\). More precisely:
**Proposition 4.3**.: _Let \(\text{SE}^{\text{max}}\) represent the maximum prediction error. Then,_
* _If_ \(\Delta MSE<0\)_, then_ \(\Delta SE^{\max}<0\)_. In other words, if "ability" increases, the maximum prediction error decreases. Particularly, if the increase in ability is large enough, the prediction error will decrease._
* _If_ \(\Delta\hat{\sigma}>0\)_, then_ \(\Delta SE^{\max}\geq 0\)_. This implies that if "diversity" increases, the maximum prediction error also increases. In particular, an increase in diversity alone does not guarantee a reduction in the prediction error. Furthermore, if the increase in diversity is substantial enough, the maximum prediction error will also increase._
Proof.: This is a trivial consequence of \(\text{MSE}=\text{SE}^{\max}\) and the twin inequality of the Crowd Beats Averages Law: \(\hat{\sigma}\leq\text{SE}^{\max}\).
Using the Crowd Beats Averages Law (and other trivial results), we arrive at a seemingly contradictory result: increasing "ability" eventually reduces the prediction error, but increasing diversity ultimately increases the maximum prediction error. Consequently, the Diversity Prediction Theorem and the Crowd Beats Averages Law provide limited insight into how diversity impacts the prediction error in a general setting without controlling for ability.
Figure 1. Increasing diversity does not always improve predictions and can sometimes significantly worsen them. The prediction error is represented by the red line, with the red dot indicating the prediction. The black line corresponds to MSE, and the brown line to \(\hat{\sigma}\). The true value, \(\theta\), is \(\frac{1}{2}\) (represented by the green dot). As diversity increases more than threefold (from 0.04 to 0.14), the squared error becomes more than forty times larger.
## 5. Hong and Page's misuse of mathematics: an obscured trivial theorem
### Misusing the mathematics to obscure a trivial fact
The Hong-Page theorem is, in essence, a misuse of mathematics. It employs standard probability techniques, such as the Borel-Cantelli lemma (unnecessarily, as my simpler proof demonstrates), to obfuscate its hypotheses, making it inaccessible to individuals outside the field. That is, mathematics is used to complicate a simple fact, not to simplify complex relations.
Indeed, as we saw in Theorem 1.5 the theorem's conclusion--that a group performs better than the best single individual--is inevitable by construction, by the way the theorem's premises are structured. It posits two fundamental hypotheses: first, that the "best" individual agent, \(\phi^{*}\), cannot always solve the problem optimally, and second, that a diverse group \(\Phi\) of agents can always find an optimal solution. When these assumptions are in play, the conclusion of the theorem is logically guaranteed.
But, to hide this simple fact, the existence and selection of clones is introduced in order to invoke the probabilistic apparatus. This is done in the second set of assumptions of Theorem 1.6. They define a probability space and prove that, if we can select clones of \(\Phi\) indefinitely, then with probability one the first group will contain at least one copy of each element of \(\Phi\), while the second group is chosen so that it is made only of copies of \(\phi^{*}\). Using the previous paragraph, the conclusion follows directly. This constitutes the heart of their article's proof published in the Proceedings of the National Academy of Sciences. But, since we've shown that Theorem 1.5--a simple restatement of the assumptions--encapsulates all information regarding diversity and ability (the probabilistic part could be applied to anything, like colored stones in a box), one may question the necessity of introducing clones in the first place. Thus, it appears that the theorem's complexity may stem more from an obfuscation of its simple underpinnings than from a deep, mathematical truth about diversity and ability.
Thus, while the Hong-Page theorem uses mathematical techniques, its conclusion is more a trivial product of its constructed premises than a deep, unexpected and universal truth revealed through rigorous mathematical exploration.
### Misusing the theorem to answer questions it does not
In [2], they say:
These results still leave open an important question: Can a functionally diverse group whose members have less ability outperform a group of people with high ability who may themselves be diverse? The main result of our paper addresses exactly this question.
This is false. They insist:
To make a more informed decision, the organization administers a test to 1,000 applicants that is designed to reflect their individual abilities in solving such a problem. Suppose the applicants receive scores ranging from 60% to 90%, so that they are all individually capable. Should the organization hire (i) the person with the highest score, (ii) 20 people with the next 20 highest scores, or (iii) 20 people randomly selected from the applicant pool? Ignoring possible problems of communication within a group, the existing literature would suggest that ii is better than i, because more people will search a larger space, but says little about ii vs. iii. The intuition that agents with the highest scores are smarter suggests that the organization should hire ii, the individually best- performing agents. The intuition that the randomly selected agents will be functionally diverse suggests that the organization should hire iii, the randomly selected ones. **In this paper, we provide conditions under which iii is better than ii. (emphasis added)**
This is false. By Proposition 1.8, the groups being compared consist of clones that include, at least, all agents necessary to always reach the correct solutions _versus_ clones of the best agent, which, by assumption, is the same as the best agent alone. As \(N_{1}\) is large enough, the best agent will be included in the first group.
Expressed differently, if we consider only one copy for each agent (as more are, by assumption, see Section 2.3, redundant), the groups being compared are \(\Phi\) versus \(\phi^{*}\). Note that \(\phi^{*}\subset\Phi\). No random selection is involved, as discussed in Section 2.4. Therefore, a more appropriate comparison would be:
1. the person with the highest score,
2. 20 people with the next 20 highest scores,
3. 20 people randomly selected from the applicant pool,
4. the 1000 applicants (or however many are needed to always reach the solution) working together perfectly.
The Hong-Page paper deals with i) versus iv), a triviality, **not**, as they explicitly claim, ii) versus iii).
During a conference at the European Central Bank (ECB), Page stated:
1. Create a group of the 20 best agents - the best individuals - and I compare them to a random group of 20 agents [...] it turns out though if you do the math on this, the diverse group almost always outperforms the other group if you use reasonable-sized groups, like groups of size 10 or 20 [...] the paper and model I just showed you where diverse groups do better than random groups was written by myself and Lu Hong [...]
He used Figure 2(a) to illustrate this point. However, as we have mentioned before, this representation is not directly related to the theorem. In the "Alpha Group", the best agent, 138, should be the only member, and this agent should also be included in the Diverse Group, along with all the other agents, see Figure 2(b). Furthermore, groups of size 10 or 20 may not be large enough for the SLLN to hold, especially if \(\mu\left(\left\{\phi\right\}\right)\) is small enough for some agents.
In the same conference at the ECB, he further states:
1. As the problem becomes complex, the best team doesn't consist of the best individuals. Why? Because the best individuals tend to be similar and what you really want on hard problems is diversity.
Figure 2. Comparison of the slides. In green, the best agent.
However, this statement seems to confuse, either deliberately or unintentionally, an assumption with a factual result. The claim that "the best individuals are similar" (actually, clones of the same agent) is not a derived conclusion, but a trivial consequence of the presuppositions, see Section 2.4. The proof of this claim cannot be found in [2]; it's established by assumption, Section 2.4. Furthermore, as explored in Section 3, even when conducting a fair comparison between ability and diversity - and even when the ability group is characterized by relatively homogeneous problem solvers - ability can still outperform diversity. Therefore, this statement is completely misguided.
### Misusing the prestige of mathematics
When Page claims:
This theorem is no mere metaphor or cute empirical anecdote that may or may not be true ten years from now. It is a mathematical truth.
It is as accurate as asserting that "if \(p\wedge q\), then \(p\wedge q\)" is no mere metaphor or cute empirical anecdote that may or may not be true ten years from now; it is a mathematical truth. To be more precise, the logical structure of the theorem is as follows:
1. _Hypothesis 1_, \(H_{1}\): The best agent cannot always solve the problem.
2. _Hypothesis 2_, \(H_{2}\): The "diverse group" can always solve the problem.
3. _Conclusion_, C: The "diverse group" outperforms the unique best agent at problem-solving, signifying that "diversity trumps ability".
This argument is logically valid--a tautology, so the proposition \(H_{1}\wedge H_{2}\to C\) is certainly true. However, the argument's soundness might be questionable as the hypotheses might not be factual. Thus, it doesn't provide any certainty regarding the conclusion, C, i.e., whether diversity indeed outperforms ability. Here, mathematics seems to be used as a tool of persuasion, asserting that it's not ideological, but pure math. However, as we have shown, they are not proving what they claim to be proving.
### A basic mathematical error in advocating for diversity
Scott Page has argued that large diversity implies small prediction error. However, this conclusion, while favorable to the hypothesis that diversity reduces prediction error, constitutes a significant mathematical mistake. Indeed, in a lecture (University of Michigan), Page states:
And you might also ask, where does the madness of crowds originate? How could it be that a crowd could get something completely wrong? Well, that's not difficult to understand either, because crowd error equals average error multiplied by diversity. If I want this to be large, if I want large collective error, then I need large average error, meaning that I need people to be getting things wrong, on average. **Additionally, I need diversity to be small**. So, **the madness of crowds comes from like-minded individuals** who are all incorrect, and once again, the equation provides us with this result. _(emphasis added)_
This mathematical misunderstanding involves the basic arithmetic error that we mentioned in Error 2. From the "Diversity Prediction Theorem" (with the \(\underline{s}\) arguments omitted for simplicity),
\[\text{SE}=\text{MSE}-\hat{\sigma}\,,\]
we **cannot** deduce that a large SE implies a small \(\hat{\sigma}\). Rather, it implies that MSE must be much larger than \(\hat{\sigma}\), where \(\hat{\sigma}\) could be as large as desired. See, for instance, Figure 1(b) for an illustration, where the prediction error is large, but diversity is larger (so it cannot be "small").
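As a concrete (and purely hypothetical) numerical illustration, take \(\theta=1/2\) and two signals \(s_{1}=0\), \(s_{2}=3\), so that \(c=3/2\). Then
\[\text{SE}=1\,,\qquad\text{MSE}=\frac{0.25+6.25}{2}=3.25\,,\qquad\hat{\sigma}=2.25\,,\]
so the crowd is badly wrong even though diversity is anything but small; the identity \(\text{SE}=\text{MSE}-\hat{\sigma}\) holds simply because the average error is larger still.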
## 6. Landemore's misuse of mathematics: an invalid and unsound argument for her political proposal
### The argument
The argument, in a nutshell, is the following [6]:
Democracy is here modeled as a collective decision-procedure involving the combination of two mechanisms: inclusive and egalitarian deliberation and simple majority rule. The claim is that democracy thus defined is more likely to yield better solutions and predictions on political questions than less inclusive and less egalitarian decision-rules because it structurally maximizes the cognitive diversity brought to bear on collective problems. Cognitive diversity--here defined as the fact that people see problems in the world and make predictions based on different models of the way the world works or should be interpreted --is a group property that has been shown to be a crucial factor of group performance in various contexts and indeed more important to the problem-solving abilities of a group than individual competence of the members itself (Page 2007). I argue that under the conditions of uncertainty that characterize politics (the fact that the bundle of issues to be faced by any polity over the medium to long term cannot be predicted ahead of time), political decision-making characterized by maximal inclusiveness and equality can be expected to be correlated with greater cognitive diversity, which, in turn, is correlated with better problem-solving and prediction. A central assumption of the argument is that politics is characterized by uncertainty. This uncertainty (which is an assumption about the world, not necessarily the subjective epistemic stage of the deliberators) is what renders all-inclusiveness on an equal basis epistemically attractive as a model for collective decision-making. Given this uncertainty egalitarian inclusiveness is adaptive or "ecologically rational" (Landemore 2014).
And the conclusion is:
The argument presented here is based on a simple model of democracy and is entirely deductive. It essentially credits the epistemic superiority of democracy to inclusive deliberation, that is, deliberation involving all the members of the community (whether directly or, where unfeasible, through their democratic representatives) [...] The advantage of my deductive epistemic argument, ultimately, is that even if it fails to explain the way actual democracies work, it can serve as a useful normative benchmark to diagnose the way in which existing democracies epistemically dysfunction and imagine alternative institutional arrangements. One implication of the epistemic argument is indeed that in order to obtain the theoretically promised epistemic benefits of democracy, we would need to make the decision-procedures used in actual democracies a lot more inclusive and a lot more egalitarian than they are at present. Institutional reforms that the argument points toward include the replacement of elected representatives with randomly selected ones and a greater use of simple majoritarian decision-making.
While the argument is not explicitly stated\({}^{6}\), a crucial hypothesis needed for it takes one of the following forms:
* _Hypothesis_, \(H\): Cognitive diversity, defined as individuals seeing problems and making predictions based on different models of the world, is a group property that improves group performance in various contexts.
* _Hypothesis_', \(H^{\prime}\): Greater cognitive diversity within a group correlates with better problem-solving and prediction abilities.
To justify this, Landemore relies on the results of Hong and Page as described above [5]:
To make that claim, I essentially rely on Hong and Page's formal results about the centrality of cognitive diversity to the emergent property of collective intelligence.
We aim to demonstrate that this hypothesis is unjustified, which subsequently renders the argument both logically unsound and inapplicable to real-world scenarios. Additionally, we will highlight instances where she incorrectly deduces propositions from these mathematical theorems, leading to a logically invalid argument.
**Remark 6.1**.: Some of the critiques presented in the previous section also apply to Landemore. When she informally discusses the theorem, she falls into the same misrepresentation as Hong and Page, as discussed in Section 5.2. For instance, she stated in a public debate:
There are multiple Hong-Page theorems. The one that I use mostly is the 'Diversity Trumps Ability' theorem. It's basically a formalization of the idea that under certain conditions, you're better off solving problems with a group of average people who think differently than with a group of experts or very smart people.
As we have previously illustrated, this assertion is entirely false, see Section 5.2 and below for more details. \(\diamond\)
### Basic misunderstanding of the mathematical theorems
Landemore says (about Theorem 1.7):
Let me pause here to emphasize what a remarkably **counterintuitive**, indeed amazing, result this is. Where the conditions apply, you are better off with a **random group of people who think differently than with a bunch of Einsteins**! Who would have thought? In my view, this result should truly change our perspective on what makes groups smart to begin with; I believe it has huge implications for the way we should think about political bodies making decisions on our behalf. _(emphasis added)_
Also [5],
That theorem was sufficiently counterintuitive that they provided a computational example to provide intuition.
This misunderstanding is significant. She is confusing the conclusions of the theorem with its hypotheses. The fact that a 'bunch of Einsteins' is equivalent to only one Einstein (who, by hypothesis, cannot always solve the problem) is not a conclusion; it's an assumption that she fails to mention. More precisely, the hypotheses stipulate that the "random" or "diverse" group always reaches the global solution, see Corollary 1.3. Moreover, by assumption, a group of Einsteins is considered equivalent to one Einstein (Section 2.4). Yet again by assumption, it's not always guaranteed that this group or an individual Einstein reaches the global solution (Assumption 6). How is this counterintuitive or surprising? It appears to be merely a reiteration of the assumptions, which Landemore never fully discloses. She fails to mention that clones working together are _presumed_ to perform just like a single person working alone (refer to Section 2.3), or that the best agent ("Einstein") is postulated to be unique (see Section 2.2), as detailed in [4, 7]. Further details will be elaborated below. Furthermore, by Proposition 1.8, the random group includes a collection of Einsteins. Thus, the basic structure of the argument is:
1. _Hypothesis 1_, \(H_{1}\): Group \(G_{R}\) always reach the optimal solution. \(G_{R}\) includes a collection of Einsteins.
2. _Hypothesis 2_, \(H_{2}\): A collection of Einsteins is not perfect.
3. _Conclusion_, \(C\): Group \(G_{R}\) is "better" than a collection of Einsteins.
Thus, the "under the right conditions" of Landemore is, basically, presupposing the conclusion. How can someone truly understand the theorem and consider this counterintuitive? Once the probabilistic component, which might be obscure to non-mathematicians but is standard for most mathematicians, is removed, the theorem is a triviality (see Section 5.1).
**Remark 6.2**.: This misunderstanding appears to have significant implications on Landemore's thought (from the same debate as before):
The theorem's conclusions are not intuitive at all. I think they run against an entrenched belief that experts know best [...]. What this theorem unveiled for me is the possibility that when it comes to collective intelligence, we should stop thinking of it in terms of an addition of individual intelligences. It's really more about the group property. Does it contain enough diversity that we're going to push each other closer again to this global optimum? And that, I think, is not trivial at all. For me, it was a paradigm shift.
Moreover, it leads her to compare the Hong-Page theorem, which is trivial (Section 5.1), with a genuinely profound and counterintuitive theorem, such as Arrow's impossibility theorem. However, she believes the difference in treatment between the theorems is based on the difference in their conclusions:
For me, these results are remarkable. In fact, it's interesting to see that other theorems, like the Arrow's Impossibility Theorem, which leads to very negative conclusions about democracy, are considered brilliant and worth a Nobel Prize. It always seems that things are not considered surprising and trivial if they go in one particular direction.
Despite this, Landemore, after stating Theorem 1.7, says [6]:
To the extent that **cognitive diversity** is a key ingredient of collective intelligence, and specifically one that **matters more than average individual ability**, the more inclusive the deliberation process is, the smarter the solutions resulting from it should be, overall. _(emphasis added)_
As we saw in Section 3, this is false. The theorem presupposes that every problem solver in every state improves the state to a new state closer to the global optimum. Furthermore, as shown by the more realistic Theorem 3.3, if we create two groups from the group in the original theorem - one in which we make the minimal reduction in ability while ensuring full diversity, and another in which we significantly reduce diversity while ensuring ability - the less diverse group would systematically outperform the fully diverse group. In other words, ability trumps diversity.
There are also other severe mathematical errors with the "Diversity Prediction Theorem". Landemore says [6]:
In other words, when it comes to predicting outcomes, cognitive differences among voters matter just as much as individual ability. Increasing prediction diversity by one unit results in the same reduction in collective error as does increasing average ability by one unit.
This is mathematically incorrect: the effect is undetermined, it is not of the same magnitude, and it is not necessarily a reduction, as explained in Section 4.2, see Error 1. It is a mathematical error to assume that one term in Theorem 4.1 can be changed while the others remain fixed. Furthermore, as we observed earlier in Proposition 4.3, the diversity and ability terms do not play the same role. While increasing ability eventually reduces the prediction error, increasing diversity does not have the same effect and, furthermore, it eventually increases the maximum prediction error. Therefore, Landemore's argument has a significant gap; without controlling for ability, increasing diversity does not guarantee a reduction in the prediction error.
### The misuse of hypotheses of the "Diversity Trumps Ability Theorem"
To justify the use of the theorem, she says [4],
Importantly, the four conditions for this theorem to apply **are not utterly demanding**. The first one simply requires that the problem be difficult enough, since we do not need a group to solve easy problems. The second condition requires that all problem solvers are relatively smart (or not too dumb). In other words, the members of the group must have local optima that are not too low; otherwise the group would get stuck far from the global optimum. The third condition simply assumes a diversity of local optima such that the intersection of the problem solvers' local optima contains only the global optimum. In other words, the participants think very differently, even though the best solution must be obvious to all of them when they are made to think of it. Finally, the fourth condition requires that the initial population from which the problem solvers are picked must be large and the collection of problem solvers working together must contain more than a handful of problem solvers. This assumption ensures that the randomly picked collection of problem solvers in the larger pool is diverse--and in particular, more cognitively diverse than a collection of the best of the larger pool, which would not necessarily be the case for too small a pool relative to the size of the subset of randomly chosen problem solvers or for too small a subset of problem solvers in absolute terms. (emphasis added)
This is, once again, incorrect. Those are not the only conditions required for the theorem to apply. Among others, she doesn't mention the hypotheses from Sections 2.1, 2.2, 2.3, and 2.4. If these conditions do not hold, the theorem doesn't hold (see the counterexamples). And, as we've seen, these conditions can be rather restrictive (such as assuming that a billion Einsteins will not outperform a single Einstein). Therefore, her statement of the theorem is incorrect.
**Remark 6.3**.: Landemore is following Page's book, which also neglects to mention these conditions. Moreover, his Condition 2 (Landemore's second condition; see also [5]) is ill-stated. The 'Calculus Condition' requires that \(\phi(X)\) is countable (which is trivial if \(X\) is finite), but he interprets it as 'all problem solvers are smart.' This condition doesn't relate to being smart, contrary to Page's and, consequently, Landemore's interpretation. For instance, consider the function \(\phi:X\to X\) defined as \(\phi(x)=x_{m}\), where \(V(x_{m})\) is a global minimum of \(V\) (e.g., 0). Then \(\phi(X)=\{x_{m}\}\) is finite, which hardly represents being 'smart.' In fact, it's the worst agent conceivable since it assigns the solution furthest from the global optimum to every state. Nevertheless, Page (and subsequently Landemore) refers to this as being 'smart.' It's also noteworthy that in his book, Page's conditions are subject to Thompson's critique (Section 2.1), although Page denied it. As Landemore, citing Page, puts it in [5],
Condition 3: The Diversity Condition. Any solution other than the global optimum is not a local optimum for some nonzero percentage of problem solvers.
However, in his response to Thompson, Page does not refer to his stated Condition 3, but to what he incorrectly thinks Condition 3 requires (which is the same mistake present in [2] and pointed out by Thompson).
Landemore defends the hypotheses of Theorem 1.7, see the previous quote or the section "The meaning and empirical plausibility of the assumptions behind the Diversity Trumps Ability Theorem" in [5]. However, other sets of hypotheses are plausible, or even more so, which is problematic. More specifically, in a Moorean style, we are going to construct a set of incompatible propositions, requiring us to reject (at least) the least plausible one. We are going to use Theorem 3.3 for this task.
The propositions are as follows:
1. Hong-Page's framework can be used for a deductive argument for epistemic collective-decision systems in the sense that it can serve as a benchmark or be useful in deriving implications to obtain epistemic benefits (as in Section 6.1).
2. The assumptions of Theorem 1.5.
3. The assumptions of Theorem 3.3.
Note that (1) and (2) imply that "diversity trumps ability," but (1) and (3) imply "ability trumps diversity", so at least one of these propositions must be rejected, but:
1. Rejecting the first would invalidate Landemore's argument, as the theorem would then have no relevance to collective-decision systems.
2. Rejecting the second would undermine Landemore's proposition that "cognitive diversity is a key ingredient of collective intelligence, and specifically one that matters more than average individual ability".
3. Rejecting the third, without rejecting (2), amounts to "biting the bullet". The assumptions of Theorem 3.3 are relatively more plausible than those in (2). For instance, there is no need to assume the existence of clones, that 100 Einsteins working together to solve a problem are the same as one, or that there will be no disagreement. Furthermore, unlike Hong and Page's Theorem, it provides a fair comparison between ability and diversity. See Section 5.1 and references therein for more details.
Note that I am not claiming that Theorem 3.3 has any social content (I personally reject several of the propositions above), but it suffices to form the Moorean set of incompatible propositions. It also serves to show how Landemore commits an equivocation fallacy7 or misuses natural language to represent mathematical statements. For instance, Assumption 4 is read as agents being "relatively smart (or not too dumb)", and she then argues that voters satisfy this [5]. But note that the hypotheses of Section 3.1.2 reduce the ability only by an almost insignificant amount, so those agents can also be considered "relatively smart (or not too dumb)"; yet the thesis changes radically. Thus, Landemore's justification for the use of the theorem is severely flawed.
Footnote 7: This can also be seen as a motte-and-bailey fallacy, which is also present in Page’s presentation of the theorems.
Finally, even assuming that the hypotheses are plausible, there exists a significant contradiction in Landemore's work. Recall that the hypotheses of Theorem 1.7 not only guarantee that the "random" group is better than the best agent, but also ensure that the group always reaches the correct conclusion without disagreement or dissent, as shown in Proposition 1.8; see also Remark 3.1. These are the same hypotheses that, according to Landemore, make cognitive diversity more crucial than individual ability. This perfect deliberation is what enables the "diverse" group to surpass the best agent; the former is perfect by assumption, while the latter is not. Nonetheless, Landemore states in [6]:
Deliberation is far from being a perfect or complete decision-mechanism, in part because it is time-consuming and rarely produces unanimity.
And in [5], she further notes:
I thus do not need to assume away, as Quirk seems to accuse me of doing, the possibility of disagreement.
Therefore, if she rejects an implication of the theorem, she must also reject at least one of its hypotheses. However, as we have seen, she defends the hypotheses of the theorem she applies. This creates a contradiction.
### The vacuousness of the Numbers Trump Ability "Theorem"
Landemore's key innovation is the following, as stated in [5]:
The second step of my argument--my addendum to Page and Hong-- proposes that the "cheapest" (i.e., easiest and most economical) way to achieve cognitive diversity in the absence of knowledge about the nature of complex and ever-changing political problems is to include everyone in the group. [...] This "Numbers Trump Ability Theorem" thus supports a strong epistemic case for democracy, in which my key innovation is to support inclusiveness for its instrumental, specifically epistemic properties: Under the right conditions, including everyone in the decision-making process simply makes the group more likely to get the right (or, at least better) answers.
The argument is straightforward: if, for epistemic reasons, diversity is what matters, then including everyone is the simplest way to increase diversity. Aside from practical issues, which Landemore somewhat considers, the problem with this reasoning (which is not actually a theorem) is that the premise is false. We have argued that in both Hong-Page theorems ability plays a crucial role, as seen in Sections 3 and 4.2. Thus, increasing the number of people can have detrimental effects. Therefore, the "theorem" is false. Nevertheless, it can be "corrected" as:
**Theorem 6.4** (Enlightened Numbers Trump Numbers "Theorem").: _Under the right conditions and given the uncertainty in the ability of the agents, including everyone with ability above a certain threshold in the decision-making process makes the group more likely to arrive at the correct (or, at least, better) answers than merely including people without controlling for ability._
If we acknowledge the 'absence of knowledge about the nature of complex and ever-changing political problems', it would be prudent to select problem solvers who are competent enough to handle these uncertain problems. Hence, we must take ability into account. _In other words, once corrected, this theorem lends support to a version of epistocracy_. As I've previously stated, I don't find the Hong-Page theorems particularly enlightening, so I do not advocate for this theorem. Nonetheless, if we follow Landemore's line of reasoning, this interpretation would be more accurate.
In general, all the theorems that Landemore uses for her political defense of democracy ([4]) presuppose certain levels of ability and knowledge. This is the case for the Hong-Page theorems, as seen in Sections 3 and 4.2, as well as for the Condorcet Jury Theorem (CJT) and the Miracle of Aggregation, as shown in [9, Theorem 3.5 and Theorem 4.3]. These latter two theorems, which are instances of the same general result (the non-homogeneous CJT), are far more likely to deliver their positive conclusion if we include epistemic weights that are stochastically correlated (with a measurement error) with epistemic rationality. Moreover, if ability is not controlled, these theorems can operate in the opposite direction, ensuring that we almost surely choose the _wrong_ option. Thus, from an epistemic and instrumental perspective, these theorems strongly suggest including ability thresholds or, in the more feasible and semiotically problem-free case, epistemic weights with a minimum of 1 (so that no one is excluded) that are stochastically correlated with epistemic rationality (the inevitable measurement error is taken into account; perfect correlation is not assumed). For a starting practical proposal, see [9, Appendix D] or, for a lengthier discussion8, [8]. This could serve as a preliminary proposal that needs to be tested and experimented with. While it might still be far from perfect, it should be evaluated in comparison to the existing alternatives.
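The direction-reversal just mentioned is easy to verify numerically for the simplest (homogeneous) version of the CJT; the sketch below only illustrates that textbook statement with arbitrary competence values, not the non-homogeneous theorems of [9].

```python
from math import comb

def majority_correct(p, n):
    """Probability that a simple majority of n independent voters, each
    correct with probability p, selects the correct option (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

for p in (0.55, 0.45):
    probs = [round(majority_correct(p, n), 3) for n in (11, 101, 1001)]
    print(f"p = {p}: {probs}")
# p = 0.55: majorities converge on the right answer as n grows (~0.63, ~0.84, ~1.0)
# p = 0.45: the same aggregation converges on the wrong answer   (~0.37, ~0.16, ~0.0)
```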
Nevertheless, Landemore staunchly opposes epistocracies. In her chapter "Against Epistocracies", [7], she says:
My first question to Brennan is this: What would such exclusion achieve? Recall that in my model deliberation does most of the epistemic work. Most filtering of bad input or bad reasoning occurs at that deliberative stage. So there is no reason not to include everyone as one more, howsoever uninformed, voice will not pollute the outcome but will at most delay the conclusion of the deliberation.
This is incorrect. First, if no selection is done, one cannot ensure the conditions of the Hong-Page theorem, so one cannot expect the result (that deliberation works) to hold. Second, as we have seen in Theorem 3.3, introducing these kinds of agents can pollute the outcome, leaving the group stuck at a solution far from the global optimum. This is the same error that pollutes all of Landemore's analysis, so we insist on it here: all these theorems assume a certain amount of ability, but Landemore just presupposes9 this without questioning it seriously enough, focusing mainly on diversity, which has a "secondary" effect (Theorem 3.3 and Section 4.2).
Footnote 9: For instance,
Assuming that, on average, the citizens from among whom we select representatives meet a minimal thresh- old of individual competence, random selection is a more promising, authentically democratic way of selecting representatives that maximizes cognitive diversity in the face of political uncertainty.
For instance, she continually emphasizes the uncertainty:
As time goes by and circumstances change, however, it becomes very likely that his epistocracy will run into issues where it will miss the very voices and votes it purposely excluded. Even if the probability is low, the expected cost might still be huge. Why take the risk? There may be a short window of time in which a Brennanist epistocracy would work, perhaps even better than a democracy. But probabilistically, this superiority is bound to vanish over time. The question is when. [...] Most importantly, there is no reason to exclude any voice in a model that assumes democratic deliberation itself can weed out the bad input.
However, this same uncertainty, when translated into uncertain abilities of the problem solvers, could lead to the inclusion of some problem solvers who, rather than aiding, actually obstruct us from reaching the optimal solution. Thus, her "probabilistic" claims like "But probabilistically, this superiority is bound to vanish over time" and that the expected cost is substantial are unfounded, and she provides no valid proof for such strong propositions. It's important to note that there may be merit in including all voices in some capacity. The purpose of this part is not to criticize that, but to critically analyze her use of mathematical results to draw certain conclusions. Therefore, it is not reasonable--and borders on begging the question--to assume that democratic deliberation itself can weed out bad input.
## 7. Conclusion
Our rigorous dissection of the Hong-Page Theorems has uncovered significant issues. The misrepresentation of ability, the negligence of certain assumptions, and the fundamental misuse of mathematical principles have led to their flawed application in sociopolitical constructs.
Hong and Page's application of mathematics in their theorem obscures its inherent triviality. By employing mathematical complexity, they have managed to present trivial facts as profound insights, thus misrepresenting the actual implications of their theorem. It is vital that we apply mathematics with extreme caution and rigor, especially when it serves as the foundation for decisions that can have substantial impacts on our social structures and institutions. As such, despite
its thousands of citations, [2] should not be regarded as a serious contribution to the field of collective decision problem solving. Similarly, with our additional analysis of the "Diversity Prediction Theorem" and Page's misinterpretation, Section 5.4, basic claims of Page's book _The Difference_ are affected, from the preface:
Perhaps because _The Difference_ takes time to digest, eventually, accurate readings won out. Reviewers recognized that _The Difference_ explores the pragmatic, bottom-line contributions of diversity. It does so using models and logic, not metaphor. The book's claims that "collective ability equals individual ability plus diversity" and that "diversity trumps ability" are mathematical truths, not feel-good mantras.
Helene Landemore's application of mathematical reasoning in her political proposition is wanting in both validity and soundness. Specifically, her 'Numbers Trump Ability' theorem, derived from her interpretation of the Hong-Page Theorems, demonstrates significant flaws, as do many other conclusions based on these results, including her use of the 'Diversity Prediction Theorem'. For instance, as we have shown, her epistemic argument is both unsound and invalid. Consequently, the central thesis of her book [4] is seriously compromised. As Landemore herself states in [5]:
Let me briefly rehearse what I see as the main argument of the book. At its heart is a simple model of what, under certain conditions that I deem plausible enough, can be expected of an inclusive political decision process in a comparison with less inclusive ones. [...] In my eyes, the main value of my book is to create a simplified, relatively rigorous framework for the meaningful comparison of the properties of basic political "regimes."
A similar problem arises in other foundational claims related to her political proposal of 'Open Democracy', found in works such as [4], [5], [6], and [7].
This critique should not be taken as a dismissal of the importance of diversity in decision-making, but rather as a call to address the misuse of mathematics in these contexts. It urges us to consider the rigorous and nuanced approach required when applying mathematical theories to sociopolitical constructs. As such, this paper makes a contribution to the ongoing discourse on collective intelligence, fostering a deeper understanding of the mathematical theorems used to achieve a desired conclusion.
|
2302.08935
|
Coup de grace to the charged Higgs solution of $P_5^\prime$ and
$R_{D^{(*)}}$ discrepancies
|
We consider a general two Higgs doublet model which can simultaneously solve
discrepancies in neutral B meson decay ($b\to s\ell \overline \ell$
distribution) and charged B meson decay ($b\to c\tau\overline\nu$) with a
charged Higgs. The model contains two additional neutral scalars at the same
mass scale and predicts distinctive signals at the LHC. Based on the recent
same-sign top search by the ATLAS collaboration, we found the constraint on the
scalar mass spectrum. To probe the remaining mass window, we propose a novel
$cg\to t\tau\overline\tau$ process at the LHC.
|
Syuhei Iguro
|
2023-02-17T15:21:06Z
|
http://arxiv.org/abs/2302.08935v1
|
# Coup de grace to the charged Higgs solution of \(P^{\prime}_{5}\) and \(R_{D^{(*)}}\) discrepancies
###### Abstract
We consider a general two Higgs doublet model which can simultaneously solve discrepancies in neutral B meson decay (\(b\to s\ell\overline{\ell}\) distribution) and charged B meson decay (\(b\to c\tau\overline{\nu}\)) with a charged Higgs. The model contains two additional neutral scalars at the same mass scale and predicts distinctive signals at the LHC. Based on the recent same-sign top search by the ATLAS collaboration, we found the constraint on the scalar mass spectrum. To probe the remaining mass window, we propose a novel \(cg\to t\tau\overline{\tau}\) process at the LHC.
Two Higgs Doublet Model, \(b\to s\ell\overline{\ell}\), \(b\to c\tau\nu\), Top-Associated Scalar Production +
Footnote †: preprint: P3H–23-010, TTP23-05
## I Introduction
The current flavor anomalies in B meson decays, _e.g._ the deviations in the angular distribution of \(b\to s\mu\overline{\mu}\) processes, the so-called \(P^{\prime}_{5}\)[1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]#1, and the lepton flavor universality violation in \(\overline{B}\to D^{(*)}\tau\overline{\nu}\)[23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36], can be solved with a light charged scalar (\(H^{+}\)) from a generic two Higgs doublet model (G2HDM) [37; 38]#2. The significant deviation in the lepton flavor universality test of the \(b\to s\ell\overline{\ell}\) transition, where \(\ell=e,\,\mu\), has disappeared in the recent LHCb measurement [61; 62] thanks to the improved electron tagging method. Furthermore, the deviation in \(B_{s}\to\mu\overline{\mu}\) has gone [63] and, consequently, the explicit preference for the vector\(-\)axial vector (V\(-\)A)-like interaction no longer exists [13]. These recent changes have brought the charged Higgs solution back into the game and made it more appealing. These days, due to the disappearance of the \(R_{K^{(*)}}\) puzzle, the B anomalies receive less attention; it remains a fact, however, that there are still about \(3\sim 4\,\sigma\) discrepancies in the \(b\to s\ell\overline{\ell}\) and \(b\to c\tau\overline{\nu}\) processes.
Footnote #1: Different from the lepton flavor universality ratio \(R_{K^{(*)}}=\text{BR}(B\to K^{(*)}\mu\overline{\mu})/\text{BR}(B\to K^{(*)}e\overline{e})\), there is a sizable hadronic parameter dependence. For instance, sizable charm hadronic contributions would also explain the deviation; see Refs. [12; 13]. On the other hand, the tension between the measured \(\text{BR}(B_{s}\to\phi\mu\overline{\mu})\)[14], \(\text{BR}(\Lambda_{b}\to\Lambda\mu\overline{\mu})\)[15] and \(\text{BR}(B\to K^{(*)}\mu\overline{\mu})\)[5] and the SM predictions [16; 17; 18; 19; 20; 21; 22] can be relaxed with the vector contribution.
Interestingly, a successful charm penguin contribution to the flavor universal vector operator of \(b\to s\ell\overline{\ell}\) and the tree-level \(b\to c\tau\overline{\nu}\) transition are both controlled by the common \(\overline{b}_{L}c_{R}H^{-}\) interaction, whose Yukawa coupling is denoted as \(\rho^{tc}_{u}\). In the G2HDM, the coupling \(\rho^{tc}_{u}\) also induces the \(\overline{t}_{L}c_{R}\phi\) interaction, where \(\phi=H,\,A\) denotes the additional neutral scalars which are the SU(2)\({}_{L}\) partners of the charged Higgs. It is noted that an additional doublet with sizable \(\rho^{tc}_{u}\) has also been discussed in the context of spontaneous CP violation [64; 65] and electroweak baryogenesis [66].#3
Footnote #2: The possibility was originally pointed out in Ref. [37] and recently revisited in Ref. [38]. It is noted that, thanks to the relaxed constraint from \(B_{c}\to\tau\overline{\nu}\)[39; 40; 41; 42] and the experimental shift, \(H^{+}\) can now explain \(R_{D^{(*)}}=\text{BR}(\overline{B}\to D^{(*)}\tau\overline{\nu})/\text{BR}(\overline{B}\to D^{(*)}\ell\overline{\nu})\) within \(1\,\sigma\)[43]. For individual explanations, see Refs. [44; 45; 46; 47; 48] for \(b\to s\ell\overline{\ell}\) and Refs. [49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60] for \(R_{D^{(*)}}\).
The available mass range of the charged scalar for the simultaneous explanation is bounded from above, \(m_{H^{+}}\leq 400\,\text{GeV}\)[69], based on the \(\tau\overline{\nu}\) resonance searches at the LHC [68]. Although in Ref. [59] we theoretically showed that the \(b+\tau\overline{\nu}\) resonance search is a powerful tool to probe the remaining parameter space, the corresponding experimental search has not yet been performed.
Different from recent studies which mainly focus on the charged-scalar collider phenomenology in light of the deviations in B meson decays [58; 59; 70], we consider the collider signal of the additional neutral scalars. While the connection between a sizable \(\rho^{tc}_{u}\) and neutral-scalar-mediated multi-top final states at the LHC has been discussed in Refs. [54; 69; 71; 72; 73; 74; 75],#4 last summer the ATLAS collaboration reported a game-changing result [80]. They searched for the G2HDM in top-associated processes and directly set an upper limit on \(\rho^{tc}_{u}\). In this letter, we reinterpret this constraint in light of the simultaneous explanation and propose an additional process to cover the remaining parameter space through the neutral scalars. According to the electroweak precision data, even after the controversial CDF result [81], the mass of the additional scalars (\(m_{\phi}\)) should be similar to \(m_{H^{+}}\) up to \(\mathcal{O}(v)\), where \(v=246\,\text{GeV}\) denotes the vacuum expectation value. Therefore it is natural to consider the LHC phenomenology of the neutral scalars to fully probe the interesting parameter space.
Footnote #3: They used the closed time path formalism [67] to evaluate the produced baryon number.
The outline of the letter is given as follows. In Sec. II we introduce the model setup and explain the relevant parameters. The favored region and upper limit on the additional scalars are summarized in Sec. III. In Sec. IV we investigate the model prediction of top-associated processes. Summary and discussion will be given in Sec. V.
## II Model setup
We consider a two Higgs doublet model (2HDM) where an additional scalar doublet is introduced to the SM. The general scalar potential of the model is given as
\[V(H_{1},H_{2})=M_{11}^{2}H_{1}^{\dagger}H_{1}+M_{22}^{2}H_{2}^{\dagger}H_{2}-\left(M_{12}^{2}H_{1}^{\dagger}H_{2}+\mathrm{h.c.}\right)\] \[\qquad+\frac{\lambda_{1}}{2}(H_{1}^{\dagger}H_{1})^{2}+\frac{\lambda_{2}}{2}(H_{2}^{\dagger}H_{2})^{2}+\lambda_{3}(H_{1}^{\dagger}H_{1})(H_{2}^{\dagger}H_{2})\] \[\qquad+\lambda_{4}(H_{1}^{\dagger}H_{2})(H_{2}^{\dagger}H_{1})+\frac{\lambda_{5}}{2}(H_{1}^{\dagger}H_{2})^{2}\] \[\qquad+\left\{\lambda_{6}(H_{1}^{\dagger}H_{1})+\lambda_{7}(H_{2}^{\dagger}H_{2})\right\}(H_{1}^{\dagger}H_{2})+\mathrm{h.c.} \tag{1}\]
Here, we work in the _Higgs basis_ where only one doublet takes the VEV [82; 83]:
\[H_{1}=\begin{pmatrix}G^{+}\\ \frac{1}{\sqrt{2}}(v+h+iG^{0})\end{pmatrix},\,H_{2}=\begin{pmatrix}H^{+}\\ \frac{1}{\sqrt{2}}(H+iA)\end{pmatrix}, \tag{2}\]
where \(G^{+}\) and \(G^{0}\) denote the Nambu-Goldstone (NG) bosons. It is noted that the alignment limit, where the SM-like \(h\) lives entirely in \(H_{1}\), is considered to avoid the constraint from \(t\to ch\)[84; 85; 86]. For simplicity, we further assume a CP-conserving scalar potential, so that CP-even and CP-odd scalar mass eigenstates can be defined. The SM-like Higgs is \(h\), while \(H\) and \(A\) correspond to the additional CP-even and CP-odd neutral scalars, respectively. The mass differences among the additional scalars are given as
\[m_{H}^{2}=m_{A}^{2}+\lambda_{5}v^{2},\,\,\,m_{H^{+}}^{2}=m_{A}^{2}-\frac{ \lambda_{4}-\lambda_{5}}{2}v^{2}. \tag{3}\]
It is noted that the other potential couplings do not affect the following discussion.
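As a quick numerical illustration of Eq. (3), the sketch below evaluates the scalar masses for a few quartic values; the chosen \(m_{A}\) and \(\lambda_{4,5}\) are illustrative inputs, not fit results. It shows that \(|\lambda_{5}|\lesssim\mathcal{O}(10^{-2})\) corresponds to a GeV-level \(H\)-\(A\) splitting, the degeneracy invoked later to suppress the same-sign top signal.

```python
import math

V_EW = 246.0  # GeV, electroweak vacuum expectation value

def heavy_scalar_masses(m_A, lam4, lam5, v=V_EW):
    """m_H and m_H+ from the tree-level relations of Eq. (3)."""
    m_H = math.sqrt(m_A**2 + lam5 * v**2)
    m_Hp = math.sqrt(m_A**2 - 0.5 * (lam4 - lam5) * v**2)
    return m_H, m_Hp

m_A = 200.0  # GeV, illustrative
for lam5 in (0.01, 0.1, 0.5):
    m_H, m_Hp = heavy_scalar_masses(m_A, lam4=0.0, lam5=lam5)
    print(f"lambda_5 = {lam5}:  m_H - m_A = {m_H - m_A:6.2f} GeV,  m_H+ = {m_Hp:6.1f} GeV")
```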
When both doublets couple to all fermions, the Higgs bosons have flavor-violating interactions in general. In this letter we take a bottom-up approach and introduce the interaction Lagrangian of the heavy scalars relevant to \(b\to s\ell\overline{\ell}\) and \(b\to c\tau\overline{\nu}\),
\[\mathcal{L}_{int} = \rho_{u}^{tc}\frac{H+iA}{\sqrt{2}}(\overline{t}P_{R}c)+\rho_{e}^{\tau\tau}\frac{H-iA}{\sqrt{2}}(\overline{\tau}P_{R}\tau) \tag{4}\] \[+ V_{td_{i}}^{\ast}\rho_{u}^{tc}H^{-}(\overline{d}_{i}P_{R}c)-\rho_{e}^{\tau\tau}H^{-}(\overline{\tau}P_{L}\nu_{\tau})+\mathrm{h.c.},\]
where \(P_{L/R}=(1\mp\gamma_{5})/2\) and \(V\) are the chirality projection operators and the Cabibbo-Kobayashi-Maskawa (CKM) matrix [87; 88], respectively. The neutral scalar interactions and the charged scalar interactions are related by the \(\mathrm{SU(2)_{L}}\) rotation. We assume the other Yukawa couplings to be small (\(\ll\mathcal{O}(10^{-2})\)) for simplicity. For a more detailed phenomenological analysis including other Yukawa couplings, see Refs. [50; 54; 89]. We will also discuss this point in Sec. V.
For later convenience, we show the approximate formulae for the partial decay widths,
\[\Gamma(\phi\to\tau\overline{\tau})\simeq\frac{|\rho_{e}^{\tau\tau}|^{2}}{16 \pi}m_{\phi},\,\Gamma(\phi\to tc)\simeq\frac{3|\rho_{u}^{tc}|^{2}m_{\phi}}{16 \pi}\beta^{2}(m_{\phi}), \tag{5}\]
where \(\Gamma(\phi\to tc)=\Gamma(\phi\to t\overline{c})+\Gamma(\phi\to\overline{t}c)\) and \(\beta(m_{\phi})=\left(1-\frac{m_{t}^{2}}{m_{\phi}^{2}}\right)\) are defined.#5
Footnote #5: In this letter we neglect light fermion masses, though, one can trivially include the effect.
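To see why the \(\tau\overline{\tau}\) branching ratio becomes relevant near the top threshold, one can evaluate Eq. (5) numerically; the couplings and the top mass used below are illustrative inputs, not the fit values quoted later in the text.

```python
import math

M_TOP = 172.5  # GeV, illustrative input

def gamma_tautau(rho_tau, m_phi):
    """Gamma(phi -> tau tau) from Eq. (5), in GeV."""
    return abs(rho_tau)**2 * m_phi / (16 * math.pi)

def gamma_tc(rho_tc, m_phi, m_top=M_TOP):
    """Gamma(phi -> t c~) + Gamma(phi -> t~ c) from Eq. (5), with
    beta(m_phi) = 1 - m_t^2 / m_phi^2."""
    if m_phi <= m_top:
        return 0.0
    beta = 1.0 - m_top**2 / m_phi**2
    return 3 * abs(rho_tc)**2 * m_phi / (16 * math.pi) * beta**2

rho_tc, rho_tau = 0.7, 0.05   # illustrative couplings
for m_phi in (180.0, 200.0, 300.0):
    g_tc, g_tau = gamma_tc(rho_tc, m_phi), gamma_tautau(rho_tau, m_phi)
    br_tau = g_tau / (g_tc + g_tau)
    print(f"m_phi = {m_phi:5.0f} GeV:  BR(phi -> tau tau) = {br_tau:.2f}")
```

The phase-space factor \(\beta^{2}\) strongly suppresses \(\phi\to tc\) just above the top threshold, which is what enhances BR\((\phi\to\tau\overline{\tau})\) in the mass window discussed in Sec. IV.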
## III Summary of the available parameter region
First, we consider the charged Higgs contribution to the flavor universal \(b\to s\ell\overline{\ell}\) transition. Since the coupling dependence is different between \(b\to s\ell\overline{\ell}\) (induced by the charm penguin, \(\propto|\rho_{u}^{tc}|^{2}\)) and the most constraining flavor process, \(B_{s}-\overline{B}_{s}\) mixing (charged Higgs box, \(\propto|\rho_{u}^{tc}|^{4}\)), we can set an upper limit on the charged Higgs mass [37; 38]. The relevant Hamiltonian for \(b\to s\ell\overline{\ell}\) in our model is given as
\[\mathcal{H}_{\mathrm{eff}}=-\frac{\alpha G_{F}}{\sqrt{2}\pi}V_{tb}V_{ts}^{\ast}C_{9}(\overline{s}\gamma^{\mu}P_{L}b)(\overline{l}\gamma_{\mu}l)+\mathrm{h.c.}, \tag{6}\]
where \(l=e,\,\mu\) and \(\tau\). We note that the contribution from the \(Z\) penguin is small enough to be neglected. We follow the prescription of Ref. [38] and use the following numerical formula,
\[C_{9}^{l}(\mu_{b})\simeq-0.95\left(\frac{|\rho_{u}^{tc}|}{0.7}\right)^{2}\left( \frac{200\,\mathrm{GeV}}{m_{H^{+}}}\right)^{2}. \tag{7}\]
This should be compared with the recent global fit to the \(b\to s\ell\overline{\ell}\) data, \(C_{9}^{l}(\mu_{b})=-0.95\pm 0.13\)[90].#6 In Fig. 1, we show the \(1\,(2)\,\sigma\) favored region in green (yellow) on the \(m_{H^{+}}\) vs. \(\rho_{u}^{tc}\) plane. Since we also have the upper limit on the mass, \(m_{H^{+}}\leq 400\,\mathrm{GeV}\), and the lower limit from
Figure 1: The favored region of \(C_{9}^{l}\) is shown in green (\(1\,\sigma\)) and yellow (\(2\,\sigma\)) on the \(\rho_{u}^{tc}\) vs. \(m_{H^{+}}\) plane. \(B_{s}-\overline{B}_{s}\) mixing constraint excludes the magenta region. Cyan, purple, blue regions are excluded by low mass di-jet resonance searches. The orange dashed line corresponds to the upper limit from the same-sign top search adopted from Ref. [80] assuming \(m_{H^{+}}=m_{H}\). See the main text for further detail.
the LEP experiment [91], we focus on \(100\,\mathrm{GeV}\leq m_{H^{+}}\leq 400\,\mathrm{GeV}\). As mentioned above, \(B_{s}\) meson mixing gives the most stringent flavor constraint [92], which is shown in magenta.
In this mass region, di-jet resonance searches at the LHC are able to set an upper limit on \(\rho_{u}^{tc}\)[58]. We overlay the constraints from the (bottom-flavored) di-jet searches in blue [93], purple [94] and cyan [95], where \(\mathrm{BR}(H^{+}\to\overline{b}c)=1\) is assumed. It is noted that, as we will see below, a hierarchy \(|\rho_{u}^{tc}|\gg|\rho_{e}^{\tau\tau}|\) is needed for the simultaneous explanation. As a result, \(H^{+}\to\overline{b}c\) is the dominant decay mode in the minimal setup of Eq. (4), and hence the exclusion discussed above is unaffected.#7 We see that the di-jet constraints touch the interesting parameter region. The full Run 2 data would make it possible to improve the constraint further.
Footnote #7: The stau search constraint [96; 97] on the charged Higgs is very weak due to \(\mathrm{BR}(H^{+}\to\overline{b}c)\simeq 1\). See, Fig. 4 of Ref. [98].
We move on to the explanation of the \(R_{D^{(*)}}\) discrepancy. The relevant interaction Hamiltonian is given as
\[\mathcal{H}_{\mathrm{eff}}=2\sqrt{2}G_{F}V_{cb}C_{S_{L}}^{\tau}(\overline{c}P_ {L}b)(\overline{\tau}P_{L}\nu_{\tau}). \tag{8}\]
The charged Higgs contribution including renormalization group running corrections [99; 100; 101; 102], is approximately given as
\[|C_{S_{L}}^{\tau}(\mu_{b})|\simeq 0.83\left(\frac{|\rho_{u}^{tc}\rho_{e}^{ \tau\tau}|}{0.03}\right)\left(\frac{200\,\mathrm{GeV}}{m_{H^{+}}}\right)^{2}. \tag{9}\]
Adopting the analytic formulae for \(R_{D^{(*)}}\) in Ref. [58],#8 the latest \(1\,\sigma\) explanation is realized with \(0.68\lesssim|C_{S_{L}}^{\tau}(\mu_{b})|\lesssim 1.13\).#9 By combining Eqs. (7) and (9), one can see that the simultaneous explanation requires a large hierarchy between \(\rho_{u}^{tc}\) and \(\rho_{e}^{\tau\tau}\).
Footnote #8: Those analytic formulae used in Ref. [58] are consistent with the recent result [103] within the uncertainty.
Footnote #9: To fit the \(R_{D^{(*)}}\) data \(\rho_{u}^{tc}\rho_{e}^{\tau\tau}\) needs to have a complex phase, however, this does not change the following discussion.
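To make the hierarchy quantitative, one can invert the approximate formulae in Eqs. (7) and (9); the sketch below does this for a single illustrative benchmark (the coupling values are chosen for illustration, not quoted from the fits).

```python
def c9(rho_tc, m_charged):
    """Approximate C_9^l(mu_b) from Eq. (7)."""
    return -0.95 * (abs(rho_tc) / 0.7)**2 * (200.0 / m_charged)**2

def c_sl(rho_tc, rho_tau, m_charged):
    """Approximate |C_{S_L}^tau(mu_b)| from Eq. (9)."""
    return 0.83 * abs(rho_tc * rho_tau) / 0.03 * (200.0 / m_charged)**2

m_h = 200.0
rho_tc = 0.7                       # reproduces the best-fit C_9 ~ -0.95 [90]
print("C9 =", round(c9(rho_tc, m_h), 2))

# rho_e^tautau needed to land in the 1 sigma window 0.68 < |C_SL| < 1.13
for target in (0.68, 1.13):
    rho_tau = target / 0.83 * 0.03 / rho_tc * (m_h / 200.0)**2
    assert abs(c_sl(rho_tc, rho_tau, m_h) - target) < 1e-9
    print(f"|C_SL| = {target}:  rho_e^tautau ~ {rho_tau:.3f}")
# -> rho_e^tautau ~ 0.035-0.058, far below rho_u^tc ~ 0.7, i.e. the hierarchy
#    |rho_u^tc| >> |rho_e^tautau| required for the simultaneous explanation
```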
So far we have focused on the charged Higgs phenomenology; however, the neutral scalar mass spectrum is constrained by the LHC data and by electroweak precision observables. Last summer the ATLAS collaboration reported the result of a G2HDM search in top-associated processes [80] for \(m_{\phi}\geq 200\,\mathrm{GeV}\).#10 The relevant signal events include same-sign top quarks. In Fig. 1, the constraint taken directly from Ref. [80] is shown as the orange dashed line, assuming \(m_{H}=m_{H^{+}}\).#11 It is observed that this same-sign top search would exclude the \(b\to s\ell\overline{\ell}\) explanation for \(m_{\phi}\geq 200\,\mathrm{GeV}\). There is, however, a loophole in this same-sign top bound. There are two types of contributing Feynman diagrams, namely t-channel (left) and s-channel (right), as shown in Fig. 2. In both diagrams, due to the different CP nature of \(H\) and \(A\), the amplitudes cancel in the mass-degenerate limit. The destructive interference for the dominant s-channel approximately holds up to the width difference [73]. For the simultaneous explanation, \(\rho_{u}^{tc}\) needs to be as large as \(0.7\) (\(0.8\)) for \(m_{H^{+}}=200\,(250)\,\mathrm{GeV}\), and hence a total width of \(\Gamma_{\phi}=0.8\,(3.5)\,\mathrm{GeV}\) is predicted. This indicates that \(|\lambda_{5}|\leq\mathcal{O}(10^{-2})\) is necessary for the simultaneous explanation with \(m_{\phi}\geq 200\,\mathrm{GeV}\). To simplify the analysis and evade the constraint, we set \(m_{A}=m_{H}\) in the following.
Footnote #10: To adopt the experimental data and extend the constraint down to \(m_{\phi}\simeq m_{t}\), detailed distribution data is necessary. However, this data is not available in Ref. [80], and such an extension is thus beyond the scope of this letter.
On the other hand, the additional neutral scalars dominantly decay to \(\tau\overline{\tau}\) for \(m_{\phi}\leq m_{t}\). In that case, the electroweak pair production of neutral scalars results in multi-\(\tau\) final states. Such a region was studied in Ref. [104], and even with Run 1 data alone [105] our scenario with \(m_{\phi}\leq m_{t}\) can be excluded. Furthermore, there is no explicit new physics signal in the full Run 2 data [106; 107], and hence the exclusion is robust.
Besides, electroweak precision observables are helpful to further constrain the mass spectrum. We consider the \(S\) and \(T\) parameter constraints#12[108; 109], both excluding
Figure 3: The \(\chi^{2}\) based on \(S\) and \(T\) parameters before (dashed) and after (solid) the recent CDF result is shown as a function of \(m_{\phi}\). For blue, orange and green lines, \(m_{H^{+}}=150,\,200,\,250\,\mathrm{GeV}\) are fixed. The gray vertical line corresponds to \(m_{\phi}=m_{t}\).
Figure 2: The representative diagrams for the same-sign top final state at the LHC. In the numerical evaluation we include the charge conjugated processes also. The dominant contribution comes from the right diagram.
and including the recent controversial CDF result [81]. More concretely, we use
\[S=0.00\pm 0.07,\;\;\;T=0.05\pm 0.06, \tag{10}\]
with the correlation of \(\rho=0.92\)[110] (denoted as 2021 fit) and
\[S=0.086\pm 0.077,\;\;T=0.177\pm 0.070, \tag{11}\]
with the correlation of \(\rho=0.89\) based on the global fit [111] (denoted as the 2023 fit). Fig. 3 shows the \(\chi^{2}\) of the \(S\) and \(T\) parameters as a function of \(m_{\phi}\), where \(m_{H^{+}}=150\) GeV (blue), 200 GeV (orange) and 250 GeV (green) is fixed. Dashed and solid lines are drawn based on the 2021 fit and the 2023 fit, respectively. We see that the favored \(m_{\phi}\) differs depending on the fit data. For \(m_{H^{+}}=150\) GeV, the 2023 fit disfavors \(m_{t}\leq m_{\phi}\leq 200\) GeV by more than \(2\,\sigma\), while the 2021 fit allows this mass window.
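The \(\chi^{2}\) shown in Fig. 3 is the standard two-parameter combination of \(S\) and \(T\) with their correlation; a minimal sketch of that combination is given below. The \((S, T)\) model values are placeholders, since the one-loop scalar contributions are not reproduced here.

```python
import numpy as np

def chi2_ST(S_model, T_model, S_fit, T_fit, sig_S, sig_T, corr):
    """Two-parameter chi^2 of (S, T) against a correlated Gaussian fit."""
    delta = np.array([S_model - S_fit, T_model - T_fit])
    cov = np.array([[sig_S**2, corr * sig_S * sig_T],
                    [corr * sig_S * sig_T, sig_T**2]])
    return float(delta @ np.linalg.solve(cov, delta))

# Reference fits of Eqs. (10) and (11)
fit_2021 = dict(S_fit=0.00,  T_fit=0.05,  sig_S=0.07,  sig_T=0.06,  corr=0.92)
fit_2023 = dict(S_fit=0.086, T_fit=0.177, sig_S=0.077, sig_T=0.070, corr=0.89)

# Placeholder model point; in the actual analysis (S, T) follow from the
# (m_H+, m_H, m_A) spectrum through the scalar loop functions.
S_model, T_model = 0.02, 0.10
for name, fit in (("2021 fit", fit_2021), ("2023 fit", fit_2023)):
    print(name, "chi2 =", round(chi2_ST(S_model, T_model, **fit), 2))
```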
In short, for the simultaneous explanation we need either \(m_{t}\leq m_{\phi}\leq 200\) GeV or an \(\mathcal{O}(1)\) GeV level mass degeneracy among the neutral scalars.
## IV Exotic top processes
In order to fully probe the remaining mass window of \(m_{\phi}\), we propose another top-associated process, namely \(gc\to t\phi\to t\tau\overline{\tau}\), where the relevant diagram is shown in Fig. 4.#13 In this mass window, even with the hierarchical coupling structure, BR\((\phi\to\tau\overline{\tau})\) can be sizable due to the phase-space suppression of the \(\phi\to tc\) decay. The production cross section is calculated with MadGraph5_aMC@NLO [113] using NNPDF2.3 [114] at leading order in the five-flavor scheme with \(\sqrt{s}=13\) TeV. Fig. 5 shows the cross section in pb as a function of \(m_{\phi}\). The prediction of the \(1\,\sigma\) simultaneous explanation is obtained by fixing the charged Higgs mass to \(m_{H^{+}}=150\) GeV (blue), 200 GeV (orange), 250 GeV (green) and \(m_{\phi}\) (black). It is observed that the bands overlap and the cross section is as large as 30 fb\(\sim\)10 pb in the mass window.#14 A heavier charged scalar predicts a larger signal rate since it requires larger couplings.
Estimating the size of the electroweak SM background (BG) is not difficult even in our mass range. For instance, \(tZq\) and \(thq\) production contribute to the \(t+\tau\overline{\tau}+q\) final state with cross sections of \(\simeq 50\) fb [115] and \(\simeq 5\) fb [116], where the \(\tau\overline{\tau}\) pair comes from the \(Z\) and \(h\) decay, respectively. Therefore the contribution from those processes is expected to be moderate. On the other hand, it is not easy to estimate the precise amount of the mis-tag associated BG, _e.g._ from \(tW^{-}q\to t\tau\overline{\nu}+\not{j}\) and \(t\overline{t}\to tW^{-}j\to t\tau\overline{\nu}+\not{j}\), where the slashed final state will be mis-tagged as a hadronically decaying \(\tau\) (\(\tau_{h}\)). For a precise determination we need considerable help from the experimental side, and thus investigating the sensitivity of this channel is beyond the scope of this letter.#15 Actually, Ref. [117] searched for \(thq\) production with \(h\to\tau\overline{\tau}\) using the full Run 2 data. They set an upper limit of \(\mu=8.1^{+8.2}_{-7.5}\), where \(\mu\) denotes the signal strength. This approximately leads to the upper limit \(\sigma(thq\to t\tau\overline{\tau}q)\lesssim 100\) fb for \(m_{\tau\tau}=125\) GeV. Since the invariant mass of our signal is larger, the corresponding SM BG would be smaller and thus we can expect a better sensitivity.
Footnote #15: The charge asymmetry of the top quark would help to improve the sensitivity since the SM single top has the production asymmetry, while our signal does not have this feature.
## V Summary and discussion
Recently, the charged Higgs solution to the B anomalies has become more interesting than ever. The charged Higgs needs to interact with the left-handed bottom quark and thus can be part of an additional doublet. Hence a two Higgs doublet model is the minimal model, and it also contains two additional neutral scalars. The Yukawa interactions of those scalars are related by the SU(2)\({}_{L}\) rotation, and the simultaneous explanation predicts distinctive signals at the LHC. A theoretical proposal to probe the solution via charged-Higgs-mediated processes was made last year; however, the crucial process has not been tested experimentally yet. In the meantime, the ATLAS experiment reported a game-changing constraint on the neutral scalars. In this letter we reinterpreted the ATLAS constraint and obtained the condition on the mass spectrum of the additional neutral scalars: an \(\mathcal{O}(1)\,\text{GeV}\) mass degeneracy between \(H\) and \(A\), or \(m_{t}\leq m_{\phi}\leq 200\,\text{GeV}\), where \(\phi\) denotes \(H\) and \(A\). We also pointed out that the signal cross section of \(gc\to t\phi\to t\tau\overline{\tau}\) could be as large as \(10\,\text{fb}\sim 10\,\text{pb}\) in the mass window.
Imposing a U(1) Peccei-Quinn symmetry [118], \(\{H_{1},\,H_{2}\}\to\{H_{1},\,H_{2}e^{i\alpha}\}\), can forbid \(\lambda_{5}\) and realize the mass degeneracy of the additional neutral scalars [119]. However, this symmetry must be broken since we also need the Yukawa couplings \(\rho_{u}^{tc}\) and \(\rho_{e}^{\tau\tau}\), and therefore a more complicated setup is necessary [120; 121; 122; 123].
In general, other couplings, \(e.g.\) the di-bottom quark coupling \(\rho_{d}^{bb}\), could be non-negligible. For instance, one would think that an \(\mathcal{O}(10^{-2})\) value of \(\rho_{d}^{bb}\) could reduce the branching ratio of \(\phi\to\tau\overline{\tau}\) thanks to the color factor and revive the scenario with \(m_{\phi}\leq m_{t}\).#16 However, this is difficult since the ATLAS collaboration very recently searched for additional particles in flavor-changing top decays and set an \(\mathcal{O}(10^{-4})\) upper bound on \(\text{BR}(t\to qX)\times\text{BR}(X\to b\overline{b})\) [86]. Therefore an additional coupling to bottom quarks does not save the scenario. Since the \(c\to b\) mis-tagging rate \(\epsilon_{c\to b}\) is about \(15\sim 20\,\%\)[124], the scenario can hardly survive the constraint even if the neutral scalars decay into charm quarks. On the other hand, \(\rho_{d}^{bb}\) would be able to reduce the signal rate of the \(gc\to t\tau\overline{\tau}\) process.
Footnote #16: It is noted that even in that case \(C_{S_{R}}^{\tau}\), where the chirality of quarks are flipped in Eq. (8), can not be large to affect \(R_{D^{(*)}}\) due to the \(V_{cb}\) suppression [54].
It is worthwhile to emphasize that the ATLAS bound [80] does not necessarily kill the solo \(R_{D^{(*)}}\) solution even without mass degeneracy. This is because the contribution to \(C_{S_{L}}\) is proportional to the coupling product \(\rho_{u}^{tc}\rho_{e}^{\tau\tau}\) (see Eq. (9)), and hence a larger \(\rho_{e}^{\tau\tau}\) allows a smaller \(\rho_{u}^{tc}\). If we instead want to avoid the ATLAS bound on \(\rho_{u}^{tc}\) by setting \(m_{A},\,m_{H}\leq 200\,\text{GeV}\), the electroweak precision parameters at \(2\,\sigma\) give the upper limit on the charged Higgs mass of \(m_{H^{+}}\leq 270\,\text{GeV}\) (\(290\,\text{GeV}\)) for Eq. (10) (Eq. (11)). In this case, \(t+\tau\overline{\tau}\) would provide a key test since \(\text{BR}(\phi\to\tau\overline{\tau})\) will be amplified compared to the scenario of the simultaneous explanation.
## Acknowledgements
I would like to thank Ulrich Nierste, Marco Fedele, Hiroyasu Yonaha and Teppei Kitahara for the useful discussion and great encouragement. I appreciate Masaya Kohda for the exchange on the same-sign top signal in 2018. I also appreciate Javier Montejo Berlingen, Tamara Vazquez Schroeder and Shigeki Hirose for the detailed information of Ref. [80]. The work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762-TRR 257.
|
2310.07817
|
Nonlinear global Fréchet regression for random objects via weak
conditional expectation
|
Random objects are complex non-Euclidean data taking value in general metric
space, possibly devoid of any underlying vector space structure. Such data are
getting increasingly abundant with the rapid advancement in technology.
Examples include probability distributions, positive semi-definite matrices,
and data on Riemannian manifolds. However, except for regression for
object-valued response with Euclidean predictors and
distribution-on-distribution regression, there has been limited development of
a general framework for object-valued response with object-valued predictors in
the literature. To fill this gap, we introduce the notion of a weak conditional
Fr\'echet mean based on Carleman operators and then propose a global nonlinear
Fr\'echet regression model through the reproducing kernel Hilbert space (RKHS)
embedding. Furthermore, we establish the relationships between the conditional
Fr\'echet mean and the weak conditional Fr\'echet mean for both Euclidean and
object-valued data. We also show that the state-of-the-art global Fr\'echet
regression developed by Petersen and Mueller, 2019 emerges as a special case of
our method by choosing a linear kernel. We require that the metric space for
the predictor admits a reproducing kernel, while the intrinsic geometry of the
metric space for the response is utilized to study the asymptotic properties of
the proposed estimates. Numerical studies, including extensive simulations and
a real application, are conducted to investigate the performance of our
estimator in a finite sample.
|
Satarupa Bhattacharjee, Bing Li, Lingzhou Xue
|
2023-10-11T18:59:38Z
|
http://arxiv.org/abs/2310.07817v1
|
# Nonlinear global Frechet regression for random objects via weak conditional expectation
###### Abstract
Random objects are complex non-Euclidean data taking value in general metric space, possibly devoid of any underlying vector space structure. Such data are getting increasingly abundant with the rapid advancement in technology. Examples include probability distributions, positive semi-definite matrices, and data on Riemannian manifolds. However, except for regression for object-valued response with Euclidean predictors and distribution-on-distribution regression, there has been limited development of a general framework for object-valued response with object-valued predictors in the literature. To fill this gap, we introduce the notion of a weak conditional Frechet mean based on Carleman operators and then propose a global nonlinear Frechet regression model through the reproducing kernel Hilbert space (RKHS) embedding. Furthermore, we establish the relationships between the conditional Frechet mean and the weak conditional Frechet mean for both Euclidean and object-valued data. We also show that the state-of-the-art global Frechet regression recently developed by Petersen and Muller (2019) emerges as a special case of our method by choosing a linear kernel. We require that the metric space for the predictor admits a reproducing kernel, while the intrinsic geometry of the metric space for the response is utilized to study the asymptotic properties of the proposed estimates. Numerical studies, including extensive simulations and a real application, are conducted to investigate the performance of our estimator in a finite sample.
## 1 Introduction
Encountering complex non-Euclidean data, taking values in a general metric space that may defy any inherent linear structure, has become increasingly common in areas such as biological or social sciences with the rapid advancement of technology. Examples of such "_random object_" data, recorded in the form of images, shapes, networks,
|
2301.11934
|
Compression theory for inhomogeneous systems
|
The physics of complex systems stands to greatly benefit from the qualitative
changes in data availability and advances in data-driven computational methods.
Many of these systems can be represented by interacting degrees of freedom on
inhomogeneous graphs. However, the irregularity of the graph structure and the
vastness of configurational spaces present a fundamental challenge to
theoretical tools, such as the renormalization group, which were so successful
in characterizing the universal physical behaviour in critical phenomena. Here
we show that compression theory allows to extract relevant degrees of freedom
in arbitrary geometries, and develop efficient numerical tools to build an
effective theory from data. We demonstrate our method by applying it to a
strongly interacting system on an Ammann-Beenker quasicrystal, where it
discovers an exotic critical point with broken conformal symmetry.
|
Doruk Efe Gökmen, Sounak Biswas, Sebastian D. Huber, Zohar Ringel, Felix Flicker, Maciej Koch-Janusz
|
2023-01-27T19:00:00Z
|
http://arxiv.org/abs/2301.11934v2
|
# Machine learning assisted discovery of exotic criticality in a planar quasicrystal
###### Abstract
Our understanding of universality and phase transitions is deeply rooted in the notion of scaling. Indeed continuous phase transitions typically exhibit scale-invariant behavior facilitating the use of standard renormalization group (RG) techniques. Some critical systems, however, evade full scale invariance, in that observables reproduce themselves only under a set of discrete scale factors \(\delta^{n}\). Such discrete scale invariance (DSI) presents a conceptual challenge as many of our theoretical tools fail to be directly applicable. Here, we report on a discovery of emergent degrees of freedom for the recently studied classical dimer model on the quasiperiodic Ammann-Beenker tiling. Using a machine learning assisted approach we establish that their statistics can be expressed in terms of emergent large-scale super-dimers. Moreover, the result reveals an emergent discrete scale invariance, where the same dimer problem is re-appearing at successive discrete coarse-grained levels, demonstrating proximity to an RG fixed point. Our findings not only provide a rare example of successfully applying RG to a strongly-correlated system on a two-dimensional quasicrystal, but, owing to the generality of the approach, delineate a new paradigm in analysis and a practical tool for discovering coarse-grained representations in quasiperiodic and other non-homogeneous systems.
_Introduction -_ The study of critical phenomena has been a major driving force in condensed matter physics. It spurred the discovery of the renormalization group (RG) [1; 2; 3; 4] and of conformal field theories [5; 6; 7]. It also underlies the classification of topological states of matter via their gapless boundaries [8; 9].
An important currently unfolding development in the theory of critical phenomena is the study of strongly correlated systems on quasicrystals (QC). The self-similar structure of such systems paired with the lack of translation symmetry make an RG treatment both appropriate, and at the same time far from straightforward. To wit, though tailored and largely limited to 1D, RG methods [10; 11] nevertheless reveal new types of critical points in the context of many-body localization [12; 13; 11]. In 2D the critical Sutherland-Kalugin-Katz wavefunctions [14; 15] of tight binding Hamiltonians provide a stepping stone towards correlated physics on QCs. From a phenomenological perspective, several QC critical systems [16; 17; 10] show evidence of non-conformal critical points with discrete (DSI), rather than the usual continuous scale invariance (see Fig.1). DSI has been found in non-equilibrium scenarios [18; 19; 17]; non-conformal critical points, more generally, have been suggested [20] as a resolution of the discrepancies between numerics [21], and experiments, particularly on the lambda-point anomaly [22].
Defying the intuition that quasiperiodicity is often irrelevant in an RG sense [23; 24], these observations indicate that its interplay with strong interactions provides a path towards novel critical phenomena. One system suspected of harbouring such an exotic type of criticality consists of classical dimers on the Ammann-Beenker (AB) tiling, a 2D bipartite quasicrystal with a recursive structure and a 'forbidden' octagonal symmetry [16]. The dimers themselves are an abstraction of resonant valence bonds [25] arising from strong correlations in quantum antiferromagnets. Recently, some of us proved the existence of defect-free dimer coverings on the AB tiling
Figure 1: **Conformal invariance and discrete scale invariance.** (a) On a regular lattice, continuous field variables emerge from discrete degrees of freedom as the lattice becomes irrelevant under coarse graining. (b) In quasicrystals, self-similarity and strong correlations may conspire to keep the coarse-grained the degrees of freedom discrete and graph structure relevant upon zooming out.
and reported Monte Carlo (MC) evidence for quasi long ranged correlations [16]. An analytical account of the ensemble described by these dimer coverings, in particular of its potentially critical nature, remained outstanding. This is due to the complex structure of correlations, and lack of understanding regarding the relevant degrees of freedom (DOF) driving this critical behaviour.
The lack of applicable RG methods and the high dimensional configuration space of the problem naturally inspire the use of machine learning (ML). Despite impressive results [26; 27; 28; 29; 30; 31; 32; 33; 34], ML has yet to establish itself as a guide to theorizing about unexplored systems.
Here, we demonstrate such a development. We leverage analytical results reformulating RG in the language of formal compression theory [35], and a numerical algorithm employing contrastive learning to execute these ideas in regular lattices [36; 37]. We extend these tools to quasiperiodic systems (in fact, systems on arbitrary static graphs), and apply them to the AB dimer problem, obtaining qualitatively new theoretical results.
Our algorithm explicitly constructs the effective DOF in large patches of the system. The mapping is _local,_ and turns out to depend on a _linear_ function of microscopic dimer occupations. It reveals them to be clock variables, perfectly compatible with the hierarchical structure of the AB tiling (hosting, _e.g._, \(\mathbb{Z}_{8}\) variables in 8-fold symmetric patches). Moreover, the nearby clock variables are strongly correlated: they align with one of their neighbours, locking the pair into an effective dimer at a large scale. These emergent "super-dimers" obey an approximate effective dimer exclusion principle, in effect yielding a system close to the original AB dimer model, but at a larger scale. The stability of this picture across scales strongly suggests proximity of the original system to an RG fixed point, and _an emergent discrete scale invariance_ of the critical theory. In a parallel work, some of us provide a microscopic interpretation of these emergent super-dimers as certain alternating dimer paths on the AB lattice, and study the criticality numerically [38].
_The system -_ The Ammann-Beenker (AB) construction gives _quasiperiodic_ tilings of the plane utilizing two distinct plaquettes: a rhombus and a square [39]. Like their more famous cousins, the Penrose tilings [40], AB tilings feature diffraction patterns exhibiting crystallographically 'forbidden' symmetries, here 8-fold [41]. Likewise, they can also be generated by a recursive procedure in which an _inflation_ map \(\sigma\) acts on a small seed patch by decomposing the constituent plaquettes as shown in Fig.2b, and subsequently rescaling all the edge lengths by the silver ratio \(\delta\). A special role is played by 8-fold coordinated vertices: under inflations all lower coordinated vertices ultimately become (and stay) 8-vertices. Each 8-vertex is characterised by an _order_, _i.e._ the maximal number of inverse _deflations_ \(\sigma^{-1}\) after which it still remains 8-fold coordinated. Intuitively, the order of an 8-vertex specifies the maximal size of the local patch centered on it, within which the lattice appears perfectly 8-fold symmetric. The quasiperiodic AB lattice is thus invariant under discrete rescalings. This invariance is easily visualized for even order deflations \(\sigma^{2n}\) by drawing a super-lattice connecting 8-fold vertices (Fig.2a).
Dimer models enjoy a deceptively simple definition: the microscopic dimers live on the links of (any) lattice, which can be either occupied or empty. The key element is a set of hard _local_ constraints: at every vertex where the links meet, one and only one of the links is occupied. This gives rise to a surprisingly rich phenomenology. Dimer models on regular lattices have been studied extensively, in part due to their relevance to high-\(T_{c}\) superconductivity [42], but have since been shown to support topological order and fractionalisation [43; 44] and exotic critical points [45]. The quantum and classical versions are closely related. The latter not only is a starting point for the quantum version [46; 47], but is important in its own right, with deep connections to combina
Figure 2: **Self-similarity of the AB tiling, and the coarse graining blocks.** (a) A microscopic dimer configuration (small black links) on the AB tiling’s edges, with an overlaid AB _superlattice_, self-similar to the microscopic one. The effective DOF at a supervertex of a given (colour coded) valence will be obtained by coarse graining the dimer configuration in the surrounding region \(\mathcal{V}\) of a shape dictated by the inflation rules and shown as a polygon of a matching colour. (b) The inflation (deflation) \(\sigma^{2(-2)}\) of the elementary rhombi and squares generating the tiling, with parts of the polygonal domains indicated in colour. Coarse graining all such polygonal patches executes a deflation \(\sigma^{-2}\) of the original AB lattice, yielding the superlattice shown.
torics [48; 49; 50] and the study of random surfaces [51; 52].
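The hard constraint defining a dimer covering is easy to state operationally; the sketch below checks it on a toy graph with networkx (the AB tiling graph itself is not constructed here, and the function name is ours).

```python
import networkx as nx

def is_valid_dimer_covering(graph, occupied_edges):
    """Check the hard local constraint: every vertex touches exactly one
    occupied link (i.e. the occupied edges form a perfect matching)."""
    touched = {v: 0 for v in graph.nodes}
    for u, v in occupied_edges:
        if not graph.has_edge(u, v):
            raise ValueError(f"({u}, {v}) is not an edge of the graph")
        touched[u] += 1
        touched[v] += 1
    return all(n == 1 for n in touched.values())

g = nx.cycle_graph(4)  # toy stand-in for a patch of the tiling graph
print(is_valid_dimer_covering(g, [(0, 1), (2, 3)]))  # True: valid dimer covering
print(is_valid_dimer_covering(g, [(0, 1), (1, 2)]))  # False: vertex 1 covered twice
```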
Recent work has begun to explore the interplay of (strongly-correlated) dimer physics and quasiperiodicity. Particularly, AB tilings, in contrast to Penrose tilings [53], host perfectly matched dimer configurations in the thermodynamic limit (_i.e._ with a vanishing density of defects), and numerically computed dimer correlations exhibit a quasi power-law decay with a complex spatial structure [16]. Moreover, the combinatorial proof of perfect matching pointed to a hierarchy of self-similar effective matching problems at different scales between spatial regions bounded by 'pseudomembranes', _i.e._ collections of edges which collectively host exactly one dimer.
Taken together these facts suggest a conjecture that not only the AB tilings themselves, but crucially also the _physics_ of the dimers on the AB tilings, exhibit discrete scale invariance [16] - a potentially striking and unusual example of the relevance of quasiperiodicity for the critical behaviour. A proof, and a microscopic physical mechanism at the level of the dimer ensemble was, however, absent.
The putative criticality naturally calls for a renormalisation group (RG) analysis. Alas, RG approaches for quasiperiodic systems in \(D\geq 2\) dimensions are in their infancy and, in particular, to the best of our knowledge no such tools are available for the AB dimer system.
_Results -_ To solve this challenge we employ the recent results on a formal correspondence between lossy compression theory and real-space RG [35]: the relevant operators of the theory, supported in a local spatial patch \(\mathcal{V}\), emerge as variational solutions to a suitably posed information bottleneck problem [54] (see Appendix A). Intuitively, they are compressions of the subsystem \(\mathcal{V}\), which preserve the most information about its environment \(\mathcal{E}\). While previously only discussed for regular lattices, we note here that this holds in _any static graph_, in particular for quasiperiodic lattices, and thus it provides a theoretical avenue to define an RG procedure for such systems.
An efficient approximate numerical realization of this approach on regular lattices was introduced by some of us as the RSMI-NE algorithm [36; 37]. Here, we extend it to _arbitrary static graphs_ [55]. Leaving the implementation details to Appendix A, we directly apply it to the AB dimer system. We address, in turn, two key questions: what are the local effective DOF, and what are their correlations. This is systematically revealed by the analysis of data provided by our algorithm.
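Concretely, the mutual information retained by a candidate coarse-graining is estimated in this line of work with contrastive lower bounds; the sketch below shows a generic InfoNCE-style estimator of that kind, with a placeholder bilinear critic, and is not the actual RSMI-NE implementation.

```python
import torch
import torch.nn.functional as F

class BilinearCritic(torch.nn.Module):
    """Placeholder critic f(h, e) = h^T W e scoring coarse-grained variables
    against environment samples."""
    def __init__(self, dim_h, dim_e):
        super().__init__()
        self.W = torch.nn.Parameter(0.01 * torch.randn(dim_h, dim_e))

    def forward(self, h, e):
        return h @ self.W @ e.T          # (batch, batch) score matrix

def infonce_bound(h, e, critic):
    """InfoNCE lower bound on I(H; E): jointly drawn pairs sit on the
    diagonal of the score matrix, all other entries act as negatives."""
    scores = critic(h, e)
    labels = torch.arange(h.shape[0], device=h.device)
    n = torch.tensor(float(h.shape[0]))
    return torch.log(n) - F.cross_entropy(scores, labels)

# Training alternates gradient ascent on this bound over the critic and over
# the coarse-graining map Lambda that produces h from the block V.
```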
To uncover the emergent DOFs, we need to specify the spatial partition for the blocks \(\mathcal{V}\) first. In the AB tiling
Figure 3: **Finding effective clock variables.** (a) Coarse graining transformation \(\boldsymbol{\Lambda}\) mapping Monte Carlo configurations in \(V\) into bitstrings \(\mathcal{H}\) on supervertices of \(\sigma^{-2}\) deflated tiling. (b, f) The length of the bitstring \(\mathcal{H}^{8(3)}\) is determined by the saturation of mutual information at 4 (2) bits at 8(3)-supervertices. Each bit \(\mathcal{H}_{i}\) is decided by the sign of linear transformation \(\Lambda_{i}\cdot\mathcal{V}\). The respective optimal filters \(\boldsymbol{\Lambda}\) in (c, g) carry a representation of the local spatial symmetries of corresponding supervertices, namely \(\mathsf{C}_{8}\) and mirror. (d, h) The probability distributions \(P(\mathcal{H}^{8(3)})\) sparsely occupy the space of codes, and form abstract clock variables. (e) Particularly, \(\mathcal{H}^{8}\) forms a closed 8-loop, where each state has exactly two neighbours with Hamming-distance 1. (i) Transitions between adjacent clock-states are induced by the representations of the local symmetries on filters, enabling to identify abstract clock-states with spatial directions along the links of the quasiperiodic lattice (see main text).
there are _natural_ choices, set by the recursive structure of the AB lattice itself [56]. At each scale, the AB tiling can be covered by four classes of blocks [16], shown in Fig.2 in different colours, each deflating to vertices of differing connectivity in the super-lattice.
In each inequivalent class, the algorithm identifies the emergent DOF as a \(\mathbb{Z}_{n}\)_clock variable_, with \(n\) the connectivity, or class, of \(\mathcal{V}\) in the superlattice. This is revealed as follows: the variational compression map \(\mathbf{\Lambda}\) assigns to an MC dimer configuration \(\mathcal{V}\) a short binary code \(\mathcal{H}\) (Fig.3a), the bits being set by applying individual components of \(\mathbf{\Lambda}\) to \(\mathcal{V}\) (itself a long bitstring of dimer occupations in the block). Each component is _a priori_ a general nonlinear map, parametrized by a neural network, whose output is finally binarized.
The length of the code is _not_ supplied but found: the number of components in \(\mathbf{\Lambda}\) is sequentially increased, while the compression of \(\mathcal{V}\) is trained to optimally preserve the mutual information with its environment \(\mathcal{E}\). Crucially, the maximal retained information about \(\mathcal{E}\) plateaus, with the optimal code-length _depending on the class of \(\mathcal{V}\)_. In particular, for \(\mathcal{V}\) in class-8 the optimal number of components is four, while for class-3 it is only two (Fig.3b,f). Further, nonlinearity of the \(\mathbf{\Lambda}\) networks _does not_ improve compression: the same amount of information is preserved with only _linear_ components. Optimal linear maps on the space of dimer configurations on \(\mathcal{V}\) are shown for classes 8 and 3 in Figs.3c and g, respectively. We note that RSMI-NE training is _unsupervised_.
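As an illustration of this encoding step, the sketch below shows how a batch of sampled dimer configurations in a block would be mapped to bit-codes by the signs of linear filters, and how the code frequencies of Fig.3d,h would be tallied. The array names and shapes are hypothetical stand-ins; in the actual pipeline the filters are the trained components of \(\mathbf{\Lambda}\) (see Appendix A).

```python
import numpy as np

rng = np.random.default_rng(0)
dimers = rng.integers(0, 2, size=(10_000, 120))   # stand-in for MC dimer occupations in a block V
Lambda = rng.normal(size=(4, 120))                 # stand-in for the four trained linear filters

# Each bit of the code H is the sign of the corresponding filter applied to the configuration.
codes = (dimers @ Lambda.T > 0).astype(int)        # shape: (n_samples, n_bits)

# Tally code frequencies, as in Fig. 3d,h: which of the 2^4 codes occur, and how often.
unique_codes, counts = np.unique(codes, axis=0, return_counts=True)
for code, count in zip(unique_codes, counts):
    print("".join(map(str, code)), count / len(codes))
```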
To unravel the physical content of these encodings, we further query the RSMI-NE outputs. The code statistics in Fig.3d reveal striking features: of the sixteen 4-bit codes in class-8 only eight are ever assigned, with half of the codes unused. Yet in Fig.3b a 3-bit encoding, which has exactly eight available codes, is suboptimal. Moreover, the frequencies of all class-8 codes used are the same (3d), while for class-3 only two frequencies are identical (3h). These puzzling results indicate that RSMI-NE finds _structure_ beyond merely the number of states of the DOF, which is essential to correlations with \(\mathcal{E}\), and which _cannot_ be encoded with fewer bits.
We thus investigate the codes, and the \(\mathbf{\Lambda}\) maps. We first note that the 4-bit codes form a closed 8-_cycle_, with neighbours differing by a single bit-flip, and each code having exactly two 1-bit distant neighbours (Fig.3e) [57]. The uniform frequencies and the cyclic structure of the code hint at a symmetry.
Indeed, a class-8 patch \(\mathcal{V}\) of the AB _lattice_ is locally symmetric under \(\pi/4\) rotations. We observe that under such rotations the components of the optimal \(\mathbf{\Lambda}\) map in Fig.3c change as \((\Lambda_{1},\Lambda_{2},\Lambda_{3},\Lambda_{4})\rightarrow(\Lambda_{4},-\Lambda_{3},-\Lambda_{1},-\Lambda_{2})\), which is a representation of a
Figure 4: **Emergent dimer exclusion rule and self-similar dimer-dimer correlations across scales.** (a) The probability distribution of microscopic (_i.e._\(\delta^{0}\)) dimers (in greyscale) on an AB patch, conditioned on one of the links (in orange) hosting a dimer. (b, c) First two columns: the probabilities \(P(\mathcal{H}|\mathcal{H}^{3})\) of the _emergent_ clock variables on the \(\delta^{2}\) and \(\delta^{4}\) superlattice (in greyscale), conditioned on two distinct states of one of the 3-clocks (in orange). The third column shows distributions conditioned on a state of the central 8-clock. Binding of adjacent clock variables into super-dimers obeying dimer exclusion constraints is revealed by sharply peaked conditional distributions. The effective super-dimers reproduce also longer-range dimer-dimer correlations at both \(\delta^{2}\) and \(\delta^{4}\) scales. (d, e) Examples of (a single component of) optimal coarse-graining filters producing the central 8-state clock variable at scales \(\delta^{2}\) and \(\delta^{4}\). The latter comprises 2760 microscopic links.
generator of the cyclic group \(\mathsf{C}_{8}\). We emphasize that it is the compression map, and consequently _the emergent DOF_, that now carries a representation of what is _a priori_ a (local) symmetry only of the AB lattice. A similar analysis can be performed for the other classes of \(\mathcal{V}\), which have a mirror symmetry. In particular, under its action for the class-3 patch in Fig.3g we have \((\Lambda_{1},\Lambda_{2})\rightarrow(\Lambda_{2},\Lambda_{1})\), explaining the equal frequency of the \(\mathbf{01}\) and \(\mathbf{10}\) codes. Hence, we conclude that, rather than becoming continuous, the emergent DOFs of the dimer system at the \(\sigma^{2}\) scale remain discrete, and mimic the local symmetry of the underlying super-lattice. This holds equally at the \(\sigma^{4}\) scale, providing the first indication of a discrete scale invariance.

Having found the emergent DOFs in each class \(\mathcal{V}\) individually, we turn to their correlations, where discrete scale invariance manifests itself fully. To this end we simultaneously coarse grain dimer configurations in multiple blocks, which collectively form an AB superlattice as in Fig.2a, using the trained compression maps (Fig.3c,g).
As noted before, the number of states of each emergent DOF equals the connectivity of the supervertex it lives on. Since the distribution of each state's frequencies reflects the underlying superlattice symmetry, these internal DOFs can be identified with spatial orientations along the edges of the superlattice. For example, since mirror symmetry w.r.t. the axis connecting the 8- and 3-vertices in Fig.3i relates the code frequencies of the 3-vertex codes \(\mathbf{01}\) and \(\mathbf{10}\) (Fig.3h), the remaining state \(\mathbf{11}\) is the one pointing towards the 8-vertex.
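As a consistency check on the symmetry analysis above, one can verify directly that the quoted transformation of the class-8 filters, \((\Lambda_{1},\Lambda_{2},\Lambda_{3},\Lambda_{4})\rightarrow(\Lambda_{4},-\Lambda_{3},-\Lambda_{1},-\Lambda_{2})\), has order eight, i.e. it generates \(\mathsf{C}_{8}\) and not a smaller cyclic group. A minimal sketch (the matrix below is just that signed permutation written in matrix form):

```python
import numpy as np

# Signed permutation induced on the filters (L1, L2, L3, L4) by a pi/4 rotation of the patch:
# (L1, L2, L3, L4) -> (L4, -L3, -L1, -L2), written as new = R @ old.
R = np.array([[ 0,  0,  0,  1],
              [ 0,  0, -1,  0],
              [-1,  0,  0,  0],
              [ 0, -1,  0,  0]])

powers = [np.linalg.matrix_power(R, k) for k in range(1, 9)]
order = next(k for k, P in enumerate(powers, start=1) if np.array_equal(P, np.eye(4)))
print("order of R:", order)                                         # -> 8, so R generates C_8
print("R^4 == -identity:", np.array_equal(powers[3], -np.eye(4)))   # -> True
```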
We probe the correlations by _conditioning_ on the state of one of the vertices. In Fig.4b,c, fragments of \(\sigma^{-2}\) and \(\sigma^{-4}\) superlattices are shown, with the state of the conditioning variable, identified with a direction, in orange, while the conditional _distribution_ of DOFs at the other vertices is shown in greyscale. Remarkably, this distribution is very strongly correlated, effectively forcing occupation of some states, and excluding others. To wit, when the 3-vertex DOF points towards the 8-vertex, the distribution \(P(\mathcal{H}|\mathcal{H}^{3})\) of the latter is sharply peaked in the matching direction, while _no other neighbour_ of the 3-vertex points towards it (allowing, for example, the identification of the 8-vertex code \(\mathbf{1011}\) with a spatial orientation in Fig.3i). Conversely, when the 3-vertex DOF points towards one of its other neighbours, it is "matched" by it, while the 8-vertex DOF distribution has zero weight _precisely and only_ towards that 3-vertex.
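The conditional statistics themselves are simple to extract from the coarse-grained samples; a minimal sketch, with hypothetical arrays holding, for each Monte Carlo snapshot, the code assigned to the conditioning 3-vertex and to a neighbouring vertex:

```python
from collections import Counter

# Hypothetical paired samples: codes assigned by the trained filters in each MC snapshot.
codes_3 = ["11", "01", "11", "10", "11", "01"]              # the conditioning 3-vertex
codes_8 = ["1011", "0110", "1011", "0011", "1011", "1100"]  # a neighbouring 8-vertex

def conditional(cond_codes, target_codes, state):
    """Estimate P(target | cond = state) from paired samples."""
    selected = Counter(t for c, t in zip(cond_codes, target_codes) if c == state)
    total = sum(selected.values())
    return {code: n / total for code, n in selected.items()}

# Distribution of the 8-vertex clock, given that the 3-clock is in state "11":
print(conditional(codes_3, codes_8, "11"))
```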
Examining all such correlations we arrive at a striking conclusion: the effective DOFs in \(\mathcal{V}\)'s throughout the lattice are paired with _one and only one_ of their neighbours into emergent "super-dimers" on the edges of the superlattice. The exclusion of certain clock variable orientations in Figs.3(a-e) is a precise reflection of the hard dimer-constraints, which these super-dimers obey. Moreover, comparison of further correlations to those of the microscopic dimers in Fig.4a reveals that not just the local-dimer constraints, but also longer-range correlations are reproduced correctly. The physics of the microscopic dimer model on the AB lattice is thus replicated to a high degree of accuracy at the \(\delta^{2}\) scale, and again, at the \(\delta^{4}\) scale (where 'locking' is even sharper, see Fig.4c), thereby demonstrating DSI across three scales.
The quasiperiodicity of the AB lattice and the strong interactions of the dimer model conspire to re-create self-similar DOF at a higher scale, giving rise to discrete scale invariance (Fig.1), which we uncover guided by the outputs of the RSMI-NE algorithm.
We emphasize the dual computational and conceptual aspect of this result: each compression map \(\mathbf{\Lambda}\) at the \(\sigma^{4}\) scale is a highly structured function of approximately \(10^{3}\) microscopic dimer occupations (\(\sim 2^{10^{3}}\) configurations), effectively impossible to guess or analyze by hand, and yet providing sharp and concise physical insights about DOFs, symmetries, and correlations. We have, in effect, reached a point where ML techniques can not only assist, but _facilitate_ progress in theoretical physics.
Our approach provides a roadmap for unravelling universal behaviour, extending RG methods or more broadly performing dimensional reduction in settings where configuration spaces with complex topology appear. We expect this to be of importance to the study of quasicrystals, more general inhomogeneous systems such as metallic glasses, and biological networks.
_Acknowledgements -_ D.E.G., and S.D.H. gratefully acknowledge financial support from the Swiss National Science Foundation and the NCCR QSIT. S.B. acknowledges support by the European Research Council under the European Union Horizon 2020 Research and Innovation Programme via Grant Agreement No. 804213-TMCS. Z.R. acknowledges support from ISF grant 2250/19. M.K.-J. gratefully acknowledges financial support from the European Union's Horizon 2020 programme under Marie Sklodowska-Curie Grant Agreement No. 896004 (COMPLEX ML).
|
2304.05147
|
Artificial Collective Intelligence Engineering: a Survey of Concepts and
Perspectives
|
Collectiveness is an important property of many systems--both natural and
artificial. By exploiting a large number of individuals, it is often possible
to produce effects that go far beyond the capabilities of the smartest
individuals, or even to produce intelligent collective behaviour out of
not-so-intelligent individuals. Indeed, collective intelligence, namely the
capability of a group to act collectively in a seemingly intelligent way, is
increasingly often a design goal of engineered computational systems--motivated
by recent techno-scientific trends like the Internet of Things, swarm robotics,
and crowd computing, just to name a few. For several years, the collective
intelligence observed in natural and artificial systems has served as a source
of inspiration for engineering ideas, models, and mechanisms. Today, artificial
and computational collective intelligence are recognised research topics,
spanning various techniques, kinds of target systems, and application domains.
However, there is still a lot of fragmentation in the research panorama of the
topic within computer science, and the verticality of most communities and
contributions makes it difficult to extract the core underlying ideas and
frames of reference. The challenge is to identify, place in a common structure,
and ultimately connect the different areas and methods addressing intelligent
collectives. To address this gap, this paper considers a set of broad scoping
questions providing a map of collective intelligence research, mostly by the
point of view of computer scientists and engineers. Accordingly, it covers
preliminary notions, fundamental concepts, and the main research perspectives,
identifying opportunities and challenges for researchers on artificial and
computational collective intelligence engineering.
|
Roberto Casadei
|
2023-04-11T11:22:47Z
|
http://arxiv.org/abs/2304.05147v1
|
# Artificial Collective Intelligence Engineering: a Survey of Concepts and Perspectives
###### Abstract.
Collectiveness is an important property of many systems--both natural and artificial. By exploiting a large number of individuals, it is often possible to produce effects that go far beyond the capabilities of the smartest individuals, or even to produce intelligent collective behaviour out of not-so-intelligent individuals. Indeed, collective intelligence, namely the capability of a group to act collectively in a seemingly intelligent way, is increasingly often a design goal of engineered computational systems--motivated by recent techno-scientific trends like the Internet of Things, swarm robotics, and crowd computing, just to name a few. For several years, the collective intelligence observed in natural and artificial systems has served as a source of inspiration for engineering ideas, models, and mechanisms. Today, artificial and computational collective intelligence are recognised research topics, spanning various techniques, kinds of target systems, and application domains. However, there is still a lot of fragmentation in the research panorama of the topic within computer science, and the verticality of most communities and contributions makes it difficult to extract the core underlying ideas and frames of reference. The challenge is to identify, place in a common structure, and ultimately connect the different areas and methods addressing intelligent collectives. To address this gap, this paper considers a set of broad scoping questions providing a map of collective intelligence research, mostly by the point of view of computer scientists and engineers. Accordingly, it covers preliminary notions, fundamental concepts, and the main research perspectives, identifying opportunities and challenges for researchers on artificial and computational collective intelligence engineering.
## 1. Introduction
Nowadays, technical systems are evolving in complexity: they are increasingly large-scale, heterogeneous, and dynamic, posing several challenges to engineers and operators. For instance, progress in information and communication technology (ICT) is promoting a future where computation is deeply integrated in a large variety of environments: our bodies, homes, buildings, cities, planet, and universe. In other words, the vision of pervasive and ubiquitous computing is stronger than ever, with an increasing trend towards the mass deployment of a large number of heterogeneous devices nearly everywhere, to improve existing applications and create new ones. However, we still seem quite far from exploiting the full potential of the interconnected networks of devices at our disposal.
Nevertheless, there is some progress. New paradigms and solutions have been proposed, often drawing from that powerful source of mechanisms and solutions that is nature. Indeed, we are witnessing a long-term research endeavour aiming at bringing powerful properties and capabilities of living systems into technical systems (Stein et al., 2021). Intelligence, evolution, emergence of novel capabilities, resilience, and social integration (Bellman et al., 2021; Stein et al., 2021) are often observed in natural, living systems and considered important features of artificial, engineered systems as well. Indeed, computer scientists and engineers are increasingly often interested not just in making individual devices smarter, but also in making whole ecosystems of devices (and people) more collectively intelligent. Creating collective intelligence (CI) in artificial systems, however, is challenging. Indeed, various computer science and engineering fields such as, e.g., multi-agent systems (Wooldridge, 2009) and swarm robotics (Brambilla et al., 2013), have often encountered problems related to this "CI challenge". Moreover, the generality of the problem and the possibility of transferring ideas and techniques across fields has also motivated the emergence of a general
research field specifically aimed at studying how to build CI in artificial systems, also known under terms such as _artificial collective intelligence (ACI)_ (Tumer and Wolpert, 2004; Zheng et al., 2018) and _computational collective intelligence (CCI)_ (Badica et al., 2014).
There exist some surveys on CI/ACI, but they tend to adopt specific viewpoints limiting the overall scope of the study, such as models for social computing systems (Suran et al., 2020), interaction modality (He et al., 2019), or large-scale cooperative multi-agent systems (Tumer and Wolpert, 2004). The main goal of this article is to review the concepts, models, and perspectives needed for the _engineering of ACI_. We can say that the article mainly considers _cyber-physical collectives_ as target systems, namely groups of interconnected computing devices, possibly situated in a physical environment and possibly involving "humans in the loop" (Schirner et al., 2013), which are to be thought of as "programmable platforms" for services and applications benefitting from the CI emerging from their activity. The idea is to provide a research map on CI for computer scientists and engineers, generally useful for the broad techno-scientific community.
In summary, we provide the following contributions:
* we perform a _scoping review_(Petticrew and Roberts, 2008), different from existing surveys in scope and focus, covering concepts, models, and perspectives related to CI, ACI, and their (software) engineering, which can also be seen as a foundation for more systematic reviews;
* we provide a map and taxonomy of the ACI field, by connecting it with related fields and providing categories to frame research works on ACI;
* we outline opportunities and challenges for further research, in terms of target domains and interesting developments of existing methods.
In other words, we provide a broad overview of the field of CI/ACI, larger in scope and more oriented towards software systems engineering compared to (He et al., 2019; Malone and Bernstein, 2015; Suran et al., 2020; Tumer and Wolpert, 2004).
The article is organised as follows. First, a set of broad scoping questions are elicited, to provide a structure for the paper and its discussions. After this, a survey of existing reviews relevant to CI is presented, to also motivate the perspective of this very article. Then, the preliminary concepts of a "collective" and "(individual) intelligence" are briefly reviewed. Upon this basis, to understand what CI is, some reference definitions, examples, models, and classifications are reviewed from the literature. Then, to discuss how CI can be engineered, a number of perspectives are considered, under which some main approaches for CI engineering are pointed out. Building on such a presentation of approaches, a discussion of opportunities and challenges related to CI engineering is developed, providing directions for further research. Finally, a wrap-up is provided with some conclusive thoughts.
## 2. Method
Our goal is to scope the large and fragmented area of (artificial) collective intelligence, in order to identify its key concepts, relevant perspectives, research problems, and gaps--with an emphasis on its engineering and its computational/artificial intelligence (AI) side. Accordingly, we perform a scoping review (Petticrew and Roberts, 2008). This tool can be preferred over systematic reviews whenever specific research questions are hard to identify or ineffective, or when the goal is to identify the _types_ of available evidence, clarify notions, and key characteristics/factors related to a concept (Munn et al., 2018). Indeed, we seek to provide a map of the field, supporting more focussed and systematic reviews in the future.
We use a question-based method to drive the investigation and selection of the bibliography of this manuscript. In particular, we consider the following _scoping questions_ (_SQs_).
* _SQ0_: What is a collective?
* _SQ1_: What is (individual) intelligence?
* _SQ2_: What is collective intelligence?
* _SQ3_: What behaviours can be termed "collectively intelligent"? Are there paradigmatic examples?
* _SQ4_: What are the requirements for collective intelligence?
* _SQ5_: What relationships exist between individual and collective intelligence?
* _SQ6_: How does collective intelligence unfold/emerge?
* _SQ7_: How can collective intelligence be measured?
* _SQ8_: How can collective intelligence be built artificially/computationally?
* _SQ9_: What is the state of the art of (computational) collective intelligence?
* _SQ10_: How is the research community on collective intelligence structured?

SQ0 and SQ1 cover the _preliminary concepts_ underlying the notion of CI, setting the necessary background for addressing SQ2-SQ3 (which are about _what_ CI is) and SQ4-SQ7 (which are about the _factors_, _characteristics_, and _mechanics_ of CI). Then, SQ8 is about the problem of _engineering_ CI, and SQ9-SQ10 are meta-questions concerning research in the field. Notice that these are broad scoping questions aimed mainly at providing directions for the search and identification of the research works included in the survey.
## 3. Tertiary study
In order to motivate the need for a survey on CI, we performed a tertiary study where secondary studies (e.g., surveys, systematic reviews) and collections are reviewed. We organise these according to whether they consider CI in its generality (i.e., abstracting from its applications and areas), focus on its artificial/computational form (ACI/CCI), on its swarm-like form, or on specific kinds of collectives or goals. Therefore, this section also provides a partial answer to SQ10.
In general, we can observe a lack of comprehensive reviews and maps of the CI field. From this situation, we draw a motivation for this article: providing a map of the topic, especially aimed at computer scientists and engineers, showing different perspectives and providing some highlights from the state of the art in ACI.
#### 3.0.1. Reviews on CI as a general topic
Two main surveys to date aim at addressing CI as a general topic. He et al. (2019) analyse CI across different fields based on a taxonomy that distinguishes between isolation, collaboration, and feedback-based CI paradigms. Suran et al. (2020) performed a systematic literature review to elicit a general model of CI and its attributes, with a focus on social computing. These two contributions consider, integrate, and somewhat subsume previous, more limited, or less general CI models and reviews (Krause et al., 2010; Lykourentzou et al., 2009; Salminen, 2012; Yu et al., 2018), and so they will be discussed more extensively in later sections. However, to the best of our knowledge, there are no comprehensive mapping studies providing a broad overview of the field for computer scientists.
#### 3.0.2. Multi-disciplinary collections
In (Malone and Bernstein, 2015), essays on CI are collected from different fields including economics, biology, human-computer interaction (HCI), AI, organisational behaviour, and sociology. In (Millhouse et al., 2021), a collection of contributions from a workshop gathering scientists in different areas is provided, with the goal of sharing _"insights about how intelligence can emerge from interactions among multiple agents--whether those agents be machines, animals, or human beings."_
#### 3.0.3. Reviews on _ACI/CCI_
The paper by Tumer and Wolpert (2004), published in 1999, surveys CCI systems across the categories of (i) AI and machine learning (ML), including multi-agent systems (MASs); (ii) social science-inspired systems, such as those found in economics and game theory; (iii) evolutionary game-theoretical approaches; (iv) biologically-inspired systems, like
swarm intelligence, artificial life, and population approaches; (v) physics-based systems; and (vi) other research fields, ranging from network theory to self-organisation. This is a very rich survey, but it covers research published before the year 2000 and is slightly focused on automatic and utility-based approaches.
The editorial by Jung (2017) reviews special issue papers on the integration of CCI and big data, where it is considered how data-driven CI can help in (i) collecting data, (ii) analysing data, and (iii) using data e.g. to support decision making.
The review by Rossi et al. (2018) provides a survey and taxonomy of multi-agent algorithms for collective behaviour, classified into: consensus, artificial potential functions, distributed feedback control, geometric algorithms, state machines and behaviour composition, bio-inspired algorithms, density-based control, and optimisation algorithms. What emerges is a rather sharp distinction between low-level (e.g., bio-inspired self-organisation) and high-level coordination.
#### 3.0.4. Reviews on swarm intelligence
Several reviews on swarm intelligence have been published (Brambilla et al., 2013; Chakraborty and Kar, 2017; Dorigo and Stutzle, 2019; Figueiredo et al., 2019; Fister et al., 2013; Kolling et al., 2016; Mavrovouniotis et al., 2017; Navarro and Matia, 2013; Nguyen et al., 2020; Rajasekhar et al., 2017; Schranz et al., 2021; Yang and He, 2013; Zedadra et al., 2018; Zhang et al., 2015). In the swarm intelligence field, a large part of research is devoted to devising (meta-)heuristics and algorithms for solving complex optimisation problems. Mavrovouniotis et al. (2017) focus on swarm algorithms for dynamic optimisation, namely in settings where the environment changes over time.
Moreover, reviews in this context often adopt an angle based on what natural system inspired swarm intelligence mechanisms. For instance, Rajasekhar et al. (2017) provide a survey on algorithms inspired by honey bees, e.g. based on mating, foraging, and swarming behaviours of honey bees; similar surveys exist for bat algorithms (Yang and He, 2013), firefly algorithms (Fister et al., 2013), ant colony optimisation (Dorigo and Stutzle, 2019).
Some surveys consider swarm intelligence applied to specific problems such as self-organising pattern formation (Oh et al., 2017), feature selection (Nguyen et al., 2020), clustering (Figueiredo et al., 2019), green logistics (Zhang et al., 2015), collective movement (Navarro and Matia, 2013). Other surveys consider swarm intelligence in particular contexts or as exhibited by particular kinds of systems, such as Internet of Things (IoT) systems (Zedadra et al., 2018), cyber-physical systems (CPSs) (Schranz et al., 2021), and robot swarms (Brambilla et al., 2013; Kolling et al., 2016).
#### 3.0.5. Reviews on CI for specific systems and settings
Reviews from specific viewpoints include collections and surveys on human CI (Salminen, 2012), deep learning (Ha and Tang, 2021), enterprise information systems (Nguyen et al., 2019), and sociotechnical systems supported by 5G communications (Narayanan et al., 2022).
Salminen (2012) performed a literature review of CI in human context, grouping contributions into (i) micro level, emphasising enabling factors; (ii) emergence (or meso) level, emphasising how global patterns arise from local activity; and (iii) macro level, emphasising the kinds of system output. A review on human CI by a crowd science perspective is provided by Yu et al. (2018). Krause et al. (2010) review and compare swarm intelligence in animals and humans.
Ha and Tang (2021) performed a survey of recent developments on the embedding of CI principles into deep learning methods. They discuss e.g. how CI can help in devising novel architectures and training algorithms, and recent works on multi-agent (reinforcement) learning. Studies like this one are important since they elicit and strengthen trans-disciplinary relationships which are key for complex interdisciplinary fields like CI.
Narayanan et al. (2022) provide a survey of the CI emerging in human-machine socio-technical systems supported by 5G communications. The discussed applications include road traffic control,
unmanned aerial vehicles, smart grid management, and augmented democracy. The point is that to realise their full potential, these kinds of decentralised socio-technical systems often require proper connectivity properties and capabilities to support and foster the emergence of CI. For instance, from the analysis, the authors foresee that the 5G communication technology can promote CI by enhancing aspects like connectivity with neighbour nodes, interaction protocols, knowledge exchange, and the exploration-exploitation tradeoff via improved speed, latency, and reliability. On the other hand, there are significant challenges addressed by current and hopefully by future research in terms of security, privacy, and radio resource management.
## 4. Preliminary Concepts
This section provides an introduction to the notions of collectives and individual intelligence, hence addressing SQ0 and SQ1, and providing preliminary concepts for introducing and discussing CI in the next section.
### Collectives
Informally, a _collective_ is a (possibly dynamic) group of largely _homogeneous_ individuals, which are also called the _members_ of the collective. Different works may use different or more specific definitions for a collective. Different fields often target different kinds of collectives, resulting in implicit assumptions.
Devising a general and comprehensive characterisation of collectives is an open research problem, addressed in the context of _mereology_, namely the study of _parthood_ relations, and _ontology_, namely the study of "what there is". In the literature, a few formal theories attempt to deeply characterise collectives and collective phenomena (Bottazzi et al., 2006; Brodaric and Neuhaus, 2020; Galton and Wood, 2016; Wood and Galton, 2009).
For instance, in (Wood and Galton, 2009), a taxonomy of collective phenomena is provided, along the classification criteria of _membership_ (concerned with the identity and cardinality of the members of a collective), _location_ (of the collective as well as of its members), _coherence_ (the source of "collectiveness"), _roles_ (if members are distinguished by roles), _depth_ (concerning levels of collectives). In particular, two main sources for collectiveness can be devised: internal or external _causes_, and _shared purposes_ or _goals_. Regarding depth, it is worth noticing that, unlike the _componenthood_ relation in composites, _membership_ in collectives is generally not transitive (Brodaric and Neuhaus, 2020). Composites can be defined as structured pluralities or groups of parts, called _components_, playing specific functions (Brodaric and Neuhaus, 2020). In the literature, it is generally assumed that composites are heterogeneous, while collectives are homogeneous (Brodaric and Neuhaus, 2020).
Moreover, a collective is often intended to be a "concrete particular" (i.e., not an abstraction like a mathematical set) and a "continuum" (i.e., a particular existing and possibly changing over a time span) (Wood and Galton, 2009). Defining a general, comprehensive, and precise characterisation or taxonomy of collectives is not trivial (Wood and Galton, 2009). For instance, certain collectives may require a certain number of members or roles to be filled to exist (Wood, 2016), or may change identity following certain changes in their composition. Sometimes, collectives may be abstracted by specific collective properties or collective knowledge (Nguyen, 2008). Collectiveness may also be considered as a degree, and hence a quantifiable property (Daniels et al., 2016) of phenomena and groups of individuals.
There exist several related group-like notions, which differ e.g. by perspective, the key relation between items, or the fundamental property of the group. Some of these group-like notions are summarised in Table 1, with a proposed classification--following Brodaric and Neuhaus (2020), though different meronomies are possible. A collective is a particular kind of plurality or group.
Crowds, swarms, herds, flocks, schools can generally be considered specific kinds of collectives. Organisations and systems might be modelled as constructs based on the structural arrangement and heterogeneity of composites, but are also amenable to be characterised as collectives.
Like for the notion of intentional stance (Dennett, 1989), it may make sense to adopt a _collective stance_ in which e.g. _"the human species [a group] is viewed as a single organism"_(Gaines, 1994), though the idea of _collective intentionality_ is problematic and subject of intense philosophical debate (Schweikard and Schmid, 2013). Indeed, we believe that the perspective of collectiveness can provide a complementary point of view to that of an individual for understanding and engineering various sorts of systems involving groups of individuals. However, when addressing themes involving collectives (such as CI), it is important to clarify what kind of collectives are addressed, as this would help to clarify the assumptions and generality of a specific contribution.
### Intelligence
Intelligence is a controversial and elusive concept subject to philosophical debate (Legg et al., 2007), best understood as a nomological network of constructs (Reeve et al., 2011). Etymologically, intelligence comes from Latin "intelligere", which means "to understand". It can be defined as _"the global capacity of the individual to act purposefully, to think rationally, and to deal effectively with the environment"_(Wechsler, 1946), or the property that _"measures an agent's ability to achieve goals in a wide range of environments"_(Legg et al., 2007). In general, there are two different interpretations: intelligence as either a collection of task-specific skills or a general learning ability (Chollet, 2019), which reflect the distinction between _crystallised_ and _fluid_ abilities, respectively.
Problems about intelligence include, for instance, its definition and modelling, such as devising the structure of intelligence (Reeve et al., 2011); its relation with action; its measurement and evaluation; its analysis; and its construction and development.
Concerning the theories of intelligence, there are two main traditions (Reeve et al., 2011): the _psychometric tradition_, based on the number and nature of basic cognitive abilities or _factors_; and the developmental or holistic perspective, based on acquired intellect.
The problem of the _measure_ of intelligence (Chollet, 2019; Hernandez-Orallo, 2017) is of course related to what representation or model of intelligence is considered, and is complicated by the need of distinguishing between causality and correlation, selecting a representative set of environments
| **Concept** | **(Typical) Parent concept** | **(Typical) Defining properties** |
| --- | --- | --- |
| Plurality; Collection; Group; Set | | Set-inclusion |
| Composite | Plurality | Componenthood, heterogeneity |
| Collective | Plurality | Membership, homogeneity |
| Crowd | Collective | Nature (humans) |
| Swarm | Collective | Nature (insect-like) |
| Robot swarm | Swarm | Nature (simple robots), structure (high numbers) |
| Herd; Flock; School | Collective | Nature (animals) |
| Organisation | Composite / Collective | Structure, roles |
| System | Composite / Collective | Interacting elements; Boundary |
| Multi-agent system | System | Nature (agency) |

Table 1. Common group-like notions addressed in computer science and engineering.
for evaluation, etc. Carroll defines an _ability_ (i.e. an intelligence factor) as a source of variance in performance for a certain class of tasks (Carroll, 1993). Measuring intelligence is based on _factor analysis_: specific tests are run (the _observables_), and factors (the _unobservables_) are used as possible explanations for the correlations among the observables, describing their variability. It is expected that the nature of the entity whose intelligence we are considering would drive and require the definition of suitable factor models.
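To make the factor-analytic logic concrete, the following toy sketch (purely synthetic data, not tied to any particular test battery) generates scores on several tests from a single latent ability plus noise, and then recovers a one-factor explanation of their correlations:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_individuals, n_tests = 1000, 6

# Synthetic "observables": each test score loads on one latent ability g (the "unobservable"),
# with test-specific loadings plus independent noise.
g = rng.normal(size=(n_individuals, 1))
true_loadings = np.array([[0.9, 0.8, 0.7, 0.6, 0.5, 0.4]])
scores = g @ true_loadings + 0.5 * rng.normal(size=(n_individuals, n_tests))

fa = FactorAnalysis(n_components=1).fit(scores)
print(np.round(fa.components_, 2))   # estimated loadings, close to the true ones (up to sign)
```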
Various taxonomies of intelligence have been proposed over time. A common distinction is between _natural_(van Gerven, 2017) and _artificial intelligence_(Russell and Norvig, 2020). Both can be considered under the unifying notion of _abstract intelligence_(Wang, 2009).
## 5. Understanding Collective Intelligence
On the basis of the preliminary concepts introduced in the previous section, this section focusses on _what_ CI is, according to literature, discussing definitions, examples, models, and the main classifications of CI (namely ACI and CCI) we are interested into, hence addressing SQ2 to SQ7. Understanding the goals, characteristics, and main frames of reference of CI is important before turning to the problem of CCI engineering in the next section.
### Definitions and characterisations of collective intelligence
Collective intelligence is the intelligence that can be ascribed to a collective--where a collective is a multiplicity of entities (commonly characterised as discussed in the previous section). Indeed, by abstracting a collective as a _whole_, namely as a _higher-order individual_ in turn (consisting of other individuals, which are its _members_), it should be possible to transfer characterisations of individual intelligence to it.
Table 2 reports some definitions of CI taken from the literature. From them, it is possible to see recurrent as well as peculiar aspects of CI characterisations.
#### 5.1.1. Reuse of (individual) intelligence definitions
Some definitions do not attempt to re-define "intelligence" but merely bring existing characterisations of intelligence, commonsense acceptations, or its general meaning as a nomological network of concepts (Reeve et al., 2011) to the collective realm. This has the advantages of simplicity, generality, and _openness_, which may promote multi-, inter- and trans-disciplinarity.
#### 5.1.2. General vs. task-specific
If we reuse existing notions of intelligence, it means that we may consider how different definitions in turn apply to collective entities. For instance, similarly to individual intelligence, CI may be considered as a general problem-solving ability or as a set of specific skills. Evidence for the existence of a general CI statistical factor \(c\) in human groups has been provided by Woolley et al. (2010), where such factor is shown to be more correlated with average social sensitivity and diversity, rather than with average or maximum individual intelligence of the members.
#### 5.1.3. Collectives of different natures
Some definitions largely abstract from the nature of collectives (cf. "collections" or "groups of individuals", "artificial and/or natural"), some assume a minimal set of characteristics for individuals (cf. agency, ability to interact, etc.), some require that the individuals are connected in some way (cf. interaction, or existence of social structures).
#### 5.1.4. Different sources for collectiveness and mechanisms for CI
Terms like interaction, collaboration, competition, and social structure might be used to further constrain the scope of CI to particular kinds of collectives, or to the different mechanisms that are possible for supporting it.
#### 5.1.5. Connection to emergence
Various definitions build on the notion of _emergence_, which relates to the production, in a system, of radically novel, coherent macro-level patterns from micro-level activity (Wolf and Holvoet, 2004).
#### 5.1.6. Phenomenological approach
Similarly to emergence, which is often studied phenomenologically (Minati, 2018; Rainey and Jamshidi, 2018), some CI definitions adopt a phenomenological standpoint where the focus is not on what CI actually is, but on the phenomena that may be associated with it.
#### 5.1.7. Positive vs. negative CI
It is common to consider CI as a _quantifiable_ property and specifically as a _signed_ quantity, i.e., positive or negative. Indeed, various authors talk about _negative_ collective intelligence (Laan et al., 2017; Szuba, 2001) in order to characterise the cases where a collective would perform worse than one of its individual members. In such cases, the social constraints effectively hinder individual abilities with no benefit.
### Examples
In the following, notable examples of CI are briefly reviewed.
**Example 1** (Markets).: Markets are economic systems that consist of a large number of rational self-interested agents, buyers and sellers, that engage in transactions regarding assets. The prices of assets change to reflect supply and demand, as well as the larger context, and can be seen as a reification of the collective intelligence of the entire market (Lo, 2015). So, markets can be seen as a mechanism for sharing information and making decisions about how to allocate resources in a collectively intelligent way (Malone and Bernstein, 2015). Accordingly, market-based abstractions have been considered in computer science to promote globally efficient systems (Mainland et al., 2004).
**Example 2** (Wisdom of crowds).: Crowds - groups of people - can be of different kinds (cf. physical vs. psychological crowds) and can exhibit different degrees of CI. A crowd can exhibit intelligent (Surowiecki, 2005) or unintelligent behaviour (Laan et al., 2017). Surowiecki (2005) popularised the term "wisdom of crowds", showing that groups are capable of good performance under certain circumstances, providing aggregate responses that incorporate and exploit the collective knowledge of the participants. Among the conditions required for a crowd to be wise, Surowiecki (2005) identified _diversity_ (of individuals), _independence_ (of individual opinions), and _decentralisation_ (of individual knowledge acquisition)--whose importance has been confirmed by later studies such as, e.g., those by Woolley et al. (2010).
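A toy numerical illustration of this effect (purely synthetic numbers, only meant to show the role of independence): when many individuals produce independent, unbiased but noisy estimates of a quantity, the aggregate is typically far closer to the truth than a randomly chosen individual, whereas correlated (non-independent) errors erode the advantage:

```python
import numpy as np

rng = np.random.default_rng(0)
truth, n_people, n_trials = 100.0, 200, 2000

# Independent, unbiased individual estimates vs. estimates sharing a common (correlated) bias.
indep = truth + rng.normal(0, 20, size=(n_trials, n_people))
shared_bias = rng.normal(0, 20, size=(n_trials, 1))            # same error component for everyone
correlated = truth + 0.8 * shared_bias + 0.6 * rng.normal(0, 20, size=(n_trials, n_people))

for name, est in [("independent crowd", indep), ("correlated crowd", correlated)]:
    crowd_err = np.abs(est.mean(axis=1) - truth).mean()
    individual_err = np.abs(est - truth).mean()
    print(f"{name}: mean individual error {individual_err:.1f}, crowd (mean) error {crowd_err:.1f}")
```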
**Example 3** (Swarm intelligence).: Swarm intelligence is the CI that emerges in groups of simple agents (Bonabeau et al., 1999). Swarm intelligence was first observed in natural systems, such as insect societies (e.g., ant colonies, beehives), which inspired mechanisms and strategies for improving the flexibility, robustness, and efficiency of artificial systems. With respect to the general field of CI, swarm intelligence may be considered as a sub-field that deals with very large groups of individuals behaving according to simple rules. Since cardinality and simplicity are matters of degree, the boundaries of the field are also fuzzy.
**Example 4** (Learning multi-agent systems).: Another notable example of CI is given by MASs (Wooldridge, 2009). Unlike swarms, MASs usually comprise rational agents, possibly structured into organisations, and possibly exhibiting properties of strong agency (Wooldridge, 2009), i.e., they may in turn be individually intelligent. The agents as well as the MAS may be able to _learn_ about the environment, themselves, or the behaviour that they should follow to maximise some local or global notion of utility (Tumer and Wolpert, 2004).
**Example 5** (Human-machine collective intelligence).: A powerful example of CI is the so-called _human-machine collective intelligence (HMCI)_[Smirnov and Ponomarev, 2019] or _hybrid CI_[Moradi et al., 2019; Peeters et al., 2021], which is the one that applies to heterogeneous systems involving both machines and humans. The idea is to promote the synergy between artificial/machine intelligence and human intelligence, which are often seen as complementary forms of intelligence. An exemplar of HMCI is Wikipedia, a hypermedia system of interconnected collective knowledge, which is created and revised by humans through the mediation of Web technologies. Wikipedia data can also be autonomously processed by agents to build other kinds of applications leveraging its collective knowledge.
| **Ref.** | **Definition** | **Remarks** |
| --- | --- | --- |
| Malone and Bernstein (2015) | _Groups of individuals acting collectively in ways that seem intelligent_ | "Reuse" of the notion of intelligence; collective action |
| Nguyen et al. (2009) | _The form of intelligence that emerges from the collaboration and competition of many individuals (artificial and/or natural)_ | Emergence; mechanisms (collaboration, competition); members of different nature |
| He et al. (2019) | _Collective intelligence (CI) refers to the intelligence that emerges at the macro-level of a collection and transcends that of the individuals._ | Emergence (transcendence) |
| Tumer and Wolpert (2004) | _[COLlective INtelligence (COIN)] Any pair of a large, distributed collection of interacting computational processes among which there is little to no centralized communication or control, together with a "world utility" function that rates the possible dynamic histories of the collection._ | Requirements (interaction, decentralisation) |
| | _We can say that the phenomenon of CI has emerged in a social structure of interacting agents or beings, over a certain period, iff the weighted sum of problems they can solve together as a social structure is higher during the whole period than the sum of problems weighted in the same way that can be solved by the agents or beings when not interacting_ | Requirements (social structure, interaction) |
| Lykourentzou et al. (2009) | _Collective intelligence (CI) is an emerging research field which aims at combining human and machine intelligence, to improve community processes usually performed by large groups._ | Hybrid or human-machine CI |

Table 2. Some definitions of CI from the literature.
### Models
Here, we briefly review two main general models of CI from the literature, which comprehensively summarise and integrate previous models.
#### 5.3.1. The isolation-, collaboration-, and feedback-based CI paradigms (He et al., 2019)
He et al. (2019) propose a taxonomy of CI into three paradigms of increasing power, based on the absence or presence of _interaction_ and _feedback_ mechanisms. In their view, CI can be generally regarded as an aggregation of individual behaviour results. Then:
1. _Isolation paradigm._ The individuals are isolated and behave independently, producing results that are aggregated in some way. The aggregation result does not affect the individual behaviours. Isolation studies use statistical and mining tools.
2. _Collaboration paradigm._ There is direct or indirect _interaction_ between the individuals. Indirect interaction can be modelled through a notion of _environment_. Aggregation operates on individual behaviour results and the environment state. The aggregation result affects neither the individual behaviours nor the environment.
3. _Feedback paradigm._ This paradigm adds to the collaboration paradigm a "downward causation" of the aggregation result on the individual behaviours and/or the environment (a minimal code sketch contrasting the three data flows is given below).
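The following toy sketch contrasts the three data flows on a deliberately trivial task (agents produce numeric estimates and the collective result is their average); the agent and environment stubs are hypothetical and only meant to make the control flow explicit:

```python
import random

random.seed(0)
agents = [random.uniform(40, 60) for _ in range(50)]     # each agent's current estimate (stub behaviour)
aggregate = lambda xs: sum(xs) / len(xs)

# 1. Isolation: independent individual results are simply aggregated.
ci_isolation = aggregate(agents)

# 2. Collaboration: individuals interact indirectly through a shared environment value they read
#    and update; aggregation uses individual results and the environment state, with no feedback.
environment = 50.0
interacting = []
for a in agents:
    a = 0.5 * a + 0.5 * environment                       # read the environment
    environment = 0.9 * environment + 0.1 * a             # write back to it
    interacting.append(a)
ci_collaboration = aggregate(interacting + [environment])

# 3. Feedback: the aggregated result is published back to the agents ("downward causation"),
#    changing their behaviour in subsequent rounds.
adapted = list(agents)
for _ in range(10):
    result = aggregate(adapted)
    adapted = [a + 0.3 * (result - a) for a in adapted]
ci_feedback = aggregate(adapted)

print(ci_isolation, ci_collaboration, ci_feedback)
```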
#### 5.3.2. CI framework by Suran et al. (2020)
Suran et al. (2020) analyse 12 studies on CI and devise a _generic_ model based on 24 CI attributes split into 3 CI components: individuals, coordination/collaboration activities, and communication means. The generic model is based on:
* Characterisation of _who_ is involved in a CI system, in terms of: passive actors (users); active actors (CI contributors), which may be crowds or hierarchies; properties of actors in terms of diversity, independence, and critical mass; and interactions.
* Characterisation of _motivation_ of CI actors: intrinsic or extrinsic.
* Characterisation of CI goals: individual and community objectives.
* Characterisation of CI processes: in terms of types of activities (decide, contest, and voluntary) and interactions (dependent or independent).
Moreover, CI systems can be considered as complex adaptive systems and often are subject to requirements for proper functioning e.g. on state, data, aggregation, decentralisation, task allocation, and robustness.
### Factors and quantification of CI
Key scientific questions, fundamental for both understanding and engineering CI, include what _factors_ promote or inhibit CI, and, specifically, what is the relationship between individual and collective intelligence. We already mentioned the seminal work by Woolley et al. (2010) sustaining the idea of a general CI factor \(c\), shown to be more correlated with the level of sociality than with the levels of intelligence of individuals. We also pointed out the example of swarm intelligence as a kind of CI emerging from a multitude of simple agents characterised by limited individual intelligence. In this example, clearly, it is the aspect of _interaction_ - with other agents and/or the environment - that fosters the production of effective patterns of behaviour.
Works have been carried out to investigate these relationships. For instance, in a later study, Woolley et al. (2015) focus on (i) group composition, e.g., in terms of skills and diversity of the members of a group; and (ii) group interaction, e.g., in terms of structures and norms constraining and ruling the interaction. They found that the individual skills that contribute the most to CI are those that bring sufficient diversity and effectiveness in collaboration, whereas group-level psychological elements like satisfaction and cohesiveness are not influential. Considering different kinds of interactive cognitive systems, Chmait et al. (2016) study the influence of the following
factors: (i) concerning individuals: individual intelligence, individual reasoning/learning speed; (ii) concerning cooperation: cardinality of the collective, time to interact, communication protocol; and (iii) concerning agent-environment interaction: search space complexity (through uncertainty), and algorithmic complexity of the environment. They quantify the CI of a group of agents as the mean accumulated reward in a set of test environments, hence extending the Anytime Universal Intelligence Test (Hernandez-Orallo and Dowe, 2010) to collectives. What is observed is that such factors - considered independently and/or in joint configurations with other factors - do shape the CI of groups in non-trivial ways. These factors are also related to the components of CI models--a nice overview is provided in (Suran et al., 2020).
### Main kinds of CI
A typical classification of CI is by the nature of the entities involved.
#### 5.5.1. Natural CI
Natural CI is the CI exhibited by collectives found in nature, such as swarms of insects, packs, herds, or groups of animals, crowds of people, flocks of birds, schools of fishes, etc. In all these systems there exist non-trivial collective phenomena and societal aspects that deserve deep investigation. Insect societies are analysed e.g. in the seminal book by Bonabeau et al. (1999). For collective animal behaviour, one of the main references is the book by Sumpter (2010), which describes collective phenomena as those in which _"repeated interactions among many individuals produce patterns on a scale larger than themselves"_. For CI in humans, some historical references include (Le Bon, 2002; Surowiecki, 2005); moreover, there are contributions for specific human settings like crowds of pedestrians (Sieben et al., 2017) and problems like e.g. the relationship between language and collective action (Smith, 2010).
The study of natural CI is important because it is a powerful source of inspiration for CI mechanisms to be applied to artificial systems (Bonabeau et al., 1999).
#### 5.5.2. Artificial (ACI) and Computational Collective Intelligence (CCI)
ACI is the CI exhibited by human-made machines. Notice that, strictly speaking, natural and artificial CI constitute a false
Figure 1. The relationships between collective intelligence (CI), artificial collective intelligence (ACI), and computational collective intelligence (CCI). The dashed line is used to denote the false dichotomy between artificial and natural CI.
dichotomy since there is inherent subjectivity regarding where the line between the two is drawn, and these could also be considered as a gradation. ACI and CCI are mostly considered as synonyms in the literature. However, some authors refer to CCI as a particular kind of ACI, i.e., _"as an AI sub-field dealing with soft computing methods which enable making group decisions or processing knowledge among autonomous units acting in distributed environments"_ [Badica et al., 2014]. _Soft computing_ methods are those that help to address complex problems by coping with approximation and uncertainty, using techniques such as fuzzy logic, expert systems, machine learning, genetic algorithms, artificial neural networks [Ibrahim, 2016]. In other words, the distinction between ACI and CCI may follow the common way to distinguish between artificial and computational intelligence [Engelbrecht, 2007], where the former tends to prefer hard, symbolic approaches while the latter tends to prefer soft and bio-inspired computing techniques. The relationships between CI, ACI, and CCI are shown in Figure 1. CCI might also be intended as a part of natural CI to account for the notion of _biological computation_ [Mitchell, 2011], whereby biological systems are considered as computing devices [van Gerven, 2017]. However, not all of ACI is necessarily computational, since mechanical machines can also exhibit intelligence [Stradner et al., 2013; Wang, 2009]--cf. Braitenberg vehicles [Dvoretskii et al., 2022], some of which are purely mechanical vehicles with hard-wired connections between sensors and actuators. Common sub-fields of ACI include e.g. semantic web, social networks, and multi-agent systems [Nguyen et al., 2009]. Often, ACI and CCI include systems comprising both machines and humans. Possible taxonomies for ACI are proposed in the next section.
Notice that terms like swarm intelligence or multi-agent intelligence may refer to natural, to artificial systems, or to a general model comprising both.
Other kinds of CI are described in the following, as they are very much related to peculiar CI engineering methods and techniques.
## 6. Perspectives of Artificial Collective Intelligence Engineering
Building on the previous discussion of _what_ CI is and its main models, this section focusses on _how CI can be engineered_, according to literature, hence addressing SQ8 to SQ10. In doing this, we will picture a map of the state of the art in CI engineering, setting the stage for a discussion of research opportunities and challenges in the next section.
Depending on what kind of CI has to be achieved (cf. the previous section), various _perspectives_ and approaches to _CI engineering_ can be devised, each one leveraging and providing peculiar sets of _CI mechanisms_.
### Knowledge-oriented vs. behaviour-oriented CI
From an industrial point of view, the engineering of CI often revolves around engineering the ICT platforms and algorithms for collecting data from human activity and extracting knowledge from collected data [Alag, 2008; Segaran, 2007]. There are several ways in which humans using web applications can provide data through their interaction: e.g., by what content they search, what paths they follow, what feedback they provide, what content they add, etc. Then, techniques like data mining, text mining, and machine learning can be used to classify information, cluster information, predict trends, recommend content, filter information, aggregate information etc. We may call this _knowledge-oriented_ CI since the collective intelligence lies in the data produced and processed by a collective, and ICT has a role in supporting such information creation, ultimately promoting the emergence of _latent_ collective knowledge. This is essentially what Surowiecki (2005) calls _cognition_ problems.
This kind of system may not seem a form of CI. Indeed, one might be tempted to completely abstract over the collective of agents providing the data, and merely consider a conceptually single
source of data and how data is processed and aggregated by a conceptually single process. However, the CI nature of all this starts to emerge once one considers the overall process by a larger, socio-technical perspective. By this perspective, several agents produce information through their activity and reasoning, possibly interacting with other agents and with supporting tools (e.g., a network to share information, tools to make sense of others' contributions, etc.)--cf. the isolation, collaboration, and feedback-based paradigms.
Conversely, we may call _behaviour-oriented_ CI the collective intelligence that drives the global behaviour of a system. This includes what Surowiecki (2005) refers to as _coordination_ and _cooperation_ problems. Examples include the form of intelligence driving the way in which robotic swarms move (Navarro and Matia, 2013), a computational ecosystem that self-organises into activity and communication structures (Pianini et al., 2021), and a market that regulates itself (Lo, 2015). However, as the latter example shows, since collective action is connected with collective decision making, which in turn is connected to collective knowledge, the border between knowledge-oriented and behaviour-oriented CI is fuzzy, and so these types of CI should not be thought of as containers for mechanisms but rather as containers for typical CI goals.
The distinction between "plain" systems and systems-of-systems (SoS) (Nielsen et al., 2015), e.g., based on the properties of autonomy, belonging, connectivity, diversity, and emergence (Boardman and Sauser, 2006), is also relevant in this discussion (Peeters et al., 2021). Research under the knowledge-oriented CI umbrella, typically involving socio-technical systems, seems mostly related to the SoS framework. Research under the behaviour-oriented CI umbrella, instead, seems more uniformly distributed along both the system (cf. swarm intelligence and aggregate computing--Bonabeau et al. (1999); Viroli et al. (2019)) and SoS viewpoints (cf. hybrid human-machine systems--Peeters et al., 2021; Scekic et al. (2021)).
### Manual vs. automatic ACI development
Regarding ACI, it is possible to distinguish two main kinds of approaches: those based on _manual design_ and those based on _automatic design_.
#### 6.2.1. Manual design of ACI
In the manual approach, a designer specifies the behaviour of the computational agents making up the collective directly by providing _behaviour rules_ (or _policies_).
Here, the key issue is determining what individual behaviour, when replicated or combined with other behaviours or phenomena, can give rise to the desired emergent behaviour. Programming approaches that are thought to somehow address this goal are often known as _macro-programming_ in the literature (Casadei, 2023), a term that recurred especially in the early 2000s in the context of wireless sensor networks (WSNs) (Newton et al., 2005). A foundational contribution to macro-programming is given by Reina et al. (2015), where a methodology is proposed for passing from macroscopic descriptions to a microscopic implementation through a design pattern, obtaining a quantitative correspondence between micro and macro dynamics. Research has produced different macro-programming frameworks, e.g., for expressing the behaviour of robot swarms (Pinciroli and Beltrame, 2016) and distributed IoT systems (Mizzi et al., 2018; Noor et al., 2019)--though most of them lack formal foundations.
A notable macro-programming approach that has recently been subject to intense research is _aggregate computing (AC)_(Viroli et al., 2019). AC consists of a functional macro-programming language expressing collective behaviour in terms of computations over distributed data structures called _computational fields_(Mamei et al., 2004; Viroli et al., 2019). The basic language constructs provide support for dealing with (i) lifting standard values to field values; (ii) abstracting field computations through functions; (iii) stateful evolution of fields; and (iv) handling bi-directional
communication through so-called _neighbouring fields_. Using such constructs, and library functions e.g. handling information flows through gradients or supporting higher-level patterns, a programmer can write an aggregate program that expresses the global behaviour of a possibly dynamic network of agents. The agents, by repeatedly evaluating the program in asynchronous sense-compute-interact rounds, and interacting with neighbours by exchanging data as dictated by the program, could steer self-organising behaviour hopefully fulfilling the intent of the program. Casadei et al. (2021) argue that multiple concurrent and dynamic aggregate computations could pave a path to CI engineering.
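To make the round-based execution model concrete, the following is a minimal, self-contained sketch (not the aggregate computing or field-calculus API) in which every agent repeatedly recomputes its own value from its neighbours' values, so that a distance-gradient field self-organises over the network. The network topology, link lengths, and function names are illustrative assumptions.

```python
import math

def gradient_round(values, neighbours, sources, distances):
    """One sense-compute-interact round of a metric distance gradient."""
    new_values = {}
    for node, nbrs in neighbours.items():
        if node in sources:
            new_values[node] = 0.0
        else:
            candidates = [values[n] + distances[(node, n)] for n in nbrs]
            new_values[node] = min(candidates, default=math.inf)
    return new_values

def self_organise(neighbours, sources, distances, rounds=20):
    values = {node: math.inf for node in neighbours}
    for _ in range(rounds):
        values = gradient_round(values, neighbours, sources, distances)
    return values

if __name__ == "__main__":
    # A small line network: a -- b -- c -- d, with unit-length links.
    neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    distances = {(x, y): 1.0 for x in neighbours for y in neighbours[x]}
    print(self_organise(neighbours, sources={"a"}, distances=distances))
    # -> {'a': 0.0, 'b': 1.0, 'c': 2.0, 'd': 3.0}
```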
While AC adopts a swarm-like self-organisation model, another class of approaches for ACI is given by _multi-agent programming_, as supported, e.g., by the JaCaMo platform (Boissier et al., 2020), which comprises _Jason_ for programming cognitive autonomous agents, _CArtAgO_ for programming the distributed artifacts-based environment of the MAS, and _Moise_ for programming agent organisations. However, it is worth noticing that the relationships between MAS research and CI research are often hindered by different terminologies and separate communities. A reason might be that a large part of MAS research properly focusses on composites rather than collectives, i.e., well-structured organisations of heterogeneous intelligent agents rather than self-organising swarms of largely homogeneous and cognitively simple agents.
#### 6.2.2. Automatic design of ACI
Since manually crafting control and behavioural rules of computational agents might be difficult, especially for complex tasks in non-stationary environments, a different approach consists in devising strategies for automatically designing behaviours. The idea is to provide hints about the intended behaviour or the results to be attained by it (e.g., in terms of _specifications_ or _data_), and to leverage mechanisms to generate or find behaviours that satisfy the specification. This can be addressed through _automatic programming_(O'Neill and Spector, 2020), _(machine) learning_(Behjat et al., 2021), and _search_(Russell and Norvig, 2020). For CI systems, these are essentially the approaches followed by prominent methods like, e.g., multi-agent reinforcement learning (MARL) (Busoniu et al., 2008) and evolutionary swarm robotics (Trianni, 2008).
One of the early models, and a notable example, is _COllective INtelligence_, or _COIN_ (Wolpert, 2003). Essentially, COIN considers a _collective_ as a system of self-interested agents, each trying to maximise its _private utility_ function, while sharing an associated _world utility_ that gives a measure of the CI of the overall system. MARL is clearly a powerful technique for building CI, and it is currently a hot research area, with several surveys emerging (Canese et al., 2021; Gronauer and Diepold, 2022; Zhang et al., 2019). Learning of collective behaviour may be related to, but should not be confused with, collective learning, which is learning carried out by multiple agents and does not necessarily yield collective behaviour models.
In evolutionary robotics (Trianni, 2008), the idea is to use evolutionary algorithms (i.e., algorithms that use mechanisms inspired by biological evolution for evolving populations of solutions) to optimise models of robot controllers (e.g. mapping inputs from sensors to outputs to actuators) with respect to desired behavioural goals. Various techniques have been proposed in the literature to improve traditional evolutionary approaches, e.g., novelty search (Gomes et al., 2013). An interesting approach for the automatic design of the control logic of swarms is given by _AutoMoDe_(Francesca et al., 2014). AutoMoDe generates modular control software as a probabilistic finite state machine by selecting, composing, and configuring behavioural modules (bias). The idea is to leverage the bias to make the automatic design approach robust to differences between simulation and reality. Another relevant methodology for evolutionary robotics is the so-called _embodied evolution_ approach (Bredeche et al., 2018; Watson et al., 2002), which is based on evolutionary processes that are _distributed_ in a population of robots situated in an environment, to support online and long-term adaptivity. Embodied evolution is an interesting setting for studying aspects like embodied
intelligence, co-evolution, the role of environmental niches, the relationship between optimisation and selection pressure, locality of interaction, etc. The combination of learning and evolution is also a very interesting research direction (Gupta et al., 2021).
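A minimal sketch of the evolutionary loop described above is given below, under simplifying assumptions: controllers are plain parameter vectors, fitness is evaluated by a stub in place of a swarm simulation, and elitist selection plus Gaussian mutation evolve the population. Function names and the fitness stub are illustrative and do not correspond to any specific framework such as AutoMoDe.

```python
import random

PARAMS = 4  # number of controller parameters (e.g., weights mapping sensors to actuators)

def evaluate(controller):
    """Stub fitness: replace with a simulation scoring the collective behaviour.
    Here we simply reward controllers close to an arbitrary target vector."""
    target = [0.5, -0.2, 0.8, 0.0]
    return -sum((c - t) ** 2 for c, t in zip(controller, target))

def evolve(pop_size=30, generations=50, mutation=0.1, elite=5):
    population = [[random.uniform(-1, 1) for _ in range(PARAMS)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[:elite]
        # Keep the elite and refill the population with mutated copies of parents.
        population = parents + [
            [p + random.gauss(0, mutation) for p in random.choice(parents)]
            for _ in range(pop_size - elite)
        ]
    return max(population, key=evaluate)

if __name__ == "__main__":
    random.seed(0)
    best = evolve()
    print("best controller:", [round(p, 2) for p in best], "fitness:", round(evaluate(best), 3))
```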
A second possibility for automatic design comes from _program synthesis_ (Gulwani et al., 2017), which is the field studying the task of automatically crafting programs (in some given programming language) that satisfy a specified intent. Particularly interesting are the recent attempts at combining program synthesis and reinforcement learning--cf. Aguzzi et al. (2022); Bastani et al. (2020). However, in the context of CI, this direction has not yet been investigated, representing an opportunity for future research (cf. next sections).
As a final remark, we stress that manual and automatic design can be seen as the extremes of a continuum, and that hybrid approaches can be used--cf. interactive program synthesis (Zhang et al., 2020).
### Relationships between humans and machines in HMCI
In HMCI, it is possible to distinguish multiple threads of research. A first classification could be based on the aforementioned distinction between knowledge-oriented and behaviour-oriented CI. Other classifications can be made by considering what kind of entity plays the role of _controller_ and _executor_:
1. tasking crowds of humans (Ganti et al., 2011; Guo et al., 2014; Sari et al., 2019; Zhen et al., 2021)\(-\)cf. crowdsourcing (Zhen et al., 2021) and crowdsensing (Guo et al., 2014);
2. using humans to guide machine operations, e.g., interactively (Yu et al., 2021);
or considering what entity plays the role of _input_ and _output_:
1. using AI to extract or mine intelligence from human contributions (Alag, 2008; Segaran, 2007);
2. using humans (or _human computation_) to extract value from machine contributions, especially in tasks where machines cannot (yet) generally perform well, such as visual recognition and language understanding (Quinn and Bederson, 2011).
or, finally, considering humans and machines as peers and hence the so-called _human-machine networks_(Tsvetkova et al., 2017) or _social machines_(Berners-Lee, 1999; Buregio et al., 2013).
Regarding the engineering of social machines, a notable macro-programming approach is given by the _SmartSociety_ platform (Scekic et al., 2020), which is based on abstractions like persistent and transient teams of human/machine peers, and collective-based tasks. The approach can be used for human orchestration and human tasking activities like those found in crowdsourcing and hybrid collectives.
Concerning the general design of ACI in social machines, Peeters et al. (2021) propose three principles: (i) goals from the collective, technological, and human perspectives should be considered simultaneously; (ii) development effort should continuously embrace the whole product lifecycle; and (iii) the requirements of observability, predictability, explainability, and directability should be considered at all abstraction levels (AI, team, and society).
### Collective tasks
Another main classification of CI engineering research is by the kind of collective task that is addressed. A _collective task_ can be defined as a task that _requires_ more than one individual to be carried out. Notice that CI may be seen as a requirement or mechanism for solving collective tasks (cf. the general CI interpretation) or, conversely, CI might be defined (and measured) in terms of the ability to solve a set of collective tasks in a variety of environments.
Multiple taxonomies of collective tasks have been proposed in the literature. For instance, Brambilla et al. (2013) classify collective behaviours (of swarm robotics systems) into (i) spatially-organising behaviours, (ii) navigation behaviours, (iii) collective decision making, and (iv) others. Other reviews of swarm robotics tasks include (Bayindir, 2016; Nedjah and Junior, 2019). Moreover, collective tasks can be classified also according to the three paradigms discussed in (He et al., 2019) and reviewed in previous sections: isolation, collaboration, and feedback.
In the following, we review material for two general, main kinds of collective tasks - collective decision making and collective learning - and then point out references to other kinds of tasks.
#### 6.4.1. Collective decision making
Collective decision making is the problem of how groups reach decisions without any centralised leadership (Bose et al., 2017; Prasetyo et al., 2019). This is also known as _group decision making_(Tang and Liao, 2021; Zhang et al., 2019).
Decision making and its collective counterpart can be classified according to the nature of the decision to be made. Reaching consensus and multi-agent task allocation are two common kinds of collective decision-making behaviours, typical in swarm robotics (Brambilla et al., 2013). Guttmann (2009) classifies MAS decision making by four dimensions: _(i) use of models of self vs. models of others_; _(ii) individual inputs vs. group input_; _(iii) learning vs. non-learning_, depending on whether decision making spans multiple rounds or just one round; and _(iv) collaboration vs. competition_. Surowiecki (2005) distinguishes three kinds of problems or tasks of distributed decision making: _(i) cognition_, _(ii) cooperation_, and _(iii) coordination_.
Collective decision making is often supported by self-organisation mechanisms based on, e.g., collective perception (Schmickl et al., 2006), voter models (Valentini et al., 2014), opinion formation models (de Oca et al., 2011), and self-stabilising leader election (Pianini et al., 2022).
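As an illustration of such mechanisms, the sketch below simulates a simple voter model for a best-of-2 decision: at each step an agent copies the opinion of a random neighbour on a ring topology, and the collective eventually converges to consensus. Topology and parameters are illustrative assumptions, not taken from any cited work.

```python
import random

def voter_model(n_agents=50, steps=5000, seed=1):
    """Simple voter model on a ring: agents repeatedly copy a random neighbour's opinion."""
    random.seed(seed)
    opinions = [random.choice(["A", "B"]) for _ in range(n_agents)]
    for _ in range(steps):
        i = random.randrange(n_agents)
        neighbour = random.choice([(i - 1) % n_agents, (i + 1) % n_agents])
        opinions[i] = opinions[neighbour]
        if len(set(opinions)) == 1:  # consensus reached
            break
    return opinions

if __name__ == "__main__":
    final = voter_model()
    print("consensus on:", set(final))
```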
Recent surveys on collective decision making include the following. Valentini et al. (2017) focus on discrete consensus achievement, and propose a formal definition of the _best-of-n_ problem (choice of the best alternative among \(n\) available options); then, they define a taxonomy based on different classes of the problem, and classify the literature on discrete consensus agreement accordingly. Zhang et al. (2019) provide a review of consensus models in collective decision making, and compare them based on multiple criteria for measuring consensus efficiency. They also argue that two interesting research directions include (i) _large-scale_ collective decision making and (ii) addressing social relationships and opinion evolution. Tang and Liao (2021) provide a review of literature around five challenges in large-scale collective decision making with big data: dimension reduction, weighting and aggregation of decision information, behaviour management, cost management, and knowledge distribution and increase. Rizk et al. (2018) provide a survey of decision making in MASs. The survey focusses on five cooperative decision-making models: Markov decision processes (and variants), control theory, game theory, graph theory, and swarm intelligence. These models are discussed along the dimensions of heterogeneity, scalability, and communication bandwidth--which are also crucial research challenges. Particularly challenging is also decision making in dynamic environments (Prasetyo et al., 2019; Rizk et al., 2018). Other challenges include security, privacy, and trust; approaches to address these include, e.g., blockchain consensus (Pournaras, 2020).
#### 6.4.2. Collective learning
Learning is intimately related to intelligence (Jensen, 1989). Collective learning is learning backed by a collective process, with coordination and exchange of information between individuals and artifacts (Fadul, 2009). As a multi-disciplinary theme, it is studied both in areas like sociology and organisational theory (Fadul, 2009; Garavan and Carbery, 2012), and in AI research (Bock, 1993). Collective learning spans both the knowledge-oriented and behavioural perspectives of CI, and is the main technique for automatic design of ACI. Goals of collective learning include supporting individual learning (Fenwick, 2008), producing collective
knowledge, and promoting collective decision making [Garavan and Carbery 2012]. As a wide concept, collective learning can be interpreted along multiple perspectives [Garavan and Carbery 2012]: e.g., as the independent aggregation of individual learning, or as a collaborative activity. So, collective learning is related to, but not necessarily the same as, cooperative and collaborative learning [Fadul 2009]. These different views can also be found in AI and ACI research.
Artificial collective learning includes distributed machine learning [Verbraeken et al. 2020]: examples include centralised, federated, and decentralised machine learning systems. In _centralised learning_, the different individuals of the system provide data to a central entity that performs the actual learning process. So, in this case, the core learning process is not collective, though it would be collective if considered from a larger perspective that includes data generation. In _federated learning_ [Kairouz et al. 2021], the idea is that individual independent workers perform machine learning tasks on local data sets, producing models that are then aggregated by a master into a global model without the need to share data samples. This makes it possible to address data privacy issues. The combination of multiple models is also called _ensemble learning_ [Dong et al. 2020]. Hegedus et al. (2021) propose _gossip-based learning_ as a decentralised alternative to _federated learning_, where no central entities are used and models are gossiped and merged throughout the nodes of the system. Collective learning might be supervised or unsupervised. An example of an unsupervised decentralised collective learning approach is provided by Pournaras et al. (2018).
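A minimal sketch of the federated-averaging idea described above (not the API of any federated learning library) is shown below: each worker fits a local model on its own private data, and only the model parameters are aggregated by the master, weighted by dataset size; raw samples are never shared. The data and the model (a one-parameter least-squares fit) are deliberately toy-sized assumptions.

```python
def local_fit(xs, ys):
    """Least-squares slope of y = w * x computed on a worker's private data."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def federated_average(local_datasets):
    """Master step: aggregate locally trained parameters, weighted by dataset size."""
    total = sum(len(xs) for xs, _ in local_datasets)
    return sum(len(xs) * local_fit(xs, ys) for xs, ys in local_datasets) / total

if __name__ == "__main__":
    # Three workers, each holding private samples of roughly y = 2x.
    workers = [
        ([1.0, 2.0, 3.0], [2.1, 3.9, 6.2]),
        ([4.0, 5.0], [8.1, 9.8]),
        ([6.0, 7.0, 8.0], [12.2, 13.7, 16.1]),
    ]
    print("global slope:", round(federated_average(workers), 3))
```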
Another important example of collective learning is MARL [Busoniu et al. 2008], which considers learning by collections of reinforcement-learning agents. MARL algorithms are commonly classified depending on whether they address _fully cooperative, fully competitive_, and _mixed cooperative/competitive_ problems. In fully cooperative problems, the agents are given a _common reward signal_ that evaluates the collective action of the MAS. Instead, in fully competitive problems, the agents have opposite goals. Mixed games are in between fully cooperative and fully competitive problems. Three common information structures in MARL are [Zhang et al. 2019]: _(i) centralised structures_, involving a central controller aggregating information from the agents; _(ii) decentralised structures_, with no central entities and neighbourhood interaction; and _(iii) fully decentralised_, namely independent learning, with no information exchanged between the agents. Various formal frameworks have been proposed to address MARL problems, including _COIN_ [Wolpert 2003] and _Decentralised Markov Decision Processes (Dec-MDP)_ [Oliehoek and Amato 2016]. Readers interested in MARL algorithms and frameworks can refer to multiple comprehensive surveys on the topic [Busoniu et al. 2008; Hernandez-Leal et al. 2019; Zhang et al. 2019].
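The sketch below illustrates the fully decentralised (independent learners) structure for a fully cooperative problem: two agents independently run tabular learning on a one-shot coordination game and are evaluated by a common reward signal. The game, the update rule, and the hyperparameters are illustrative assumptions rather than a specific MARL algorithm from the cited surveys.

```python
import random

ACTIONS = [0, 1]
# Common (team) reward for a one-shot coordination game: agents are paid only if they match.
REWARD = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 0.0, (1, 0): 0.0}

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=7):
    random.seed(seed)
    q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]  # one table per independent learner
    for _ in range(episodes):
        acts = []
        for table in q:
            if random.random() < epsilon:          # epsilon-greedy exploration
                acts.append(random.choice(ACTIONS))
            else:
                acts.append(max(table, key=table.get))
        r = REWARD[tuple(acts)]                     # shared reward for the joint action
        for table, a in zip(q, acts):
            table[a] += alpha * (r - table[a])      # one-shot game: no bootstrapping term
    return q

if __name__ == "__main__":
    q_tables = train()
    print("learned joint action:", [max(t, key=t.get) for t in q_tables])
```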
There exist surveys on collective learning. D'Angelo et al. (2019) perform a systematic literature review on learning-based collective self-adaptive systems. Their analysis extracts, as the main characteristics of such systems, the application domains involving groups of agents with the ability to learn, the levels of autonomy of the agents, the levels of knowledge access (i.e., the way in which they explicitly share learning information), and the kinds of behaviours involved (e.g., selfish vs. collaborative). Accordingly, the authors provide a framework for learning collective self-adaptive systems, based on three dimensions: autonomy, knowledge access, and behaviour. The learning goals are analysed w.r.t. the target emergent behaviour; from the analysis, two clusters of works emerge: those where the emergent behaviour is associated to the anticipated learning task, and those where it is not. Among the learning techniques, they report that the majority of research works leverage reinforcement learning, while game theory, supervised learning, probabilistic and other approaches are less investigated in these settings. Resilience and security are deduced as the main open challenges in this research domain.
Pournaras (2020) provides a review of 10-years research on human-centred collective learning for coordinated multi-objective decision making in socio-technical systems, within the context of the _Economic Planning and Optimized Selections (EPOS)_ project. Collective learning is motivated as
a way to address the long-standing _tragedy of the commons_ problem, and argued to be a promising paradigm of artificial intelligence. As research opportunities and challenges, the author identifies: explainability and trust, resilience to plan violations and adversaries, collective learning in organic computing systems, co-evolution of collective human and machine learning, and digital democracy.
Learning is also closely related to evolution (Bredeche et al., 2018). Learning and evolution are generally considered as different mechanisms for adaptation, working on different time and spatial scales (Anderson et al., 2013; Mataric, 2007). However, these techniques can also be combined (Nolfi and Floreano, 1999): learning can guide evolution (Hinton and Nowlan, 1987) and evolution can improve learning (cf. evolutionary learning--Telikani et al. (2022)), and different architectures for the combination are possible (Sigaud, 2022).
#### 6.4.3. Other collective tasks
_Collective action_ (Oliver, 1993) commonly refers to the situation where multiple individuals with conflicting goals as well as common goals would benefit from coordinated action. Clearly, the ability to act collectively in an effective manner can be seen as an expression of CI. The problem is addressed mainly in sociology, but computer science also provides tools (e.g., simulations, models, etc.), such as the _SOSIEL (Self-Organising Social & Inductive Evolutionary Learning)_ simulation platform (Sotnik, 2018), for studying the problem and investigating solutions for human societies as well as for socio-technical and artificial systems. Collective actions may be supported by collective and self-organised decision-making processes, leveraging abstractions like _electronic institutions_ and _social capital_ (Petruzzi et al., 2015).
_Collective movement_ (Navarro and Matia, 2013) is the problem of making a group of agents (e.g., robots, drones, vehicles) move towards a common direction in a cohesive manner. Notice that this is not just about movement per se, but rather moving in conjunction or in order to support other tasks as well--e.g., distributed sensing, exploration, and rescue tasks. Two main sub-problems can be identified (Navarro and Matia, 2013): (i) _formation control_ (Yang et al., 2021), when the shape of the group and/or the individuals' positions are important; and (ii) _flocking_ (Beaver and Malikopoulos, 2021), where such aspects are less important.
_Distributed optimisation_(Yang et al., 2019) refers to the problem of minimising a global objective function, which is the sum of the local objective functions of the members of a collective, in a distributed manner. Distributed optimisation can be a technique for collective decision making.
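A minimal sketch of this idea, combining consensus with local gradient steps, is given below under simplifying assumptions: each agent holds a private quadratic cost \(f_i(x) = (x - a_i)^2\) and only exchanges its current estimate with its neighbours, so the collective approximately minimises the sum of the local costs (whose minimiser is the mean of the \(a_i\)). The topology, step size, and constant-step behaviour (convergence only to a neighbourhood of the optimum) are illustrative assumptions.

```python
def decentralised_gradient(targets, neighbours, rounds=200, step=0.05):
    """Each agent i minimises f_i(x) = (x - targets[i])^2; the network minimises sum_i f_i.

    At every round an agent averages its estimate with its neighbours' (consensus)
    and then takes a gradient step on its own local cost.
    """
    x = list(targets)  # initial local estimates
    for _ in range(rounds):
        mixed = [
            (x[i] + sum(x[j] for j in neighbours[i])) / (1 + len(neighbours[i]))
            for i in range(len(x))
        ]
        x = [mixed[i] - step * 2 * (mixed[i] - targets[i]) for i in range(len(x))]
    return x

if __name__ == "__main__":
    targets = [1.0, 4.0, 7.0]                 # local minimisers a_i
    ring = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # fully connected triangle
    # All estimates end up close to mean(targets) = 4.0, the global minimiser.
    print(decentralised_gradient(targets, ring))
```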
_Collective knowledge construction_ refers to the creation of new, distributed, and shared knowledge by a collective (Hecker, 2012). This topic is generally studied by considering aspects such as collaboration (Hmelo-Silver, 2003), socio-technical infrastructures (Gruber, 2008), knowledge transfer (Huang and Chin, 2018), the interplay between individual and collective knowledge (Kimmerle et al., 2010), models of information diffusion dynamics (Maleszka, 2019), and lifelong learning (Rostami et al., 2018).
### A view of CI-related fields
Since CI is a multi-disciplinary field, the engineering of CI and ACI can benefit from ideas and research results from a variety of fields. It would be useful to have a comprehensive map of research fields contributing to CI.
While we leave a comprehensive research map of CI engineering to future work, here we provide a research map (see Figure 2) from the perspective of _collective adaptive system (CAS)_ research (Bucchiarone et al., 2020; Casadei, 2020; Ferscha, 2015; Nicola et al., 2020). The idea is that CI engineering should be supported through inter-disciplinary research and a systems science perspective (Mobus et al., 2015), also providing a rigorous treatment of system-level properties that could be sustained by CI processes. This includes leveraging studies of abstract
and fundamental kinds of systems such as, for instance, CPS, namely systems that combine discrete and continuous dynamics [14]. Then, a set of inter-related fields can promote the study of peculiar CI phenomena such as emergence, self-organisation, ensemble formation, etc. Such fields include but are not limited to the field of coordination [17], multi-agent systems [18], autonomic/self-* computing [19], collective adaptive systems [20; 21; 22; 23], ubiquitous/pervasive computing [24], swarm intelligence [15], and collective computing [16]. Some of these are briefly overviewed in the following.
We noticed multiple times in previous sections how interaction is a key element of CI. _Coordination_ is the interdisciplinary study of interaction [14]. In computer science, interaction was early recognised as a concern related but clearly distinguished from computation [11], hence amenable to separate modelling by so-called coordination languages. A general meta-model of coordination [13] consists of _coordinables_ (the interacting entities), _coordination media_ (the abstractions supporting and constraining interactions), and _coordination laws_ (describing the behaviour of a coordination medium). Languages, abstractions, and patterns, can be used to define the way in which computational components coordinate across aspects like control, information, space, and time. This has motivated the birth of whole communities and long-standing research threads [13; 14].
_Collective adaptive systems (CASs)_ are collectives of agents that can adapt to changing environments with no central controller. Their engineering poses several challenges, tackled in corresponding research communities [15; 16]. CASs are sometimes considered to be heterogeneous [17; 18], contrasted to more homogeneous intelligent swarms, though we tend to disagree with this view. In our view, CASs are a superset of intelligent swarms, which are characterised by _(i) large numbers of (ii) relatively simple (or not particularly intelligent) individuals_[19]. Collectives are generally _quite homogeneous, at least at some level of abstraction_[23], though research works aim to address heterogeneous collective adaptive systems [20] as well as heterogeneous swarms [15; 16], e.g., with systems involving humans and robots [12], or groups of robots with different morphology or behaviour. Swarm robotics is the combination of swarm intelligence and robotics [19; 20].
Coordination, CASs, and swarm robotics can also be seen as sub-fields of the larger field of MASs [13; 18], which itself stemmed from the field of distributed artificial intelligence [14]. In MASs engineering, two main levels and corresponding problems are considered: the _micro level_ of agent design, and the _macro level_ of agent society design. _Autonomy_ (encapsulation of control) and _agency_ (the ability to _act_) are generally considered the two fundamental properties of agents [13; 12; 14], from which other properties like proactiveness, interactivity, and sociality stem. From a software programming and engineering point of view, agents can be considered as an abstraction following active objects and actors [15] that, together with other first-class abstractions like artifacts [16], environments [21], and organisations [17], provide a support for the so-called _(multi-)agent-oriented programming_ paradigm [15; 16]. The MAS field/perspective is clearly intimately related to CI.
Like for MASs, the key property of autonomy is at the centre of _autonomic computing_[19], namely the field devoted to the construction of computational systems that are able to manage/adapt themselves with limited or no human intervention. Following this vision,
research has been carried out to find approaches and techniques to endow artificial systems with different _self-* properties_: self-adaptive [de Lemos et al. 2010; Salehie and Tahvildari 2009], self-healing/repairing [Psaier and Dustdar 2011], self-improving/optimising [Bellman et al. 2018], self-organising [Heylighen 2013], and so on. To build autonomic systems, approaches typically distinguish between the _managed system_ and the _managing system_, structuring the latter in terms of _Monitoring, Analysis, Planning, Execution, and Knowledge (MAPE-K)_ components [Kephart and Chess 2003]. In so-called _architecture-based self-adaptation_ [Garlan et al. 2004], architectural models of the managed systems are leveraged at runtime to organise the self-managing logic. The managing system could also be distributed and decentralised [Weyns et al. 2010]. If the managed system is a collective, then its self-* properties could be put in relation to its CI. Consider the property of being _self-organising_, characterised by processes that autonomously and resiliently increase/maintain order or structure [Wolf and Holvoet 2004]; it typically emerges from the interaction of several entities. Self-organisation can be considered as a key promoter or element of CI [Rodriguez et al. 2007].
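A minimal sketch of the MAPE-K structure mentioned above is given below, assuming a trivially simple managed system (a work queue whose length should stay below a threshold) and a managing loop that monitors it, analyses the symptom, plans an adaptation, and executes it over shared knowledge. All class and attribute names are illustrative assumptions.

```python
class ManagedSystem:
    """Toy managed system: a work queue served by a pool of workers."""
    def __init__(self):
        self.queue_length = 120
        self.workers = 2

    def sense(self):
        return {"queue_length": self.queue_length, "workers": self.workers}

    def actuate(self, add_workers):
        self.workers += add_workers
        self.queue_length = max(0, self.queue_length - 40 * add_workers)

class MapeK:
    """Managing system: Monitor-Analyse-Plan-Execute over shared Knowledge."""
    def __init__(self, managed, threshold=50):
        self.managed = managed
        self.knowledge = {"threshold": threshold}

    def run_once(self):
        symptoms = self.managed.sense()                                      # Monitor
        overloaded = symptoms["queue_length"] > self.knowledge["threshold"]  # Analyse
        plan = 1 if overloaded else 0                                        # Plan
        if plan:
            self.managed.actuate(plan)                                       # Execute
        return symptoms, plan

if __name__ == "__main__":
    loop = MapeK(ManagedSystem())
    for _ in range(4):
        print(loop.run_once())
```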
As a last remark, we stress that the aforementioned fields are highly inter- and trans-disciplinary. For instance, MASs can be considered from economical, sociological, organisational, and computational perspectives [Wooldridge 2009]. The same holds for coordination [Malone and Crowston 1994]. Moreover, a great source of inspiration is given by natural (e.g., physical and biological) systems, as recognised by a wealth of _nature-inspired coordination_ [Zambonelli et al. 2015] and _nature-inspired computing_ [Siddique and Adeli 2015] contributions.
Figure 2: A research map of fields and concepts contributing to (research on) CI engineering.
## 7. Research Opportunities and Challenges
With an understanding of the nature of CI and its engineering perspectives, in the following, we discuss a few related research directions that include interesting opportunities and challenges for researchers in CI engineering.
### Programming emergence and macro-programming
The problem of programming emergent and self-organising behaviours is an open research challenge (Gershenson et al., 2020; Varenne et al., 2015) intimately related to CI engineering. The term _macro-programming_ emerged in the early 2000s to identify programming approaches with the goal of defining the global behaviour of WSNs (Newton et al., 2005); currently, it generally denotes paradigms aiming at supporting the programming of system- or macro-level properties and behaviours. A recent survey by Casadei (2023) shows that, beside the first wave of research in the context of WSNs, we are witnessing a renewed interest in macro-programming fuelled by scenarios like the Internet of Things, robot swarms, and collective adaptive systems in general. This is also very much related to spatial computing (Beal et al., 2013), as space is often a constraint, a means, or a goal in such systems.
The key challenge here is determining what local behavioural rules of the individuals can promote the desired collective behaviour. In particular, we can distinguish two problems (Tumer and Wolpert, 2004). Given a set of individuals and the corresponding local behavioural rules, the _local-to-global mapping problem_ (or _forward problem_) is the problem of determining what global outcomes will be produced. Conversely, the _global-to-local mapping problem_ (or _inverse problem_) is the problem of determining what local behaviours have produced the observed global outcomes. In macro-programming, the latter problem turns into how to map a description of a global intent (macro-program) into local behavioural rules (micro-programs) (Casadei, 2023).
It has been shown that approaches like aggregate computing (Viroli et al., 2019) can support forms of self-organisation and CI with macro-programs that can be encoded as compositions of functions of reusable collective behaviours (Audrito et al., 2022). This is promising, but little research has yet been devoted to investigating, systematising, and formalising the principles, concepts, and mechanisms of macro-programming in general or specific settings (Casadei, 2023).
### Integration of manual and automatic CI engineering methods
In previous sections, we have discussed how CI can be programmed manually (e.g., through macro-programming languages, or using traditional techniques to connect and extract knowledge from human activity) or automatically (e.g., via multi-agent reinforcement learning techniques or program synthesis). Arguably, the two approaches could be combined to overcome their individual issues. This is still a largely unexplored research direction, but early works and ideas are emerging.
A first idea could be to use program synthesis (Gulwani et al., 2017) to synthesise macro-programs expressed in a macro-programming language (Casadei, 2023). This could be coupled with simulation to verify how systems executing synthesised programs operate in various environments. On one hand, since simulations may also be computationally-intensive, it might be necessary to limit simulation to few program candidates. On the other hand, the problem of generating macro-programs might be hard especially if the space of possible programs is very large. Therefore, macro-programming languages admitting few primitives or combinators may be more suitable for this.
Additionally, there exist some recent attempts at combining program synthesis and reinforcement learning (Bastani et al., 2020; Qiu and Zhu, 2022; Verma et al., 2018). For instance, Bastani et al. (2020) discuss approaches to reinforcement learning based on learning programmatic policies (i.e.,
policies in the form of a program), which can provide benefits in terms of interpretability, formal verification, and robustness. Therefore, it would be interesting to consider the application of MARL where policies are expressed in a multi-agent oriented or a macro-programming language. An early attempt has been carried out e.g. in (Aguzzi et al., 2022), where MARL has been used to fill a hole in a sketched aggregate computing program (cf. the _sketching_ technique in program synthesis (Solar-Lezama, 2009)), resulting in a collective adaptive behaviour that improves over a simple, manually encoded collective behaviour.
### Integration of bottom-up and top-down processes
Another interesting research challenge and opportunity for our ability to engineer CI lies in achieving a better understanding of how bottom-up and top-down processes can be integrated--or, in other words, how emergence and downward causation/feedback can be exploited altogether to provide both flexibility and robustness in collective behaviour. Indeed, we are considering the _feedback_ CI paradigm (He et al., 2019), where the aggregation of contributions from the individuals and the environment in turn affects the individuals and the environment. This is also what Lykourentzou et al. (2009) call _active_ CI systems, where collective behaviour is supported by the system level, as contrasted with _passive_ CI systems where no collective awareness or intentionality is present.
The problem of integrating top-down and bottom-up processes is indeed connected with the problem of _controlling emergence_, addressed in research fields like autonomic computing (Kephart and Chess, 2003), with its _MAPE-K (Monitor-Analyse-Plan-Execute with Knowledge)_ loops, and _organic computing_ (Muller-Schloer and Tomforde, 2017), with _observer-controller_ architectures. One issue is that emergence itself is a controversial concept, subject to philosophical and scientific investigation, and often presented with definitions that hardly apply to systems engineering (Muller-Schloer and Sick, 2006). Attempts at defining emergence based on hierarchical system models and ontological approaches (Gignoux et al., 2017) may prove useful. Initial, working classifications of emergence for reasoning in systems engineering may be based, e.g., on whether it is _anticipated_ or _not anticipated_, and whether it is _desirable_ or _undesirable_ (Iivari, 2016).
Some engineering techniques discussed in this section, such as macro-programming and MARL, could support the design of "controlled emergence"; conversely, a deeper understanding of emergence and its relationship with feedback could provide insights into the mechanisms and implementation of such techniques. A macro-program, indeed, could be seen as a top-down structure for emergent processes. Also interesting in this respect are, e.g., formal studies carried out on _self-stabilisation_ of aggregate computations (Pianini et al., 2022), which guarantees that stable outputs are eventually achieved from stable inputs.
### Integrating humanity and technology: social machines
A key subfield of CI that is still in its early days is human-machine collective intelligence (HMCI) (Smirnov and Ponomarev, 2019), also known as _hybrid CI_ (Moradi et al., 2019; Peeters et al., 2021), or _hybrid CASs_ (Scekic et al., 2020). In the systems we consider in this article, we can identify two main domains (Beal et al., 2013): (i) the domain of _space-time_, which corresponds to physical environments and their evolution; and (ii) the domain of _information_, which evolves through computation. Of course, these two domains interact, e.g., by measuring space-time to get associated information, and using information to manipulate space-time, through actuations. Now, addressing the integration of humans and machines passes through the realisation that both kinds of individuals can fully operate on those two domains. That is, humans can be thought of as computing machines (cf. the concept of _human computation_ (Quinn and Bederson, 2011)), and (computing) machines can operate in the physical world (cf. the notion of _cyber-physical system_ (Alur, 2015)).
Indeed, various terms or buzzwords are emerging to denote systems where such integration of humans, computation, and physical systems is present--cf., human CPSs (Liu and Wang, 2020), human-in-the-loop CPSs (Schirner et al., 2013), and crowd computing (Murray et al., 2010). From the perspective of computing, it is worth noting that _collective computing_ based on heterogeneous human-machine collectives was identified by Abowd (2016) as the fourth generation in computing following Weiser's characterisation of evolution of computing from mainframe computing to personal computing to ubiquitous computing (Weiser, 1991).
In order to address the complexity of systems and unleash the potential of humans and technology, it is increasingly important to consider technical aspects together with human, social, and organisational aspects (Bucchiarone et al., 2020). In other words, a key challenge and opportunity revolves around the design of social machines (Berners-Lee, 1999; Buregio et al., 2013), hybrid societies (Hamann et al., 2016), and socio-technical systems (Baxter and Sommerville, 2011). A social machine can be described as _"a computational entity that blends computational and social processes"_ and that is at the intersection of social software, people as computational units, and software as sociable entities (Berners-Lee, 1999; Buregio et al., 2013). In this respect, elements whose formalisation and use might promote the engineering of CI into social machines may include macro-level and collective abstractions (Scekic et al., 2020), social concepts (Bellman et al., 2017), and coordination models (Malone and Crowston, 1994). However, several challenges remain, related to the proper modelling of human computation, achieving effective communication and coordination between humans and machines, and achieving self-improving system integration (Bellman et al., 2021).
### Summary of recommendations for future research on ACI engineering
This section has discussed multiple issues and directions providing for plenty of research opportunities and challenges. To summarise, we recommend the following topics to be further investigated:
* language-based solutions to CI programming, as also fostered by recent research on macro-programming (Casadei, 2023; Sene Junior et al., 2022), possibly also working as a foundation for explainability (Krajna et al., 2022);
* approaches and mechanisms for controlling or steering emergence and self-organisation (Gershenson et al., 2020; Varenne et al., 2015), together with efforts for building a deeper understanding of these very concepts (cf. Gignoux et al. (2017));
* the role of CI across the various levels of modern computing systems (e.g., the application level, the middleware level, and the physical system level) (Sene Junior et al., 2022), to address functional as well as non-functional aspects including, e.g., security, resilience, and resource efficiency;
* designs for integrating manual and automatic approaches to CI engineering, for instance along the lines of MARL with specifications (Ritz et al., 2021) or program synthesis (Aguzzi et al., 2022; Bastani et al., 2020) of macro-programs;
* integration of human intelligence with machine intelligence into hybrid, collectively intelligent systems (Peeters et al., 2021; Smirnov and Ponomarev, 2019), e.g., leveraging wearable computing (Ferscha et al., 2014), ways for combining methods for human teamwork with AI, and self-organisation protocols considering both humans and artificial agents (Scekic et al., 2020; Smirnov and Ponomarev, 2019).
Last but not least, we strongly believe that the collective viewpoint has yet to find its place within the software engineering practice. Recent efforts on formal models and languages for CASs (Nicola et al., 2020; Scekic et al., 2020; Viroli et al., 2019) might highlight a path in that direction.
## 8. Conclusion
Collective intelligence (CI) is a rich theme that builds on multi-, inter-, and trans-disciplinary collective endeavours. However, research is largely fragmented across several specific research problems (cf. types of collective tasks), research methods (cf. manual vs. automatic CI design), and even entire computer science research areas (cf. hybrid systems, CASs, MASs, etc.), and comprehensive mapping studies are currently missing, making it difficult for people of diverse backgrounds to get a sense of the overall field and even a sense of CI-related work in their sub-field. This scoping review aimed at providing a comprehensive view on CI for computer scientists and engineers, with emphasis on concepts and perspectives, and also providing some research highlights on the forms of CI that most interest them, namely artificial collective intelligence (ACI), computational collective intelligence (CCI), and human-machine collective intelligence (HMCI). The final part reviews some interesting opportunities and challenges for researchers in computer science and engineering. These point at directions that, despite visionary and preliminary work, are yet to develop: CI programming, integration of manual and automatic techniques for CI engineering, integration of collectiveness and emergence, and hybrid human-machine systems.
|
2302.06918
|
An Image Processing Pipeline for Autonomous Deep-Space Optical
Navigation
|
A new era of space exploration and exploitation is fast approaching. A
multitude of spacecraft will flow in the future decades under the propulsive
momentum of the new space economy. Yet, the flourishing proliferation of
deep-space assets will make it unsustainable to pilot them from ground with
standard radiometric tracking. The adoption of autonomous navigation
alternatives is crucial to overcoming these limitations. Among these, optical
navigation is an affordable and fully ground-independent approach. Probes can
triangulate their position by observing visible beacons, e.g., planets or
asteroids, by acquiring their line-of-sight in deep space. To do so, developing
efficient and robust image processing algorithms providing information to
navigation filters is a necessary action. This paper proposes an innovative
pipeline for unresolved beacon recognition and line-of-sight extraction from
images for autonomous interplanetary navigation. The developed algorithm
exploits the k-vector method for the non-stellar object identification and
statistical likelihood to detect whether any beacon projection is visible in
the image. Statistical results show that the accuracy in detecting the planet
position projection is independent of the spacecraft position uncertainty.
Whereas, the planet detection success rate is higher than 95% when the
spacecraft position is known with a 3sigma accuracy up to 10^5 km.
|
Eleonora Andreis, Paolo Panicucci, Francesco Topputo
|
2023-02-14T09:06:21Z
|
http://arxiv.org/abs/2302.06918v1
|
# An Image Processing Pipeline for Autonomous Deep-Space Optical Navigation
###### Abstract
A new era of space exploration and exploitation is fast approaching. A multitude of spacecraft will flow in the future decades under the propulsive momentum of the new space economy. Yet, the flourishing proliferation of deep-space assets will make it unsustainable to pilot them from ground with standard radiometric tracking. The adoption of autonomous navigation alternatives is crucial to overcoming these limitations. Among these, optical navigation is an affordable and fully ground-independent approach. Probes can triangulate their position by observing visible beacons, e.g., planets or asteroids, by acquiring their line-of-sight in deep space. To do so, developing efficient and robust image processing algorithms providing information to navigation filters is a necessary action. This paper proposes an innovative pipeline for unresolved beacon recognition and line-of-sight extraction from images for autonomous interplanetary navigation. The developed algorithm exploits the k-vector method for the non-stellar object identification and statistical likelihood to detect whether any beacon projection is visible in the image. Statistical results show that the accuracy in detecting the planet position projection is independent of the spacecraft position uncertainty. Whereas, the planet detection success rate is higher than 95% when the spacecraft position is known with a 3\(\sigma\) accuracy up to \(10^{5}\) km.
## Nomenclature
\begin{tabular}{l l l} \(\mathcal{N}\) & = & inertial reference frame defined as \(\mathcal{N}=\{n,\boldsymbol{n}_{1},\boldsymbol{n}_{2},\boldsymbol{n}_{3}\}\) \\ \({}^{N}_{h}\boldsymbol{r}_{\text{bc}}\) & = & beacon position vector in homogeneous coordinates in \(\mathcal{N}\) \\ \({}^{N}\boldsymbol{r}_{\text{bc}}\) & = & beacon position vector in \(\mathcal{N}\) \\ \({}^{N}\boldsymbol{r}\) & = & spacecraft position vector in \(\mathcal{N}\) \\ \({}^{N}\boldsymbol{\rho}\) & = & line-of-sight direction of the beacon as seen by the spacecraft in \(\mathcal{N}\) \\ \(\mathcal{C}\) & = & 3D camera reference frame defined as \(\mathcal{C}=\{c,\boldsymbol{c}_{1},\boldsymbol{c}_{2},\boldsymbol{c}_{3}\}\) \\ \({}^{C}\boldsymbol{\rho}\) & = & line-of-sight direction of the beacon as seen by the spacecraft in \(\mathcal{C}\) \\ \end{tabular}
\begin{tabular}{l l l} \([CN]\) & = & attitude matrix from \(\mathcal{N}\) to \(\mathcal{C}\) \\ \([R_{i}]\) & = & rotation matrix around the \(i\)th axis \\ \(\alpha\) & = & right ascension angle of the camera \\ \(\delta\) & = & declination angle of the camera \\ \(\phi\) & = & twist angle of the camera \\ \(\mathbb{C}\) & = & 2D camera reference frame defined as \(\mathbb{C}=\{C,\mathbf{C}_{1},\mathbf{C}_{2}\}\) \\ \({}^{\mathbb{C}}\mathbf{R}_{\rm bc}\) & = & planet position projection vector in \(\mathbb{C}\) \\ \({}^{\mathbb{C}}_{h}\mathbf{R}_{\rm bc}\) & = & planet position projection vector in homogeneous coordinates in \(\mathbb{C}\) \\ \([K]\) & = & intrinsic camera matrix \\ \(f\) & = & camera focal length \\ \(I_{\rm thr}\) & = & threshold value expressed in pixel intensity for removing the background noise \\ \(\mu_{I}\) & = & mean intensity over the image \\ \(\sigma_{I}\) & = & intensity standard deviation over the image \\ \(T\) & = & tuning parameter for the dynamic thresholding \\ \(I_{i,j}\) & = & intensity of the pixel \((i,j)\) \\ \((X_{i,j},Y_{i,j})\) & = & coordinates of the pixel \((i,j)\) \\ \(w_{i,j}\) & = & weighting parameter of the pixel \((i,j)\) \\ \((X_{c},Y_{c})\) & = & centroid coordinates \\ \(I_{00},I_{01},I_{10}\) & = & image momenta \\ \(\gamma\) & = & interstellar angle \\ \(\mathbf{K}\) & = & k-vector available onboard \\ \(\mathbf{I},\mathbf{J}\) & = & vectors available onboard where the star pairs IDs are stored \\ \([^{N}s]\) & = & matrix whose columns contain the stars line-of-sight directions in \(\mathcal{N}\) \\ \([^{C}s]\) & = & matrix whose columns contain the stars line-of-sight directions in \(\mathcal{C}\) \\ \(n_{R}\) & = & number of samples for the application of the RANSAC algorithm \\ \(\mathbf{s}_{i}\) & = & \(i\)th group of three stars in the image \\ \(\mathbf{e}\) & = & spacecraft rotation principal axis \\ \(\theta\) & = & spacecraft rotation principle angle \\ \(t\) & = & threshold angle in the RANSAC algorithm \\ \({}^{\mathbb{C}}\mathbf{R}_{\rm bc_{0}}\) & = & beacon expected position projection \\ \([P]\) & = & covariance matrix of the beacon position projection \\ \([F]\) & = & Jacobian matrix of the mapping linking \({}^{\mathbb{C}}\mathbf{R}_{\rm bc}\) with the spacecraft pose and beacon position \\ \end{tabular}
\begin{tabular}{l l l} \([S]\) & = & uncertainty covariance matrix of the probe pose and beacon position \\ \(\mathbf{q}\) & = & spacecraft rotation quaternion defined as \(\mathbf{q}=(q_{0},\mathbf{q}_{v})^{\top}\) \\ \([A]\) & = & estimated attitude matrix of the probe \\ \([\mathbb{I}_{n}]\) & = & identity matrix of dimension \(n\) \\ \([0_{n\times m}]\) & = & zero \(n\times m\) matrix \\ \([\mathbf{x}^{\wedge}]\) & = & skew symmetric matrix associated with the cross product \(\mathbf{x}\times\mathbf{y}\). \\ \(\sigma_{i}\) & = & standard deviation of the element \(i\) \\ a & = & \(3\sigma\) covariance ellipse semimajor axis \\ b & = & \(3\sigma\) covariance ellipse semiminor axis \\ \(\psi\) & = & \(3\sigma\) covariance ellipse orientation \\ \(F\) & = & f-number of the camera \\ \(Q_{e}\times T_{\text{lens}}\) & = & quantum efficiency \(\times\) lens transmission of the camera \\ \(\sigma_{d}\) & = & defocus level of the camera \\ \(m_{\text{lim}}\) & = & apparent magnitude threshold considered for the creation of the onboard catalogs \\ \(\epsilon\) & = & k-vector range error \\ \(\mathbf{\mu}_{\text{err}}\) & = & vector representing the mean of the planet position projection errors \\ \([P_{\text{err}}]\) & = & matrix representing the covariance of the planet position projection errors \\ \(\sigma_{\text{ErrRot}}\) & = & standard deviation of the rotation error \\ \end{tabular}
## I. Introduction
A new era of deep-space exploration and exploitation is fast approaching. In the next decade, a flourishing growth of probes will be launched in interplanetary space under the propulsive momentum of the new space economy. Nowadays, deep-space probes are mostly piloted with standard radiometric tracking. Yet, at the current pace, since this approach heavily relies on limited resources, such as ground stations and dedicated teams, its adoption will become unsustainable soon. In other terms, the exploitation of radiometric tracking will hamper the proliferation of deep-space assets [2]. Self-driving deep-space probes, which are free from ground-based support, would overcome these limitations [3]. From a navigation perspective, spacecraft can determine their state by observing the external environment with cameras; major [4] and minor bodies [5] observations can be exploited to triangulate the spacecraft position, provided their ephemerides are known [6]. Yet, in deep space, planets and asteroids are unresolved and their light falls in one pixel only of the observing camera. Thus, they can not be distinguished at first sight from the stars. Indeed, one of the most relevant issues of far-range vision-based navigation (VBN) consists of the recognition and labeling of the celestial beacons in the image against the stellar background.
In 1998, the Deep-Space 1 (DS1) mission [7, 8] proved the feasibility of estimating the probe state in deep space by observing visible asteroids in the asteroid belt [9]. The basic concept of the DS1 onboard autonomous navigation system was to feed an orbit determination algorithm with the unresolved targets' inertial Line-Of-Sight (LoS) vectors extracted from the images taken during the cruise phase. Following studies were then performed in Broschart et al. [5], which shows the achievable positioning accuracies resulting from the exploitation of visible asteroids as beacons between the orbits of Mercury and Jupiter. Yet, when cameras of limited performances are adopted onboard low-priced miniaturized probes, such as CubeSats, asteroids can not be visible in the sensor frame. Only brighter celestial bodies, like planets, can be observed for far-range optical navigation. In this context, the EXTREMA (Engineering Extremely Rare Events in Astrodynamics for Deep-Space Missions in Autonomy) project [10], awarded an ERC Consolidator Grant in 2019, aims to understand how to enable deep-space, limited-budget spacecraft to perform navigation, guidance, and control operations in complete autonomy with respect to ground.
Previous works focus on the implementation of onboard orbit determination algorithms to estimate the probe state [4, 6, 11]. In addition, while detailed pipelines for Image Processing (IP) at mid- and close-range are available [12, 13, 14], the case of deep space is still an open issue.
This work contributes to the state of the art by developing an innovative IP pipeline for the extraction of the beacon projection from a camera image in the context of deep-space autonomous navigation. The procedure is composed of two parts. First, the lost-in-space attitude determination problem is solved by adopting the k-vector method [15], which also allows the recognition of non-stellar objects in the image. Second, the beacon identification is performed by evaluating the beacon position projection uncertainty ellipse and by selecting the non-stellar object, if any, contained in it. The described IP pipeline is then applied to deep-space CubeSats in the framework of the EXTREMA project.
The paper is structured as follows. First, Sec. II summarizes the necessary mathematical preliminaries. Then, in Sec. III the methodology followed for the development of the IP pipeline is defined. In addition, Sec. IV presents a general study of the algorithm behavior by focusing on the off-nominal scenarios during the extraction of the beacon position projection. Finally, Sec. V gathers the results obtained by the application of the proposed IP pipeline.
## II. Projective Geometry Preliminaries
Let \({}^{N}\mathbf{r}_{\text{bc}}\) and \({}^{N}_{h}\mathbf{r}_{\text{bc}}\) be the beacon position vector in the inertial reference frame \(\mathcal{N}=\{n,\mathbf{n}_{1},\mathbf{n}_{2},\mathbf{n}_{3}\}\) expressed with non-homogeneous and homogeneous coordinates, respectively, and let \({}^{N}\mathbf{r}\) be the spacecraft position in non-homogeneous coordinates. Interested readers can refer to [16] for further details about homogeneous vector representation.
With reference to Fig. 1a, the position of the beacon as seen by the spacecraft in \(\mathcal{N}\) is described as
\[{}^{N}\mathbf{\rho}={}^{N}\mathbf{r}_{\text{bc}}-{}^{N}\mathbf{r} \tag{1}\]
Let a projective camera observe the beacon. The camera frame is defined as \(\mathcal{C}=\{c,\mathbf{c}_{1},\mathbf{c}_{2},\mathbf{c}_{3}\}\). The vector \({}^{N}\mathbf{\rho}\) can be transformed into \(\mathcal{C}\) through a passive rotation from \(\mathcal{N}\) to \(\mathcal{C}\), applied by the attitude matrix \([CN]\):
\[{}^{C}\mathbf{\rho}=[CN]\ ^{N}\mathbf{\rho} \tag{2}\]
For the definition of the attitude matrix, the Axis-Azimuth representation is adopted [17]. By assuming that the camera boresight is coincident with the third axis of the spacecraft-fixed reference frame,
\[[CN]=[R_{3}(\alpha)]\ [R_{2}(\pi/2-\delta)]\ [R_{3}(\phi)] \tag{3}\]
where \([CN]\) is obtained through a succession of counterclockwise rotations taking into account the camera pointing angles: right ascension \(\alpha\in[0^{\circ},360^{\circ}]\), declination \(\delta\in[-90^{\circ},90^{\circ}]\), and twist angle \(\phi\in[0^{\circ},360^{\circ}]\)[18].
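A small numerical sketch of Eq. (3) is given below, composing the elementary (passive) rotation matrices to obtain \([CN]\) from the right ascension, declination, and twist angles. numpy and the sample angles are assumptions made only for illustration.

```python
import numpy as np

def R2(angle):
    """Elementary (passive) rotation matrix about the second axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def R3(angle):
    """Elementary (passive) rotation matrix about the third axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def attitude_matrix(alpha, delta, phi):
    """[CN] = R3(alpha) R2(pi/2 - delta) R3(phi), as in Eq. (3)."""
    return R3(alpha) @ R2(np.pi / 2.0 - delta) @ R3(phi)

if __name__ == "__main__":
    CN = attitude_matrix(np.deg2rad(30.0), np.deg2rad(-10.0), np.deg2rad(5.0))
    print(np.round(CN, 4))
```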
Once \({}^{C}\mathbf{\rho}\) is computed (see Eq. (2)), the 3D point is projected on the image plane by exploiting the pinhole camera model.
With reference to Fig. 1b, let \(\mathbb{C}=\{C,\mathbf{C}_{1},\mathbf{C}_{2}\}\) be the 2D camera reference frame where \(\mathbf{C}_{1}\) points to the right, \(\mathbf{C}_{2}\) downward, and the reference frame center \(C\) is placed at the upper left-hand corner of the image. The projection of the planet position in \(\mathbb{C}\) is \({}^{\mathbb{C}}\mathbf{R}_{\text{bc}}\) and the transformation of the beacon position vector in homogeneous coordinates from \(\mathcal{N}\) to \(\mathbb{C}\) is compactly described as
\[{}_{h}^{\mathbb{C}}\mathbf{R}_{\text{bc}}=[K]\underbrace{[CN]\left[[\mathbb{I}_{3}]\quad-\,^{N}\mathbf{r}\right]\,^{N}_{h}\mathbf{r}_{\text{bc}}}_{{}^{C}\mathbf{\rho}} \tag{4}\]
where \([K]\) is the intrinsic camera matrix [19]. Eventually, the beacon position projection in \(\mathbb{C}\) in non-homogeneous
Figure 1: Problem and Projective Geometry Preliminaries
coordinates (\({}^{\mathbb{C}}\mathbf{R}_{\rm bc}\)) becomes
\[{}^{\mathbb{C}}\mathbf{R}_{\rm bc}=\begin{pmatrix}{}^{\mathbb{C}}R_{\rm bc_{1}}\\ {}^{\mathbb{C}}R_{\rm bc_{2}}\end{pmatrix}=\begin{pmatrix}{}_{h}^{\mathbb{C}}R_{\rm bc_{1}}/\,{}_{h}^{\mathbb{C}}R_{\rm bc_{3}}\\ {}_{h}^{\mathbb{C}}R_{\rm bc_{2}}/\,{}_{h}^{\mathbb{C}}R_{\rm bc_{3}}\end{pmatrix} \tag{5}\]
where \({}_{h}^{\mathbb{C}}R_{\rm bc_{i}}\) is the \(i\)th component of the beacon position projection in \(\mathbb{C}\) in homogeneous coordinates.
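To make Eqs. (1)-(5) concrete, the following minimal Python sketch projects a beacon onto the image plane with the pinhole model. The standard passive rotation matrices are assumed for \([R_{2}]\) and \([R_{3}]\), and all numerical values (intrinsic matrix, pointing angles, positions) are illustrative only.

```python
import numpy as np

def rot3(angle):
    """Passive rotation about the third axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def rot2(angle):
    """Passive rotation about the second axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def attitude_matrix(alpha, delta, phi):
    """Axis-Azimuth attitude matrix [CN], following Eq. (3)."""
    return rot3(alpha) @ rot2(np.pi / 2 - delta) @ rot3(phi)

def project_beacon(r_bc_N, r_sc_N, CN, K):
    """Beacon position projection in pixels, Eqs. (1)-(5)."""
    rho_N = r_bc_N - r_sc_N        # Eq. (1): beacon relative position in N
    rho_C = CN @ rho_N             # Eq. (2): rotation into the camera frame
    R_h = K @ rho_C                # Eq. (4): homogeneous pixel coordinates
    return R_h[:2] / R_h[2]        # Eq. (5): perspective divide

# Illustrative numbers only (assumed camera and geometry).
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])
CN = attitude_matrix(0.0, 0.0, 0.0)
r_bc_N = np.array([1.2e8, 5.0e7, 1.0e6])   # beacon position in N [km]
r_sc_N = np.array([1.0e8, 4.0e7, 0.0])     # spacecraft position in N [km]
print(project_beacon(r_bc_N, r_sc_N, CN, K))
```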
Through an analogous procedure, stars are projected onto the camera frame by considering that they lie on the plane at infinity. Star positions are usually stored in catalogs, such as the Hipparcos catalog [20], which provides their right ascension and declination on the celestial sphere [18].
## III. Methodology
The final goal of the algorithm is the detection of the beacon in the digital image and the extraction of its position projection. From the latter, the beacon LoS direction, which has been proven to be a valuable observable for deep-space VBN [4, 11], can be straightforwardly retrieved.
To achieve its goal, the IP algorithm performs two steps sequentially:
1. The determination of the spacecraft attitude through star asterism identification.
2. The beacon detection in the image.
The flowchart of the proposed IP pipeline is shown in Figure 2.
First, a search-less method based on the k-vector technique [15, 21], reinforced with a RANSAC [16] procedure, is adopted. This provides two results: 1) the match of the stellar objects detected in the deep-space image with the associated identifiers (ID) stored in the star catalog, and 2) the detection of non-stellar objects (spikes) in the image. Once the star IDs are known, the probe orientation is computed by solving the Wahba problem [22] between the stars LoS directions measured in the camera frame and the
Figure 2: Workflow of the proposed IP pipeline.
ones matched in the inertial frame catalog. With the knowledge of the probe attitude and an estimation of its position, a prediction of the beacon position projection in the image can be computed. In addition, its uncertainty can be defined by assuming that the uncertainties of the probe pose, i.e., probe position and orientation, and of the beacon position are known. As the computed statistical moments, i.e., the expected beacon projected position and its uncertainty, define the Gaussian probability of finding a beacon in that portion of the image, this information can be exploited to identify the celestial body and extract its position projection. Note that if one or more spikes are contained within the beacon projection uncertainty ellipse, the spike closest to the expected beacon projection position is the most probable candidate and is therefore identified as the beacon itself.
The following assumptions represented with the green rounded rectangles in Fig. 2 are made:
1. The camera is previously calibrated, thus the camera intrinsic parameters are known.
2. The catalogs needed for the star identification are available.
3. A rough estimation of the probe position is known.
4. Beacon ephemerides are available onboard.
5. The probe pose and beacon position uncertainties are known.
### A. Attitude Determination
The attitude determination is based on the execution of three steps, namely:
1. The centroids of the bright objects in the image are found.
2. A subset of the centroids in the image are matched with the onboard-stored star catalog, which contains the stars LoS directions in the inertial reference frame.
3. The attitude of the probe is determined by solving the Wahba problem between the stars LoS directions in the camera and inertial reference frame.
#### 1. Centroids Computation
The first step of every optical sensor that has to perform star identification consists of determining the location of the bright objects in the image. In deep space, the observed objects are mostly unresolved. For cameras focused at infinity, the light from each star or beacon falls in one pixel only. Since the extracted centroid is the center of the pixel itself, this implies that only pixel accuracy can be achieved. A strategy adopted to achieve subpixel precision in centroid extraction from unresolved objects is to intentionally defocus the camera to spread the incoming light over multiple pixels [23]. When a defocused image is acquired, the centroids of the bright pixels are simply found by computing the center of brightness. In this work, the procedure presented in [18] is applied (a minimal code sketch is given after the following list). It can be subdivided into the following steps:
1. A threshold value \(I_{\text{thr}}\), expressed in pixel intensity, is set up to remove the background noise. This is of paramount
importance to select those pixels to consider in the computation. The value of \(I_{\text{thr}}\) is determined by applying a dynamic thresholding method that can be tuned to improve the performance of the algorithm: \[I_{\text{thr}}=\mu_{I}+T\sigma_{I}\] (6) where \(\mu_{I}\) is the intensity mean over the image, \(\sigma_{I}\) is the intensity standard deviation over the image, and \(T\) is the tuning parameter.
2. By thresholding the image using \(I_{\text{thr}}\), pixels brighter than the threshold are identified. They form connected portions of the image, which can be delimited with square Regions Of Interest (ROI) with a margin of one pixel on each side. All the pixels inside one ROI define a single stellar or non-stellar object projection. Thus, the centroid of the object can be computed by using the pixels inside the associated ROI.
3. The image moments \(I_{10}\), \(I_{01}\), and \(I_{00}\) inside each ROI are found as \[I_{00}=\sum_{(i,j)\in\text{ROI}}I_{i,j}\ w_{i,j}\hskip 28.452756ptI_{10}=\sum_{(i,j)\in\text{ROI}}X_{i,j}\ I_{i,j}\ w_{i,j}\hskip 28.452756ptI_{01}=\sum_{(i,j)\in \text{ROI}}Y_{i,j}\ I_{i,j}\ w_{i,j}\] (7) where \(X_{i,j}\) and \(Y_{i,j}\) are the pixel coordinates, \(I_{i,j}\) its intensity, and \(w_{i,j}\) the weighting parameter associated with the pixel \((i,j)\). In this work, the weighting parameter is defined as \(w_{i,j}=\frac{I_{i,j}}{I_{i,j_{max}}}\) to give more importance to brighter pixels inside the ROI [24].
4. Once the image moments for a ROI are computed, the sub-pixel centroid coordinates associated with that ROI are found as \[X_{c}=\frac{I_{10}}{I_{00}}\hskip 56.905512ptY_{c}=\frac{I_{01}}{I_{00}}\] (8)
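The centroiding steps above can be summarized with the short sketch below. It is only a minimal illustration: connected bright regions are taken directly as ROIs without the one-pixel margin, `scipy.ndimage.label` is an implementation choice rather than part of the reference procedure, and the synthetic image is made up for the example.

```python
import numpy as np
from scipy import ndimage

def extract_centroids(img, T=5.0):
    """Sub-pixel centroids of bright objects via weighted image moments."""
    I_thr = img.mean() + T * img.std()          # Eq. (6): dynamic threshold
    labels, n_obj = ndimage.label(img > I_thr)  # connected bright regions (ROIs)
    centroids = []
    for k in range(1, n_obj + 1):
        ys, xs = np.nonzero(labels == k)
        I = img[ys, xs].astype(float)
        w = I / I.max()                          # brighter pixels weigh more
        I00 = np.sum(I * w)                      # Eq. (7): weighted moments
        I10 = np.sum(xs * I * w)
        I01 = np.sum(ys * I * w)
        centroids.append((I10 / I00, I01 / I00)) # Eq. (8): (X_c, Y_c)
    return centroids

# Synthetic frame: Gaussian background plus one defocused bright object.
rng = np.random.default_rng(0)
img = rng.normal(5.0, 1.0, (64, 64))
img[30:33, 40:43] += np.array([[40, 80, 40], [80, 160, 80], [40, 80, 40]])
print(extract_centroids(img))
```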
#### 2. Stars Identification
The goal of the star identification procedure is to recognize, among the found centroids, the ones that are star projections in the image. To do so, the problem is rewritten as a registration problem whose goal is to find the correct matching between the observed star asterism and the cataloged stars in the inertial frame. For this purpose, the search-less algorithm (SLA) introduced in [15] is adopted. In this work, the SLA has been preferred over the binary search technique [25] for its higher speed gain rate (from 10 to more than 50 times [26]) and for its capability to identify spikes among the bright objects in the image. The latter is perhaps the most important feature, since the navigation beacons are searched only among these spikes. Finally, an additional interesting characteristic of the SLA is its independence from any a priori attitude guess and magnitude information.
To be adopted, the SLA requires computing on the ground a vector of integers, the k-vector, which contains information for the star matching starting from the chosen star invariant. In this work, the invariant chosen to build the star catalog is
the interstellar angle \(\gamma\) which is defined as
\[\gamma_{ml}=\arccos({}^{N}\mathbf{\rho}_{m}^{\top\,N}\mathbf{\rho}_{l}) \tag{9}\]
where \({}^{N}\mathbf{\rho}_{m}\) and \({}^{N}\mathbf{\rho}_{l}\) are the LoS directions to the \(m\)th and \(l\)th star, respectively. Note that the interstellar angle is an invariant parameter with respect to the reference frame considered. Indeed, when the onboard catalog is derived, the interstellar angles are computed from the stars LoS directions expressed in the \(\mathcal{N}\) frame. However, in the operative phase, these angles are matched with the observed interstellar angles obtained from the stars LoS direction in the \(\mathcal{C}\) frame.
The k-vector on-ground computation requires several steps, described hereunder. First, the vector \(\mathbf{S}\), which contains the ordered values of all the interstar angles, is computed. Second, the star pair IDs are stored in two vectors labeled \(\mathbf{I}\) and \(\mathbf{J}\). Finally, the k-vector \(\mathbf{K}\) is gathered, where its \(k\)th element contains the number of elements of vector \(\mathbf{S}\) less than \(\cos\bar{\gamma}=a_{1}k+a_{0}\). The constants \(a_{1}\) and \(a_{0}\) are the coefficients describing the straight line that connects the first and the last element of \(\mathbf{S}\). The interested reader can consult [15] for further details about the method. In this work, to reduce the size of the catalog, only angles smaller than 35 deg are considered. Moreover, only stars whose apparent magnitude is lower than 5.5 are considered for the generation of the invariants. The vectors \(\mathbf{K}\), \(\mathbf{I}\), and \(\mathbf{J}\), and the parameters \(a_{0}\) and \(a_{1}\) are stored on board and are used during onboard star identification. During the operational phase, star identification is performed by finding a set of possible correspondences between the measured inter-star angles and the values contained inside \(\mathbf{K}\). At this point, the IDs of the admissible catalog star pairs are determined by looking into \(\mathbf{I}\) and \(\mathbf{J}\). Finally, to select the right star pair among the possible ones, the Reference-Star method [15] is adopted. Similar performances in this last step can be achieved by exploiting the angle pivoting algorithm [27]. When the observed star asterism is recognized, the star identification algorithm gives as output the vector of the star identifiers and the matrices \([^{N}s]\) and \([^{C}s]\), whose columns contain the stars LoS directions in the \(\mathcal{N}\) and \(\mathcal{C}\) reference frames, respectively. Moreover, a vector including the positions of the spikes in the image is delivered as well.
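A schematic sketch of the on-ground k-vector construction and of the onboard range search over the interstar-angle catalog is given below. It works on the cosines of the angles, as in Eq. (9); boundary handling is simplified, the Reference-Star disambiguation is omitted, and the toy catalog values are assumed for illustration.

```python
import numpy as np

def build_k_vector(cos_angles):
    """On-ground step: sorted invariants S and the k-vector K."""
    S = np.sort(cos_angles)
    n = S.size
    a1 = (S[-1] - S[0]) / (n - 1)      # line through the first and last element of S
    a0 = S[0] - a1
    # K[k-1] = number of elements of S smaller than a1*k + a0, for k = 1..n
    K = np.searchsorted(S, a1 * np.arange(1, n + 1) + a0, side="left")
    return S, K, a0, a1

def k_vector_range(S, K, a0, a1, lo, hi):
    """Onboard step: indices of catalog invariants falling inside [lo, hi]."""
    n = K.size
    j_lo = int(np.clip(np.floor((lo - a0) / a1), 1, n))
    j_hi = int(np.clip(np.ceil((hi - a0) / a1), 1, n))
    cand = np.arange(K[j_lo - 1], K[j_hi - 1])      # candidate entries of S
    return cand[(S[cand] >= lo) & (S[cand] <= hi)]  # exact check on the candidates

# Toy catalog of interstar-angle cosines (angles below 35 deg), assumed values.
rng = np.random.default_rng(1)
S, K, a0, a1 = build_k_vector(np.cos(np.deg2rad(rng.uniform(0.5, 35.0, 200))))
measured = np.cos(np.deg2rad(12.0))
print(k_vector_range(S, K, a0, a1, measured - 1e-3, measured + 1e-3))
```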
The objects identified by the k-vector as spikes may be non-stellar objects, such as planets, asteroids, and cosmic rays that are not present in the onboard catalog, or stars that have not been recognized due to errors in the centroid extraction. Yet, when a great number of spikes is present in the image, the star asterisms may not be recognized by the algorithm. In this work, to reduce the number of scenarios in which this failure occurs, a heuristic approach is considered. As faint stars are generally not stored in the onboard catalog and as the centroids extraction depends on the thresholding procedure, when the attitude determination fails, the attitude determination procedure is iterated again by increasing the tuning parameter \(T\) of the intensity threshold \(I_{\text{thr}}\) in Eq. (6). One of the results of this approach consists of diminishing the number of bright objects in the image, which can ultimately lead to the removal of some spikes. The procedure is repeated until observed star asterisms are recognized or less than three stars are detected.
#### 3. Attitude Determination
Eventually, the probe attitude is determined by solving Wahba's problem [22] between the stars LoS directions in the camera and inertial reference frames. The solution to Wahba's problem is computed with the Singular Value Decomposition (SVD) method [28]:
\[[B]=\sum_{i=1}^{n}{}^{C}\mathbf{s}_{i}\,{}^{N}\mathbf{s}_{i}^{\top} \tag{10}\]
where \(n\) is the number of identified stars in the image considered for attitude determination, and \({}^{C}\mathbf{s}_{i}\) and \({}^{N}\mathbf{s}_{i}\) are the \(i\)th columns of \([^{C}s]\) and \([^{N}s]\), respectively. As the matrix \([B]\) is not orthonormal due to measurement errors, the closest orthonormal matrix \([A]\) can be computed by constraining its singular values. Thus:
\[[B]=[U][S][V]^{\top}\rightarrow[A]=[U][M][V]^{\top}; \tag{11}\]
where \([M]\) is used to impose a right-handed reference frame, and it is defined as:
\[[M]=\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&\det[U]\,\det[V]\end{bmatrix} \tag{12}\]
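Eqs. (10)-(12) translate almost line by line into code; the sketch below is a minimal illustration verified on noise-free synthetic directions (all data are randomly generated, not flight data).

```python
import numpy as np

def wahba_svd(s_C, s_N):
    """Attitude [A] from paired unit vectors stored as columns of s_C and s_N."""
    B = s_C @ s_N.T                                                 # Eq. (10)
    U, _, Vt = np.linalg.svd(B)                                     # Eq. (11)
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])   # Eq. (12)
    return U @ M @ Vt

# Synthetic check: recover a random attitude from five noise-free directions.
rng = np.random.default_rng(2)
A_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A_true *= np.linalg.det(A_true)            # force a proper rotation (det = +1)
s_N = rng.normal(size=(3, 5))
s_N /= np.linalg.norm(s_N, axis=0)
s_C = A_true @ s_N
print(np.allclose(wahba_svd(s_C, s_N), A_true))
```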
In this work, the robustness of the solution to Wahba's problem is increased through the adoption of a RANdom SAmple Consensus (RANSAC) procedure [1]. The RANSAC algorithm of Fischler and Bolles [29] is a general and robust iterative method able to estimate the parameters of a mathematical model from a set of input data with a large proportion of outliers [16]. It can also be seen as an outlier detection and rejection method.
Here, the RANSAC algorithm aims to detect the bright objects that have been misidentified by the star identification step and that can thus lead to a wrong attitude determination. The star identification procedure can result in a misidentification when a non-stellar object is identified as a star or when a star is labeled with a wrong star identifier. To detect the outliers, the attitude of the spacecraft is adopted as the mathematical model for the data fitting. The attitude is estimated \(n_{R}\) times by randomly selecting a group of 3 identified stars each time. The minimum set of stars needed for attitude determination is chosen to increase the probability of having a group made of different stars each time. The \(n_{R}\) estimated spacecraft orientations are then compared to identify the best model, which is adopted for the data fitting. The stars not respecting the best model are considered outliers and are labeled as spikes. In detail, the RANSAC procedure can be summarized as follows (a minimal code sketch is given after Fig. 3):
1. A number of samples \(n_{R}\) is selected.
2. For each \(i\)th sample with \(i\in[1,\ n_{R}]\), a group of 3 stars is randomly selected within the identified stars.
3. For each group, Wahba's problem is solved, and the spacecraft rotation principal axis \(\mathbf{e}_{i}\) is defined. As said before, the attitude of the spacecraft is adopted as the mathematical model for the data fitting, and the rotation principal axis \(\mathbf{e}\) is the chosen attitude representation.
4. To each rotation principal axis \(\mathbf{e}_{i}\), a score is assigned depending on the number of vectors \(\mathbf{e}_{j}\) (related to the \(j\)th sample with \(j\in[1,\ n_{R}]\) and \(j\neq i\)) that are within a threshold angle \(t\) of \(\mathbf{e}_{i}\). The set of vectors that satisfy this requirement is called the consensus set of \(\mathbf{e}_{i}\), and its size determines the score of \(\mathbf{e}_{i}\).
5. The vector \(\mathbf{e}_{i}\) characterized by the largest consensus set, thus, by the highest score, is selected as the best model.
6. The best model is then exploited for the data fitting: only the stars belonging to a subset whose principal rotation vector is contained inside the consensus set of the best model are considered inliers. The remaining stars (outliers) are identified as spikes. If two or more consensus sets have the same size, the best vector is chosen arbitrarily among the vectors \(\mathbf{e}_{i}\) related to these consensus sets. The probe attitude is then redetermined by considering only the inlier stars.
A graphical representation of this process is depicted in Fig. 3. In this example, \(n_{R}=4\) for the sake of a clear graphical representation. For each of the four samples, a group of three stars \(s_{i}\) is selected among the ones identified by the star identification step. Then, the associated principal rotation vector is computed for each group, and a score is assigned to each vector. In this case, scores of 2, 1, 1, and 0 are assigned to \(\mathbf{e}_{2}\), \(\mathbf{e}_{1}\), \(\mathbf{e}_{3}\), and \(\mathbf{e}_{4}\), respectively. The principal rotation vector \(\mathbf{e}_{2}\) has the highest score and is thus chosen as the best mathematical model for the data fitting. Since the vectors \(\mathbf{e}_{1}\) and \(\mathbf{e}_{3}\) lie inside the consensus set of \(\mathbf{e}_{2}\), all the star subsets adopted to generate these three vectors are considered inliers, whereas the remaining stars are identified as outliers and, thus, labeled as spikes.
Figure 3: Graphical Representation of the RANSAC Algorithm
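The consensus scoring over principal rotation axes can be sketched as follows. This is only an illustration under assumed, noise-free synthetic data: `wahba_svd` is repeated from the previous sketch for self-containment, the threshold reuses the 15 arcsec value adopted later in Table 2, and the two deliberately swapped catalog entries emulate misidentified stars.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def wahba_svd(s_C, s_N):
    U, _, Vt = np.linalg.svd(s_C @ s_N.T)
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt

def ransac_attitude(s_C, s_N, n_R=20, t=np.deg2rad(15.0 / 3600.0), seed=None):
    """Steps 1-6: consensus over the principal rotation axes of n_R 3-star fits."""
    rng = np.random.default_rng(seed)
    groups = [rng.choice(s_C.shape[1], 3, replace=False) for _ in range(n_R)]
    axes = []
    for g in groups:
        rotvec = Rotation.from_matrix(wahba_svd(s_C[:, g], s_N[:, g])).as_rotvec()
        axes.append(rotvec / np.linalg.norm(rotvec))      # principal rotation axis e_i
    axes = np.array(axes)
    ang = np.arccos(np.clip(axes @ axes.T, -1.0, 1.0))    # angles between all axes
    scores = np.sum(ang < t, axis=1) - 1                  # consensus-set sizes
    best = int(np.argmax(scores))
    inliers = sorted({s for g, ok in zip(groups, ang[best] < t) if ok for s in g})
    return wahba_svd(s_C[:, inliers], s_N[:, inliers]), inliers

# Synthetic test: 8 stars, two of which are deliberately mismatched (outliers).
rng = np.random.default_rng(3)
A_true = Rotation.from_rotvec([0.1, -0.2, 0.3]).as_matrix()
s_N = rng.normal(size=(3, 8))
s_N /= np.linalg.norm(s_N, axis=0)
s_C = A_true @ s_N
s_N[:, [0, 5]] = s_N[:, [5, 0]]            # emulate two misidentified stars
A_est, inliers = ransac_attitude(s_C, s_N, n_R=30, seed=4)
print(inliers)
```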
### B. Beacon Detection
In this section, the second step of the proposed IP pipeline is presented. It starts with the detection of the beacon in the image and ends with the extraction of its associated position projection.
#### 1. Beacon Identification
The beacon identification in the image is performed by computing the statistical moments associated with the beacon projection, i.e., the expected beacon position projection and its covariance matrix, which define the 3\(\sigma\) Gaussian probability of finding a beacon in that portion of the image. Thus, the following computation is carried out.
Once the attitude matrix \([A]\) is determined, and by assuming that the beacon ephemerides and the probe position are known with a certain accuracy on each Cartesian component, the expected beacon position projection \({}^{\mathbb{C}}\mathbf{R}_{\text{bc}_{0}}\) is computed (Eqs. (4) and (5)). If \({}^{\mathbb{C}}\mathbf{R}_{\text{bc}_{0}}\) falls inside the image boundaries, its covariance matrix, which depends on the spacecraft pose and the beacon position uncertainties, can be defined. Once the image plane portion with the highest probability, here considered 3\(\sigma\), has been identified, this information is used to recognize the beacon in the image. If one or more spikes are contained in the 3\(\sigma\) uncertainty ellipse, the spike closest to the expected beacon position projection is labeled as its correct position projection. The closest one is selected because, from a statistical point of view, it is the one with the highest probability of being the projected beacon.
The covariance matrix of the beacon position projection \([P]\) due to the spacecraft pose and beacon position uncertainty is computed as
\[[P]=[F][S][F]^{\top} \tag{13}\]
where \([F]\) is the Jacobian matrix of the mapping linking the beacon position projection \({}^{\mathbb{C}}\mathbf{R}_{\text{bc}}\) with the spacecraft pose and the beacon position, and \([S]\) is the uncertainty covariance matrix of the probe pose and beacon position. To evaluate \([F]\), the variation of \({}^{\mathbb{C}}\mathbf{R}_{\text{bc}}\) with respect to the variation of the spacecraft pose and the beacon position has to be computed. To simplify the calculus, the quaternions \(\mathbf{q}=(q_{0},\mathbf{q}_{v})^{\top}\), where \(q_{0}\) is the scalar part and \(\mathbf{q}_{v}\) is the vectorial part, are chosen to represent the probe attitude matrix. Eq. (14) gives the quaternion representation of the attitude matrix \([A]\)[30]
\[[A]=(q_{0}^{2}-\mathbf{q}_{v}^{\top}\mathbf{q}_{v})\ [\mathbb{I}_{3}]+2\mathbf{q}_{v}\mathbf{q}_{v }^{\top}-2q_{0}[\mathbf{q}_{v}]^{\wedge} \tag{14}\]
Thus, the variation of \({}^{\mathbb{C}}\mathbf{R}_{\text{bc}}\) with respect to the variation of the spacecraft pose, i.e., \([A(\mathbf{q}_{C/N})]\) and \({}^{N}\mathbf{r}\), and of the beacon position \({}^{N}\mathbf{r}_{\text{bc}}\) can be written as
\[\delta\,{}^{\mathbb{C}}\mathbf{R}_{\text{bc}}=\underbrace{\begin{bmatrix}\dfrac{\partial\,{}^{\mathbb{C}}\mathbf{R}_{\text{bc}}}{\partial\mathbf{q}}&\dfrac{\partial\,{}^{\mathbb{C}}\mathbf{R}_{\text{bc}}}{\partial\,{}^{N}\mathbf{r}}&\dfrac{\partial\,{}^{\mathbb{C}}\mathbf{R}_{\text{bc}}}{\partial\,{}^{N}\mathbf{r}_{\text{bc}}}\end{bmatrix}}_{[F]}\begin{bmatrix}\delta\mathbf{q}\\ \delta\,{}^{N}\mathbf{r}\\ \delta\,{}^{N}\mathbf{r}_{\text{bc}}\end{bmatrix} \tag{15}\]
where the individual partial derivatives follow from the projection of Eqs. (4) and (5) with the attitude parameterization of Eq. (14). Note that in a real navigation solution the pose could be coupled.
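As a hedged illustration of Eq. (13), the sketch below propagates the pose and beacon-position uncertainty with a finite-difference Jacobian in place of the closed-form partials, perturbing the quaternion component-wise without renormalization (a simplification). The camera matrix and geometry are illustrative; the \(\sigma\) values echo those adopted later in Sec. V.

```python
import numpy as np

def quat_to_dcm(q):
    """Attitude matrix from a scalar-first quaternion, Eq. (14)."""
    q0, qv = q[0], q[1:]
    qx = np.array([[0.0, -qv[2], qv[1]], [qv[2], 0.0, -qv[0]], [-qv[1], qv[0], 0.0]])
    return (q0**2 - qv @ qv) * np.eye(3) + 2.0 * np.outer(qv, qv) - 2.0 * q0 * qx

def project(q, r_sc, r_bc, K):
    """Beacon pixel projection for a given pose and beacon position, Eqs. (4)-(5)."""
    R_h = K @ (quat_to_dcm(q) @ (r_bc - r_sc))
    return R_h[:2] / R_h[2]

def projection_covariance(q, r_sc, r_bc, K, S, h=1e-6):
    """[P] = [F][S][F]^T of Eq. (13), with [F] built by central finite differences."""
    x0 = np.concatenate([q, r_sc, r_bc])          # 10 parameters: q, r, r_bc
    F = np.zeros((2, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = h * max(abs(x0[i]), 1.0)
        xp, xm = x0 + dx, x0 - dx
        F[:, i] = (project(xp[:4], xp[4:7], xp[7:], K)
                   - project(xm[:4], xm[4:7], xm[7:], K)) / (2.0 * dx[i])
    return F @ S @ F.T

K = np.array([[1000.0, 0.0, 512.0], [0.0, 1000.0, 512.0], [0.0, 0.0, 1.0]])
q = np.array([1.0, 0.0, 0.0, 0.0])                # identity attitude (illustrative)
r_sc = np.array([1.0e8, 4.0e7, 0.0])              # spacecraft position [km] (assumed)
r_bc = np.array([1.0e8, 4.0e7, 5.0e7])            # beacon position [km] (assumed)
S = np.diag([0.0] + [(1e-4)**2] * 3 + [(1e5)**2] * 3 + [0.0] * 3)
print(projection_covariance(q, r_sc, r_bc, K, S))
```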
Once the covariance matrix of the beacon position projection is assessed, the associated 3\(\sigma\) uncertainty ellipse is computed. Let \(\lambda_{\text{max}}\) and \(\lambda_{\text{min}}\) be the largest and smallest eigenvalues of \([P]\), respectively, and \(\mathbf{v}_{\text{max}}\), \(\mathbf{v}_{\text{min}}\) their related eigenvectors. The characteristics of the 3\(\sigma\) covariance ellipse can be computed as:
\[a=\sqrt{11.8292\ \lambda_{\text{max}}} b=\sqrt{11.8292\ \lambda_{\text{min}}} \psi=\arctan\left(\frac{\mathbf{v}_{\text{max}_{2}}}{\mathbf{v}_{\text{ max}_{1}}}\right) \tag{23}\]
where \(a\) is the 3\(\sigma\) covariance ellipse semimajor axis, \(b\) the semiminor axis, \(\psi\) the ellipse orientation (angle of the largest eigenvector with respect to the image axis \(\mathbf{C}_{1}\)), and \(\mathbf{v}_{\text{max}_{1}}\), \(\mathbf{v}_{\text{max}_{2}}\) the components of the eigenvector related to the maximum eigenvalue along the \(\mathbf{C}_{1}\) and \(\mathbf{C}_{2}\) directions, respectively. Note that the value 11.8292 is the inverse of the chi-square cdf with 2 degrees of freedom evaluated at 0.9973 (3\(\sigma\)).
The equation of the uncertainty ellipse of the beacon position projection as a function of the angle \(\theta\) is so derived
\[\begin{bmatrix}x\\ y\end{bmatrix}=\begin{bmatrix}\cos\psi&-\sin\psi\\ \sin\psi&\cos\psi\end{bmatrix}\begin{bmatrix}a\cos\theta\\ b\sin\theta\end{bmatrix}+{}^{\mathbb{C}}\mathbf{R}_{\text{bc}_{0}} \tag{24}\]
Eventually, the beacon is identified with the closest spike to the expected beacon position projection contained in the 3\(\sigma\) ellipse.
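The ellipse evaluation of Eq. (23) and the closest-spike selection can be condensed into the following sketch; the covariance matrix and the spike coordinates are made up for the example.

```python
import numpy as np

CHI2_2DOF_3SIGMA = 11.8292           # inverse chi-square cdf, 2 dof, at 0.9973

def ellipse_3sigma(P):
    """Semi-axes a, b and orientation psi of the 3-sigma ellipse, Eq. (23)."""
    eigval, eigvec = np.linalg.eigh(P)             # ascending eigenvalues
    a = np.sqrt(CHI2_2DOF_3SIGMA * eigval[1])
    b = np.sqrt(CHI2_2DOF_3SIGMA * eigval[0])
    psi = np.arctan2(eigvec[1, 1], eigvec[0, 1])   # angle of the largest eigenvector
    return a, b, psi

def identify_beacon(R_expected, P, spikes):
    """Closest spike to the expected projection among those inside the ellipse."""
    a, b, psi = ellipse_3sigma(P)
    c, s = np.cos(psi), np.sin(psi)
    d = (np.asarray(spikes) - R_expected) @ np.array([[c, -s], [s, c]])
    inside = (d[:, 0] / a) ** 2 + (d[:, 1] / b) ** 2 <= 1.0
    if not inside.any():
        return None                                # off-nominal: no spike in the ellipse
    dist = np.linalg.norm(np.asarray(spikes) - R_expected, axis=1)
    dist[~inside] = np.inf
    return int(np.argmin(dist))

P = np.array([[4.0, 1.0], [1.0, 2.0]])             # projection covariance [px^2] (assumed)
spikes = [(410.0, 300.0), (402.5, 296.0), (150.0, 220.0)]
print(identify_beacon(np.array([400.0, 295.0]), P, spikes))
```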
## IV. Algorithm Assessment
The methodology illustrated so far has general applicability. Indeed, it can be adopted to detect any bright beacon, e.g., asteroids and planets, viable for deep-space VBN. Nevertheless, in this work, to test the performance of the IP pipeline, only planet ephemerides are included in the IP pipeline, and only planets are added to the images rendered with the sky-field simulator [18], as the proposed IP procedure is here tested in the framework of the EXTREMA project [31]. Since miniaturized cameras with limited performance are embarked on CubeSats, only planets are bright enough to be detected by the optical sensor and, thus, by the algorithm [4].
Before entering the details of the numerical performances, it is worth providing a qualitative discussion of the possible off-nominal solutions that can be encountered besides the estimation of the correct attitude and the correct identification of the planet. Indeed, despite the high rate of success of the IP pipeline, which depends mostly on the probe position uncertainty as shown in Sec. V, it is important to describe how the off-nominal conditions arise and which heuristic approaches can be adopted to avoid them. During the attitude determination and the planet detection, the algorithm can yield three results:
1. It can provide a solution, which is regarded as correct.
2. It can provide a solution, which is regarded as wrong.
3. It can not converge to a solution.
A more detailed description of these results is provided in the following subsections. First, the nominal and the off-nominal scenarios and the heuristic approaches adopted to prevent the off-nominal scenarios during the attitude determination step are described. Then, the same discussion is performed for the beacon detection step.
### A. Assessment of the Attitude Determination Procedure
The attitude determination step can yield three results:
1. The star identification algorithm converges to the correct solution
2. The star identification algorithm converges to a wrong solution, which is declared when the pointing error is greater than 500 arcsec.
3. The star identification algorithm does not converge to a solution. This may be due to different reasons: there are fewer than three stars in the image; the background noise is too high to identify bright points; or too many spikes are present in the image (usually more than 25% [15]).
To decrease the number of scenarios in which the star asterism is not identified, the heuristic approach described in Sec. III.A.2 is implemented. Instead, the RANSAC algorithm described in Sec. III.A.3 is adopted to prevent star misidentification. Indeed, the RANSAC method may recognize the errors made by the star identification step (non-stellar objects identified as stars or stars labeled with a wrong identifier) by relabeling the misidentified object as a spike. In this way, misidentified stars are not considered by the pipeline when the attitude of the probe is determined. An analysis of the IP pipeline performances when the RANSAC is not applied is reported in [1].
### B. Assessment of the Beacon Detection Procedure
The IP pipeline applied to deep-space images generated by the DART Lab can yield three results: correct, wrong, or no identification of the planet. To assess the correctness of these performances, an external control algorithm, independent of the IP pipeline, is exploited to define whether the planet is visible from the camera or not, i.e., whether the camera can effectively detect it. The planet visibility is assessed by verifying that the planet not only falls within the image borders but also has an intensity higher than the camera detectability level, which, in this work, is set to 120 (out of 255). Since the camera detectability level is established on an arbitrary consideration, this choice may yield two extreme cases: 1) the planet is assessed as not visible by the control algorithm, but it is detected by the IP pipeline anyway, or 2) the planet is assessed as visible by the control algorithm, even if it is highly faint, and the IP pipeline cannot detect it (see off-nominal scenario 1.III.F)).
1. **Planet assessed visible in the camera image**. When the planet is visible in the image, the IP pipeline can yield three results:
I) The planet is correctly identified (see Fig. 4a).

II) The planet is wrongly identified, which occurs when the planet is associated with a wrong spike, i.e., one not corresponding to its correct position. The celestial body is deemed wrongly identified when the distance between the real planet position projection (+) and the spike detected as the position projection of the planet (\(\Box\)) is greater than 5 px (see Fig. 4b).

III) The planet is not detected by the IP algorithm, although it is visible by the camera. In other terms, the planet is present and visible in the image, but the algorithm does not spot it.

Scenarios 1.II) and 1.III) are considered off-nominal scenarios of the IP pipeline. A situation that can lead to scenario 1.II) is:

A) The uncertainty of the spacecraft position is large. Thus, the expected beacon position projection is far from the real one. The uncertainty ellipse increases in size, which may lead to the closest spike to the expected beacon position projection not being the correct one (see Fig. 4b). A numerical study on the probe position uncertainty is presented in Sec. V.
Concerning scenario 1.III), six different events are identified as leading to it:
A) The IP pipeline mistakes the celestial body for a star. The expected position projection of the celestial body falls inside the image, but no spikes are found inside the associated uncertainty ellipse. This scenario occurs because the spike that corresponds to the position projection of
Figure 4: Representations of scenarios 1.I) and 1.II). + represents the real planet position projection, \(\times\) represents the expected planet position projection, and \(\Box\) the found spikes, respectively.
the beacon was wrongly identified as a star by the star identification algorithm. The approach applied to reduce the occurrence of this scenario is the adoption of the RANSAC algorithm, which reassesses misidentified stars as spikes. In this way, if a planet is first identified as a star by the star identification step, the RANSAC algorithm may recognize the error and relabel the object as a spike, among which the planet will then be searched.

B) The spike associated with the planet position projection falls outside the 3\(\sigma\) uncertainty ellipse, and thus it cannot be detected by the algorithm (see Fig. 5a). Note that 3\(\sigma\) represents a probability of 99.7%; thus, there are few possible scenarios in which the celestial body falls outside.

C) The attitude of the probe has been wrongly determined. Thus, planets are present and detectable in the image, but the pipeline is not able to recognize them (see Fig. 5b). This off-nominal scenario is assessed as a failure of the attitude determination algorithm (see Scenario I)).

D) The celestial body is close to the image border. The expected planet position projection is outside the image, but the real one is inside. This scenario can be avoided by observing only the planets in the central part of the FoV.

E) The centroid associated with the planet position projection is not evaluated correctly. In the scenario shown in Fig. 5c, two bright objects are contiguous. Thus, the centroiding algorithm finds only one centroid, instead of two, which is placed between the two objects. This centroid (the red square in the image) is outside the 3\(\sigma\) bounds of the error ellipse, so the planet is not detected.

F) The planet is considered visible from the camera, but no centroid is associated with it. Thus, it cannot be detected by the IP pipeline (see Fig. 5d). This scenario is due to the parameters chosen for the thresholding procedure (Eq. 6) in the IP pipeline. This off-nominal scenario can be avoided during the operational phase by selecting only the brightest planets in the image.
2. **Planet assessed not visible in the camera image.** Three algorithm behaviors are identified when the planet is not visible in the camera image:

I) The beacon is not detected by the IP algorithm, as its expected position projection is not in the image.

II) The beacon is not detected by the IP algorithm. The expected position projection of the planet is in the image, but no centroids are associated with it.

III) The beacon is detected by the IP pipeline, although there are no visible planets.

Only scenario 2.III) is considered off-nominal for the IP pipeline. For the identification of this event, the same method adopted to recognize the second scenario is applied. A scene that may lead to scenario 2.III) is:

A) When the expected planet position projection is close to the image border, it may fall inside the image even though the spike related to the planet position projection is not contained in it. The failure occurs because, in this case, the planet is associated with a wrong spike present in the image.
## V. Results
A quantitative discussion about the performances of the IP pipeline is presented in this section.
### A. Simulation Settings
A Monte Carlo campaign is carried out to assess the performances of the developed algorithm. The extraction of the beacon position projection is run for 1031 scenarios, wherein at least one planet is present, out of the 3000 scenarios analyzed. In each scenario, the position of the spacecraft is selected by randomly sampling a Gaussian distribution with
Figure 5: Representations of the events leading to scenario 1.III). + represents the real planet position projection, \(\times\) represents the expected planet position projection, and \(\Box\) the found spikes, respectively.
\(\sigma_{x}=\sigma_{y}=3\) AU and \(\sigma_{z}=0.07\) AU and centered at the origin of \(\mathcal{N}\). The \(z\)-component of the probe position is chosen in a narrower interval as the spacecraft is supposed to lie close to the ecliptic plane. Similarly, the orientation of the probe is assigned by randomly sampling normal distributions for \(\alpha\), \(\delta\), and \(\phi\) within the \(3\sigma\) intervals \([0,2\pi]\), \([-0.6,0.6]\), and \([0,2\pi]\), respectively. The declination \(\delta\) is chosen in a narrower interval as planets are distributed close to the ecliptic plane.
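One possible reading of this sampling scheme is sketched below (means placed at the interval centers so that the stated \(3\sigma\) bounds are respected); it is illustrative only.

```python
import numpy as np

def sample_probe_pose(rng):
    """Draw one probe position [AU] and pointing triplet (alpha, delta, phi) [rad]."""
    r = rng.normal(0.0, [3.0, 3.0, 0.07])         # sigma_x = sigma_y = 3 AU, sigma_z = 0.07 AU
    alpha = rng.normal(np.pi, 2.0 * np.pi / 6.0)  # 3-sigma interval [0, 2*pi]
    delta = rng.normal(0.0, 0.6 / 3.0)            # 3-sigma interval [-0.6, 0.6]
    phi = rng.normal(np.pi, 2.0 * np.pi / 6.0)    # 3-sigma interval [0, 2*pi]
    return r, (alpha, delta, phi)

rng = np.random.default_rng(0)
poses = [sample_probe_pose(rng) for _ in range(1031)]   # one pose per scenario
```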
Once the probe pose is sampled, the sky-field image is generated by exploiting an improved version of the DART Lab sky simulator presented in [18]. The onboard camera characteristics are reported in Table 1, where F is the f-number, Q\({}_{\rm e}\)\(\times\) T\({}_{\rm lens}\) is the quantum efficiency \(\times\) lens transmission, SEA is the Solar Exclusion Angle, and \(\sigma_{d}\) is the defocus level.
Moreover, although the DART Lab sky-field simulator includes the possibility to simulate the impact of cosmic rays hitting the camera detector [18], this capability is not exploited in this work as the detection of the saturated pixels can be performed outside the proposed IP pipeline. Indeed, cosmic rays are usually easy to identify for under-sampled images because they hit only single pixels *.
Footnote *: [https://www.eso.org/~ohainaut/ccd/CCD_artifacts.html](https://www.eso.org/~ohainaut/ccd/CCD_artifacts.html)
The settings adopted for the first section of the IP pipeline are listed in Table 2, where \(T\) is the tuning parameter of Eq. 6, \(\epsilon\) is the k-vector range error, \(m_{\rm lim}\) is the apparent magnitude threshold of the cataloged stars, \(n_{R}\) is the number of RANSAC samples, and \(t\) is the RANSAC threshold. Moreover, \(\sigma_{q_{0}}=0\) since \(\delta q_{0}=0\) (Eq. 22), \(\sigma_{\mathbf{q}_{v}}\) is set equal to \(10^{-4}\) as a result of a statistical analysis conducted on the error obtained in the attitude determination, and the planet position uncertainty \(\sigma_{\mathbf{r}_{\rm bc}}\) is assumed equal to zero because of the high accuracy with which the planet ephemerides are known. Instead, when other objects are observed to navigate, e.g., asteroids [5] or debris [32] (not in a deep-space application), their associated position uncertainty needs to be taken into account.
### B. Numerical Results
This section presents the numerical performances of the IP pipeline. The performance indexes adopted for the discussion are the angular error for the attitude determination and the beacon projection error for the planet detection step. A sensitivity analysis is performed to study the robustness of the IP pipeline to the initial uncertainty of the probe
| \(T\) [-] | \(\epsilon\) [arcsec] | \(m_{\rm lim}\) [-] | \(n_{R}\) [-] | \(t\) [arcsec] |
| :---: | :---: | :---: | :---: | :---: |
| 20 | 7 | 5.5 | 20 | 15 |

Table 2: **Setup for the Attitude Determination algorithm.**
position \(\sigma_{\mathbf{r}}\) when the latter is set to \(10^{4}\), \(10^{5}\), \(10^{6}\), and \(10^{7}\) km, respectively. Note that these values are chosen following a conservative approach and only with the goal of assessing the robustness of the IP pipeline. Indeed, in deep space, the probe initial position is usually known with an accuracy better than \(10^{4}\) km.
The performances of the IP pipeline are shown in Table 3.
The first part of the algorithm succeeds in the determination of the probe attitude in about 90% of the 1031 scenarios, independently of the probe position uncertainty. The slight variation observed in the wrong attitude determination cases is due to the random behavior of the RANSAC algorithm. In contrast, the percentage of off-nominal scenarios during the planet identification, and, thus, during the extraction of its position projection, greatly depends on the probe position uncertainty. Indeed, when \(\sigma_{\mathbf{r}}\) increases, the expected planet position projection is further from the real position projection, and its uncertainty ellipse is bigger, which makes a planet misidentification likelier (see Scenario 1.II.A)). Moreover, the percentage of off-nominal scenarios in planet detection also depends strictly on the success of the attitude determination. Indeed, when attitude determination provides a wrong solution, planet detection fails consequentially. In Table 3, the fifth column represents the total number of cases of no detection or wrong identification when the attitude determination converges to a solution. The total number of scenes in which attitude determination converges to a solution is 962. Instead, the last column represents the number of cases of no detection or wrong identification of the planet when the attitude determination converges to the correct solution. The failure percentage of the beacon detection procedure when the probe attitude is correctly determined is lower than 1% with a probe position uncertainty up to \(10^{5}\) km. Thus, if a more robust attitude determination procedure is adopted, the total percentage of failure in the beacon detection becomes remarkably lower.
The Gaussian distribution of the planet position projection errors for \(\sigma_{\mathbf{r}}=10^{4}\) km, \(\sigma_{\mathbf{r}}=10^{5}\) km, \(\sigma_{\mathbf{r}}=10^{6}\) km, and \(\sigma_{\mathbf{r}}=10^{7}\) km is shown in Figs. 6a, 6b, 6c, and 6d, respectively. The color bar represents the number of samples lying in each grid interval. When the probe position uncertainty increases, the scenarios where the beacon projection error is over 0.3 pix seem to be filtered out. Indeed, in these cases, the IP algorithm may select a different, wrong, spike as the expected position projection becomes far from the real one (see Scenario 1.II.A)). As a result, the error norm becomes greater than 5 pix, and it is, thus, regarded as a failure of the IP procedure and not represented in pdf distributions.
| \(\sigma_{r}\) [km] | \(\sigma_{\text{ErrRot}}\) | % Wrong Attitude Determination (out of 1031 cases) | % No Attitude Determination (of 1031 cases) | % Wrong Beacon Detection (of 962 cases) | % Wrong Beacon Detection with Right Attitude Determination (of 962 cases) |
| :---: | :---: | :---: | :---: | :---: | :---: |
| \(10^{4}\) | 14.77 | 3.88 (40 cases) | 6.69 (69 cases) | 4.57 (44 cases) | 0.42 (4 cases) |
| \(10^{5}\) | 15.18 | 3.88 (40 cases) | 6.69 (69 cases) | 4.68 (45 cases) | 0.52 (5 cases) |
| \(10^{6}\) | 15.21 | 4.07 (42 cases) | 6.69 (69 cases) | 7.69 (74 cases) | 3.33 (32 cases) |
| \(10^{7}\) | 15.48 | 4.07 (42 cases) | 6.69 (69 cases) | 29.83 (287 cases) | 25.47 (245 cases) |

Table 3: Algorithm Performances
The error ellipses in Fig. 6 are described by the mean and covariance values reported in Table 4. The determinant of the covariance matrix is a representation of the size of the area of the ellipse. Note that the planet position projection is detected with a sub-pixel \(3\sigma\) accuracy for all the four values of \(\sigma_{\mathbf{r}}\). In other terms, the error on the estimated planet position projection is not dependent on the probe position uncertainty but only on the attitude determination and centroids computation errors. The four covariance matrices are characterized by a similar determinant, which is proportional to the area of the ellipse. This feature is one of the advantages of the proposed pipeline for the planet position projection.
Figure 6: **Pdf distribution of the planet position projection errors with 3\(\sigma\) bounds.**
detection in deep-space images. Fig. 7 shows an example scenario where the error is over one pixel. In this case, two bright objects, one of which is a planet, overlap. The centroiding algorithm finds only one centroid, shown with a red square, shifted by more than one pixel from the planet's real position projection, shown with a green cross. Even in this challenging scenario, the IP pipeline can recognize the planet, but the detected planet position projection is affected by a greater error due to this unfortunate geometrical configuration.
## VI. Conclusion
This work proposes a novel and robust beacon detection algorithm for deep-space vision-based navigation. The extracted planet projection in digital images enables the beacon LoS extraction, which paves the way for deep-space navigation by exploiting celestial triangulation.
The algorithm succeeds in the detection of the planet position projection in at least 95 % of the 962 tested scenarios
| \(\sigma_{r}\) [km] | \(10^{4}\) | \(10^{5}\) | \(10^{6}\) | \(10^{7}\) |
| :---: | :---: | :---: | :---: | :---: |
| \([P_{\rm err}]\) [px\({}^{2}\)] | \(\begin{bmatrix}0.008&0.001\\0.001&0.007\end{bmatrix}\) | \(\begin{bmatrix}0.011&0.002\\0.002&0.007\end{bmatrix}\) | \(\begin{bmatrix}0.013&0.006\\0.006&0.011\end{bmatrix}\) | \(\begin{bmatrix}0.007&0.001\\0.001&0.005\end{bmatrix}\) |
| \(\mu_{\rm err}\) [px] | [0.0014; -0.0003] | [0.0001; -0.0034] | [-0.0005; -0.0045] | [-0.0005; -0.0041] |
| \(\det([P_{\rm err}])\) [px\({}^{4}\)] | 5.9e-05 | 7.05e-05 | 9.94e-05 | 7.05e-05 |

Table 4: Mean and covariance of the planet position projection errors when the probe position uncertainty is known with an accuracy of \(10^{4}\), \(10^{5}\), \(10^{6}\), and \(10^{7}\) km.
Figure 7: An example scenario where the planet position projection is affected by an error of more than one pixel.
when the probe position uncertainty is up to \(10^{5}\) km and a solution to the attitude determination problem is found. Since the rate of failure of the beacon detection is strictly connected to the success of the star identification procedure, the former can be reduced if a more robust procedure for star asterism identification is exploited, as proposed in Mortari et al. [33] or in Cole and Crassidis [27], whose success rates are 95.8% and 95%, respectively. Indeed, the failure percentage of the beacon detection procedure when the probe attitude is correctly determined is lower than 1% with a probe position uncertainty up to \(10^{5}\) km.
Moreover, in this work, the IP algorithm is used to detect only planets, but the proposed pipeline is designed to be applicable to other non-stellar objects. For example, by including the asteroid ephemerides and the uncertainty of their position projection, the IP algorithm can be adapted to detect asteroids in the image and use them as beacons to navigate. In addition, it can also be specialized to detect Anthropogenic Space Objects for Earth-orbiting satellite navigation [32].
Moreover, it has been noticed that the size of the planet error ellipse increases as the probe-planet distance decreases. Indeed, the angle between the real planet LoS direction and the expected one is larger when the planet is close to the spacecraft. Thus, a higher uncertainty is associated with the closest planets, which means that a misidentification is likelier to occur for them. On the contrary, [34] proves that the vicinity of the planets to the probe is a valuable feature for increasing the state estimation accuracy. Thus, a trade-off between these two features needs to be performed to select the best pair of beacons to track, whose observations feed the state estimator.
Future analyses should test the performances of the IP pipeline during hardware-in-the-loop simulations. In this context, a camera acquires a star-field image, rendered on a high-resolution screen, and provides the associated matrix of digital counts to the IP algorithm [35]. In addition, future work should focus on the integration of the proposed IP pipeline with orbit determination filters to complete the navigation cycle [4].
## Acknowledgments
This research is part of EXTREMA, a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 864697).
|
2304.04259
|
CLVOS23: A Long Video Object Segmentation Dataset for Continual Learning
|
Continual learning in real-world scenarios is a major challenge. A general
continual learning model should have a constant memory size and no predefined
task boundaries, as is the case in semi-supervised Video Object Segmentation
(VOS), where continual learning challenges particularly present themselves in
working on long video sequences. In this article, we first formulate the
problem of semi-supervised VOS, specifically online VOS, as a continual
learning problem, and then secondly provide a public VOS dataset, CLVOS23,
focusing on continual learning. Finally, we propose and implement a
regularization-based continual learning approach on LWL, an existing online VOS
baseline, to demonstrate the efficacy of continual learning when applied to
online VOS and to establish a CLVOS23 baseline. We apply the proposed baseline
to the Long Videos dataset as well as to two short video VOS datasets, DAVIS16
and DAVIS17. To the best of our knowledge, this is the first time that VOS has
been defined and addressed as a continual learning problem.
|
Amir Nazemi, Zeyad Moustafa, Paul Fieguth
|
2023-04-09T15:33:07Z
|
http://arxiv.org/abs/2304.04259v1
|
# CLVOS23: A Long Video Object Segmentation Dataset for Continual Learning
###### Abstract
Continual learning in real-world scenarios is a major challenge. A general continual learning model should have a constant memory size and no predefined task boundaries, as is the case in semi-supervised Video Object Segmentation (VOS), where continual learning challenges particularly present themselves in working on long video sequences. In this article, we first formulate the problem of semi-supervised VOS, specifically online VOS, as a continual learning problem, and then secondly provide a public VOS dataset, CLVOS23, focusing on continual learning. Finally, we propose and implement a regularization-based continual learning approach on LWL, an existing online VOS baseline, to demonstrate the efficacy of continual learning when applied to online VOS and to establish a CLVOS23 baseline. We apply the proposed baseline to the Long Videos dataset as well as to two short video VOS datasets, DAVIS16 and DAVIS17. To the best of our knowledge, this is the first time that VOS has been defined and addressed as a continual learning problem. The proposed CLVOS23 dataset has been released at [https://github.com/Amir4g/CLVOS23](https://github.com/Amir4g/CLVOS23).
## 1 Introduction
The goal of Video Object Segmentation (VOS) is to accurately extract a target object at the pixel level from each frame of a given video. In general, there are two categories of VOS solutions: semi-supervised or one-shot VOS, in which the ground-truth masks of the target objects are given in at least one frame at inference time, and unsupervised VOS, in which the VOS model knows nothing about the objects.
Among semi-supervised VOS approaches, online VOS approaches [5, 29, 37] update a part of the VOS model based on the evaluated frames and estimated masks. The idea is that videos contain relevant information beyond just the given frame's mask, which a model can exploit by learning during the evaluation process.
Online model learning, _while_ a video is being analyzed, leads to questions regarding how effectively the model learns from frame to frame, particularly when some aspect of the video looks different than what had been given in the ground-truth frame. This leads to the domain of continual learning, which is a type of machine learning where a model is trained on a sequence of tasks, and is expected to continuously improve its performance on each new task while retaining its ability to perform well on previously-learned tasks.
The current state-of-the-art semi-supervised and specifically online VOS methods [5, 29, 37] perform well on VOS datasets with _short_ videos (up to a few seconds or 100 frames in length) such as DAVIS16 [35], DAVIS17 [35], and YouTube-VOS18 [45]. However, most of these methods do not retain their expected performance on long videos, such as those in the Long Videos dataset [24] as shown in the XMem paper [10]. The question of the poor performance of online VOS on long videos has not been investigated in the VOS field, nor addressed through continual learning.
Continual learning methods are typically tested on classification datasets, like MNIST [22], CIFAR10 [21], and Imagenet [13], or on datasets specifically designed for continual learning, such as Core50 [28]. The classification dataset is fed to the model as a sequential stream of data in online continual learning methods [3]. In contrast to the aforementioned datasets and test scenarios, long video object segmentation has numerous real-world applications, such as video summarization, human-computer interaction, and autonomous vehicles [48].
In this paper, we formulate and address the inefficient performance of the online VOS approaches on long videos as an online continual learning problem. Moreover, we propose a new long-video object segmentation dataset for continual learning (CLVOS23), as a much more realistic and significantly greater challenge for testing VOS methods on long videos. As a baseline, we propose a Regularization-based (prior-focused) Continual Learning (RCL) solution to improve online VOS.
## 2 Related work
Semi-supervised VOS methods try to maximize the benefit from whatever information is given, normally the first frame of the video. Early solutions in the literature [6, 34] fine-tuned a pretrained VOS on the given information in a video at evaluation time. In contrast, current state-of-the-art solutions attempt to benefit from previously evaluated frames and make use of an allocated memory to preserve that information from preceding frames in segmenting the current frame. The so-called memory-based VOS approaches [5, 30, 33, 37, 50, 10] are also categorised into two streams, matching-based and online:
* Matching-based VOS methods [11, 18, 25, 27, 40, 44, 47] match the representations of previous frames, stored in memory, with the corresponding features extracted from the current frame.
* Online VOS [5, 6, 26, 37, 42] update (fine-tune) a small model based on the features and estimated masks of preceding frames.
Continual learning [17, 46, 1] is a sequential learning process where the data sequence may come from different domains and tasks; thus, a model is learning from data where distribution drift [16] may occur suddenly or gradually. Catastrophic forgetting is the key challenge in continual learning and it was first defined on neural networks [36, 31] when a neural network model is trained on a sequence of tasks, but has access to the training data for only the current task. In such circumstances, the model learning process is inclined to frequently update those parameters which are heavily influenced by data from the current task, leading to previously-learned tasks to be partially forgotten. The concept of catastrophic forgetting was also defined on other machine learning models [14]. There are three different approaches to catastrophic forgetting: prior-focused (regularization-based) [9, 12], likelihood-focused (rehearsal-based) [4, 7, 43, 49], and hybrid (ensemble) approaches [39, 23].
Elastic Weight Consolidation (EWC) [20] and Memory Aware Synapses (MAS) [2] are two examples of prior-focused methods that employ regularization during training to limit the change of previously learned weights. These methods assume that previously learned task weights can serve as a prior for the current network weights, which are in charge of learning new tasks. Through the use of a penalty term in the loss function, these methods aim to preserve the significant parameters from preceding tasks.
Likelihood-focused (rehearsal) techniques concentrate on minimizing the model's loss function by taking into account historical information. Examples include deep generative replay (DGR) [41] and variational generative replay (VGR) [15], which keep previous data or train generative models on earlier tasks prior to training the new task. Generative Adversarial Networks (GANs) are used in [41] to produce data from each task as samples to be used during the training of a new task.
Finally, as their name implies, hybrid methods seek to combine the benefits of prior-focused and likelihood-focused techniques. As an example, Variational Continual Learning (VCL) [32] combines the posterior from the previous task (i.e., the prior to the current task) with information about the new task (i.e., its likelihood).
The solution proposed in this article is a Regularization-based Continual Learning (RCL) approach, drawing its motivation from EWC [20].
## 3 Problem formulation
An online VOS model \(O_{\Xi}\)[5, 29, 37] is first trained offline to minimize the following loss function and to learn the model parameters \(\Xi\):
\[\Xi=\operatorname*{arg\,min}_{\Xi^{\prime}}\mathcal{L}(O_{\Xi^{\prime}}(F),Y). \tag{1}\]
In Eq. (1), \(\mathcal{L}\) is usually a pixel-wise cross entropy loss [8], \(F\) is an image frame and \(Y\) is the segmented mask in which each pixel of \(F\) is labeled, based on the number of objects in the video sequence. For example, in the case of single-object video, \(Y\) is just a binary foreground/background mask. An online VOS model typically has a U-Net encoder-decoder structure [38], and further comprises the following pieces:
1. A pretrained encoder, extracting feature \(X\) from each frame \(F\);
2. A memory \(\mathcal{M}=\{\mathcal{X},\mathcal{Y}\}\), storing features \(\mathcal{X}\) and their associated labels \(\mathcal{Y}\) / masks. The memory can be updated with input feature \(X_{t}\) and estimated output \(Y_{t}\) at time \(t\);
3. A target model \(\text{C}^{t}\), which is trained on the memory \(\mathcal{M}^{t}\) at time \(t\), and provides information to decoder \(\mathrm{D}\);
4. Pretrained decoder \(\mathrm{D}\) and label encoder \(\mathrm{E}\)[5] networks which obtain temporal information from the target model alongside the encoder's output, to generate a fine-grain output mask \(Y\) from frame \(F\).
The time index \(t\) is based on input time frame. Thus, at time \(t\), \(\text{C}^{t-\Delta_{\text{C}}}\) is updated to \(\text{C}^{t}\) on \(\mathcal{M}^{t}\) where \(\Delta_{\text{C}}\) is the target model update step. Next, the output \(Y_{t+1}\) is estimated from \(\text{C}^{t}\), thus \(\mathcal{M}^{t}\) can be augmented with pairs (\(X_{t+1},Y_{t+1}\)) to create \(\mathcal{M}^{t+1}\). Potentially, we could update \(\mathcal{M}\) at every time frame \(t\), but for practical and computational reasons, we can choose to update the memory every \(\Delta_{\mathcal{M}}\) frames, where \(\Delta_{\mathcal{M}}\) is the memory update step. An analogous target model
update step \(\Delta_{\mathrm{C}}\) is considered for updating \(\mathrm{C}\). This process is depicted in Figure 1.
All of the parameters of the VOS model (\(\Xi\)) are first trained offline on a set of training data containing video frames and annotated labels; however, certain parameters of the model need to be updated online at testing time on the extracted features \(\mathcal{X}\) of evaluated frames and their associated predicted labels \(\mathcal{Y}\) which are kept in the memory \(\mathcal{M}\). In particular, let \(\Theta\) be the parameters of target model \(\mathrm{C}\), consisting mainly of convolutional filter weights, for \(\Theta=\{\theta_{l}\}_{l=1}^{K}\) where \(K\) is the number of target model parameters. It should be emphasized that \(\Theta\) is a rather small subset of the overall parameter set (\(\Xi\)), since the target model \(\mathrm{C}\) is usually a small convolutional neural network for reasons of efficiency. The target model is updated every \(\Delta_{\mathrm{C}}\) frames throughout the video, repeatedly trained on features \(\mathcal{X}\) and associated encoded labels \(\mathrm{E}(\mathcal{Y})\) of stored decoder outputs \(\mathcal{Y}\) from preceding frames. Both \(\mathcal{X}\) and \(\mathcal{Y}\) are stored in memory \(\mathcal{M}\), as shown in Figure 1.
It is worth noting that \(\mathrm{E}\) is a label encoder, generating sub-mask labels from each \(Y\)[5]. For online training of \(\mathrm{C}^{t-\Delta_{\mathrm{C}}}\) at time \(t\), every \(Y\in\mathcal{M}^{t}\) is fed to \(\mathrm{E}\) and we seek a trained model \(\mathrm{C}^{t}\) to learn what \(\mathrm{E}\) specifies from each \(Y\). That is, the target model acts like a dynamic attention model to generate a set of score maps \(\mathrm{E}\big{(}\mathrm{C}^{t}(X)\big{)}\) in order for the segmentation network (D) to produce the segmented output mask \(Y\) associated with each frame \(F\). The loss function \(L\), which is used for the online training of target model \(\mathrm{C}^{t}\) at time \(t\), is
\[L(\Theta^{t},\mathcal{M}^{t})=\sum_{n=1}^{|\mathcal{M}^{t}|}\Big{\|}d_{n}W_{n}\Big{(}\mathrm{E}(Y_{n})-\mathrm{E}\big{(}\mathrm{C}^{t}(X_{n})\big{)}\Big{)}\Big{\|}_{2}^{2}+\sum_{k=1}^{K}\lambda\;\theta_{k}^{t\;2} \tag{2}\]
where \(\theta_{k}^{t}\in\Theta^{t}\) is a parameter of \(\mathrm{C}^{t}\) and \(|\mathcal{M}^{t}|\) is the number of feature and mask pairs \(\{X,Y\}\) in the memory \(\mathcal{M}^{t}\).
Depending on the overall architecture, \(\mathrm{E}\) is an offline / pre-trained label encoder network, as in [5], or just a pass-through identity function, as in [37]. It is worth noting that the influence and effect of \(\mathrm{E}\) is not the focus or interest of this paper.
In Eq. (2), \(W_{n}\) is the spatial pixel weight, deduced from \(Y_{n}\), and \(d_{n}\) is the associated temporal weight decay coefficient. In the loss function \(L(\Theta^{t},\mathcal{M}^{t})\), \(W_{n}\) balances the importance of the target and the background pixels in each frame, whereas \(d_{n}\) defines the temporal importance of the feature and mask pair \((X_{n},Y_{n})\) in memory, typically emphasizing more recent frames [5].
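As a concrete reference, the objective in Eq. (2) can be written in a few lines of PyTorch-style Python. The sketch below assumes the memory holds tuples \((X_{n},Y_{n},W_{n},d_{n})\) and that `C` and `E` are callables returning tensors of matching shape; all names are illustrative placeholders rather than any released implementation.

```python
import torch

def target_model_loss(C, E, memory, theta, weight_decay):
    """Sketch of Eq. (2): temporally and spatially weighted L2 residuals over the memory,
    plus an L2 penalty on the target model parameters theta."""
    loss = torch.zeros(())
    for X_n, Y_n, W_n, d_n in memory:              # pairs stored in M^t with their weights
        residual = E(Y_n) - E(C(X_n))              # residual as written in Eq. (2)
        loss = loss + (d_n * W_n * residual).pow(2).sum()
    reg = sum((p ** 2).sum() for p in theta)       # sum_k theta_k^2
    return loss + weight_decay * reg
```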
## 4 Proposed dataset
As shown in Figure 1, online VOS assumes the change in each video sequence to be gradual, meaning that a constant size of memory \(\mathcal{M}^{t}\) has an adequate capacity to update the target model \(\mathrm{C}^{t-\Delta_{\mathrm{C}}}\) to \(\mathrm{C}^{t}\) for segmenting the current frame \(F_{t+1}\). In the ideal case, where the samples in a video sequence are independent and identically distributed (i.i.d.),
Figure 1: General online VOS framework: The target model \(\mathrm{C}^{t-\Delta_{\mathrm{C}}}\) is updated on memory \(\mathcal{M}^{t}\) to form \(\mathrm{C}^{t}\). The target model \(\mathrm{C}\) is initialized based on the given ground truth mask \(Y_{g}\) and its associated feature \(X_{g}\). The memory \(\mathcal{M}^{t}\) is updated every \(\Delta_{\mathrm{M}}\) time steps (video frames) with new information \((X_{t+1},Y_{t+1})\). The dashed lines show how the target model \(\mathrm{C}\) is updated based on memory \(\mathcal{M}\) every \(\Delta_{\mathrm{C}}\) frames, and the dotted lines show memory update. Our proposed methods focus on the target model component (\(\mathrm{C}\)) of the framework. The frame images used in the figure are taken from the “car” video in the proposed CLVOS23 dataset.
machine learning problems are made significantly easier, since there is then no need to handle distributional drift and temporal dependency in VOS. However, the i.i.d. assumption is not valid for video data.
Figure 2 shows three video sequences from the DAVIS2016 dataset, where we can see that the target objects do not change abruptly across video frames. Objects may change slightly, as in the "cow" video (the longest video in DAVIS2016 at \(104\) frames), and the other two videos (soapbox and motocross-jump) show variations in object appearance; however, the changes are gradual. As a result, for such datasets the identically distributed assumption on frames is usually valid, particularly for short videos. It is also worth mentioning that the YouTube-VOS18 sequences are even shorter than those in DAVIS16 and DAVIS17: the longest video in the validation set of YouTube-VOS18 has \(36\) frames.
The semi-supervised VOS approaches maintain the i.i.d. assumption for video sequences, despite the fact that this assumption is clearly not valid in all video sequences, particularly longer ones. It is precisely for this reason that state-of-the-art semi-supervised VOS models are not expected to perform similarly well on long video datasets [10].
Figure 3 shows the "dressage" video from the Long Videos dataset [24], a dataset consisting of three long sequences with a total of \(7411\) frames. As is clear from Figure 3, the i.i.d. assumption is not at all valid for the "dressage" video because of the \(22\) substantial distribution drifts that take place, a behaviour which is much more closely aligned with the _non_-i.i.d. setting of continual learning. This continual learning-based interpretation of long video sequences is, however, discussed here for the first time in the context of VOS. Since the evaluation label masks are chosen uniformly in the Long Videos dataset, they do not show how well a VOS solution handles sudden shifts in the target's appearance. Instead, we propose annotating the frames used for evaluation based on the distribution drifts that occur in each video sequence.
Figure 3 shows the \(23\) sub-chunks of the "dressage" video of the Long Videos dataset. Each sub-chunk is separated from its previous and next sub-chunks by a distribution drift. Such distribution drifts are common in media-provided videos, for example when an online or offline event, such as a sports competition, is recorded using multiple cameras. As a result, in our proposed dataset we first utilize the following strategy to select candidate frames for annotation and evaluation.
* We select the first frame of each sub-chunk \(S\). It is interesting to see how VOS models handle the distribution drift that happens in the sequence, which corresponds to the arrival of a new task in continual learning.
* The last frame of each sequence is also selected. The ground-truth label mask of the first frame is given to the model, as defined in the semi-supervised VOS scenario.
* One frame from the middle of each sub-chunk is also selected for annotation.
As shown in Figure 3, selecting the annotated frames uniformly would cause some small sub-chunks (\(S_{11},S_{12},S_{17},S_{19}\)) to be missed in the evaluation. For CLVOS23, in addition to the \(3\) videos from the Long Videos dataset, we added the \(6\) new videos described in Table 1. All frames of the \(6\) newly added videos are extracted at a rate of \(15\) Frames Per Second (FPS). To ensure that all distribution drifts are captured, we only annotate the
Figure 2: A set of sub-sampled frames from three videos of the DAVIS16 dataset [35], in each case two rows: actual images (top) and segmented objects (bottom). The first video, “cow”, is the longest in DAVIS16; however, there is no significant change between frames. There is a gradual change in appearance in the other two videos. The given annotated (ground-truth) frame in each video is highlighted in green.
first frame of each sub-chunk in the Long Videos dataset and add these frames to the uniformly selected annotated frames. The proposed dataset has the following advantages over the Long Videos dataset [24].
* It adds \(5951\) frames to the \(7411\) frames of the Long Videos dataset.
* It increases the number of annotated frames from \(63\) in the Long Videos dataset to \(284\).
* It increases the number of videos from \(3\) to \(9\).
* The annotated frames are chosen based on the distribution drifts that occur in the videos (sub-chunks) rather than being uniformly selected.
It is worth noting that for a long VOS dataset, it is very expensive and sometimes unnecessary to annotate all the frames of the videos for evaluation. We utilized the Toronto Annotation Suite [19] to annotate the selected frames for evaluation. The frames of the \(6\) new videos were resized to a height of \(480\) pixels, with the width of each frame scaled proportionally to its height. A link to the dataset is provided.1
Footnote 1: [https://github.com/Amir4g/CLVOS23](https://github.com/Amir4g/CLVOS23)
## 5 Proposed method
A continual learning system should have a limited, constant memory, which is essential for a bounded system working on an infinite sequence of data. Thus, we focus on addressing continual learning in memory-based VOS models, and among them we are interested in online VOS approaches, where part of the model (\(\mathrm{C}\)) is updated on a constant-size memory \(\mathcal{M}\).
The LWL method [5], which is an extension of the well-known FRTM framework [37], benefits from a label encoder network \(\mathrm{E}\) that tells the target model \(\mathrm{C}\) what to learn [5]. In this article, LWL is chosen as the online VOS baseline method. LWL follows the framework structure explained in Figure 1, where the encoder, decoder \(\mathrm{D}\), and label encoder \(\mathrm{E}\) are all trained offline; consequently, the proposed solution does not modify these components.
The proposed regularization-based continual learning (RCL) method is inspired by the EWC [20] algorithm: the network parameters \(\Theta\) of the target model \(\mathrm{C}\) in LWL are regularized to preserve the important parameters and prevent their modification during the target model updating steps. The importance of each parameter \(\theta_{k}\) is associated with the magnitude of its related gradient \(\phi_{k}\) during the preceding update steps. Therefore, during each updating (online learning) step \(t\), the training parameters \(\Theta^{t}\) are regularized by the stored gradient magnitudes \(\Phi=\{\phi_{k}\}_{k=1}^{K}\) and parameters \(\Theta=\{\theta_{k}\}_{k=1}^{K}\) of the target model from preceding updates, which are kept in the regularizer memory \(\mathcal{M}_{R}\).
Thus, for all features \(\mathcal{X}\) and their related output masks \(\mathcal{Y}\) in the memory \(\mathcal{M}^{t}\), the target model \(\mathrm{C}^{t}\) with parameters \(\Theta^{t}\), and the regularizer memory \(\mathcal{M}_{R}^{t-\Delta_{C}}\), the following loss function defined in Eq. (3) is used for training the target model of LWL:
\[L_{R}(\Theta^{t},\mathcal{M}^{t},\mathcal{M}_{R}^{t-\Delta_{C}})= \tag{3}\] \[L(\Theta^{t},\mathcal{M}^{t})+\lambda\sum_{j=1}^{|\mathcal{M}_{ R}^{t-\Delta_{C}}|}\Phi^{j}\Big{|}\Big{|}\Theta^{t}-\Theta^{j}\Big{|}\Big{|}^{2}\]
where the loss function \(L\) is described in Eq. (2), \(\lambda\) controls the regularisation term, and \(|\mathcal{M}_{R}^{t-\Delta_{C}}|\) shows how many pairs of \(\{\Theta,\Phi\}\) have been stored in \(\mathcal{M}_{R}\) so far. The loss function in Eq. (3) is used to update the target model, and it regularizes the target model training to preserve its previously learned knowledge. The proposed RCL method is depicted in Figure 4. As illustrated in this figure, the proposed RCL can be added to any online VOS method and improve its performance as shown in Section 6.
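A minimal sketch of the regularised objective in Eq. (3) follows, in the same Python style as the loss sketch above. Here `regularizer_memory` holds the previously stored pairs \(\{\Phi^{j},\Theta^{j}\}\), read as per-parameter importance weights; this element-wise weighting is one possible reading of Eq. (3), and all names are illustrative rather than the exact implementation.

```python
def rcl_loss(C, E, memory, theta_current, regularizer_memory, lam, weight_decay):
    """Sketch of Eq. (3): the base loss of Eq. (2) plus an EWC-style penalty that anchors
    the current parameters to previously stored parameters, weighted by their importance."""
    base = target_model_loss(C, E, memory, theta_current, weight_decay)
    penalty = 0.0
    for phi_j, theta_j in regularizer_memory:      # stored {Phi^j, Theta^j} pairs in M_R
        for phi, th_old, th_new in zip(phi_j, theta_j, theta_current):
            penalty = penalty + (phi * (th_new - th_old) ** 2).sum()
    return base + lam * penalty
```

After each update step, the gradient magnitudes \(\Phi^{t}\) of the freshly updated \(\mathrm{C}^{t}\) and a copy of \(\Theta^{t}\) would then be appended to \(\mathcal{M}_{R}\), capped at its maximum size.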
It is worth noting that the memory \(\mathcal{M}\) is initialized by the encoded features of the given frame \(F_{g}\) and its provided ground-truth mask \(Y_{g}\) as defined in a semi-supervised VOS scenario.
One drawback of the proposed regularization-based method is that it needs to store the parameter importance \(\Phi^{t}\) and the parameters of the target model \(\Theta^{t}\) after each online updating step \(t\); however, a limited number of stored pairs of \(\{\Phi,\Theta\}\) are enough to regularize the updating step of the target model \(\mathrm{C}^{t}\).
Additionally, for a small target model \(\mathrm{C}\), it is feasible to calculate and store the \(\Phi\) and \(\Theta\) during the updating step; however, it is a real challenge for a larger target model.
\begin{table}
\begin{tabular}{l|c|c|c}
Video name & \#Sub-chunks (tasks) & \#Frames & \#Annotated frames \\ \hline
Dressage & \(23\) & \(3589\) & \(43\) \\
blueboy & \(27\) & \(1416\) & \(47\) \\
rat & \(22\) & \(2606\) & \(42\) \\
car & \(18\) & \(1109\) & \(37\) \\
dog & \(12\) & \(891\) & \(25\) \\
parkour & \(24\) & \(1578\) & \(49\) \\
skating & \(5\) & \(778\) & \(11\) \\
skiing & \(5\) & \(692\) & \(11\) \\
skiing-long & \(9\) & \(903\) & \(19\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Each video sequence’s specifications in the proposed CLVOS23 dataset. The first three videos (Dressage, Blueboy, and Rat) are taken directly from the Long Videos dataset [24] and we added additional annotated ground-truth frames to each of them to make them more appropriate for continual learning.
## 6 Experimental Result
A fixed setup is used for the evaluated methods, with maximum memory sizes of \(N=32\) for LWL and LWL-RCL as suggested in LWL's original publication. For all experiments, the target model \(\mathrm{C}\) is updated for three epochs on the memory \(\mathcal{M}\) in each updating step to have a fair comparison with the baseline. The target model is updated every time the memory is updated, following the proposed setup in [10].
The memory \(\mathcal{M}^{0}\) is initialized by the given ground truth frame \(F_{g}\). In all of the experiments, as suggested in the semi-supervised online VOS baseline (LWL), the information extracted from \(F_{g}\) is preserved and used throughout the evaluation of the other frames in the video sequence. The proposed method follows the same concept: in the regularisation-based LWL, the importance parameters \(\Phi^{0}\) and the parameters \(\Theta^{0}\) obtained from training the target model \(\mathrm{C}\) on \(X_{g}\) and \(Y_{g}\) are kept in \(\mathcal{M}_{R}\).
In the RCL method, \(\lambda\) is set to \(5\) and the maximum size of \(\mathcal{M}_{R}\) is set to \(20\). We validated these hyper-parameters using cross-validation. In LWL, the target model \(\mathrm{C}\) is a small one-layer convolutional neural network. Additionally, the same pretrained decoder \(\mathrm{D}\) and encoder models are used for all experiments of LWL. To measure the effectiveness of the proposed method, consistent with the standard DAVIS protocol [35], the mean Jaccard \(\mathcal{J}\) index, mean boundary \(\mathcal{F}\) scores, and the average of \(\mathcal{J}\&\mathcal{F}\) are reported for all evalu
Figure 3: A subset of frames from the “dressage” video of the Long Videos dataset [24]. The video consists of \(23\) sub-chunks that are separated from each other by significant distributional drifts or discontinuities. The lower (sparse) row in each set shows the annotated frames. The annotations provided by [24] are shown without a border, whereas the annotated masks added via this paper, and made available via the CLVOS23 dataset, are shown with blue borders. The four sub-chunks that are missing from the Long Videos dataset are encircled in red.
ated methods. The speed of each method is reported on the DAVIS16 dataset [37] in units of Frames Per Second (FPS). All experiments were performed using one NVIDIA V100 GPU.
The effectiveness of the proposed regularization-based continual learning method (RCL) is evaluated by augmenting an online VOS framework (LWL); however, the proposed method can be extended to any online VOS method having a periodically-updated network model, as in Figure 1.
Table 2 shows the results of the selected baseline (LWL) and the augmented baseline with the proposed regularization-based method (RCL) on the Long Videos dataset [24] and the proposed CLVOS23 dataset. Here, six experiments with six different memory and target model update step sizes \(\Delta_{\mathrm{C}}\in\{1,2,4,6,8,10\}\) are conducted (\(\Delta_{\mathcal{M}}=\Delta_{\mathrm{C}}\)), where the memory \(\mathcal{M}^{t}\) is updated after each update of the target model \(\mathrm{C}^{t-\Delta_{\mathrm{C}}}\) to \(\mathrm{C}^{t}\). For reference, the means and standard deviations of six runs of the two competing methods (LWL and LWL-RCL) are reported in Table 2. As shown in Table 2, CLVOS23 is a more difficult VOS dataset than the Long Videos dataset, since LWL has lower performance on CLVOS23. Additionally, the proposed RCL improves LWL on CLVOS23 more than on the Long Videos dataset, which shows that CLVOS23 is a more appropriate dataset for evaluating online, continual learning-based contributions.
Furthermore, looking at the standard deviations reported in Table 2, the proposed regularization-based method decreases the standard deviation of reported results with different memory and target model step sizes \(\Delta_{\mathrm{C}}\in\{1,2,4,6,8,10\}\). This indicates that the proposed method is more robust against selecting different frame rates for updating the target model \(\mathrm{C}\).
Table 3 shows the results of the selected baseline on two short VOS datasets (DAVIS16 and DAVIS17). The results show that the proposed RCL method does not have any negative effects on the accuracy of the baseline method (LWL); however, it affects the speed of the baseline since it needs to recalculate the regularization term in Eq. (3) in every epoch of the updating step.
It is worth mentioning that we use the hyper-parameters suggested in the original paper of LWL [5]; nevertheless, these hyper-parameters are not necessarily the best parameters for LWL on long video datasets, and it is possible to improve the performance of the baseline method on the evaluated dataset by making only small changes to LWL. The objective of this article is to provide a contin
Figure 4: The proposed online VOS framework, with the proposed RCL approach: At time \(t\), the process of updating \(\mathrm{C}^{t-\Delta_{\mathrm{C}}}\) on \(\mathcal{M}^{t}\) is regularized by all pairs of the target model’s parameters and their associated importance \(\{\Phi,\Theta\}\) in the regularizer memory \(\mathcal{M}^{t-\Delta_{\mathrm{C}}}_{R}\) as shown in Eq. (3). After updating \(\mathrm{C}^{t-\Delta_{\mathrm{C}}}\) to \(\mathrm{C}^{t}\), \(\mathcal{M}^{t-\Delta_{\mathrm{C}}}_{R}\) is updated using \(\{\Phi^{t},\Theta^{t}\}\) calculated from \(\mathrm{C}^{t}\).
ual learning-based VOS dataset and a method that improves any online VOS approach that struggles with forgetting on long video sequences with abrupt changes in the target object's appearance.
## 7 Conclusion
In this article, we presented a dataset called CLVOS23 to examine the capability of semi-supervised VOS approaches to deal with forgetting what was learned from past frames, and we framed this problem as a continual learning challenge. To help online VOS methods get around memory limitations without sacrificing accuracy, we also proposed adding a regularization-based module to them. The proposed module can be added to any existing online VOS framework to make it more efficient and resistant to the distribution drifts that can happen during long video clips, while keeping or even improving performance accuracy. According to our results, the changes we made to the standard online VOS procedure made it more accurate on long videos. Furthermore, on the short video datasets (DAVIS16, DAVIS17), where the object's appearance does not change suddenly, the proposed methods do not outperform the baselines.
## Acknowledgments
We appreciate the generous support provided by Microsoft Office Media Group and NSERC Alliance for this research project.
|
2307.05417
|
No-resonance conditions, random matrices, and quantum chaotic models
|
In this article we investigate no-resonance conditions for quantum chaotic
and random matrix models. No-resonance conditions are properties on the
spectrum of a model, usually employed as a theoretical tool in the analysis of
late time dynamics. The first order no-resonance condition holds when a
spectrum is non-degenerate, while higher order no-resonance conditions imply
sums of an equal number of energies are non-degenerate outside of permutations
of the indices. The condition is usually assumed to hold for quantum chaotic
models. In this work we use several tests from random matrix theory to
demonstrate that no-resonance conditions are likely to be violated for all
equal sums containing greater than one energy. This is due to the presence of
level-attraction in the spectra after resolving appropriate symmetries. This
result is produced for both a quantum chaotic Hamiltonian and two random matrix
models. We then generalize important bounds in quantum equilibration theory to
a case where the conditions are violated, and to the case of random matrix
models.
|
Jonathon Riddell, Nathan Pagliaroli
|
2023-07-11T16:34:27Z
|
http://arxiv.org/abs/2307.05417v2
|
# No-resonance conditions, random matrices, and quantum chaotic models
###### Abstract
In this article we investigate no-resonance conditions for quantum chaotic and random matrix models. No-resonance conditions are properties on the spectrum of a model, usually employed as a theoretical tool in the analysis of late time dynamics. The first order no-resonance condition holds when a spectrum is non-degenerate, while higher order no-resonance conditions imply sums of an equal number of energies are non-degenerate outside of permutations of the indices. The condition is usually assumed to hold for quantum chaotic models. In this work we use several tests from random matrix theory to demonstrate that no-resonance conditions are likely to be violated for all equal sums containing greater than one energy. This is due to the presence of level-attraction in the spectra after resolving appropriate symmetries. This result is produced for both a quantum chaotic Hamiltonian and two random matrix models. We then generalize important bounds in quantum equilibration theory to a case where the conditions are violated, and to the case of random matrix models.
One of the most ubiquitous observations in many body physics is the connection between the spectral statistics of many body quantum systems and that of random matrices. Quantum systems are not chaotic in the classical sense since unitary time evolution guarantees that the overlap between two states in time is constant. This excludes from quantum systems the classical notion of chaos, in which we observe exponential sensitivity to small differences in initial conditions. However, their spectral statistics behave qualitatively differently depending on whether their corresponding classical limit is integrable or chaotic. If the classical limit is chaotic, the spectral statistics of the quantum Hamiltonian agree with the predictions of Random Matrix Theory (RMT) and we refer to these models as quantum chaotic [1]. The notion of quantum chaos can be extended to quantum systems that do not have a well-defined classical limit [2].
An extremely important property of the spectral statistics of a quantum chaotic Hamiltonian is the presence of level-repulsion amongst neighboring energies. This level-repulsion was first modeled for heavy atomic nuclei by Wigner using Gaussian ensembles of random matrices. Since Wigner's work, it has been established that features of the spectrum of classically chaotic quantum systems are accurately described by various ensembles of random matrices [3, 4, 5, 6, 7]. The connection between the spectrum of quantum chaotic systems and random matrices has been well studied in single particle systems [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], along with many body systems [22, 23, 24, 25, 26, 27, 28, 29, 30], and has recently seen a surge of interest in the case of circuit or periodically driven models [31, 32, 33, 34]. The first to extend Wigner's work were Dyson and Mehta in the series of papers [35, 36, 37, 38, 39]. In particular, Dyson classified the three most immediately relevant ensembles: the Gaussian Unitary Ensemble, the Gaussian Orthogonal Ensemble, and the Gaussian Symplectic Ensemble in what is known as the "threefold way" [40]. Of the most immediate interest to this work is the Gaussian Orthogonal Ensemble (GOE). The Bohigas, Giannoni, and Schmit (BGS) conjecture [41] states that the GOE has the same level-spacing as a wide class of quantum systems with classical limits [42, 43, 44]. Let \(E_{0}\leq E_{1}\leq E_{2}\leq\ldots\) be a sequence of unfolded energy eigenvalues of the GOE; then Wigner surmised that the distribution of consecutive level-spacings \(s_{k}=E_{k+1}-E_{k}\) is
\[p(s)=\frac{\pi s}{2}e^{-\pi^{2}s^{2}/4}. \tag{1}\]
To see how to unfold a spectrum see Chapter 6 of [45] or, for example, [46]. It is important to note that Wigner's surmise is an approximation [47] of the actual distribution, originally derived in [48]. This was further simplified in terms of Painlevé transcendents [49].
In contrast to level-repulsion, if one considers the level-spacing of i.i.d. random variables, not only does one not see repulsion, but rather one sees attraction [50], which has been used as a marker for non-chaotic systems [2]. In particular after unfolding the spacing of such systems, the distribution is Poisson
\[p(s)=e^{-s}. \tag{2}\]
The presence of level-repulsion and GOE spectral statistics is a hallmark test of quantum chaos, while Poisson statistics are associated with integrable or non-chaotic models.
A key consequence of the presence of level-repulsion is that the value of the probability density at zero is zero, meaning that we can assume with high probability that we will not find degeneracies in the quantum chaotic spectrum. This observation is useful, for example, when considering dephasing arguments, which have recently been particularly popular in the quantum equilibration community [51; 52; 53; 54; 55; 56; 57; 58; 59]. If we consider the time-evolution of many dynamical functions under unitary dynamics, time-dependent terms in the series will often appear as the following:
\[z\,e^{-i(E_{m}-E_{n})t}, \tag{3}\]
where \(z\) is a complex number and \(t\) is time. Terms such as these survive the infinite time average if and only if \(E_{m}=E_{n}\). In the case of quantum chaotic Hamiltonians it is a safe assumption that any surviving term would imply that \(m=n\), since we do not expect degeneracy due to the presence of level-repulsion. The cases where \(E_{m}=E_{n}\) and \(m\neq n\) are referred to as _resonances_. However, in general dynamical functions can be more complex with terms such as
\[z\,e^{-i(E_{m_{1}}-E_{n_{1}}+E_{m_{2}}-E_{n_{2}}+\ldots)t}. \tag{4}\]
Such terms can, for example, appear in out-of-time-ordered correlators or other higher order correlation functions [60; 61; 62; 63; 64; 52; 65]. To discuss the terms that survive the infinite time average in equation 4 we introduce the \(q\)th order no-resonance condition.
**Definition 1**.: _Let \(H\) be a Hamiltonian with spectrum \(H=\sum_{j}E_{j}\ket{E_{j}}\bra{E_{j}}\), and let \(\Lambda_{q},\Lambda_{q}^{\prime}\) be two arbitrary sets of \(q\) energy levels \(\{E_{j}\}\). \(H\) satisfies the q no-resonance condition if for all \(\Lambda_{q},\Lambda_{q}^{\prime}\), the equality_
\[\sum_{j\in\Lambda_{q}}E_{j}=\sum_{j\in\Lambda_{q}^{\prime}}E_{j} \tag{5}\]
_implies that \(\Lambda_{q}=\Lambda_{q}^{\prime}\)._
By definition 1, the set of terms that satisfy the q no-resonance condition is the minimum set of terms that survive the infinite time average as in equation 4. Terms that fall outside of definition 1 are referred to as _\(q\)-resonances_. Typically in the literature it is suggested that quantum chaotic Hamiltonians satisfy definition 1 [66; 67; 68]. This greatly simplifies arguments involving infinite time averages in quantum chaotic models. Despite this condition being somewhat common in the literature, studies only test this condition for the \(q=1\) case, where one finds level-repulsion governed by the Wigner-Dyson distribution [2; 27]. As for the \(q=2\) case, an explicit formula is known for the density of states [69], but as far as the authors can tell nothing is known about the level-spacing distribution. However, as we will see, the numerical simulations performed in this paper strongly suggest that for the GOE the \(q=2\) level-spacing distribution is Poisson. In the appendix we numerically demonstrate that \(q=3,4\) also appear Poisson and have level-attraction. We then conjecture that all level spacing distributions for \(q\geq 2\) have level-attraction and appear Poissonian.
## I Spectral statistics for a quantum chaotic Hamiltonian
In this section we first investigate what the spectral statistics look like for a specific quantum chaotic model. In particular we study a Heisenberg type model with nearest and next nearest neighbour interactions.
\[H= \sum_{j=1}^{L}J_{1}\left(S_{j}^{+}S_{j+1}^{-}+\text{h.c.}\right)+ \gamma_{1}\,S_{j}^{Z}S_{j+1}^{Z} \tag{6}\] \[+J_{2}\left(S_{j}^{+}S_{j+2}^{-}+\text{h.c.}\right)+\gamma_{2}S_ {j}^{Z}S_{j+2}^{Z}, \tag{7}\]
where \((J_{1},\gamma_{1},J_{2},\gamma_{2})=(-1,1,-0.2,0.5)\) gives us a non-integrable model. This model has a free limit for \((J_{1},0,0,0)\) and an interacting integrable limit for \((J_{1},\gamma_{1},0,0)\). Recently this model was confirmed to obey the eigenstate thermalization hypothesis [70]. We perform full spectrum exact diagonalization in the maximally symmetric sector of this model. In particular, this matrix conserves the total magnetization \(m_{z}=\sum_{j}S_{j}^{Z}\), and is translation invariant. We choose to work in the sector such that \(\langle m_{z}\rangle=0\) with quasi-momenta \(k=0\). This allows us to further diagonalize the model with the spatial reflection symmetry \(P\) and the spin inversion symmetry \(Z\).
In this section we will focus on the spectral statistics for the cases \(q=1\), as a benchmark, and \(q=2\), the first non-resonance condition that is unexplored in the literature. As we will show in the appendix, the behavior for \(q>2\)
is qualitatively similar to \(q=2\). First, let us establish that our model satisfies the usual tests for quantum chaos in the \(q=1\) case. Perhaps the most common test is to investigate the level spacing distribution \(s_{j}=E_{j+1}-E_{j}\). The act of unfolding allows us to have a universal scale for the comparison of spectra of different Hamiltonians. The distribution of \(s_{j}\) for a quantum chaotic model should be a Wigner surmise. To unfold the spectrum we use Gaussian broadening. Namely we map our energies \(E_{k}\) to \(\epsilon_{k}\) in the following way [46],
\[\epsilon_{k}=N(E_{k}), \tag{8}\]
\[N(E)=\int_{-\infty}^{E}\sum_{k}\frac{1}{\sigma_{k}\sqrt{2\pi}}e^{-\frac{(e-E_ {k})^{2}}{2\sigma_{k}^{2}}}de, \tag{9}\]
where we use the same convention as in [46] and take
\[\sigma_{k}=0.608\alpha\Delta_{k}, \tag{10}\]
where \(\Delta_{k}=(E_{k+\alpha}-E_{k-\alpha})/(2\alpha)\) and we find that \(\alpha=20\) is quite suitable for our spectrum.
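For readers who wish to reproduce the unfolding, Eqs. (8)-(10) translate directly into a few lines of NumPy. The snippet below is a straightforward, unoptimised sketch of the Gaussian-broadening map (with boundary indices clipped); it is not the code used to generate the figures.

```python
import numpy as np
from scipy.special import erf

def unfold_gaussian(energies, alpha=20):
    """Map raw energies E_k to unfolded energies eps_k = N(E_k), following Eqs. (8)-(10)."""
    E = np.sort(np.asarray(energies, dtype=float))
    n = len(E)
    k = np.arange(n)
    lo = np.clip(k - alpha, 0, n - 1)
    hi = np.clip(k + alpha, 0, n - 1)
    delta = (E[hi] - E[lo]) / (hi - lo)      # local mean level spacing Delta_k (indices clipped at the edges)
    sigma = 0.608 * alpha * delta            # Eq. (10)
    # N(E) is a sum of Gaussian cumulative distributions, one centred on each level E_k.
    z = (E[:, None] - E[None, :]) / (np.sqrt(2.0) * sigma[None, :])
    return 0.5 * (1.0 + erf(z)).sum(axis=1)
```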
Fig. 1 demonstrates that our model for \(q=1\) has level-repulsion and appears to have a level spacing distribution well approximated by the Wigner surmise. While this result shows us that our spectrum strongly resembles the predictions of RMT, the unfolding procedure is usually chosen to find such agreement; therefore, it is desirable to perform a test that does not need unfolding. Such a test is given by investigating the distribution of ratios between successive gaps [71; 72]. We introduce the ratios
\[r_{j}=\frac{\min\{s_{j},s_{j+1}\}}{\max\{s_{j},s_{j+1}\}}, \tag{11}\]
which tells us that \(r_{j}\in[0,1]\). We emphasize that the \(s_{j}\) we use here don't need to be unfolded gaps. This test can be done with the model's physical spectrum. For the GOE in [73] it was analytically shown that the distribution of the \(r_{j}\) for \(3\times 3\) matrices is given by
\[p(r)=\frac{27}{4}\frac{r+r^{2}}{(1+r+r^{2})^{\frac{7}{2}}}. \tag{12}\]
If instead our energy levels were independent randomly distributed variables we would instead get level-attraction,
\[p(r)=\frac{2}{(1+r)^{2}}. \tag{13}\]
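The gap-ratio test of equation 11 requires no unfolding and is simple to evaluate numerically; the short sketch below computes the ratios from a raw spectrum, and their mean can be compared with the GOE and Poisson values quoted below. It is an illustrative snippet rather than the code behind the figures.

```python
import numpy as np

def gap_ratios(energies):
    """r_j = min(s_j, s_{j+1}) / max(s_j, s_{j+1}) from the raw (physical) spectrum, equation 11."""
    E = np.sort(np.asarray(energies, dtype=float))
    s = np.diff(E)                                       # consecutive gaps s_j
    return np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])

# Example: np.mean(gap_ratios(spectrum)) is ~0.536 for GOE-like spectra and ~0.386 for Poisson-like ones.
```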
Figure 1: (a) Level spacing \(q=1\), \(L=24\) unfolded data, which looks approximately like a Wigner surmise exhibiting level repulsion. In black we plot the Wigner surmise and in purple we plot the Poisson distribution. (b) Ratio test for spectral statistics at \(L=24\). In black we plot the corresponding GOE distribution.
We see in Fig. 1 (b) that our result experiences level-repulsion, agreeing with the distribution in equation 12.
Next we consider the case for \(q=2\). The spectrum we are now interested in is equivalent to the spectrum of the Hamiltonian,
\[\hat{H}_{2}=\hat{H}\otimes\mathbb{I}+\mathbb{I}\otimes\hat{H}, \tag{14}\]
which has the spectrum \(\Lambda_{k,l}=E_{k}+E_{l}\). This construction introduces an unwanted symmetry in the spectrum of \(\hat{H}_{2}\), namely that \(\Lambda_{k,l}=\Lambda_{l,k}\); that is, the spectrum is invariant under permutations of the individual energies' indices. For \(q=2\) this might be understood as a spatial reflection symmetry for a larger two component non-interacting system. Addressing this symmetry is simple. We only consider unique pairs of \((k,l)\), namely, we take \(l>k\), where we also ignore the portion of the spectrum where \(k=l\). Ignoring \(k=l\) does not appear to significantly alter the results but allows us to eliminate trivial multiples of the \(q=1\) spectrum. In fact, the contribution of the \(k=l\) portion of the spectrum is vanishingly small compared to the total size of our spectrum. We further introduce a new index that orders the spectrum \(\alpha=1,2\dots\) such that \(\Lambda_{\alpha}<\Lambda_{\alpha+1}\). With this new spectrum we can analyze the level spacing and ratio distribution.
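Constructing the symmetry-resolved \(q=2\) spectrum from a computed single-Hamiltonian spectrum is straightforward; the sketch below builds \(\Lambda_{k,l}=E_{k}+E_{l}\) for \(l>k\), sorts it, and can be fed to the gap-ratio routine given earlier. It is only a minimal illustration of the procedure described in the text.

```python
import numpy as np

def pair_sum_spectrum(energies):
    """Build the q=2 spectrum Lambda_{k,l} = E_k + E_l with l > k (k = l terms excluded),
    which removes the trivial permutation symmetry, and return it sorted."""
    E = np.asarray(energies, dtype=float)
    i, j = np.triu_indices(len(E), k=1)      # unique index pairs with j > i, i.e. l > k
    return np.sort(E[i] + E[j])

# Example usage together with the ratio test defined earlier:
# lam = pair_sum_spectrum(spectrum)
# print(np.mean(gap_ratios(lam)))            # expected to approach the Poisson value 2 ln 2 - 1
```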
Fig. 2 indicates that the spectrum of \(\hat{H}_{2}\) experiences level-attraction. This is contrary to the \(q=1\) case, which has level-repulsion. Importantly, this indicates that the spectrum of \(\hat{H}_{2}\) behaves like that of an integrable model, with gaps clustered around \(s=0\). While this does not guarantee violations of the \(q=2\) no-resonance condition, it does make violations more likely. Likewise, we expect a large number of pseudo-violations such that \(s_{j}=\Lambda_{j+1}-\Lambda_{j}\approx 0\), meaning that unless very large (potentially unphysically large) time scales are considered, these violations would appear as resonances in the spectrum. Considering this fact, results such as [66, 67, 68, 74] should be investigated to understand the effects of resonances. In appendix B we demonstrate that the Poisson statistics and level-attraction persist for higher values of \(q\) and conjecture that level-attraction persists for all values of \(q>1\).
One further test we can perform is to examine the actual average value of \(r\) observed in the ratio distribution: \(\langle r\rangle=2\ln 2-1\approx 0.38629436112\) for Poisson systems and \(\langle r\rangle=4-2\sqrt{3}\approx 0.535898384\) for the GOE. Testing this quantity allows us to clearly observe convergence to the predictions of random matrix theory as a function of system size. We see this test in Fig. 3. In the right panel we see the test for \(q=2\), which reveals a strong convergence in agreement with the Poisson predictions. The data at \(L=22\) gives \(\langle r\rangle=0.386294325894\), which confirms seven decimal places of convergence. Therefore, from the perspective of short range correlations in the spectrum we conclude that \(\hat{H}_{2}\) obeys Poisson statistics, and importantly, that the \(q=2\) case experiences level-attraction.
In appendix B, we demonstrate that this level-attraction persists for higher values of \(q\) and speculate that for all \(q>1\) the spectrum must experience level-attraction. In appendix A we repeat our numerical studies but for random matrices, showing that our results from a quantum chaotic Hamiltonian agree with the results of RMT. Importantly our tests here are local tests on the spectrum. It is an open question if the symmetry resolved Hamiltonian \(\hat{H}_{2}\) will still obey Poisson statistics for more complex tests such as investigating the spectral form factor [75, 76]. We leave this question to future work.
We emphasize that the presence of level-attraction does not imply violations of the \(q>1\) no-resonance condition. It does, however, imply the gaps in the spectrum of \(\hat{H}_{2}\) cluster close to zero. If we investigate the probability of finding
Figure 2: (a) Level spacing \(q=2\), \(L=20\) unfolded data, which looks approximately like a Poisson distribution. (b) Ratio test for spectral statistics at \(L=20\) for \(q=2\). In both plots we draw the GOE prediction in black and the independent random variable prediction (Poisson) in purple.
a gap within the range \(0<s<\epsilon\), where \(\epsilon\) is small, we have for the GOE,
\[\int_{0}^{\epsilon}\frac{\pi s}{2}e^{-\pi^{2}s^{2}/4}ds=\frac{1-e^{-\pi^{2}\epsilon^{2}/4}}{\pi}\approx\frac{\pi\epsilon^{2}}{4}-\frac{\pi^{3}\epsilon^{4}}{32}\dots, \tag{15}\]
so we see the probability is proportional to \(\epsilon^{2}\) for small gaps. On the contrary, for the Poisson distribution one obtains something much larger,
\[\int_{0}^{\epsilon}e^{-s}ds=\sinh\epsilon-\cosh\epsilon+1\approx\epsilon- \frac{\epsilon^{2}}{2}\dots, \tag{16}\]
giving us only linear scaling for small gaps. While both probabilities are of course small, the GOE probability is significantly smaller, giving one a significantly stronger case for assuming that definition 1 is satisfied in a chaotic model. In the case of Poisson statistics, one might expect to find one or many gaps that are _essentially_ zero due to level-attraction. Infinite time averages are theoretical tools for which we average over times significantly longer than the Heisenberg time \(\tau_{H}\sim e^{S}\), where \(S\) is the thermodynamic entropy at the appropriate energy \(E\) [67]. The presence of essentially zero gaps will lead to terms \(e^{i(E_{k}-E_{k-1})t}\) which are stationary on time scales proportional to \(\tau_{H}\). Despite the presence of such violators, we expect the set of problematic gaps to be small relative to the total Hilbert space dimension. Since it is likely that some violations, or cases that are indistinguishable from violations of definition 1, are inevitable, especially for \(q>1\), it is instructive to revisit past results keeping in mind that a small number of violations will most likely be present. Below we discuss modifying key results in the field of quantum equilibration theory to accommodate the presence of violations of definition 1.
## II Equilibration and Recurrence
### Physical models
In this section we tackle the problem of equilibration in light of our investigation of the higher order no-resonance conditions and the presence of level-attraction. First, let us review a basic setup. Consider a time-independent system with the Hamiltonian \(\hat{H}\), where we label the energy eigenbasis as \(\hat{H}|E_{k}\rangle=E_{k}|E_{k}\rangle\). For simplicity, we take the spectrum of \(\hat{H}\) to be discrete and finite. We will initialize our system in some pure state
\[|\psi(t=0)\rangle=\sum_{m}c_{m}|E_{m}\rangle. \tag{17}\]
To track equilibration, we study properties of the expectation value of an observable \(\hat{A}\). This observable is general, but we demand that its largest singular value \(||A||\) is independent of system size or saturates to a finite value in the
Figure 3: Plotted data for the convergence of \(\langle r\rangle\) to RMT predictions. In black we plot the GOE predictions and in purple we plot the corresponding Poisson prediction. Left: We see the \(q=1\) data converge to the GOE prediction as a function of system size. Right: We see the \(q=2\) data.
thermodynamic limit. In what follows we will assume our spectrum has level-repulsion, so that we may safely assume,
\[E_{m}=E_{l}\implies m=l \tag{18}\]
If our observable equilibrates, its finite time value \(\langle\hat{A}(t)\rangle=\langle\psi(t)|\hat{A}|\psi(t)\rangle\) must relax to its infinite time average value i.e.
\[\bar{A}=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\langle\hat{A}(t)\rangle dt= \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\sum_{m,n}\bar{c}_{m}c_{n}A_{m,n}e^{i (E_{m}-E_{n})t}dt=\sum_{m}|c_{m}|^{2}A_{m,m}. \tag{19}\]
\(\bar{A}\) is usually written in terms of the diagonal ensemble \(\omega=\sum_{m}|c_{m}|^{2}|E_{m}\rangle\langle E_{m}|\) as \(\bar{A}=\operatorname{Tr}\left(\omega\hat{A}\right)\). A typical quantity to study in quantum equilibration would be the variance of the expectation value around \(\bar{A}\). This was studied and bounded in [74; 77] assuming that the \(q=2\) no-resonance condition was satisfied. The variance is written as
\[\mu_{2}=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\left(\langle\hat{A}(t)\rangle -\bar{A}\right)^{2}dt. \tag{20}\]
It was famously found in [74] that this variance can be bounded by the purity of the diagonal ensemble
\[\mu_{2}\leq||A||^{2}\operatorname{Tr}\left(\omega^{2}\right). \tag{21}\]
Note equation 21 holds as a consequence of the \(q=2\) no-resonance condition holding. The purity of the diagonal ensemble usually decays exponentially fast with respect to the system size (see for example Fig. 2 in [68]). If one assumes higher order \(q\) no-resonance conditions, it was recently found that, for higher moments,
\[\mu_{q}=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\left(\langle\hat{A}(t) \rangle-\bar{A}\right)^{q}dt, \tag{22}\]
a similar bound can be found [68],
\[|\mu_{q}|\leq\left(q||A||\sqrt{\operatorname{Tr}\left(\omega^{2}\right)} \right)^{q}. \tag{23}\]
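The \(q=2\) bound of equation 21 is also easy to check numerically for a small system. Under the \(q=2\) no-resonance condition the time-averaged variance reduces to \(\mu_{2}=\sum_{m\neq n}|c_{m}|^{2}|c_{n}|^{2}|A_{mn}|^{2}\), which the toy script below compares against \(||A||^{2}\operatorname{Tr}\left(\omega^{2}\right)\) for a random Hermitian observable and a random pure state; it is an illustrative sanity check, not code taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# Random normalized state and random Hermitian observable, written in the energy eigenbasis.
c = rng.normal(size=N) + 1j * rng.normal(size=N)
c /= np.linalg.norm(c)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (A + A.conj().T) / 2

p = np.abs(c) ** 2                                    # diagonal ensemble weights |c_m|^2
W = np.outer(p, p) * np.abs(A) ** 2
mu2 = W.sum() - np.trace(W)                           # mu_2 = sum over m != n of |c_m|^2 |c_n|^2 |A_mn|^2
bound = np.linalg.norm(A, 2) ** 2 * np.sum(p ** 2)    # ||A||^2 Tr(omega^2), the bound of equation 21
print(mu2 <= bound)                                   # True: the purity bound holds
```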
In light of section I and the presence of level-attraction for higher order \(q\), these results should be updated to reflect the high probability of there being a violation of the \(q\) no-resonance condition.
**Theorem 1**.: _Suppose we have a model that has violations of the \(q\) no-resonance condition. Then the moments \(\mu_{q}\) can be bounded as_
\[|\mu_{q}|\leq||A||^{q}\left(q^{q}+\frac{\mathcal{N}_{q,L}}{2q}\right)\sqrt{ \operatorname{Tr}\left(\omega^{2}\right)^{q}}, \tag{24}\]
_where \(\mathcal{N}_{q,L}\) is the maximum number of times one \(E_{m}\) appears in violations of the \(q\) no-resonance condition for a given system size \(L\). We call the \(E_{m}\)'s that appear in more than one violation of the resonance condition exceptional violators._
Proof.: Terms that contribute to \(\mu_{q}\) are sums of energies that are equal. Let \(\Lambda_{q}\) and \(\Lambda_{q}^{\prime}\) be sets of indices corresponding to particular energies,
\[\sum_{m\in\Lambda_{q}}E_{m}=\sum_{m\in\Lambda_{q}^{\prime}}E_{m}. \tag{25}\]
The no-resonance condition picks out the trivial set of energies that satisfy this equality, which is when \(\Lambda_{q}=\Lambda_{q}^{\prime}\). These contributions were bounded in [68; 74]. We collect the remaining violations in a set \(\mathcal{S}\) and write,
\[|\mu_{q}|\leq\left(q||A||\sqrt{\operatorname{Tr}\left(\omega^{2}\right)} \right)^{q}+\left|\sum_{\Lambda_{q}\in\mathcal{S}}\prod_{j=1}^{q}\bar{c}_{m_{j }}c_{n_{j}}A_{m_{j},n_{j}}\right|, \tag{26}\]
where we have identified \(\Lambda_{q}\in\mathcal{S}=\{m_{j},n_{j}\}\). The second term can be bounded as follows.
\[\left|\sum_{\Lambda_{q}\in\mathcal{S}}\prod_{j=1}^{q}\bar{c}_{m_{j}}c_{n_{j}}A_{m_{j},n_{j}}\right|\leq||A||^{q}\sum_{\Lambda_{q}\in\mathcal{S}}\prod_{j=1}^{q}|c_{m_{j}}||c_{n_{j}}|. \tag{27}\]
Since all \(|c_{m_{j}}|\) are positive, we may use the inequality of arithmetic and geometric means, giving
\[\leq\frac{||A||^{q}}{2q}\sum_{\Lambda_{q}\in\mathcal{S}}\sum_{j=1}^{q}\left(|c _{m_{j}}|^{2q}+|c_{n_{j}}|^{2q}\right). \tag{28}\]
We know that \(\operatorname{Tr}\left(\omega^{q}\right)=\sum_{m}|c_{m}|^{2q}\). Assuming an individual \(|c_{m_{j}}|^{2q}\) contributes at most \(\mathcal{N}_{q,L}\) times, we have that
\[\leq\frac{||A||^{q}\mathcal{N}_{q,L}}{2q}\operatorname{Tr}\left(\omega^{q} \right). \tag{29}\]
We lastly recall that \(\operatorname{Tr}\left(\omega^{q}\right)\leq\operatorname{Tr}\left(\omega^{2 }\right)^{q/2}\), which completes the proof.
Accommodating the presence of degenerate gaps for the \(q=2\) case has been considered before in [78]. Our bound reads,
\[|\mu_{2}|\leq||A||^{2}\left(1+\frac{\mathcal{N}_{2,L}}{4}\right)\operatorname {Tr}\left(\omega^{2}\right). \tag{30}\]
Instead, one can likewise write [78] as
\[|\mu_{2}|\leq N(\epsilon)||A||^{2}\operatorname{Tr}\left(\omega^{2}\right), \tag{31}\]
where \(N(\epsilon)\) is the maximum number of energy gaps in any interval for \(\epsilon>0\), i.e.
\[N(\epsilon)=\max_{E}|\{(k,l)|\;\;E_{k}-E_{l}\in[E,E+\epsilon)\}|. \tag{32}\]
One can recover the maximum degeneracy of the gaps by considering \(\lim_{\epsilon\to 0^{+}}N(\epsilon)\). In the limit of non-degenerate gaps these bounds are identical, and only differ by a constant factor for a small number of degeneracies in the gaps. Our result might in theory give better constant factors than the result in [78], however \(N(\epsilon)\) is likely a more intuitive quantity and easier to work with numerically.
We next wish to understand the properties of \(\mathcal{N}_{q,L}\), which in practice is challenging to study numerically. The worst scaling it could have is the total number of violations, i.e. \(0\leq\mathcal{N}_{q,L}\leq|\mathcal{S}|\). As we have noted earlier, the presence of level-attraction does not imply \(|\mathcal{S}|>0\). An easy property to understand however is that if \(\mathcal{N}_{q,L}\geq 2\) this implies at the very least that \(\mathcal{N}_{q+1,L}\geq 1\). To see this consider \(q=2\) for an exceptional violator \(E_{m}\) that appears at least twice. We might have \(E_{m}\) as an exceptional violator as,
\[E_{m}+E_{n}=E_{p}+E_{l},\;\;E_{m}+E_{k}=E_{r}+E_{h}. \tag{33}\]
This implies for \(q=3\), a violation of the no-resonance condition is
\[E_{p}+E_{l}+E_{k}=E_{r}+E_{h}+E_{n}. \tag{34}\]
Despite two exceptional violations for \(q=2\) implying at least one for \(q=3\), this does not imply \(\mathcal{N}_{q,L}\) is decreasing in \(q\).
To get a handle on the size of \(\mathcal{N}_{q,L}\) we can attempt to quantify the expected or average behavior of the quantity. First, let us assume we randomly generated the set \(S\). We will assume that the indices which appear are uniformly generated, so each element of \(S\) can be understood to be a tuple of \(2q\) indices, \((m_{1},\dots m_{2q})\). These indices are not necessarily independent. For example, they cannot be equal to each other under our assumptions. Despite this, in the large \(L\) limit this dependence cannot affect results due to the smallness of \(q\) and the corresponding exponential nature of the number of possible indices \(2^{L}\). We can therefore focus on the first index of each tuple \(m_{1}\). Our goal will
be to predict the average number of times \(m_{1}\) ends up being the same index. It can at most appear \(|S|\) times, and thus we wish to compute
\[\langle\mathcal{N}_{q,L}\rangle=\sum_{n=1}^{|S|}np(n), \tag{35}\]
where \(p(n)\) is the probability of the same index appearing \(n\) times. The total number of configurations possible for the first index of each tuple in \(S\) is \(2^{|S|L}\), and therefore we must simply count the number of configurations where \(n\) copies of the same \(m_{1}\) appear. This is given by
\[\binom{|S|}{n}2^{L(|S|-n)}, \tag{36}\]
which gives the following formula for our expected value
\[\langle\mathcal{N}_{q,L}\rangle=\sum_{n=0}^{|S|}\frac{n\binom{|S|}{n}}{2^{Ln} }=\frac{|S|}{2^{L}}(2^{-L}+1)^{|S|-1}. \tag{37}\]
We now have some special limiting cases to consider. Suppose that \(|S|\propto c2^{L}\) for some constant \(c\). Then the expected value \(\langle\mathcal{N}_{q,L}\rangle\) tends to \(c\,e^{c}\) as \(L\) goes to infinity. However, if \(|S|\) has sub-exponential growth, for example if it scales as \(L\), then the expected value goes to zero for large system size as \(\mathcal{O}(L/2^{L})=\mathcal{O}(|S|/2^{L})\). Therefore, in most cases, even with a modest number of violations of the no-resonance condition, we expect \(\lim_{L\to\infty}\mathcal{N}_{q,L}\) to be finite and quite small.
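The limiting behaviour just described can be seen by evaluating equation 37 directly; the snippet below does so for the two scalings of \(|S|\) discussed above and is meant purely as a numerical illustration of the formula.

```python
def expected_N(S_size, L):
    """Evaluate equation 37: <N_{q,L}> = (|S| / 2^L) * (1 + 2^{-L})^(|S| - 1)."""
    x = 2.0 ** (-L)
    return S_size * x * (1.0 + x) ** (S_size - 1)

c = 0.5
for L in (10, 20, 30):
    exp_size = int(c * 2 ** L)    # |S| proportional to 2^L: expectation tends to c * e^c ~ 0.82
    lin_size = L                  # |S| growing only linearly in L: expectation tends to 0
    print(L, expected_N(exp_size, L), expected_N(lin_size, L))
```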
### A Random Matrix Theory Approach
In this section we will show how one could compute \(\mu_{q}\) for the GUE and GOE with an unfolded spectrum in the large \(N\) limit. We can rewrite equation 22 for finite \(T\) as
\[\mu_{q}(T)=\sum_{i_{1},j_{1},\ldots,i_{q},j_{q}}\left(\prod_{k=1}^{q}c_{i_{k}}\overline{c}_{j_{k}}\langle i_{k}|A|j_{k}\rangle\right)\frac{1}{T}\int_{0}^{T}e^{i\sum_{k=1}^{q}(\lambda_{i_{k}}-\lambda_{j_{k}})t}dt, \tag{38}\]
where the eigenvalues are unfolded and drawn from either an \(N\times N\) GUE or GOE distributed matrix. We define its moments as its expectation value over the ensemble.
Define the \(n\)-level spectral form factor as
\[\mathcal{K}_{2n}^{\overline{\beta}}(t)=\frac{1}{N^{2n}}\left\langle\sum_{i_{1},j_{1},\ldots,i_{n},j_{n}}^{N}e^{i\sum_{k=1}^{n}(\lambda_{i_{k}}-\lambda_{j_{k}})t}\right\rangle_{\overline{\beta}},\]
where the subscript \(\overline{\beta}=1\) or \(2\) denotes the GOE and GUE expectation values, respectively. Then we may express the expectation value of \(\mu_{q}\) in terms of its \(q\)-level spectral form factor
\[\langle\mu_{q}(T)\rangle_{\overline{\beta}} =\lim_{N\to\infty}\sum_{i_{1},j_{1},\ldots,i_{q},j_{q}}\left(\prod_{k=1}^{q}c_{i_{k}}\overline{c}_{j_{k}}\langle i_{k}|A|j_{k}\rangle\right)\frac{1}{T}\int_{0}^{T}\left\langle e^{i\sum_{k=1}^{q}(\lambda_{i_{k}}-\lambda_{j_{k}})t}\right\rangle_{\overline{\beta}}dt \tag{39}\] \[=\left(\frac{1}{T}\int_{0}^{T}\mathcal{K}_{2q}^{\overline{\beta}}(t)dt\right)\sum_{i_{1}\neq j_{1},\ldots,i_{q}\neq j_{q}}\left(\prod_{k=1}^{q}c_{i_{k}}\overline{c}_{j_{k}}\langle i_{k}|A|j_{k}\rangle\right)\] (40) \[=\left(\frac{1}{T}\int_{0}^{T}\mathcal{K}_{2q}^{\overline{\beta}}(t)dt\right)(\operatorname{Tr}(A(\rho-\omega)))^{q}. \tag{41}\]
It is also worth noting that this equation is so general that it applies to any random matrix ensemble. Usually the GUE and GOE are of interest, but progress has been made studying the spectral form factor for other matrix ensembles. For example see [79; 80; 81].
The \(q\)-level spectral form factor can be computed explicitly, but it is a computationally heavy task. For example see [82], where it is computed but for ensembles that are not unfolded. In particular, for the GOE and GUE, the 2-level spectral form factor of the unfolded spectrum has a well-known explicit formula in the large \(N\) limit [45]. This leads to the following result.
**Theorem 2**.: _For any fixed \(T\) greater than zero, for the GOE and GUE the expectation value of \(\mu_{2}(T)\) goes to zero as \(1/N^{2}\) in the large \(N\) limit. Furthermore, if \(T\) goes to infinity at the same rate as \(N\) (i.e. \(N=T\)) then \(\mu_{2}(T)\) goes to zero as \(1/T\)._
Proof.: From [45; 81], we know that for large \(N\) the spectral form factors can be approximated by
\[\mathcal{K}_{2}^{1}(t)\approx\left\{\begin{array}{ll}\frac{4t}{\pi N^{2}}+\frac{2t}{\pi N^{2}}\ln(1+\frac{4t}{\pi N})&\text{if }0\leq t\leq\frac{\pi N}{2}\\ \frac{2}{N}+\frac{2t}{\pi N^{2}}\ln\left(\frac{\frac{4t}{\pi N}+1}{\frac{4t}{\pi N}-1}\right)&\text{if }t\geq\frac{\pi N}{2}\end{array}\right. \tag{42}\]
and
\[\mathcal{K}_{2}^{2}(t)\approx\left\{\begin{array}{ll}\frac{2t}{\pi N^{2}}& \text{if }0\leq t\leq\frac{\pi N}{2}\\ \frac{1}{N}&\text{if }t\geq\frac{\pi N}{2}\end{array}\right.. \tag{43}\]
Clearly, the first part of every piecewise function will dominate for large \(N\), thus completing the first claim.
Next, set \(T=N\). Taking the time averages of the above quantities, we get
\[\frac{1}{T}\int_{0}^{T}\mathcal{K}_{2}^{1}(t)dt\approx\frac{1}{T}\left(\frac{ 3}{2\,\pi}-\frac{\pi}{16}\ln\left(1+\frac{4}{\pi}\right)+\frac{1}{\pi}\ln\left( 1+\frac{4}{\pi}\right)+\frac{3\,\pi}{32}+\frac{1}{4}\right) \tag{44}\]
and
\[\frac{1}{T}\int_{0}^{T}\mathcal{K}_{2}^{2}(t)dt\approx\frac{1}{\pi\,T}. \tag{45}\]
This proves the second claim.
As we demonstrate in appendix A, the spectrum of the random matrix Hamiltonian likewise experiences level-attraction for \(q\geq 2\). However, despite the presence of level-attraction, the above RMT result indicates that we should still expect \(\mu_{q}\to 0\), indicating equilibration on average of our observable.
## III Conclusion
In this work we have explored spectral statistics of chaotic Hamiltonians, namely the statistics surrounding sums of energies. We found that despite being chaotic, sums of energies displayed Poisson statistics instead of Wigner-Dyson statistics. This was demonstrated numerically for both a chaotic spin Hamiltonian and the GOE. The presence of level-attraction leads one to believe that accounting for potential degeneracies or "resonances" in infinite time averages of some dynamical quantities is necessary. We applied this observation to the theory of equilibration, where we generalized known bounds to accommodate degeneracies. Assuming the number of degeneracies is not exponentially large in system size, we demonstrated that the bounds can be easily generalized to accommodate the presence of resonances. We further used techniques from RMT to prove that, for the GOE, moments of equilibration go to zero in the thermodynamic limit.
## IV Acknowledgements
J.R. would like to thank Bruno Bertini, Marcos Rigol and Alvaro Alhambra for fruitful conversations. J.R. would like to extend special thanks in particular to Bruno who gave valuable feedback at various stages of the project. J.R. acknowledges the support of Royal Society through the University Research Fellowship No. 201101. N.J.P. acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
|
2305.14363
|
Benchmarking the human brain against computational architectures
|
The human brain has inspired novel concepts complementary to classical and
quantum computing architectures, such as artificial neural networks and
neuromorphic computers, but it is not clear how their performances compare.
Here we report a new methodological framework for benchmarking cognitive
performance based on solving computational problems with increasing problem
size. We determine computational efficiencies in experiments with human
participants and benchmark these against complexity classes. We show that a
neuromorphic architecture with limited field-of-view size and added noise
provides a good approximation to our results. The benchmarking also suggests
there is no quantum advantage on the scales of human capability compared to the
neuromorphic model. Thus, the framework offers unique insights into the
computational efficiency of the brain by considering it a black box.
|
Céline van Valkenhoef, Catherine Schuman, Philip Walther
|
2023-05-15T08:00:26Z
|
http://arxiv.org/abs/2305.14363v1
|
# Benchmarking the human brain against computational architectures
###### Abstract
The human brain has inspired novel concepts complementary to classical and quantum computing architectures, such as artificial neural networks and neuromorphic computers, but it is not clear how their performances compare. Here we report a new methodological framework for benchmarking cognitive performance based on solving computational problems with increasing problem size. We determine computational efficiencies in experiments with human participants and benchmark these against complexity classes. We show that a neuromorphic architecture with limited field-of-view size and added noise provides a good approximation to our results. The benchmarking also suggests there is no quantum advantage on the scales of human capability compared to the neuromorphic model. Thus, the framework offers unique insights into the computational efficiency of the brain by considering it a black box.
## Introduction
The first mathematical model of computation as defined by Alan Turing in 1936 was inspired by human calculators [1]. In subsequent decades, the brain continued to serve as an inspiration for the development of computational models and architectures [2, 3, 4, 5] and in turn, these technologies were used to model brain function [6, 7]. This recurring cycle has led in particular to the development of artificial neural networks [8] and neuromorphic hardware [9], on which machine-learning algorithms complete human-like tasks, from image recognition to natural language processing. The architecture and function of neuromorphic computers are based on neurons and synapses in which processing and memory are collocated. Due to the parallel processing and scalability inherent to neuromorphic computing, large numbers of neurons can operate simultaneously [10]. Nevertheless, it remains an open question how well the information processing in the human brain performs, and if this can be described within our current frameworks of artificial intelligence, computability and algorithms [11, 12, 13].
In order to compare the computational power of different computer architectures, it is useful to study their resource requirements in terms of time and space for solving a given computational problem, without specific assumptions regarding the underlying architecture or any algorithms. Of particular interest is how the resource requirements change when the size of the computational problem, also referred to as its computational complexity, increases [14]. This relation can be expressed as a mathematical relationship, which in combination with the type of computational problem and the type of computer that solves the problem determines its computational class. For example, the complexity class 'polynomial time' (**P**) contains all decision problems that can be solved by classical computers in polynomial time, and the 'non-deterministic polynomial time' class (**NP**) contains all decision problems that can be verified, but not necessarily solved, in polynomial time by classical computers.
Neuromorphic devices process information in distinctly different ways to classical [15] and quantum computers [16], which makes it more difficult to define the computational complexity of neuromorphic algorithms [17]. (A comparison of essential features of classical, quantum and neuromorphic architectures is provided in Table 2 in the Supplementary information.) To be more specific, let us consider the resource requirements for solving the computational problem of unstructured search, which aims to find the unique input that matches a particular output value for an unknown function. The simplest algorithm
|
2307.15604
|
Integrated Digital Reconstruction of Welded Components: Supporting
Improved Fatigue Life Prediction
|
In the design of offshore jacket foundations, fatigue life is crucial.
Post-weld treatment has been proposed to enhance the fatigue performance of
welded joints, where particularly high-frequency mechanical impact (HFMI)
treatment has been shown to improve fatigue performance significantly.
Automated HFMI treatment has improved quality assurance and can lead to
cost-effective design when combined with accurate fatigue life prediction.
However, the finite element method (FEM), commonly used for predicting fatigue
life in complex or multi-axial joints, relies on a basic CAD depiction of the
weld, failing to consider the actual weld geometry and defects. Including the
actual weld geometry in the FE model improves fatigue life prediction and
possible crack location prediction but requires a digital reconstruction of the
weld. Current digital reconstruction methods are time-consuming or require
specialised scanning equipment and potential component relocation. The proposed
framework instead uses an industrial manipulator combined with a line scanner
to integrate digital reconstruction as part of the automated HFMI treatment
setup. This approach applies standard image processing, simple filtering
techniques, and non-linear optimisation for aligning and merging overlapping
scans. A screened Poisson surface reconstruction finalises the 3D model to
create a meshed surface. The outcome is a generic, cost-effective, flexible,
and rapid method that enables generic digital reconstruction of welded parts,
aiding in component design, overall quality assurance, and documentation of the
HFMI treatment.
|
Anders Faarbæk Mikkelstrup, Morten Kristiansen
|
2023-07-28T15:04:22Z
|
http://arxiv.org/abs/2307.15604v1
|
# Integrated Digital Reconstruction of Welded Components: Supporting Improved Fatigue Life Prediction
###### Abstract
In the design of offshore jacket foundations, fatigue life is crucial. Post-weld treatment has been proposed to enhance the fatigue performance of welded joints, where particularly high-frequency mechanical impact (HFMI) treatment has been shown to improve fatigue performance significantly. Automated HFMI treatment has improved quality assurance and can lead to cost-effective design when combined with accurate fatigue life prediction. However, the finite element method (FEM), commonly used for predicting fatigue life in complex or multi-axial joints, relies on a basic CAD depiction of the weld, failing to consider the actual weld geometry and defects. Including the actual weld geometry in the FE model improves fatigue life prediction and possible crack location prediction but requires a digital reconstruction of the weld. Current digital reconstruction methods are time-consuming or require specialised scanning equipment and potential component relocation. The proposed framework instead uses an industrial manipulator combined with a line scanner to integrate digital reconstruction as part of the automated HFMI treatment setup. This approach applies standard image processing, simple filtering techniques, and non-linear optimisation for aligning and merging overlapping scans. A screened Poisson surface reconstruction finalises the 3D model to create a meshed surface. The outcome is a generic, cost-effective, flexible, and rapid method that enables generic digital reconstruction of welded parts, aiding in component design, overall quality assurance, and documentation of the HFMI treatment.
3D scanning, point cloud registration, post-weld treatment, quality assurance, FEM modelling
## I Introduction
When designing jacket foundations for the offshore industry, fatigue life is commonly a determining factor. Several methods for post-weld treatment have been proposed that significantly improve the fatigue performance of welded joints. One of these is high-frequency mechanical impact (HFMI) treatment, which has proven effective throughout the literature, outperforming conventional methods such as burr-grinding [1]. HFMI treatment essentially works by locally hammering the weld toe at a frequency exceeding 90 Hz. Smoothing the transition between weld and base material reduces the stress concentration, while the plastic deformation of the weld toe introduces compressive residual stress, thereby counteracting the tensile stresses that cause fatigue.
However, HFMI treatment must adhere to strict guidelines from the International Institute of Welding (IIW) to yield the expected fatigue life improvements. The IIW guidelines [2] include geometrical requirements for the resulting treatment groove, such as width, depth and placement of the groove, which can be difficult for a human operator to adhere to when operating a heavy and vibrating tool for extended periods. Additionally, operator bias cannot be omitted, which can further affect the resulting fatigue performance of the treated weld [3].
As a result, certification agencies like DNV [4] do not recommend using HFMI treatment at the design stage. To overcome this issue, [5] proposed a methodology for performing automated HFMI treatment to improve quality assurance. The proposed methodology used an industrial manipulator with input from a 3D scanner to identify the areas requiring treatment. The study showed a significant reduction in treatment variance, evaluated based on the quantitative quality metrics proposed by the IIW, i.e. groove depth, width and placement. However, the proposed method for quality assurance only evaluates the geometry of the treatment groove and is thus unable to provide any fatigue life prediction. Accurate fatigue life prediction, in turn, can support a more cost-effective design through material reduction, as the expected fatigue improvement can be documented.
In the case of complex or multi-axially loaded joints, the finite element method (FEM) is commonly applied for fatigue life prediction. However, typically FEM modelling is based on a simple and generic CAD representation of the weld, which does not consider the actual weld geometry and weld defects, which are well-known to influence the stress concentrations [6]. To improve the FE model, the actual weld geometry can be included in the model, i.e. by digitally reconstructing the welded geometry using 3D scanning. This method has been
shown to improve the fatigue life prediction and the prediction of the possible crack location [6][7].
However, scanning the welded joints for digital reconstruction is often manual and time-consuming or requires specialised scanning tools and possible relocation of the part, making it unfit for use in a production setup [8]. Furthermore, the resulting point cloud should have the highest possible resolution and accuracy, preferably below 0.1 \(\mathrm{mm}\), to ensure that the areas of potential stress concentrations are captured in the reconstructed geometry. Therefore, handheld techniques are unsuitable, as these commonly offer a maximum achievable spatial resolution of up to 0.2 \(\mathrm{mm}\), even when using external targets [8].
Therefore, this work proposes a generic framework for cost-effective, flexible, and rapid digital reconstruction of welded components for enhanced FEM modelling. The overall aim is to support improved fatigue life prediction for a more cost-effective design of welded components. The proposed framework is developed to be applied as an integrated part of the setup in which automated post-weld treatment is performed, such as the setup in [5]. The suggested approach relies only on an industrial manipulator with a line scanner to comprehensively scan the sample from multiple angles to produce a complete surface representation. Prevalent image processing techniques and simple filtering techniques combined with non-linear optimisation are applied for aligning and merging the overlapping point clouds. This is followed by a screened Poisson surface reconstruction to create the full watertight 3D representation suitable for FEM modelling. Note that the proposed framework is not limited to setups for post-weld treatment but could be used in a setup for automated welding.
The paper is organised as follows: Section II presents the processing steps and the main principles behind the framework. The experimental setup for validating the framework is presented in Section III, while the results are presented in Section IV. Lastly, the concluding remarks and suggestions for future works are presented in Section V.
## II Method
The proposed framework is based on small-scale S355 dog bone samples of welded joints for the offshore industry, as shown in Fig. 1. However, the framework can be applied to any component that is suitable and feasible to scan. A complete and accurate representation of the entire sample, including any misalignment or distortion, is required to achieve the most accurate fatigue life prediction. In particular, a high resolution around the weld area is essential, since any contours excessively smoothed in the reconstruction will significantly affect the predicted fatigue life. Therefore, it is necessary to scan the sample from a multitude of orientations and positions, resulting in multiple point clouds that must be aligned, merged, and meshed to be suitable for use in FE modelling.
Estimating the spatial transformations that align the point clouds is known as point cloud registration. Point cloud alignment commonly consists of two stages: a rough initial alignment and a secondary fine-tuning [9]. The rough initial alignment is usually performed based on the scanner's position during acquisition, whereas the fine-tuning of the spatial transformation can roughly be divided into targetless registration and target-based registration.
The most well-known algorithm for targetless point cloud registration is perhaps the ICP (iterative closest point) proposed by [10]. However, as these methods commonly rely on matching unique point pairs in the point clouds, they are not well suited for texture-less and plain surfaces and suffer from a lack of robustness.
Instead, the target-based approach uses artificial targets that have been well distributed and placed in a unique pattern on the object, such that several targets are visible from all scanning angles. Identifying and locating the target pairs in the point clouds makes it possible to compute the spatial transformation between them. In this work, the target-based approach has been chosen, as it is beneficial when scanning texture-less objects that do not have distinctive features to be used for aligning the point clouds, and results in an accurate and robust registration [11][12]. Moreover, the targets can be placed in areas that do not affect results from the FEM modelling.
Below, the framework for the digital target-based reconstruction of welded components is presented in further detail.
### _Selection of targets_
The targets can be of various types, including spheres and high-contrast plane paper targets. In this case, circular paper stickers of equal size are chosen as target points. The placement of the targets is an important aspect, as studies have shown that non-uniformly placed targets in a line at equal height can result in inaccuracies in the computed spatial transformations [11]. The chosen targets are, therefore, placed uniformly on the fixture in a semi-random pattern with a density that ensures that a minimum of 3 targets are present in two overlapping point clouds. The targets (circles) can be seen on the sample in Fig. 1.
Fig. 1: The figure illustrates the transformation of a real-world welded dog bone sample into an FE model, used to predict fatigue life and potential crack location. The depicted targets aid in aligning the point clouds to form a comprehensive surface representation of the sample. Note that the red areas of the FEM model indicate the highest stresses and, consequently, the most probable locations for cracks.
### _Pre-processing of the point cloud_
Initially, noisy data is removed from the scan to achieve a clean point cloud. This is primarily done through a statistical approach, where the mean Euclidean distance of each point to a region of its neighbouring points is computed. If this distance exceeds a threshold defined by the global mean and standard deviation of the neighbour distances, the point is considered an outlier and removed [13].
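As an illustration of this filtering step (not code from the paper), the following sketch assumes the scan is available as an \(N\times 3\) NumPy array and uses a KD-tree for the neighbourhood queries; the neighbourhood size and the standard-deviation multiplier are placeholder values.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k_neighbors=20, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours exceeds
    the global mean of that quantity by more than std_ratio standard deviations."""
    tree = cKDTree(points)
    # k_neighbors + 1 because the nearest "neighbour" of a point is itself
    dists, _ = tree.query(points, k=k_neighbors + 1)
    mean_dist = dists[:, 1:].mean(axis=1)            # per-point mean neighbour distance
    threshold = mean_dist.mean() + std_ratio * mean_dist.std()
    keep = mean_dist <= threshold
    return points[keep], keep

# Usage: 'points' is an (N, 3) array holding one scan
points = np.random.rand(10_000, 3)                   # placeholder data
clean, mask = remove_statistical_outliers(points)
print(f"kept {mask.sum()} of {len(points)} points")
```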
### _Alignment and merging_
The initial coarse alignment is based on the defined robotic scanning trajectories, while the subsequent fine adjustment is based on the approach proposed by [14]. However, the approach proposed in this work makes use of the captured backscatter from the laser to create a 2D grey-scale image of the scanned surface, in which the circular targets are identified and located using the Hough transform [15]. Note that a dictionary is created such that a transformation back to 3D is possible. The principle is essentially to pair matching targets between overlapping scans, i.e., by locating the same circles in each overlapping scan, it is possible to compute the spatial transformation between the scans.
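A minimal sketch of the target-detection step, assuming the backscatter has already been converted to an 8-bit grey-scale image; it uses OpenCV's circular Hough transform, and the detector parameters shown are illustrative rather than values from the paper.

```python
import cv2
import numpy as np

def locate_targets(backscatter_img):
    """Detect the circular sticker targets in an 8-bit grey-scale backscatter
    image and return their (x, y) centres in pixel coordinates."""
    img = cv2.medianBlur(backscatter_img, 5)          # suppress speckle noise
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                               param1=100, param2=25, minRadius=5, maxRadius=25)
    if circles is None:
        return np.empty((0, 2))
    # circles has shape (1, N, 3) with columns (x, y, radius)
    return circles[0, :, :2]

# The detected pixel centres are then mapped back to 3D points via the
# pixel-to-scan-point dictionary described above.
```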
The approach consists of two main steps:
1. **Filtering outlier circles:** The first step is to filter out any target not present in overlapping point clouds. The approach is to compute the Euclidean distance between all targets in all possible combinations in each scan. Next, the list of computed distances of each scan is compared to each other. If a unique distance (within a specified tolerance) is found, and the associated target does not have other matches, it is omitted from the list of potential matches. The principle is illustrated in Fig. 2.
2. **Matching circles:** The second step is to match the targets between the scans. As the point clouds are rigid and an initial coarse alignment has been performed, it is possible to assume a pure translation. Therefore, circles are matched by ranking the Euclidean distances between the overlapping point clouds. The distance with the most similarities is utilised as the reference for selecting matching target pairs. The principle is illustrated in Fig. 3. It should be noted that the assumption of pure translation is only applied when matching the targets and not when computing the transformation.
With matched target pairs, it is possible to compute the spatial transformation of the chain of overlapping point clouds with a point-to-point approach based on [16]. As such, the transformation is determined through a non-linear optimisation, which minimises the Euclidean distance between the target pairs. After alignment, the point clouds must be merged to create a homogeneous resolution. This is done by applying a box grid filter that merges overlapping points. The entire process is performed automatically. The use of prevalent and simple image processing operations removes the dependency on expensive commercial software solutions.
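The paper determines the transformation by non-linear optimisation following [16]. As an illustration of the same point-to-point objective, the sketch below uses the closed-form SVD (Kabsch) solution that minimises the summed squared Euclidean distance between matched target centres; function names and the synthetic test data are mine, not the paper's.

```python
import numpy as np

def rigid_transform(src, dst):
    """Rotation R and translation t minimising sum ||R @ src_i + t - dst_i||^2
    for matched 3D target pairs (Kabsch / orthogonal Procrustes solution)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # exclude reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Usage: src/dst are (M, 3) arrays of matched target centres (M >= 3); the
# estimated (R, t) is then applied to every point of the source scan.
src = np.random.rand(5, 3)
t_true = np.array([0.1, -0.2, 0.05])
dst = src + t_true                                     # pure-translation test case
R, t = rigid_transform(src, dst)
print(np.allclose(src @ R.T + t, dst))                 # True
```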
### _Removal of unwanted objects_
A connectivity analysis segments the different objects in the point cloud to remove unwanted objects. Since the targets have been placed on the sample, it is possible to use this information to determine which segmented objects are part of the sample. Any unwanted objects, such as parts of the fixture, can then be removed from the scan. In other cases, the known size of the geometry could also have been used as a feature to identify it.
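A possible realisation of this step (an assumption, not the paper's implementation) is to cluster the merged cloud into connected components with DBSCAN and keep only the components containing one of the known target positions; `eps` and `min_points` are placeholder values.

```python
import numpy as np
import open3d as o3d

def keep_sample_only(pcd, target_points_3d, eps=1.0, min_points=50):
    """Cluster the merged cloud into connected components (DBSCAN) and keep
    only the components that contain one of the known target positions."""
    labels = np.asarray(pcd.cluster_dbscan(eps=eps, min_points=min_points))
    pts = np.asarray(pcd.points)
    keep_labels = set()
    for tgt in np.asarray(target_points_3d):
        nearest = np.argmin(np.sum((pts - tgt) ** 2, axis=1))
        if labels[nearest] >= 0:                      # label -1 marks noise
            keep_labels.add(labels[nearest])
    keep_idx = np.where(np.isin(labels, list(keep_labels)))[0]
    return pcd.select_by_index(keep_idx)
```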
### _Surface reconstruction_
The aligned and merged point cloud is automatically imported into Meshlab using command-line integration. Meshlab is an open-source program for processing and editing 3D meshes. In Meshlab, the normals of the point cloud are computed using neighbouring points. This is followed by a screened Poisson surface reconstruction, which meshes the point cloud and creates a watertight surface. Lastly, the reconstructed representation is exported as a .stl file, which can be imported into the FEM software of choice, such as ANSYS.
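The paper performs this step in MeshLab; as a self-contained illustration, the sketch below performs the equivalent normal estimation and screened Poisson reconstruction with Open3D. The file names are placeholders, and depth=12 mirrors the reconstruction depth quoted in Section IV.

```python
import open3d as o3d

# Load the aligned, merged and denoised point cloud (file name is a placeholder)
pcd = o3d.io.read_point_cloud("merged_weld_scan.xyz")

# Normals from 100 neighbouring points, mirroring the MeshLab step described above
pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=100))
pcd.orient_normals_consistent_tangent_plane(100)

# Screened Poisson reconstruction; depth controls the level of detail
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=12)
mesh.compute_triangle_normals()

# Export a watertight mesh for the FEM software of choice
o3d.io.write_triangle_mesh("weld_reconstruction.stl", mesh)
```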
## III Experimental setup
The experimental setup is similar to the setup presented in [5], designed to perform automated post-weld treatment. The setup, illustrated in Fig. 4, consists of two main components: a KUKA KR60-3 industrial manipulator and a Wenglor MLWL131 line scanner. All computations are performed in MATLAB 2019b and Meshlab v.2020.07 with an Intel Core i7 (i7-9750H) CPU @ 2.6 GHz.
Fig. 2: The principle behind filtering out targets. Point cloud 1 and point cloud 2 depict overlapping areas. The blue markers indicate the centre positions of the located targets. The green lines illustrate the matching distances between the two overlapping point clouds, point cloud 1 and point cloud 2. The red line and point illustrate a unique distance with an associated point that only exists in point cloud 1; hence, the point is omitted. Note that the example is based on a randomly selected reference point; however, the operation is carried out for all points.
Fig. 3: The principle behind matching the targets. Point cloud 1 and point cloud 2 depict overlapping areas. Note that the green lines are parallel and of equal length for the matching points (1 and 1) and (2 and 2) in the two point clouds. Due to the assumption of pure translation, this indicates that the targets match. Instead, the crossing red lines indicate a mismatch, as such a match would require a significant rotational transformation. The example presents the use of points 1 and 2, although the operation is carried out for all points that have been selected using the principle presented in Fig. 2.
As stated in Section II, the testing sample is a small-scale sample of an offshore jacket structure which is welded using gas metal arc welding in a double-sided joint configuration. The sample is made of S355 construction steel and has been water jet cut into the shape of a dog bone with a size of 100 \(\times\) 570 \(\times\) 20 \(\mathrm{mm}^{3}\) for fatigue testing.
All robotic trajectories have been manually programmed and follow a linear trajectory in which the velocity and orientation are kept constant throughout the entire motion. Further details of scanning parameters can be found in Table I.
Before scanning, the sample must be prepared. This includes removing any surface contamination, such as paint and rust, around the weld area to avoid any geometrical details being concealed. Additionally, the sample is sprayed with scanning spray to reduce reflectivity (when scanning reflective samples), and the reference targets (stickers) are manually and semi-randomly placed for alignment. For scanning, the sample must be placed within \(\pm\) 10 \(\mathrm{mm}\) of a predefined position to ensure it is correctly scanned and merged.
## IV Results and discussion
Due to the line scanner's limited field of view (FoV), 32 scanning passes are performed to ensure adequate overlay between individual scans, depicted in Fig. 5. To cover the bottom half of the sample, it is manually rotated 180° around a single axis. In total, the scanning duration of a sample is \(\sim\)30 min, of which \(\sim\)18 min is pure scanning time, as detailed in Table I. Note that with the applied scanner, the scanning speed and the resolution between the scan lines (\(y\)-direction) are co-dependent. Hence, the scanning speed can be increased at the expense of the resolution. Similarly, the FoV can be expanded by moving further away from the sample, but this results in a lower resolution. Lastly, the scanning time can be reduced by optimising the overlay, which has not been done in this work.
Having scanned the sample, the individual point clouds are then aligned, merged and denoised based on the principles presented in Section II. This resulted in a complete point cloud of \(\sim\)34,000,000 points, represented as a \(\sim\)900 MB ASCII file with a resolution of 0.1 \(\frac{\mathrm{mm}}{px}\). The ASCII file is then imported into Meshlab, where the normals are computed using 100 neighbouring points. This is followed by the screened Poisson surface reconstruction, where a reconstruction depth of 12 has been chosen to ensure a high level of detail. The computational time is \(\sim\)15 min; however, the implementation is not optimised. The result is a 504 MB .stl file consisting of \(\sim\)5,000,000 vertices and \(\sim\)10,000,000 facets. Reducing the reconstruction depth could reduce the file size, but would also reduce the level of detail. The procedure is illustrated in Fig. 6 along with the final reconstructed result.
When closely observing the reconstructed sample, slight misalignment of overlapping scans can be noticed (\(<\)0.05 \(\mathrm{mm}\)). This is due to several factors, such as the scanning speed or orientation not being constant, but primarily due to incorrect detection of the target centres. Other errors have also been observed, such as floating points due to spurious reflection or false representation of the surface from noisy points. Even pen ink on the surface affects the reconstruction, underlining the sensitivity of the process. The problems primarily occur when scanning highly reflective surfaces. This is illustrated in Fig. 7, which depicts the errors observed when reconstructing a treated sample. Since the sample is scanned from all directions, the lack of visible misalignment indicates that the framework provides an accurate representation. However, to determine the precise accuracy of the approach, the proposed framework must be applied to evaluate the dimensions of a reference part with known dimensions.
Fig. 4: The experimental setup, consisting primarily of the industrial manipulator and the line scanner.
The results demonstrate that the framework effectively reconstructs the welded component with sufficient accuracy for fatigue life prediction through FEM modelling.
## V Conclusion
The proposed framework allows for the digital reconstruction of welded samples directly in the production setup using an industrial manipulator and a line scanner. The framework relies on prevalent image processing techniques and non-linear optimisation to align and merge several point clouds, representing the entire geometry of the sample. This is followed by a screened Poisson surface reconstruction to form the final 3D representation. Focus has been placed on developing a generic approach that does not rely on specific hardware or expensive commercial software. The result is a flexible, convenient, and cost-effective approach that can be applied for a range of uses, including predicting the fatigue life improvement of the treatment, aiding at the design stage of the component based on knowledge of the possible crack location, and improving quality assurance and documentation of automated HFMI treatment.
Fig. 5: Depiction of all 32 scans required to form a complete representation of the scanned samples.
Fig. 6: The operations required to reconstruct the chosen sample. First, the 32 individual scans of the ends, top and bottom, are merged and aligned to form the two halves, which are then merged, and the noise is removed. Lastly, the sample is meshed, reconstructed, and saved as a .stl file for FEM modelling. Note the detail included in the representation, such as the reference targets and the coarse edges due to water jet cutting.
Future work includes several aspects, such as optimisation of the scanning procedure, e.g., by using robotic simulation software to reduce the number of scans and targets and, thereby, the processing time. Similarly, a non-uniform point cloud resolution could be applied, such that the resolution is increased near the weld, where a high level of detail is required, and decreased away from the weld. This would result in a reduced file size and reduced computational time. Lastly, the rotation of the sample could be performed with a rotational axis controlled by the robot to avoid manual interference.
## Acknowledgements
The authors would like to gratefully acknowledge Joachim Emil Kokholm Hersboll for providing insight into FEM modelling of welded components.
|
2301.00839
|
Integrable and superintegrable 3d Newtonian potentials using quadratic
first integrals: A review
|
The determination of the first integrals (FIs) of a dynamical system and the
subsequent assessment of their integrability or superintegrability in a
systematic way is still an open subject. One method which has been developed
along these lines for second order autonomous dynamical systems is the
so-called direct method. According to this method, one assumes a general
functional form for the FI I and requires the condition dI/dt=0 along the
dynamical equations. This results to a system of partial differential equations
(PDEs) to which one adds the necessary integrability conditions of the involved
scalar quantities. It is found that the final system of PDEs breaks into two
sets: a. One set containing geometric elements only and b. A second set with
geometric and dynamical quantities. Then, provided the geometric quantities are
known or can be found, one uses the second set to compute the FIs and,
accordingly, assess the integrability of the dynamical system. The solution
of the system of PDEs for quadratic FIs (QFIs) has been given in a recent paper
J. Math. Phys. 61, 122701 (2020). In the present work, we consider the
application of this solution to Newtonian autonomous conservative dynamical
systems with three degrees of freedom, and compute integrable and
superintegrable potentials whose integrability is determined via autonomous and
time-dependent QFIs. The geometric elements of these systems are the ones of
the Euclidean space which are known. Setting various values for the parameters
determining the geometric elements, we determine in a systematic way all known
integrable and superintegrable potentials in E3 together with new ones. For
easy reference, the results are collected in tables so that the present work
may act as an updated review on the subject of second order
integrable/superintegrable potentials in E3.
|
Antonios Mitsopoulos, Michael Tsamparlis
|
2023-01-02T19:11:22Z
|
http://arxiv.org/abs/2301.00839v1
|
# Integrable and superintegrable 3d Newtonian potentials using quadratic first integrals: A review
###### Abstract
The determination of the first integrals (FIs) of a dynamical system and the subsequent assessment of their integrability or superintegrability in a systematic way is still an open subject. One method which has been developed along these lines for holonomic autonomous dynamical systems with dynamical equations \(\ddot{q}^{a}=-\Gamma^{a}_{bc}(q)\dot{q}^{b}\dot{q}^{c}-Q^{a}(q)\), where \(\Gamma^{a}_{bc}(q)\) are the coefficients of the Riemannian connection defined by the kinetic metric of the system and \(-Q^{a}(q)\) are the generalized forces, is the so-called direct method. According to this method, one assumes a general functional form for the FI \(I\) and requires the condition \(\frac{dI}{dt}=0\) along the dynamical equations. This results in a system of partial differential equations (PDEs) to which one adds the necessary integrability conditions of the involved scalar quantities. It is found that the final system of PDEs breaks into two sets: a. One set containing geometric elements only and b. A second set with geometric and dynamical quantities. Then, provided the geometric quantities are known or can be found, one uses the second set to compute the FIs and, accordingly, assess the integrability of the dynamical system. The 'solution' of the system of PDEs for quadratic FIs (QFIs) has been given in a recent paper (M. Tsamparlis and A. Mitsopoulos, J. Math. Phys. **61**, 122701 (2020)). In the present work, we consider the application of this 'solution' to Newtonian autonomous conservative dynamical systems with three degrees of freedom, and compute integrable and superintegrable potentials \(V(x,y,z)\) whose integrability is determined via autonomous and/or time-dependent QFIs. The geometric elements of these systems are the ones of the Euclidean space \(E^{3}\), which are known. Setting various values for the parameters determining the geometric elements, we determine in a systematic way all known integrable and superintegrable potentials in \(E^{3}\) together with new ones obtained in this work. For easy reference, the results are collected in tables so that the present work may act as an updated review of the QFIs of Newtonian autonomous conservative dynamical systems with three degrees of freedom. It is emphasized that by assuming different values for the parameters, other authors may find more integrable potentials of this type of system.
Keywords: Integrable potentials; superintegrable potentials; 3d Newtonian potentials; quadratic first integrals; time-dependent first integrals; autonomous conservative dynamical systems; Killing tensors.
## 1 Introduction
According to Liouville integrability theorem [1], a three-dimensional (3d) Newtonian autonomous conservative system is (Liouville) integrable if it admits three (functionally) independent first integrals (FIs) in involution.
Integrable systems that admit five independent FIs are called maximally superintegrable, while if they admit four independent FIs they are called minimally superintegrable. A superintegrable potential is always integrable; however, some authors [2, 3, 4, 5] define superintegrability without the requirement of integrability, that is, they look only for sets of independent FIs whose number exceeds the degrees of freedom of the system.
For 3d Newtonian autonomous conservative systems one quadratic FI (QFI) is the Hamiltonian \(H\); therefore, one needs two additional independent autonomous1 FIs in involution in order to establish integrability. If in addition to these FIs there exist one/two more independent autonomous or time-dependent FIs, then the system is minimally/maximally superintegrable. Besides establishing superintegrability, time-dependent FIs can be used also to establish the integrability of a dynamical system provided they are in involution (see e.g. [6, 7]).
Footnote 1: These additional FIs must be autonomous because the Poisson bracket (PB) of the Hamiltonian with an arbitrary time-dependent FI \(J(t,q,\dot{q})\) does not vanish. Indeed, we have \(\{H,J\}=\frac{\partial J}{\partial t}\neq 0\).
The maximum number of independent autonomous FIs of a Hamiltonian dynamical system of \(n\) degrees of freedom is \(2n-1\). However, if time-dependent FIs are considered, this maximum limit can be exceeded. For example, the 3d potential \(V=-kr^{2}\), where \(r=\sqrt{x^{2}+y^{2}+z^{2}}\) and \(k\) is an arbitrary constant, admits the six (five are enough) time-dependent linear FIs (LFIs) \(I_{3a\pm}\), \(a=1,2,3\) (see Table V in [8]):
\[k>0 : I_{3a\pm}=e^{\pm\sqrt{2k}t}\left(\dot{q}_{a}\mp\sqrt{2k}q_{a}\right)\] \[k<0 : I_{3a\pm}=e^{\pm i\sqrt{-2k}t}\left(\dot{q}_{a}\mp i\sqrt{-2k}q_ {a}\right)\]
which are functionally independent. Since the three LFIs \(I_{3a+}\) (or \(I_{3a-}\)) are also in involution, the considered 3d potential is superintegrable.
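As a quick consistency check (not part of the original text), the following sympy sketch verifies, for a single Cartesian component, that the LFI \(I_{3a+}\) with \(k>0\) is conserved along the equations of motion generated by \(V=-kr^{2}\).

```python
import sympy as sp

t, k = sp.symbols('t k', positive=True)
q = sp.Function('q')                      # one Cartesian component q_a(t)

# V = -k r^2 gives the equation of motion  q_a'' = 2 k q_a
I = sp.exp(sp.sqrt(2*k)*t) * (q(t).diff(t) - sp.sqrt(2*k)*q(t))   # I_{3a+}, k > 0

dIdt = I.diff(t).subs(q(t).diff(t, 2), 2*k*q(t))   # impose the dynamics
print(sp.simplify(dIdt))                            # 0, so I is conserved
```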
Concerning the number of the free parameters that define a 3d superintegrable potential, the following terminology is used (see e.g. [5]):
a. The degenerate (or three-parameter) potentials, and
b. The non-degenerate (or four-parameter) potentials.
In many works [3, 4, 5, 9], the term second order superintegrable potentials is used for potentials that are superintegrable due to QFIs only. Such potentials have the following special properties [3, 4]:
1) Multi-integrability. They are integrable in multiple ways and the comparison of ways of integration leads to new facts about the system.
2) They are multi-separable.
3) The second order symmetries expressed by second order Killing tensors (KTs) generate a closed quadratic algebra. In the quantum case, the representation of this algebra yields results concerning the spectral resolution of the Schrodinger operator and the other symmetry operators.
There are two types of integrable potentials in \(E^{3}\): the decomposable potentials (or 2+1 separable integrable potentials), which are generated from integrable potentials in \(E^{2}\), and the non-decomposable ones.
Let \(V(x,y)\) be a 2d integrable potential in \(E^{2}\) which admits an additional autonomous FI \(I_{1}\). Then, the 3d Newtonian \(z\)-separable potential \(\bar{V}(x,y,z)=V(x,y)+F(z)\), where \(F\) is an arbitrary smooth function of \(z\), is a \(2+1\) separable integrable potential in \(E^{3}\). The integrability of these potentials is due to the three independent FIs \(H,I_{1}\) and \(I_{2}=\frac{1}{2}\dot{z}^{2}+F(z)\) which are in involution. If \(V(x,y)\) is superintegrable with respect to (wrt) two additional FIs, say \(J_{1}\) and \(J_{2}\), then \(\bar{V}(x,y,z)\) is minimally superintegrable because of the four independent FIs \(H,J_{1},J_{2}\), and \(I_{2}\). If in addition to \(J_{1}\) and \(J_{2}\) the 2d superintegrable potential \(V(x,y)\) admits also a time-dependent FI \(J_{3}\), then \(\bar{V}(x,y,z)\) is maximally superintegrable. For example, the second potential of Table II in [2] is not minimally superintegrable but maximally superintegrable because it admits in addition the time-dependent FIs \(I_{73a}\) and \(I_{73b}\) from the last Table of [10].
The non-decomposable (i.e. non-separable) 3d Newtonian integrable potentials \(V(x,y,z)\) cannot be written in the form \(\bar{V}(x,y,z)=V(x,y)+F(z)\) where \(V(x,y)\) is a 2d Newtonian integrable potential. In general, their determination is more difficult and various methods of escalating complexity have been proposed. Furthermore, the existing results concern autonomous FIs only and are limited in number. The purpose of the present work is to provide a systematic (i.e. algorithmic) method which enables one to determine integrable and superintegrable potentials in \(E^{3}\) using autonomous and time-dependent QFIs. The method relies on Theorem 1 [11] (see section 3) which relates the QFIs of the dynamical system with the dynamical elements (i.e. the potential) and the geometry defined by the kinetic energy of the system. The structure of the paper is as follows.
In section 2, we determine the 3d integrable/superintegrable 2+1 decomposable potentials directly from the well-known 2d integrable/superintegrable potentials listed in the reference works [10] and [12]. The results are presented in tables where the known potentials with the corresponding reference are listed together with the new ones determined in this work. In section 3, we state Theorem 1 from which follows that there are three types of QFIs to consider, denoted as \(I_{(1,\ell)},I_{(2,\ell)},I_{(3)}\), which are expressed in terms of the geometric elements of the kinetic metric and the potential function. In section 4, we state the geometric quantities of \(E^{3}\) which are required for the application of Theorem 1. It is seen that the number of parameters introduced from the KT components is large. This remark and the fact that the associated system of PDEs is overdetermined have the result that one will find special solutions only by assuming particular values of the geometric parameters. In section 5, we consider the QFI \(I_{(1,1)}\) (\(\ell=1\)) and the relevant PDEs for this case. We consider various values for the parameters and recover all the existing results together with new ones. For easy reference, the various potentials are grouped in Tables 4 - 7. In section 6, we consider the potentials admitting QFIs of the type \(I_{(2,0)}\) (\(\ell=0\)). These results are presented in Tables 8 - 10. In section 7, we consider time-dependent LFIs/QFIs of the type \(I_{(3)}\) and the results are collected in Tables 11 - 13. In section 8, we compare and discuss the results listed in the tables with the existing results of the literature. Finally, in section 9, we draw our conclusions.
### List of abbreviations and notations/conventions
For the convenience of the reader, we give a list of abbreviations and notations used throughout the text.
Abbreviations:
* FI = first integral
* HV = homothetic vector
* KT = Killing tensor
* KV = Killing vector
* LFI = linear first integral
* \(N\)d = \(N\)-dimensional
* ODE = ordinary differential equation
* PB = Poisson bracket
* PDE = partial differential equation
* QFI = quadratic first integral
Mathematical notations/conventions:
* \(E^{n}\) = \(n\)-dimensional Euclidean space
* \(r=\sqrt{x^{2}+y^{2}+z^{2}}\), \(R=\sqrt{x^{2}+y^{2}}\), \(\tan\theta=\frac{y}{x}\), and \(w=x+iy=Re^{i\theta}\).
* The angular momentum \(\mathbf{M}\equiv M_{i}=(M_{1},M_{2},M_{3})=(y\dot{z}-z\dot{y},z\dot{x}-x\dot{z},x\dot{y}-y\dot{x})\) with square magnitude \(\mathbf{M}^{2}=M_{1}^{2}+M_{2}^{2}+M_{3}^{2}\).
* The kinetic metric \(\gamma_{ab}(q)\) of the dynamical system is used for lowering and raising the indices.
* A comma indicates partial derivative and a semicolon Riemannian covariant derivative.
Coordinate systems of \(E^{3}\):
* Cartesian coordinates: \((x,y,z)\).
* Spherical coordinates: \((r,\theta,\phi)\) with \(x=r\sin\theta\cos\phi\), \(y=r\sin\theta\sin\phi\) and \(z=r\cos\theta\).
* Parabolic cylindrical coordinates: \((\lambda^{\prime},\mu^{\prime},z)\) with \(\lambda^{\prime}=R+y\) and \(\mu^{\prime}=R-y\).
* Rotational parabolic coordinates: \((\zeta,\eta,\phi)\) with \(\zeta=r+z\), \(\eta=r-z\), \(\phi=\tan^{-1}\left(\frac{y}{x}\right)\) or, equivalently, \(x=\sqrt{\zeta\eta}\cos\phi\), \(y=\sqrt{\zeta\eta}\sin\phi\), \(z=\frac{1}{2}\left(\zeta-\eta\right)\).
## 2 Integrable/superintegrable 2+1 separable potentials
As it has been remarked, the \(2+1\) separable integrable/superintegrable potentials in \(E^{3}\) are given in terms of the integrable/superintegrable potentials \(\Phi(x,y)\) in \(E^{2}\). From the latter potentials, the ones that admit LFIs/QFIs are collected in the review papers [10] and [12]. Using these results, the \(2+1\) separable potentials in \(E^{3}\)
\[V(x,y,z)=\Phi(x,y)+F(z) \tag{1}\]
where \(F(z)\) is an arbitrary smooth function, are integrable/superintegrable due to the additional QFI \(I=\frac{1}{2}\dot{z}^{2}+F(z)\) which is in involution with the FIs of \(\Phi(x,y)\).
Applying the above procedure to the results of [10, 12], we find the integrable and superintegrable potentials in \(E^{3}\) listed in Tables 1 - 3. The QFI of the Hamiltonian \(H\) is not included in the tables. In Tables 2 and 3, we compare with the results of [2]. A similar comparison cannot be done in Table 1 because in [2] only superintegrable potentials are considered. Concerning the notation, we set \(r=\sqrt{x^{2}+y^{2}+z^{2}}\), \(R=\sqrt{x^{2}+y^{2}}\) and the angular momentum \(M_{i}=(y\dot{z}-z\dot{y},z\dot{x}-x\dot{z},x\dot{y}-y\dot{x})\).
\begin{tabular}{|l|l|} \hline \multicolumn{2}{|c|}{Integrable \(2+1\) separable potentials} \\ \hline Potential & LFIs and QFIs \\ \hline \(V=F_{1}\left(\frac{R^{2}}{2}+b_{1}y-b_{2}x\right)+F_{2}(z)\) & \(I_{1}=M_{3}-b_{1}\dot{x}-b_{2}\dot{y}\), \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{2}(z)\) \\ \hline \(V=\frac{F_{1}\left(\frac{k}{2}\right)}{R^{2}}+F_{2}(R)+F_{3}(z)\) & \(I_{1}=M_{3}^{2}+2F_{1}\left(\frac{y}{x}\right)\), \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ \hline \(V=\frac{k}{x^{2}+\dot{\epsilon}y^{2}}+F_{1}(R)+F_{2}(z)\) & \(I_{1}=M_{3}^{2}+\frac{2k(1-\dot{\epsilon})y^{2}}{x^{2}+\dot{\epsilon}y^{2}}\), \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{2}(z)\) \\ \hline \(V=\frac{F_{1}(u)-F_{2}(v)}{u^{2}-v^{2}}+F_{3}(z)\) & \(I_{1}=M_{3}^{2}+A\dot{x}^{2}+\frac{v^{2}F_{1}(u)-u^{2}F_{2}(v)}{u^{2}-v^{2}}\) \\ \(u^{2}=R^{2}+A+\left[(R^{2}+A)^{2}-4Ax^{2}\right]^{1/2}\) and & \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ \(v^{2}=R^{2}+A-\left[(R^{2}+A)^{2}-4Ax^{2}\right]^{1/2}\) & \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ \hline \(V=\frac{F_{1}(u)-F_{2}(v)}{u^{2}-v^{2}}+F_{3}(z)\) & \(I_{1}=M_{3}^{2}+A(\dot{x}\pm i\dot{y})^{2}+\frac{v^{2}F_{1}(u)-u^{2}F_{2}(v)}{ u^{2}-v^{2}}\) \\ \(u^{2}=R^{2}+\left[R^{4}-4A(x\pm iy)^{2}\right]^{1/2}\) and & \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ \(v^{2}=R^{2}-\left[R^{4}-4A(x\pm iy)^{2}\right]^{1/2}\) & \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ \hline \(V=\frac{F_{1}(R+y)+F_{2}(R-y)}{R}+F_{3}(z)\) & \(I_{1}=-M_{3}\dot{x}+\frac{(R+y)F_{2}(R-y)-(R-y)F_{1}(R+y)}{R}\) \\ \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) & \(-\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ \hline \(V=\bar{w}^{-1/2}\left[F_{1}(w+\sqrt{\bar{w}})+F_{2}(w-\sqrt{\bar{w}})\right]+F_ {3}(z)\) & \(-i\left(1-\frac{w}{\sqrt{\bar{w}}}\right)F_{1}(w+\sqrt{\bar{w}})+\) \\ \(w=x+iy\) and \(\bar{w}=x-iy\) & \(-i\left(1-\frac{w}{\sqrt{\bar{w}}}\right)F_{2}(w-\sqrt{\bar{w}})\) \\ & \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ \hline \(V=\frac{F_{1}(w)}{R}+F_{2}^{\prime}(w)+F_{3}(z)\) & \(I_{1}=-M_{3}(\dot{x}\pm i\dot{y})-iwV+iF_{2}(w)\) \\ \(F_{2}^{\prime}=\frac{d\bar{F}_{2}}{d\omega}\) and \(w=x\pm iy\) & \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ \hline \(V=F_{1}(x)+F_{2}(y)+F_{3}(z)\) & \(I_{1}=\frac{1}{2}\dot{x}^{2}+F_{1}\), \(I_{2}=\frac{1}{2}\dot{y}^{2}+F_{2}\), \(I_{3}=\frac{1}{2}\dot{z}^{2}+F_{3}\) \\ \hline \(V=F_{1}\left(y+b_{0}x+\sqrt{b_{0}^{2}+1}x\right)+\) & \(I_{1}=A\dot{x}^{2}+B\dot{y}^{2}+2C\dot{x}\dot{y}+(A+B)(F_{1}+F_{2})+\) \\ \(+F_{2}\left(y+b_{0}x-\sqrt{b_{0}^{2}+1}x\right)+F_{3}(z)\) & \(+2C\sqrt{b_{0}^{2}+1}(F_{1}-F_{2})\) \\ where \(b_{0}\equiv\frac{A-B}{2C}\) & \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ \hline \(V(b_{0}=0)=F_{1}(y+x)+F_{2}(y-x)+F_{3}(z)\) & \(I_{1}=\dot{x}\dot{y}+F_{1}-F_{2}\), \(I_{2}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ \hline \end{tabular}
Table 1: Integrable potentials \(V(x,y,z)=\Phi(x,y)+F(z)\) in \(E^{3}\), where \(\Phi(x,y)\) are integrable potentials in \(E^{2}\).
\begin{table}
\begin{tabular}{|l|c|l|} \hline \multicolumn{2}{|c|}{Minimally superintegrable \(2+1\) separable potentials} \\ \hline Potential & Ref [2] & LFIs and QFls \\ \hline \(V=cx+F_{1}(y-bx)+F_{2}(z)\) & & \(I_{1}=\dot{x}+b\dot{y}+ct\) \\ \(c\neq 0\), \(\frac{d^{2}F_{1}}{dw^{2}}\neq 0\) and \(w\equiv y-bx\) & New & \(I_{2}=(\dot{x}+b\dot{y})^{2}+2c(x+by)\) \\ \(c\neq 0\), \(\frac{d^{2}F_{1}}{dw^{2}}\neq 0\) and \(w\equiv y-bx\) & New & \(I_{3}=\frac{1}{2}\dot{z}^{2}+F_{2}(z)\) \\ \hline \(V=F_{1}(y-bx)+F_{2}(z)\) & & \(I_{1}=\dot{x}+b\dot{y}\) \\ \(\frac{d^{2}F_{1}}{dw^{2}}\neq 0\) and \(w\equiv y-bx\) & New & \(I_{2}=t(\dot{x}+b\dot{y})-(x+by)\) \\ & & \(I_{3}=\frac{1}{2}\dot{z}^{2}+F_{2}(z)\) \\ \hline \(V=\frac{k_{1}}{2}(x^{2}+4y^{2})+\frac{k_{2}}{x^{2}}+k_{3}y+F(z)\) & Table II & \(I_{1}=M_{3}\dot{x}+k_{1}yx^{2}-\frac{2k_{2}y}{x^{2}}+\frac{k_{3}}{2}x^{2}\) \\ & \(k_{3}=0\) & \(I_{2}=\frac{1}{2}\dot{x}^{2}+\frac{k_{1}}{2}x^{2}+\frac{k_{2}}{x^{2}}\) \\ & \(x\leftrightarrow y\) & \(I_{3}=\frac{1}{2}\dot{y}^{2}+2k_{1}y^{2}+k_{3}y\) \\ & \(x\leftrightarrow y\) & \(I_{4}=\frac{1}{2}\dot{z}^{2}+F(z)\) \\ \hline \(V=\frac{k_{1}}{x^{2}}+\frac{k_{2}}{R}+\frac{k_{3}y}{Rx^{2}}+F(z)\) & Table II & \(I_{1}=M_{3}^{2}+2k_{1}\frac{y^{2}}{x^{2}}+2k_{3}\frac{Ry}{x^{2}}\) \\ & \(x\leftrightarrow y\) & \(I_{2}=M_{3}\dot{x}-2k_{1}\frac{y}{x^{2}}-k_{2}\frac{y}{R}-k_{3}\frac{x^{2}+2 y^{2}}{Rx^{2}}\) \\ & & \(I_{3}=\frac{1}{2}\dot{z}^{2}+F(z)\) \\ \hline \(V=\frac{k_{1}}{R}+k_{2}\frac{\sqrt{t+y}}{R}+k_{3}\frac{\sqrt{R-y}}{R}+F(z)\) & Table II & \(I_{1}=M_{3}\dot{x}-\frac{k_{1}y}{R}-\frac{k_{3}(R+y)\sqrt{R-y}-k_{2}(R-y)\sqrt{ R+y}}{R}\) \\ & & \(I_{2}=M_{3}\dot{y}+G(x,y)\) \\ & & \(I_{3}=\frac{1}{2}\dot{z}^{2}+F(z)\) \\ & & \(G_{,x}=-yV_{,y}\) and \(G_{,y}=2xV_{,y}-yV_{,x}\) \\ \hline \(V=F_{1}(x)+\frac{k}{(y+c)^{2}}+F_{2}(z)\) & New & \(I_{1}=\frac{1}{2}\dot{x}^{2}+F_{1}\) \\ \(V=F_{1}(x)+\frac{k}{(y+c)^{2}}+F_{2}(z)\) & New & \(I_{2}=\frac{1}{2}\dot{y}^{2}+\frac{k}{(y+c)^{2}}\) \\ & & \(I_{3}=\frac{1}{2}\dot{z}^{2}+F_{2}\) \\ & & \(I_{4}=-\frac{t^{2}}{2}\dot{y}^{2}+t(y+c)\dot{y}-t^{2}\frac{k}{(y+c)^{2}}-\frac {1}{2}y^{2}-cy\) \\ \hline \(V=\frac{\lambda}{2}R^{2}+b_{1}y-b_{2}x+F(z)\) & & \(I_{1}=\frac{1}{2}\dot{x}^{2}+\frac{1}{2}\lambda x^{2}-b_{2}x\) \\ \(\lambda\neq 0\) & New & \(I_{3}=\frac{1}{2}\dot{y}^{2}+\frac{1}{2}\lambda y^{2}+b_{1}y\) \\ & & \(I_{4}=\dot{x}\dot{y}+\lambda xy+b_{1}x-b_{2}y\) \\ & & \(I_{5}=\frac{1}{2}\dot{z}^{2}+F(z)\) \\ \hline \end{tabular}
\end{table}
Table 2: Minimally superintegrable potentials \(V(x,y,z)=\Phi(x,y)+F(z)\) in \(E^{3}\), where \(\Phi(x,y)\) are superintegrable potentials in \(E^{2}\).
**Note 1:** The results indicated as 'New' in Tables 2 and 3 do not appear in [2] where only autonomous QFIs are considered.
**Note 2:** In Table II of [2], the potential (see Table 3)
\[V=\frac{k}{2}R^{2}+\frac{b}{x^{2}}+\frac{c}{y^{2}}+F(z) \tag{2}\]
where \(k,b,c\) are arbitrary constants and \(F(z)\) is an arbitrary smooth function, is said to be minimally superintegrable because of the four independent autonomous QFIs:
\[I_{1}=M_{3}^{2}+2b\frac{y^{2}}{x^{2}}+2c\frac{x^{2}}{y^{2}},\,\,I_{2}=\frac{1}{ 2}\dot{z}^{2}+F(z),\,\,\,I_{3}=\frac{1}{2}\dot{x}^{2}+\frac{k}{2}x^{2}+\frac{b }{x^{2}},\,\,\,I_{4}=\frac{1}{2}\dot{y}^{2}+\frac{k}{2}y^{2}+\frac{c}{y^{2}}.\]
However, using in addition the time-dependent QFIs:
\[\mbox{For }k=0:\,\,\,\,\,I_{5}=-\frac{t^{2}}{2}\dot{y}^{2}+ty\dot{y}-t^{2} \frac{c}{y^{2}}-\frac{1}{2}y^{2},\,\,\,I_{6}=-\frac{t^{2}}{2}\dot{x}^{2}+tx \dot{x}-t^{2}\frac{b}{x^{2}}-\frac{1}{2}x^{2}\]
\[\mbox{For }k=-\frac{\lambda^{2}}{4}\neq 0\mbox{:}\ \ \ I_{5}=e^{\lambda t}\left[-\dot{x }^{2}+\lambda x\dot{x}-\frac{\lambda^{2}}{4}x^{2}-\frac{2b}{x^{2}}\right],\ \ I_{6}=e^{\lambda t}\left[-\dot{y}^{2}+\lambda y\dot{y}-\frac{\lambda^{2}}{4}y ^{2}-\frac{2c}{y^{2}}\right]\]
it is seen that the potential (2) for these values of \(k\) is maximally superintegrable.
Moreover, if we assume the canonical transformation \(x\to x+c_{1}\) and \(y\to y+c_{2}\) where \(c_{1}\) and \(c_{2}\) are arbitrary constants, it is shown that the potential (2) is transformed canonically into the last two potentials of Table 3. Indeed, for \(k=0\), \(b=k_{1}\) and \(c=k_{2}\), we get the potential
\[V=\frac{k_{1}}{(x+c_{1})^{2}}+\frac{k_{2}}{(y+c_{2})^{2}}+F(z)\]
while for \(k=-\frac{\lambda^{2}}{4}\), \(b=-k_{1}\) and \(c=-k_{2}\), we get the potential
\[V=-\frac{\lambda^{2}}{8}R^{2}-\frac{\lambda^{2}}{4}\left(c_{1}x+c_{2}y\right) -\frac{k_{1}}{(x+c_{1})^{2}}-\frac{k_{2}}{(y+c_{2})^{2}}-\frac{\lambda^{2}}{8 }(c_{1}^{2}+c_{2}^{2})+F(z).\]
The constant term \(-\frac{\lambda^{2}}{8}(c_{1}^{2}+c_{2}^{2})\) is overlooked because it does not contribute to the dynamical equations.
**Note 3:** From Table 2, we observe that the minimally superintegrable potential
\[V=\frac{k_{1}}{R}+k_{2}\frac{\sqrt{R+y}}{R}+k_{3}\frac{\sqrt{R-y}}{R}+F(z) \tag{3}\]
where \(k_{1},k_{2},k_{3}\) are arbitrary constants and \(F(z)\) is an arbitrary smooth function, admits the two autonomous QFIs:
\[I_{1} = M_{3}\dot{x}-\frac{k_{1}y}{R}+\frac{k_{2}(R-y)\sqrt{R+y}}{R}- \frac{k_{3}(R+y)\sqrt{R-y}}{R} \tag{4}\] \[I_{2} = M_{3}\dot{y}+G(x,y). \tag{5}\]
The function \(G(x,y)\) must satisfy the system of PDEs:
\[G_{,x}+yV_{,y} = 0 \tag{6}\] \[G_{,y}+yV_{,x}-2xV_{,y} = 0. \tag{7}\]
Using the parabolic cylindrical coordinates \((\lambda^{\prime},\mu^{\prime},z)\) (see eqs. (3.19) and (3.51) in [2]) with \(\lambda^{\prime}=R+y\) and \(\mu^{\prime}=R-y\), the QFI (4) becomes2
Footnote 2: We recall that the coordinates \(\lambda^{\prime},\mu^{\prime}\) are either positive or zero because \(\lambda^{\prime}+\mu^{\prime}=2R\), \(\lambda^{\prime}-\mu^{\prime}=2y\), and \(\lambda^{\prime}\mu^{\prime}=x^{2}\).
\[I_{1}=M_{3}\dot{x}-\frac{2}{\lambda^{\prime}+\mu^{\prime}}\left[\frac{k_{1}}{2 }(\lambda^{\prime}-\mu^{\prime})-k_{2}\mu^{\prime}\sqrt{\lambda^{\prime}}+k_{ 3}\lambda^{\prime}\sqrt{\mu^{\prime}}\right]. \tag{8}\]
The QFI \(I_{2}\) in eq. (3.57) of [2] is not correct and should be replaced by the QFI (8).
In the parabolic cylindrical coordinates \((u,v,z)\) with \(u=R+x\), \(v=R-x\) and3\(x,y>0\), the system of PDEs (6) - (7) becomes \(G_{,v}=uV_{,v}\) and \(G_{,u}=-vV_{,u}\). The solution of this system is
Footnote 3: For \(x,y>0\) we have: \(\sqrt{R+x}+\sqrt{R-x}=\sqrt{2}\sqrt{R+y}\) and \(\sqrt{R+x}-\sqrt{R-x}=\sqrt{2}\sqrt{R-y}\).
\[G(u,v)=\frac{2}{u+v}\left[\frac{k_{1}}{2}(u-v)-(k_{2}+k_{3})v\sqrt{\frac{u}{2} }+(k_{2}-k_{3})u\sqrt{\frac{v}{2}}\right]\]
or, equivalently, in Cartesian coordinates
\[G(x,y)=\frac{1}{R}\left[k_{1}x-(k_{2}+k_{3})(R-x)\sqrt{\frac{R+x}{2}}+(k_{2}-k _{3})(R+x)\sqrt{\frac{R-x}{2}}\right].\]
Then, the QFI (5) is
\[I_{2}=M_{3}\dot{y}+\frac{2}{u+v}\left[\frac{k_{1}}{2}(u-v)-(k_{2}+k_{3})v \sqrt{\frac{u}{2}}+(k_{2}-k_{3})u\sqrt{\frac{v}{2}}\right]. \tag{9}\]
There is a misprint in the QFI \(I_{3}\) of eq. (3.57) in [2]; the correct answer is the QFI (9).
**Note 4:** The two superintegrable potentials given in eq. (17) of [4] are subcases of the potential (see Table 2)
\[V=\frac{\lambda}{2}R^{2}+b_{1}y-b_{2}x+F(z) \tag{10}\]
for \(F(z)=\frac{\lambda}{2}z^{2}+b_{3}z\) and \(F(z)=\frac{\lambda}{8}z^{2}+\frac{b_{3}}{z^{2}}\), where \(b_{3}\) is an arbitrary constant.
**Note 5:** The potential (see Table 2)
\[V_{1}=cx+F_{1}(y-bx)+F_{2}(z) \tag{11}\]
where \(c\) is an arbitrary non-zero constant, \(w\equiv y-bx\) and \(\frac{d^{2}F_{1}}{dw^{2}}\neq 0\), admits the following LFIs/QFIs (apart from the Hamiltonian \(H\)):
\[I_{1}=\dot{x}+b\dot{y}+ct,\,\,\,I_{2}=t(\dot{x}+b\dot{y})-(x+by)+\frac{c}{2}t ^{2},\,\,\,I_{3}=(\dot{x}+b\dot{y})^{2}+2c(x+by),\,\,\,I_{4}=\frac{1}{2}\dot{ z}^{2}+F_{2}(z).\]
We compute the PBs:
\[\{H,I_{1}\}=c,\,\,\,\{H,I_{2}\}=I_{1},\,\,\,\{I_{1},I_{2}\}=1+b^{2},\,\,\,\{I_ {1},I_{3}\}=-2c(1+b^{2}),\,\,\,\{I_{2},I_{3}\}=-2(1+b^{2})I_{1}.\]
The three FIs \(H,I_{3},I_{4}\) are (functionally) independent and in involution; therefore, the potential (11) is integrable. The five FIs \(H,I_{1},I_{2},I_{3},I_{4}\) are not independent because \(I_{1}^{2}=I_{3}+2cI_{2}\). However, the four FIs \(H,I_{3},I_{4},I_{1}\), or the \(H,I_{3},I_{4},I_{2}\), are independent and, therefore, the potential (11) is minimally superintegrable.
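The Poisson brackets and the dependence relation \(I_{1}^{2}=I_{3}+2cI_{2}\) quoted above can be verified symbolically; the sketch below (mine, not from the paper) identifies velocities with canonical momenta, as appropriate for the Euclidean kinetic metric.

```python
import sympy as sp

x, y, z, px, py, pz, t, b, c = sp.symbols('x y z p_x p_y p_z t b c')
F1, F2 = sp.Function('F1'), sp.Function('F2')

# Velocities are identified with canonical momenta (Euclidean kinetic metric)
V  = c*x + F1(y - b*x) + F2(z)
H  = (px**2 + py**2 + pz**2)/2 + V
I1 = px + b*py + c*t
I2 = t*(px + b*py) - (x + b*y) + c*t**2/2
I3 = (px + b*py)**2 + 2*c*(x + b*y)

def pb(A, B):
    qs, ps = (x, y, z), (px, py, pz)
    return sp.simplify(sum(sp.diff(A, q)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, q)
                           for q, p in zip(qs, ps)))

print(pb(H, I1))                         # c
print(pb(H, I2))                         # p_x + b*p_y + c*t  =  I1
print(pb(I1, I2))                        # 1 + b**2
print(pb(I1, I3))                        # -2*c*(1 + b**2)
print(pb(I2, I3))                        # -2*(1 + b**2)*(p_x + b*p_y + c*t)  =  -2*(1+b^2)*I1
print(sp.simplify(I1**2 - I3 - 2*c*I2))  # 0, i.e. I1^2 = I3 + 2c*I2
```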
## 3 The Theorem for QFIs
In order to compute in a systematic way the QFIs of non-decomposable potentials, we need to recall a theorem which is proved in [11].
**Theorem 1**: _The independent QFIs of the \(n\)-dimensional autonomous holonomic dynamical system_
\[\ddot{q}^{a}=-\Gamma^{a}_{bc}(q)\dot{q}^{b}\dot{q}^{c}-Q^{a}(q) \tag{12}\]
_where \(q^{a}\) are the coordinates of the configuration space, \(\dot{q}^{a}=\frac{dq^{a}}{dt}\), \(t\) is the time variable, \(\Gamma^{a}_{bc}(q)\) are the Riemannian connection coefficients of the kinetic metric \(\gamma_{ab}(q)\) defined by the kinetic energy of the system and \(-Q^{a}(q)\) are the generalized forces, are the following:_
_Integral 1._
\[I_{(1,\ell)} = \left(-\frac{t^{2\ell}}{2\ell}L_{(2\ell-1)(a;b)}-...-\frac{t^{4} }{4}L_{(3)(a;b)}-\frac{t^{2}}{2}L_{(1)(a;b)}+C_{ab}\right)\dot{q}^{a}\dot{q}^{ b}+t^{2\ell-1}L_{(2\ell-1)a}\dot{q}^{a}+...+t^{3}L_{(3)a}\dot{q}^{a}+\] \[+tL_{(1)a}\dot{q}^{a}+\frac{t^{2\ell}}{2\ell}L_{(2\ell-1)a}Q^{a} +...+\frac{t^{4}}{4}L_{(3)a}Q^{a}+\frac{t^{2}}{2}L_{(1)a}Q^{a}+G(q)\]
_where4\(C_{ab}(q)\) and \(L_{(M)(a;b)}(q)\) for \(M=1,3,...,2\ell-1\) are KTs, \(\left(L_{(2\ell-1)b}Q^{b}\right)_{,a}=-2L_{(2\ell-1)(a;b)}Q^{b}\), \(\left(L_{(k-1)b}Q^{b}\right)_{,a}=-2L_{(k-1)(a;b)}Q^{b}-k(k+1)L_{(k+1)a}\) for \(k=2,4,...,2\ell-2\), and \(G_{,a}=2C_{ab}Q^{b}-L_{(1)a}\)._
Footnote 4: We note that for \(\ell=0\) the conditions for the QFI \(I_{(1,0)}\) are given by nullifying all the vectors \(L_{(M)a}\).
_Integral 2._
\[I_{(2,\ell)} = \left(-\frac{t^{2\ell+1}}{2\ell+1}L_{(2\ell)(a;b)}-...-\frac{t^{3 }}{3}L_{(2)(a;b)}-tL_{(0)(a;b)}\right)\dot{q}^{a}\dot{q}^{b}+t^{2\ell}L_{(2 \ell)a}\dot{q}^{a}+...+t^{2}L_{(2)a}\dot{q}^{a}+\] \[+L_{(0)a}\dot{q}^{a}+\frac{t^{2\ell+1}}{2\ell+1}L_{(2\ell)a}Q^{a} +...+\frac{t^{3}}{3}L_{(2)a}Q^{a}+tL_{(0)a}Q^{a}\]
_where \(L_{(M)(a;b)}(q)\) for \(M=0,2,...,2\ell\) are KTs, \(\left(L_{(2\ell)b}Q^{b}\right)_{,a}=-2L_{(2\ell)(a;b)}Q^{b}\), and \(\left(L_{(k-1)b}Q^{b}\right)_{,a}=-2L_{(k-1)(a;b)}Q^{b}-k(k+1)L_{(k+1)a}\) for \(k=1,3,...,2\ell-1\)._
_**Integral 3.**_
\[I_{(3)}=e^{\lambda t}\left(-L_{(a;b)}\dot{q}^{a}\dot{q}^{b}+\lambda L_{a}\dot {q}^{a}+L_{a}Q^{a}\right)\]
_where the vector \(L_{a}(q)\) is such that \(L_{(a;b)}\) is a KT and \(\left(L_{b}Q^{b}\right)_{,a}=-2L_{(a;b)}Q^{b}-\lambda^{2}L_{a}\)._
Notation: The Einstein summation convention is used, round (square) brackets indicate symmetrization (antisymmetrization) of the enclosed indices, indices enclosed between vertical lines are overlooked by antisymmetrization or symmetrization symbols, a comma indicates partial derivative and a semicolon Riemannian covariant derivative.
Before we proceed, we recall the geometric quantities of the Euclidean space \(E^{3}\) required by Theorem 1.
## 4 The geometric quantities of \(E^{3}\)
- \(E^{3}\) admits three gradient Killing vectors (KVs) \(\partial_{x},\partial_{y},\partial_{z}\) whose generating functions are \(x,y,z\), respectively, and three non-gradient KVs \(y\partial_{x}-x\partial_{y}\), \(z\partial_{y}-y\partial_{z}\), \(z\partial_{x}-x\partial_{z}\). These vectors are written collectively as
\[L_{a}=\left(\begin{array}{c}b_{1}-b_{4}y+b_{5}z\\ b_{2}+b_{4}x-b_{6}z\\ b_{3}-b_{5}x+b_{6}y\end{array}\right) \tag{13}\]
where \(b_{1},b_{2},...,b_{6}\) are arbitrary constants.
- The general second order KT in \(E^{3}\) has independent components:
\[C_{11} = \frac{a_{6}}{2}y^{2}+\frac{a_{1}}{2}z^{2}+a_{4}yz+a_{5}y+a_{2}z+ a_{3}\] \[C_{12} = \frac{a_{10}}{2}z^{2}-\frac{a_{6}}{2}xy-\frac{a_{4}}{2}xz-\frac{ a_{14}}{2}yz-\frac{a_{5}}{2}x-\frac{a_{15}}{2}y+a_{16}z+a_{17}\] \[C_{13} = \frac{a_{14}}{2}y^{2}-\frac{a_{4}}{2}xy-\frac{a_{1}}{2}xz-\frac{ a_{10}}{2}yz-\frac{a_{2}}{2}x+a_{18}y-\frac{a_{11}}{2}z+a_{19} \tag{14}\] \[C_{22} = \frac{a_{6}}{2}x^{2}+\frac{a_{7}}{2}z^{2}+a_{14}xz+a_{15}x+a_{12} z+a_{13}\] \[C_{23} = \frac{a_{4}}{2}x^{2}-\frac{a_{14}}{2}xy-\frac{a_{10}}{2}xz-\frac{ a_{7}}{2}yz-(a_{16}+a_{18})x-\frac{a_{12}}{2}y-\frac{a_{8}}{2}z+a_{20}\] \[C_{33} = \frac{a_{1}}{2}x^{2}+\frac{a_{7}}{2}y^{2}+a_{10}xy+a_{11}x+a_{8} y+a_{9}\]
where \(a_{K}\) with \(K=1,2,...,20\) are arbitrary constants.
- The vector \(L_{a}\) generating the reducible KT \(C_{ab}=L_{(a;b)}\) is
\[L_{a}=\left(\begin{array}{cc}-a_{15}y^{2}-a_{11}z^{2}+a_{5}xy+a_{2}xz+2(a_{1 6}+a_{18})yz+a_{3}x+2a_{4}y+2a_{1}z+a_{6}\\ -a_{5}x^{2}-a_{8}z^{2}+a_{15}xy-2a_{18}xz+a_{12}yz+2(a_{17}-a_{4})x+a_{13}y+2a _{7}z+a_{14}\\ -a_{2}x^{2}-a_{12}y^{2}-2a_{16}xy+a_{11}xz+a_{8}yz+2(a_{19}-a_{1})x+2(a_{20}-a _{7})y+a_{9}z+a_{10}\end{array}\right) \tag{15}\]
and the generated KT is
\[C_{ab}=\left(\begin{array}{ccc}a_{5}y+a_{2}z+a_{3}&-\frac{a_{5}}{2}x-\frac{a_{15}}{2}y+a_{16}z+a_{17}&-\frac{a_{2}}{2}x+a_{18}y-\frac{a_{11}}{2}z+a_{19}\\ -\frac{a_{5}}{2}x-\frac{a_{15}}{2}y+a_{16}z+a_{17}&a_{15}x+a_{12}z+a_{13}&-(a_{16}+a_{18})x-\frac{a_{12}}{2}y-\frac{a_{8}}{2}z+a_{20}\\ -\frac{a_{2}}{2}x+a_{18}y-\frac{a_{11}}{2}z+a_{19}&-(a_{16}+a_{18})x-\frac{a_{12}}{2}y-\frac{a_{8}}{2}z+a_{20}&a_{11}x+a_{8}y+a_{9}\end{array}\right) \tag{16}\]
which is a subcase of the general KT (14) for \(a_{1}=a_{4}=a_{6}=a_{7}=a_{10}=a_{14}=0\).
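As an independent check (not part of the original text), the following sympy sketch confirms that the vector (15) generates the tensor (16) and that this tensor satisfies the Killing tensor condition \(C_{(ab;c)}=0\), which in Cartesian coordinates of \(E^{3}\) reduces to symmetrised partial derivatives.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10,
 a11, a12, a13, a14, a15, a16, a17, a18, a19, a20) = sp.symbols('a1:21')
q = (x, y, z)

# The vector (15)
L = [
    -a15*y**2 - a11*z**2 + a5*x*y + a2*x*z + 2*(a16 + a18)*y*z + a3*x + 2*a4*y + 2*a1*z + a6,
    -a5*x**2 - a8*z**2 + a15*x*y - 2*a18*x*z + a12*y*z + 2*(a17 - a4)*x + a13*y + 2*a7*z + a14,
    -a2*x**2 - a12*y**2 - 2*a16*x*y + a11*x*z + a8*y*z + 2*(a19 - a1)*x + 2*(a20 - a7)*y + a9*z + a10,
]

# Generated tensor C_ab = L_(a;b); covariant derivatives reduce to partial derivatives here
C = sp.Matrix(3, 3, lambda i, j: (sp.diff(L[i], q[j]) + sp.diff(L[j], q[i]))/2)

# Killing tensor condition C_(ab;c) = 0 (fully symmetrised gradient vanishes)
is_kt = all(sp.simplify(sp.diff(C[i, j], q[k]) + sp.diff(C[j, k], q[i]) + sp.diff(C[k, i], q[j])) == 0
            for i in range(3) for j in range(3) for k in range(3))
print(is_kt)                 # True
print(sp.expand(C[0, 1]))    # -a5*x/2 - a15*y/2 + a16*z + a17, matching (16)
```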
## 5 The QFI \(I_{(1,1)}\) where \(\ell=1\)
We set \(L_{(1)a}=L_{a}\) and the QFI \(I_{(1,\ell)}\) for \(\ell=1\) becomes
\[I_{(1,1)}=\left(-\frac{t^{2}}{2}L_{(a;b)}+C_{ab}\right)\dot{q}^{a}\dot{q}^{b}+tL _{a}\dot{q}^{a}+\frac{t^{2}}{2}L_{a}V^{,a}+G(x,y,z) \tag{17}\]
where \(C_{ab}\) is a second order KT given by (14), the vector \(L_{a}\) is given by (15), the generated KT \(L_{(a;b)}\) is given by (16), and the following conditions must be satisfied:
\[\left(L_{b}V^{,b}\right)_{,a} = -2L_{(a;b)}V^{,b} \tag{18}\] \[G_{,a} = 2C_{ab}V^{,b}-L_{a}. \tag{19}\]
Equations (18) and (19) must be supplemented with the three integrability conditions for the function \(G\) and the three integrability conditions for the function \(L_{a}V^{,a}\).
Finally, we have an overdetermined system of twelve PDEs with the two functions \(G(x,y,z)\) and \(V(x,y,z)\) as unknowns, and forty free parameters. Obviously, a general solution is not feasible, and we have to look for special solutions, which are obtained by introducing simplifying assumptions.
### Case \(L_{a}=0\)
In this case, the QFI (17) is the well-known autonomous QFI
\[I_{(1,1)}(L_{a}=0)=C_{ab}\dot{q}^{a}\dot{q}^{b}+G(x,y,z) \tag{20}\]
where the second order KT \(C_{ab}\) has independent components
\[C_{11} = \frac{a_{6}}{2}y^{2}+\frac{a_{1}}{2}z^{2}+a_{4}yz+a_{5}y+a_{2}z+a _{3}\] \[C_{12} = \frac{a_{10}}{2}z^{2}-\frac{a_{6}}{2}xy-\frac{a_{4}}{2}xz-\frac{ a_{14}}{2}yz-\frac{a_{5}}{2}x-\frac{a_{15}}{2}y+a_{16}z+a_{17}\] \[C_{13} = \frac{a_{14}}{2}y^{2}-\frac{a_{4}}{2}xy-\frac{a_{1}}{2}xz-\frac{ a_{10}}{2}yz-\frac{a_{2}}{2}x+a_{18}y-\frac{a_{11}}{2}z+a_{19} \tag{21}\] \[C_{22} = \frac{a_{6}}{2}x^{2}+\frac{a_{7}}{2}z^{2}+a_{14}xz+a_{15}x+a_{12} z+a_{13}\] \[C_{23} = \frac{a_{4}}{2}x^{2}-\frac{a_{14}}{2}xy-\frac{a_{10}}{2}xz-\frac{ a_{7}}{2}yz-(a_{16}+a_{18})x-\frac{a_{12}}{2}y-\frac{a_{8}}{2}z+a_{20}\] \[C_{33} = \frac{a_{1}}{2}x^{2}+\frac{a_{7}}{2}y^{2}+a_{10}xy+a_{11}x+a_{8}y +a_{9}\]
the parameters \(a_{1},...,a_{20}\) are arbitrary constants and the function \(G(x,y,z)\) satisfies the condition
\[G_{,a}=2C_{ab}V^{,b}. \tag{22}\]
The integrability condition \(G_{,[ab]}=0\) gives:
\[0=C_{12}\left(V_{,yy}-V_{,xx}\right)+\left[\frac{a_{6}(y^{2}-x^{2})}{2}+\frac{(a_{1}-a_{7})z^{2}}{2}-(a_{14}x-a_{4}y)z-a_{15}x+a_{5}y+(a_{2}-a_{12})z+a_{3}-a_{13}\right]V_{,xy}+C_{13}V_{,yz}-C_{23}V_{,xz}+\frac{3}{2}(a_{6}y+a_{4}z+a_{5})V_{,x}-\frac{3}{2}(a_{6}x+a_{14}z+a_{15})V_{,y}+\left(\frac{3a_{14}}{2}y-\frac{3a_{4}}{2}x+2a_{18}+a_{16}\right)V_{,z} \tag{23}\]
\[0=C_{13}\left(V_{,zz}-V_{,xx}\right)+\left[\frac{a_{1}(z^{2}-x^{2})}{2}+\frac{(a_{6}-a_{7})y^{2}}{2}-(a_{10}x-a_{4}z)y-a_{11}x+(a_{5}-a_{8})y+a_{2}z+a_{3}-a_{9}\right]V_{,xz}+C_{12}V_{,yz}-C_{23}V_{,xy}+\frac{3}{2}(a_{4}y+a_{1}z+a_{2})V_{,x}+\left(\frac{3a_{10}}{2}z-\frac{3a_{4}}{2}x+2a_{16}+a_{18}\right)V_{,y}-\frac{3}{2}(a_{1}x+a_{10}y+a_{11})V_{,z} \tag{24}\]
\[0=C_{23}\left(V_{,zz}-V_{,yy}\right)+\left[\frac{a_{7}(z^{2}-y^{2})}{2}+\frac{(a_{6}-a_{1})x^{2}}{2}-(a_{10}y-a_{14}z)x+(a_{15}-a_{11})x-a_{8}y+a_{12}z+a_{13}-a_{9}\right]V_{,yz}+C_{12}V_{,xz}-C_{13}V_{,xy}+\left(\frac{3a_{10}}{2}z-\frac{3a_{14}}{2}y+a_{16}-a_{18}\right)V_{,x}+\frac{3}{2}(a_{14}x+a_{7}z+a_{12})V_{,y}-\frac{3}{2}(a_{10}x+a_{7}y+a_{8})V_{,z} \tag{25}\]
\[V(x,y,z)=F_{2}(\bar{w})w+F_{3}(\bar{w})+F_{4}(z) \tag{33}\]
is a new integrable potential due to the additional QFI \(I=\frac{1}{2}\dot{z}^{2}+F_{4}(z)\).
4) \(a_{19}\neq 0\) and \(a_{20}=ia_{19}\).
The potential is
\[V(x,y,z)=F_{2}^{\prime}z^{2}+F_{3}(w)z+F_{4}(w)+F_{2}(w)\bar{w} \tag{34}\]
where \(w=x+iy\), \(\bar{w}=x-iy\), \(F_{2},F_{3},F_{4}\) are arbitrary smooth functions of their arguments, and \(F_{2}^{\prime}\equiv\frac{dF_{2}}{dw}\).
The associated autonomous QFI (20) is
\[I_{(1,1)}=\frac{1}{2}\dot{z}\left(\dot{x}+i\dot{y}\right)+F_{2}(w)z+\frac{1}{2} \int F_{3}(w)dw. \tag{35}\]
Because the potential (34) is of the general form (28) for \(F_{1}(w,z)=F_{2}^{\prime}z^{2}+F_{3}(w)z+F_{4}(w)\), it admits the additional QFI (29). Therefore, it is integrable because the independent QFIs \(H\), (29) and (35) are in involution6.
Footnote 6: The PB of the QFIs (29) and (35) vanishes because for an integrable function of the form \(M(w)=\int F(w)dw\) with \(w=x+iy\), it holds that: \(M^{\prime}\equiv\frac{dM}{dw}=F\), \(M_{,x}=F\) and \(M_{,y}=iF\).
For \(F_{2}=k_{1}w+k_{2}\) and \(F_{3}=k_{3}\) where \(k_{1},k_{2},k_{3}\) are arbitrary constants, the potential (34) becomes
\[V(x,y,z)=k_{1}r^{2}+k_{2}\bar{w}+k_{3}z+F_{4}(w). \tag{36}\]
This is a new minimally superintegrable potential because it is separable in the coordinate \(z\).
5) \(a_{19}\neq 0\) and \(a_{20}=-ia_{19}\).
The potential is
\[V(x,y,z)=F_{2}^{\prime}z^{2}+F_{3}(\bar{w})z+F_{4}(\bar{w})+F_{2}(\bar{w})w. \tag{37}\]
where \(w=x+iy\), \(\bar{w}=x-iy\), and \(F_{2}^{\prime}\equiv\frac{dF_{2}}{d\bar{w}}\).
The associated autonomous QFI (20) is
\[I_{(1,1)}=\frac{1}{2}\dot{z}\left(\dot{x}-i\dot{y}\right)+F_{2}(\bar{w})z+ \frac{1}{2}\int F_{3}(\bar{w})d\bar{w}. \tag{38}\]
The potential (37) is of the general form (31) for \(F_{1}(\bar{w},z)=F_{2}^{\prime}z^{2}+F_{3}(\bar{w})z+F_{4}(\bar{w})\); therefore, it is integrable due to the additional QFI (32).
Moreover, for \(F_{2}=k_{1}\bar{w}+k_{2}\) and \(F_{3}=k_{3}\), the potential (37) becomes
\[V(x,y,z)=k_{1}r^{2}+k_{2}w+k_{3}z+F_{4}(\bar{w}). \tag{39}\]
This is a new minimally superintegrable potential because it is separable in the coordinate \(z\).
#### 5.1.2 The components of the KT \(C_{ab}\) are linear functions of \(x,y,z\)
The possibly non-zero parameters are the \(a_{2},a_{5},a_{8},a_{11},a_{12},a_{15},a_{16}\), and \(a_{18}\). In this case, there are six different combinations which lead to new results.
1) \(a_{2}=a\) and \(a_{5}=b\), where \(a,b\) are arbitrary constants.
The potential is
\[V(x,y,z)=(a^{2}+b^{2})x^{2}+4(az+by)^{2}+\frac{k_{1}}{x^{2}}+k_{2}(az+by)+F(ay- bz) \tag{40}\]
where \(F\) is an arbitrary smooth function of its argument and \(k_{1},k_{2}\) are arbitrary constants. For \(a=0\), the potential (40) reduces to the minimally superintegrable potential of the form (see Table 2)
\[V(x,y,z)=\frac{k_{1}}{2}(x^{2}+4y^{2})+\frac{k_{2}}{x^{2}}+k_{3}y+F(z). \tag{41}\]
The associated QFI (20) is
\[I_{(1,1)}=(aM_{2}-bM_{3})\dot{x}-\frac{k_{2}}{2}(a^{2}+b^{2})x^{2}-2(a^{2}+b^{ 2})(az+by)x^{2}+\frac{2k_{1}(az+by)}{x^{2}} \tag{42}\]
where \(M_{i}=(y\dot{z}-z\dot{y},z\dot{x}-x\dot{z},x\dot{y}-y\dot{x})\) is the angular momentum.
Since the potential (40) is separable in the coordinate \(x\), it admits the additional QFI
\[I=\frac{1}{2}\dot{x}^{2}+(a^{2}+b^{2})x^{2}+\frac{k_{1}}{x^{2}}.\]
However, it is not integrable because the PB \(\{I_{(1,1)},I\}\neq 0\).
2) \(a_{2}=a\) and \(a_{12}=b\), where \(a,b\) are arbitrary constants.
We find the fully separable potential
\[V(x,y,z)=k_{1}(x^{2}+y^{2}+4z^{2})+\frac{k_{2}}{x^{2}}+\frac{k_{3}}{y^{2}}+k_{4}z \tag{43}\]
where \(k_{1},k_{2},k_{3}\), and \(k_{4}\) are arbitrary constants. We note that the potential in Table I of [2] is a subcase of the potential (43) for \(k_{4}=0\).
The associated QFI (20) consists of the following two independent QFIs:
\[J_{1} = M_{2}\dot{x}+2z\left(\frac{k_{2}}{x^{2}}-k_{1}x^{2}\right)-\frac {k_{4}}{2}x^{2} \tag{44}\] \[J_{2} = -M_{1}\dot{y}+2z\left(\frac{k_{3}}{y^{2}}-k_{1}y^{2}\right)-\frac {k_{4}}{2}y^{2}. \tag{45}\]
Moreover, the potential (43) is of the integrable form \(V=\frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+F_{2}(R)+F_{3}(z)\) (see Table 1) for
\[F_{1}\left(\frac{y}{x}\right)=k_{2}\left[1+\left(\frac{y}{x}\right)^{2} \right]+k_{3}\left[1+\left(\frac{x}{y}\right)^{2}\right],\,\,\,F_{2}(R)=k_{1 }(x^{2}+y^{2}),\,\,\,F_{3}(z)=4k_{1}z^{2}+k_{4}z.\]
Therefore, it admits the additional QFI
\[J_{3}=\frac{1}{2}M_{3}^{2}+k_{2}\left(\frac{y}{x}\right)^{2}+k_{3}\left(\frac {x}{y}\right)^{2}. \tag{46}\]
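The conservation of these integrals is easy to confirm with computer algebra. A minimal sketch, assuming sympy is available (the variable names are chosen only for illustration), substitutes Newton's equations \(\ddot{q}^{a}=-V^{,a}\) for the potential (43) into \(dJ_{1}/dt\) and simplifies the result to zero:

```python
# Computer-algebra check: dJ1/dt = 0 for the potential (43).
import sympy as sp

t = sp.symbols('t', real=True)
k1, k2, k3, k4 = sp.symbols('k1 k2 k3 k4', real=True)
x, y, z = (sp.Function(name)(t) for name in ('x', 'y', 'z'))

V = k1*(x**2 + y**2 + 4*z**2) + k2/x**2 + k3/y**2 + k4*z

M2 = z*x.diff(t) - x*z.diff(t)
J1 = M2*x.diff(t) + 2*z*(k2/x**2 - k1*x**2) - k4*x**2/2      # Eq. (44)

newton = {q.diff(t, 2): -sp.diff(V, q) for q in (x, y, z)}   # q'' = -dV/dq
print(sp.simplify(J1.diff(t).subs(newton)))                  # -> 0
```

The integrals \(J_{2}\) and \(J_{3}\) can be verified in exactly the same way.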
In order to compare the QFIs (44), (45), (46) with the QFIs \(I_{3},I_{4}\) of eq. (3.43) in [2], we set \(k_{4}=0\) and we use the rotational parabolic coordinates (see eqs. (3.8) and (3.9) in [2]):
\[\zeta=r+z,\,\,\,\eta=r-z,\,\,\,\phi=\tan^{-1}\left(\frac{y}{x}\right)\]
or equivalently
\[x=\sqrt{\zeta\eta}\cos\phi,\,\,\,y=\sqrt{\zeta\eta}\sin\phi,\,\,\,z=\frac{1}{ 2}\left(\zeta-\eta\right).\]
We compute (for \(k_{4}=0\)):
\[J_{1} = M_{2}\dot{x}+(\zeta-\eta)\left(\frac{k_{2}}{\zeta\eta\cos^{2} \phi}-k_{1}\zeta\eta\cos^{2}\phi\right) \tag{47}\] \[J_{2} = -M_{1}\dot{y}+(\zeta-\eta)\left(\frac{k_{3}}{\zeta\eta\sin^{2} \phi}-k_{1}\zeta\eta\sin^{2}\phi\right)\] (48) \[J_{3} = \frac{1}{2}M_{3}^{2}+\frac{k_{2}}{\cos^{2}\phi}+\frac{k_{3}}{\sin ^{2}\phi}-k_{2}-k_{3}. \tag{49}\]
Then, \(J_{3}=I_{3}-k_{2}-k_{3}\) and
\[J_{1}+J_{2}=M_{2}\dot{x}-M_{1}\dot{y}-(\zeta-\eta)\left(k_{1}\zeta\eta-\frac{ k_{2}}{\zeta\eta\cos^{2}\phi}-\frac{k_{3}}{\zeta\eta\sin^{2}\phi}\right)=I_{4}.\]
There is a misprint in eq. (3.43) of [2] concerning the leading term of the QFI \(I_{4}\). It must be \(L_{2}P_{1}-L_{1}P_{2}\).
We conclude that the potential (43) is maximally superintegrable. However, from the seven QFIs (the QFIs \(H,J_{1},J_{2},J_{3}\) plus the three QFIs arising from the separability of \(x,y,z\)) only five are functionally independent.
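Functional independence can be tested numerically by evaluating the rank of the gradients of the FIs with respect to the phase-space variables at a generic point. A small sketch, assuming sympy and numpy are available; the sample parameter values and the random point are arbitrary:

```python
# Numerical functional-independence test for the seven FIs of the potential (43).
import numpy as np
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z px py pz', real=True)
k1, k2, k3, k4 = 0.7, 1.3, 0.4, 0.9      # arbitrary sample values of the parameters

V  = k1*(x**2 + y**2 + 4*z**2) + k2/x**2 + k3/y**2 + k4*z
H  = (px**2 + py**2 + pz**2)/2 + V
M1, M2, M3 = y*pz - z*py, z*px - x*pz, x*py - y*px

J1 = M2*px + 2*z*(k2/x**2 - k1*x**2) - k4*x**2/2    # Eq. (44)
J2 = -M1*py + 2*z*(k3/y**2 - k1*y**2) - k4*y**2/2   # Eq. (45)
J3 = M3**2/2 + k2*(y/x)**2 + k3*(x/y)**2            # Eq. (46)
Ix = px**2/2 + k1*x**2 + k2/x**2                    # separable pieces
Iy = py**2/2 + k1*y**2 + k3/y**2
Iz = pz**2/2 + 4*k1*z**2 + k4*z

phase = (x, y, z, px, py, pz)
jac = sp.Matrix([[sp.diff(F, v) for v in phase] for F in (H, J1, J2, J3, Ix, Iy, Iz)])

point = dict(zip(phase, np.random.default_rng(0).uniform(0.5, 1.5, 6)))
rank = np.linalg.matrix_rank(np.array(jac.subs(point).tolist(), dtype=float))
print(rank)      # the text states that only five of the seven FIs are independent
```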
3) \(a_{2}=a\) and \(a_{8}=b\), where \(a,b\) are arbitrary constants.
We find the separable potential
\[V(x,y,z)=\frac{k_{1}}{4}(x^{2}+16y^{2}+4z^{2})+\frac{k_{2}}{x^{2}}+k_{3}y \tag{50}\]
where \(k_{1},k_{2}\), and \(k_{3}\) are arbitrary constants.
The associated QFI (20) consists of the following two independent QFIs:
\[J_{1} = M_{2}\dot{x}+z\left(\frac{2k_{2}}{x^{2}}-\frac{k_{1}}{2}x^{2}\right) \tag{51}\] \[J_{2} = M_{1}\dot{z}-z^{2}\left(2k_{1}y+\frac{k_{3}}{2}\right). \tag{52}\]
Therefore, the separable potential (50) is a new maximally superintegrable potential.
4) \(a_{2}=a\) and \(a_{16}=-a_{18}=\frac{b}{2}\), where \(a,b\) are arbitrary constants.
The potential is
\[V(x,y,z)=\frac{k}{\sqrt{(ax+by)^{2}+(a^{2}+b^{2})z^{2}}} \tag{53}\]
where \(k\) is an arbitrary constant.
The associated QFI (20) is
\[I_{(1,1)}=aM_{2}\dot{x}-bM_{1}\dot{x}+azV. \tag{54}\]
In order to show that the potential (53) is integrable, we need one more independent FI in involution.
5) \(a_{2}=a_{12}\neq 0\).
In this case, we find the following three potentials7:
Footnote 7: We note that any linear combination of these potentials is a solution of the system of PDEs (23) - (25) for \(a_{2}=a_{12}\neq 0\).
\[V_{1}(x,y,z) = \frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+k(R^{2}+4z^{2}) \tag{55}\] \[V_{2}(x,y,z) = \frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+\frac{k_{1}z}{rR^{2} }-\frac{k_{2}}{r}\] (56) \[V_{3}(x,y,z) = \frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+kz \tag{57}\]
where \(k,k_{1},k_{2}\) are arbitrary constants, \(R=\sqrt{x^{2}+y^{2}}\) and \(r=\sqrt{x^{2}+y^{2}+z^{2}}\). The first two potentials, i.e. \(V_{1}\) and \(V_{2}\), are included in Table II of [2], whereas the third potential \(V_{3}\) is not included.
The associated QFIs (20) are:
- For the potential8 (55):
Footnote 8: In Table II of [2], there is a misprint in the QFIs \(I_{3}\) associated with the potentials (55) and (56). The leading part of \(I_{3}\) should be \(L_{2}P_{1}-P_{2}L_{1}\).
\[I_{(1,1)} = M_{2}\dot{x}-M_{1}\dot{y}+\frac{2zF_{1}\left(\frac{y}{x}\right)} {x^{2}+y^{2}}+\frac{2k_{1}z^{2}}{r(x^{2}+y^{2})}+\frac{k_{1}}{r}-\frac{k_{2}z} {r} \tag{58}\] \[= M_{2}\dot{x}-M_{1}\dot{y}+\frac{(\zeta-\eta)F_{1}(\tan\phi)}{ \zeta\eta}+\frac{k_{1}(\zeta^{2}+\eta^{2})}{\zeta\eta(\zeta+\eta)}-\frac{k_{2 }(\zeta-\eta)}{\zeta+\eta}.\]
- For the potential (57):
\[I_{(1,1)} = M_{2}\dot{x}-M_{1}\dot{y}+\frac{2zF_{1}\left(\frac{y}{x}\right)} {x^{2}+y^{2}}-k\frac{R^{2}}{2}. \tag{59}\]
We note that both the potentials (55) and (57) are minimally superintegrable, because they are of the form \(V=\frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+F_{2}(R)+F_{3}(z)\) (see Table 1).
Moreover, from cases 2) and 3) of the following subsection 5.1.3, the potential (56) becomes minimally superintegrable because it admits the additional QFIs (75) and (81) which are also in involution.
6) \(a_{2}\neq 0\), \(a_{12}=-a_{2}\), \(a_{16}=ia_{2}\) and \(a_{18}=-i\frac{a_{2}}{2}\).
The potential is
\[V(x,y,z)=k_{1}(R^{2}+4z^{2})+k_{2}z+\frac{k_{3}}{w^{2}}+k_{4}\frac{\bar{w}}{w^{3}} \tag{61}\]
where \(k_{1},k_{2},k_{3},k_{4}\) are arbitrary constants, \(w=x+iy\), and \(\bar{w}=x-iy\). This result coincides with the potential given in eq. (14) of [4] if we apply the canonical transformation \(x\to y\), \(y\to z\) and \(z\to x\).
The associated autonomous QFI (20) is
\[I_{1}=\frac{1}{2}(\dot{x}+i\dot{y})\left(M_{2}-iM_{1}\right)-k_{1}zw^{2}- \frac{k_{2}}{4}w^{2}-k_{4}\frac{z}{w^{2}}. \tag{62}\]
Moreover, the potential (61) admits the additional QFIs:
\[I_{2} = \frac{1}{2}\left(\dot{x}+i\dot{y}\right)^{2}+k_{1}w^{2}-\frac{k_ {4}}{w^{2}} \tag{63}\] \[I_{3} = \frac{1}{2}\dot{z}^{2}+4k_{1}z^{2}+k_{2}z\] (64) \[I_{4} = \frac{1}{2}M_{3}^{2}+k_{3}e^{-2i\theta}+k_{4}e^{-4i\theta}\] (65) \[I_{5} = \frac{1}{2}\left(M_{2}\dot{x}-M_{1}\dot{y}\right)+k_{3}\frac{z}{w ^{2}}+k_{4}\frac{z\bar{w}}{w^{3}}-k_{1}zR^{2}-k_{2}\frac{R^{2}}{4} \tag{66}\]
because it is of the form (28) for \(F_{1}=4k_{1}z^{2}+k_{2}z+\frac{k_{3}}{w^{2}}\) and \(F_{2}=k_{1}w+\frac{k_{4}}{w^{3}}\), and of the form (see subsection 5.1.2 and Table 6)
\[V(x,y,z)=\frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+k_{1}(R^{2}+4z^{2})+k_{2}z \tag{67}\]
for \(F_{1}\left(\frac{y}{x}\right)=k_{3}e^{-2i\theta}+k_{4}e^{-4i\theta}\), where \(\tan\theta=\frac{y}{x}\). Therefore, the potential (61) is maximally superintegrable.
#### 5.1.3 The components of the KT \(C_{ab}\) depend on \(xy,xz,yz\), \(x^{2}\), \(y^{2},z^{2}\)
In this case, the possibly non-zero parameters are the \(a_{1},a_{4},a_{6},a_{7},a_{10}\), and \(a_{14}\). New results are produced for the following six cases.
1) \(a_{1}=a,a_{6}=b\), and \(a_{7}=c\), where \(a,b,c\) are arbitrary constants.
The potential is (see Table II in [2])
\[V(x,y,z)=\frac{k_{1}}{x^{2}}+\frac{k_{2}}{y^{2}}+\frac{k_{3}}{z^{2}}+F(r) \tag{68}\]
where \(k_{1},k_{2},k_{3}\) are arbitrary constants, \(r=\sqrt{x^{2}+y^{2}+z^{2}}\) and \(F\) is an arbitrary smooth function of \(r\).
The associated QFI (20) consists of the following three independent FIs (one for each parameter \(a,b,c\)):
\[I_{1} = \frac{1}{2}M_{1}^{2}+k_{2}\frac{z^{2}}{y^{2}}+k_{3}\frac{y^{2}}{z ^{2}} \tag{69}\] \[I_{2} = \frac{1}{2}M_{2}^{2}+k_{1}\frac{z^{2}}{x^{2}}+k_{3}\frac{x^{2}}{ z^{2}}\] (70) \[I_{3} = \frac{1}{2}M_{3}^{2}+k_{1}\frac{y^{2}}{x^{2}}+k_{2}\frac{x^{2}}{ y^{2}}. \tag{71}\]
Using spherical coordinates \(x=r\sin\theta\cos\phi\), \(y=r\sin\theta\sin\phi\) and \(z=r\cos\theta\), the QFIs (69) - (71) coincide with those found in Table II of [2]. Moreover, by adding the above QFIs, we find the QFI
\[I_{4}=\frac{1}{2}\mathbf{M}^{2}+\frac{k_{1}}{\sin^{2}\theta\cos^{2}\phi}+ \frac{k_{2}}{\sin^{2}\theta\sin^{2}\phi}+\frac{k_{3}}{\cos^{2}\theta} \tag{72}\]
where \(\mathbf{M}^{2}=M_{1}^{2}+M_{2}^{2}+M_{3}^{2}\) is the square magnitude of the angular momentum.
Even though the potential (68) admits the four independent QFIs \(H,I_{1},I_{2}\), and \(I_{3}\), it is not integrable because the PBs \(\{I_{i},I_{j}\}\neq 0\).
For \(F(r)=kr^{2}\), where \(k\) is an arbitrary constant, the potential (68) becomes (see Table I in [2])
\[V(x,y,z)=k\left(x^{2}+y^{2}+z^{2}\right)+\frac{k_{1}}{x^{2}}+\frac{k_{2}}{y^{2 }}+\frac{k_{3}}{z^{2}} \tag{73}\]
which is maximally superintegrable (see Tables 1 and 3).
2) \(a_{6}\neq 0\).
The potential is
\[V(x,y,z)=\frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+F_{2}(R,z) \tag{74}\]
where \(F_{1}\) and \(F_{2}\) are arbitrary smooth functions of their arguments.
The associated QFI (20) is
\[I_{(1,1)}=\frac{1}{2}M_{3}^{2}+F_{1}\left(\frac{y}{x}\right). \tag{75}\]
If \(F_{2}(R,z)=F_{3}(R)+F_{4}(z)\) where \(F_{3}\) and \(F_{4}\) are arbitrary smooth functions, then the potential (74) is integrable (see Table 1).
3) \(a_{1}=a_{6}=a_{7}\neq 0\).
The potential is
\[V(x,y,z;m)=\sum_{j=1}^{m}\frac{F_{j}\left(\frac{y}{x}\right)}{R^{2}}N_{j}\left( \frac{z}{R}\right)+F(r) \tag{76}\]
where \(F_{j},N_{j}\) and \(F\) with \(j=1,2,...,m\) are smooth functions of their arguments.
The associated QFI (20) is
\[I_{(1,1)}=\frac{1}{2}{\bf M}^{2}+\sum_{j=1}^{m}\frac{r^{2}F_{j}\left(\frac{y}{ x}\right)}{R^{2}}N_{j}\left(\frac{z}{R}\right). \tag{77}\]
We note that for
a. \(m=2\), \(N_{1}=F_{2}=1\), \(N_{2}=k_{2}\frac{R^{2}}{z^{2}}\), \(F(r)=k_{1}r^{2}\); and
b. \(m=2\), \(N_{1}=F_{2}=1\), \(N_{2}=\frac{k_{1}\frac{z}{R}}{\sqrt{1+\frac{z^{2}}{R^{2}}}}\), \(F(r)=-\frac{k_{2}}{r}\)
the potential (76) reduces, respectively, to the following potentials (see Table II in [2]):
\[V_{1}(x,y,z) = \frac{F_{1}\left(\frac{y}{x}\right)}{x^{2}+y^{2}}+k_{1}(x^{2}+y^ {2}+z^{2})+\frac{k_{2}}{z^{2}} \tag{78}\] \[V_{2}(x,y,z) = \frac{F_{1}\left(\frac{y}{x}\right)}{x^{2}+y^{2}}+\frac{k_{1}z}{r (x^{2}+y^{2})}-\frac{k_{2}}{r} \tag{79}\]
where \(k_{1}\) and \(k_{2}\) are arbitrary constants. Both the above potentials are also of the general form (74).
The associated QFIs (77) are as follows:
- For the potential (78):
\[I_{(1,1)}=\frac{1}{2}{\bf M}^{2}+\frac{r^{2}F_{1}\left(\frac{y}{x}\right)}{x^ {2}+y^{2}}+\frac{k_{2}(x^{2}+y^{2})}{z^{2}}. \tag{80}\]
-For the potential (79):
\[I_{(1,1)}=\frac{1}{2}{\bf M}^{2}+\frac{r^{2}F_{1}\left(\frac{y}{x}\right)}{x^ {2}+y^{2}}+\frac{k_{1}zr}{x^{2}+y^{2}}. \tag{81}\]
We note that the potential (78) is minimally superintegrable because it is separable in the coordinate \(z\) and is also of the form (74).
4) \(a_{1}\neq 0\), \(a_{7}=-a_{1}\) and \(a_{10}=ia_{1}\).
The potential is
\[V(x,y,z)=F_{3}(w)+\frac{z^{2}}{w}F_{2}(w)+\frac{F_{4}\left(\frac{z}{w}\right)} {w^{2}}+F_{2}(w)\bar{w} \tag{82}\]
where \(w=x+iy\), \(\bar{w}=x-iy\), and \(F_{2}\), \(F_{3}\), \(F_{4}\) are arbitrary smooth functions of their arguments.
The associated QFI (20) is
\[I_{1}=\frac{1}{2}\left(M_{2}-iM_{1}\right)^{2}+F_{4}\left(\frac{z}{w}\right). \tag{83}\]
Moreover, the potential (82) admits the additional QFI
\[I_{2}=\left(\dot{x}+i\dot{y}\right)^{2}+4\int F_{2}(w)dw \tag{84}\]
because it is of the form (28) with \(F_{1}=F_{3}(w)+\frac{z^{2}}{w}F_{2}(w)+\frac{F_{4}\left(\frac{z}{w}\right)}{w^ {2}}\). Since the PB \(\{I_{1},I_{2}\}=0\), the potential (82) is (Liouville) integrable.
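The vanishing of the bracket can also be verified directly for a concrete member of the family (82). The sketch below assumes sympy; the particular choice \(F_{2}(w)=w\), \(F_{3}=0\), \(F_{4}(u)=u^{2}\) is only an example:

```python
# Symbolic check: {I1, I2} = 0 for one member of the family (82),
# choosing F2(w) = w, F3 = 0, F4(u) = u**2.
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z px py pz')
qs, ps = (x, y, z), (px, py, pz)

def poisson(A, B):
    return sum(sp.diff(A, q)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, q)
               for q, p in zip(qs, ps))

w  = x + sp.I*y
M1 = y*pz - z*py
M2 = z*px - x*pz

I1 = (M2 - sp.I*M1)**2/2 + (z/w)**2      # QFI (83) with F4(u) = u**2
I2 = (px + sp.I*py)**2 + 2*w**2          # QFI (84): 4*Int(F2)dw = 2*w**2 for F2 = w

print(sp.simplify(poisson(I1, I2)))      # -> 0
```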
Finally, for \(F_{2}=k_{1}w\) and \(F_{4}=k_{2}\frac{w^{2}}{z^{2}}\), the potential (82) becomes
\[V(x,y,z)=k_{1}r^{2}+\frac{k_{2}}{z^{2}}+F_{3}(w) \tag{85}\]
which is a new minimally superintegrable potential due to the separability in the \(z\)-coordinate.
5) \(a_{1}\neq 0\), \(a_{7}=-a_{1}\) and \(a_{10}=-ia_{1}\).
Similarly to the previous case 4), we find the integrable potential
\[V(x,y,z)=F_{3}(\bar{w})+\frac{z^{2}}{\bar{w}}F_{2}(\bar{w})+\frac{F_{4}\left( \frac{z}{\bar{w}}\right)}{\bar{w}^{2}}+F_{2}(\bar{w})w \tag{86}\]
and the associated QFI
\[I_{1}=\frac{1}{2}\left(M_{2}+iM_{1}\right)^{2}+F_{4}\left(\frac{z}{\bar{w}} \right). \tag{87}\]
Moreover, the potential (86) admits the additional QFI
\[I_{2}=(\dot{x}-i\dot{y})^{2}+4\int F_{2}(\bar{w})d\bar{w} \tag{88}\]
because it is of the form (31) for \(F_{1}=F_{3}(\bar{w})+\frac{z^{2}}{\bar{w}}F_{2}(\bar{w})+\frac{F_{4}\left( \frac{z}{\bar{w}}\right)}{\bar{w}^{2}}\).
Finally, for \(F_{2}=k_{1}\bar{w}\) and \(F_{4}=k_{2}\frac{\bar{w}^{2}}{z^{2}}\), the potential (86) becomes
\[V(x,y,z)=k_{1}r^{2}+\frac{k_{2}}{z^{2}}+F_{3}(\bar{w}) \tag{89}\]
which is a new minimally superintegrable potential due to the separability in the \(z\)-coordinate.
6) \(a_{4}\neq 0\) and \(a_{14}=-ia_{4}\).
The potential is (see eq. (12) of [4])
\[V(x,y,z)=k_{1}r^{2}+\frac{k_{2}}{w^{2}}+k_{3}\frac{z}{w^{3}}+k_{4}\frac{R^{2}- 3z^{2}}{w^{4}} \tag{90}\]
where \(k_{1},k_{2},k_{3},k_{4}\) are arbitrary constants and \(w=x+iy\).
The associated QFI (20) is
\[I_{1}=M_{3}\left(iM_{1}-M_{2}\right)+k_{3}\frac{y}{w}-2ik_{2}\frac{z}{w}-\frac {3ik_{3}}{2}\frac{z^{2}}{w^{2}}-4ik_{4}\frac{z(x^{2}+y^{2}-z^{2})}{w^{3}}. \tag{91}\]
Moreover, the potential (90) admits the additional QFIs:
\[I_{2} = \frac{1}{2}\left(M_{2}-iM_{1}\right)^{2}+k_{3}\frac{z}{w}-4k_{4} \frac{z^{2}}{w^{2}} \tag{92}\] \[I_{3} = \frac{1}{2}\dot{z}\left(\dot{x}+i\dot{y}\right)+k_{1}zw-\frac{k_{ 3}}{4w^{2}}+k_{4}\frac{z}{w^{3}}.\] (93) \[I_{4} = \frac{1}{2}\left(\dot{x}+i\dot{y}\right)^{2}+k_{1}w^{2}-\frac{k_{ 4}}{w^{2}} \tag{94}\]
\[I_{5} = \frac{1}{2}{\bf M}^{2}+k_{2}\frac{r^{2}}{w^{2}}+k_{3}\frac{zr^{2}}{w ^{3}}+k_{4}\frac{r^{2}(x^{2}+y^{2}-3z^{2})}{w^{4}}. \tag{95}\]
Specifically, we have the following:
1) It admits the QFI (92) because it is of the form (82) for \(F_{2}=k_{1}w+\frac{k_{4}}{w^{3}}\), \(F_{3}=\frac{k_{2}}{w^{2}}\) and \(F_{4}=k_{3}\frac{z}{w}-4k_{4}\frac{z^{2}}{w^{2}}\).
2) It admits the QFI (93) because it is of the form (34) for \(F_{2}=k_{1}w+\frac{k_{4}}{w^{3}}\), \(F_{3}=\frac{k_{3}}{w^{3}}\) and \(F_{4}=\frac{k_{2}}{w^{2}}\).
3) It admits the QFI (94) because it is of the form (28) for \(F_{1}=k_{1}z^{2}+\frac{k_{2}}{w^{2}}+k_{3}\frac{z}{w^{3}}-3k_{4}\frac{z^{2}}{w^{4}}\) and \(F_{2}=k_{1}w+\frac{k_{4}}{w^{3}}\).
4) It admits the QFI (95) because it is of the form (76) for \(m=4\), \(N_{1}=1\), \(F_{1}=k_{2}e^{-2i\theta}\), \(N_{2}=\frac{z}{R}\), \(F_{2}=k_{3}e^{-3i\theta}\), \(N_{3}=1\), \(F_{3}=k_{4}e^{-4i\theta}\), \(N_{4}=\frac{z^{2}}{R^{2}}\), \(F_{4}=-3k_{4}e^{-4i\theta}\) and \(F(r)=k_{1}r^{2}\).
We note that the variable \(\theta=\tan^{-1}\left(\frac{y}{x}\right)\); hence, \(w=x+iy=Re^{i\theta}\) and \(R^{2}=w\bar{w}\).
We conclude that the potential (90) is maximally superintegrable. Specifically, it is integrable due to the triplet \(H,I_{2},I_{4}\) and superintegrable because it admits the five independent QFIs \(H,I_{1},I_{2},I_{3},I_{4}\).
#### 5.1.4 The components of the KT \(C_{ab}\) depend on products of \(x,y,z\) of mixed degree
In this subsection, we continue by considering mixed combinations of the twenty parameters \(a_{1},...,a_{20}\) so that the components of the KT \(C_{ab}\) contain products of \(x,y,z\) of mixed degree. We note that we do not exhaust all possible cases; therefore, other authors could consider other cases and determine new non-decomposable integrable/superintegrable potentials in \(E^{3}\).
1) The only non-vanishing parameters are the \(a_{3}=\frac{iB}{4}\), \(a_{5}=B\), \(a_{13}=-\frac{iB}{4}\), \(a_{17}=-\frac{B}{4}\) and \(a_{15}=iB\), where \(B\) is an arbitrary constant.
The KT (21) is
\[C_{ab}=\left(\begin{array}{ccc}By+\frac{iB}{4}&-\frac{B}{2}x-\frac{iB}{2}y -\frac{B}{4}&0\\ -\frac{B}{2}x-\frac{iB}{2}y-\frac{B}{4}&iBx-\frac{iB}{4}&0\\ 0&0&0\end{array}\right). \tag{96}\]
For the KT (96) the system of PDEs (23) - (25) gives the potential
\[V(x,y,z)=4k_{1}\left(R^{2}-\frac{\bar{w}^{3}}{2}\right)+k_{2}\left(2w-3\bar{ w}^{2}\right)+k_{3}\bar{w}+F(z) \tag{97}\]
where \(k_{1},k_{2},k_{3}\) are arbitrary constants and \(F(z)\) is an arbitrary smooth function.
The associated QFI (20) is
\[I_{1} = \frac{1}{4}\left(\dot{x}+i\dot{y}\right)^{2}+iM_{3}\left(\dot{x} -i\dot{y}\right)-2k_{1}\left(\frac{3}{4}\bar{w}^{4}-w^{2}+R^{2}\bar{w}\right)- \tag{98}\] \[-2k_{2}\left(\bar{w}^{3}+2R^{2}\right)+k_{3}\left(\frac{\bar{w}^ {2}}{2}+w\right).\]
Moreover, the potential (97) admits the additional QFIs:
\[I_{2} = \frac{1}{8}\left(\dot{x}-i\dot{y}\right)^{2}+k_{1}\bar{w}^{2}+k_{ 2}\bar{w} \tag{99}\] \[I_{3} = \frac{1}{2}\dot{z}^{2}+F(z) \tag{100}\]
because it is of the form (31) for \(F_{1}=-2k_{1}\bar{w}^{3}-3k_{2}\bar{w}^{2}+k_{3}\bar{w}+F(z)\) and \(F_{2}=4k_{1}\bar{w}+2k_{2}\), and it is separable on the \(z\)-coordinate. Therefore, it is minimally superintegrable due to the four independent QFIs \(H,I_{1},I_{2},I_{3}\).
We note that the potential given in eq. (15) of [4] is a subcase of (97) for \(F(z)=k_{1}z^{2}+\frac{k_{4}}{z^{2}}\), where \(k_{4}\) is an arbitrary constant. As will be shown below, in this special case, the resulting potential admits additional QFIs which promote it to a maximally superintegrable potential.
2) The only non-vanishing parameters are the \(a_{1}=C\), \(a_{7}=-C\), \(a_{8}=-iD+2iC\), \(a_{10}=-iC\) and \(a_{11}=D\), where \(C,D\) are arbitrary constants.
The KT (21) is
\[C_{ab}=\left(\begin{array}{ccc}\frac{C}{2}z^{2}&-\frac{iC}{2}z^{2}&-\frac{C }{2}xz+\frac{iC}{2}yz-\frac{D}{2}z\\ -\frac{iC}{2}z^{2}&-\frac{C}{2}z^{2}&\frac{iC}{2}xz+\frac{C}{2}yz+i\left( \frac{D}{2}-C\right)z\\ -\frac{C}{2}xz+\frac{iC}{2}yz-\frac{D}{2}z&\frac{iC}{2}xz+\frac{C}{2}yz+i\left( \frac{D}{2}-C\right)z&\frac{C}{2}(x^{2}-y^{2})+iCy(2-x)+D(x-iy)\end{array} \right). \tag{101}\]
For the KT (101) the system of PDEs (23) - (25) gives the potential (see eq. (15) of [4])
\[V(x,y,z)=4k_{1}\left(R^{2}-\frac{\bar{w}^{3}}{2}+\frac{z^{2}}{4}\right)+k_{2} \left(2w-3\bar{w}^{2}\right)+k_{3}\bar{w}+\frac{k_{4}}{z^{2}} \tag{102}\]
where \(k_{1},k_{2},k_{3},\) and \(k_{4}\) are arbitrary constants.
The associated QFI (20) consists of the following independent QFIs:
\[I_{1} = \frac{1}{2}\left(M_{2}+iM_{1}\right)^{2}+2i\dot{z}M_{1}+k_{1}z^{2 }\left(3\bar{w}^{2}-4iy\right)+2k_{2}z^{2}\left(2\bar{w}+1\right)- \tag{103}\] \[-k_{3}z^{2}+\frac{k_{4}}{z^{2}}\left(\bar{w}^{2}+4iy\right)\] \[I_{2} = \frac{1}{2}\dot{z}\left(M_{2}+iM_{1}\right)+k_{1}z^{2}\bar{w}+k_ {2}z^{2}-k_{4}\frac{\bar{w}}{z^{2}}. \tag{104}\]
Moreover, the potential (102) admits the three additional QFIs (98) - (100) because it is of the form (97) for \(F(z)=k_{1}z^{2}+\frac{k_{4}}{z^{2}}\). Therefore, it is maximally superintegrable.
3) The only non-vanishing parameters are the:
\[a_{1}=-2C,\,\,\,a_{2}=iB-C,\,\,\,a_{5}=a_{8}=iA,\,\,\,a_{7}=2C,\,\,\,a_{9}= \frac{iB}{4}-\frac{C}{4},\,\,\,a_{10}=-2iC,\,\,\,a_{11}=a_{15}=A,\]
\[a_{12}=-iB+2C,\,\,\,a_{13}=\frac{C}{4},\,\,\,a_{16}=-B-\frac{3iC}{2},\,\,\,a_ {18}=\frac{B}{2}+\frac{3iC}{2},\,\,\,a_{20}=\frac{iA}{4}\]
where \(A,B,\) and \(C\) are arbitrary constants.
The KT (21) has independent components:
\[C_{11} = -Cz^{2}+iAy+(iB-C)z\] \[C_{12} = -iCz^{2}-\frac{iA}{2}\bar{w}-\left(B+\frac{3iC}{2}\right)z\] \[C_{13} = Cxz-iCyz-\frac{iB-C}{2}x+\frac{1}{2}(B+3iC)y-\frac{A}{2}z \tag{105}\] \[C_{22} = Cz^{2}+Ax-(iB-2C)z-\frac{C}{4}\] \[C_{23} = iCxz-Cyz+\frac{B}{2}x+\frac{iB-2C}{2}y-\frac{iA}{2}z+\frac{iA}{4}\] \[C_{33} = -Cx^{2}+Cy^{2}-2iCxy+Aw+\frac{iB-C}{4}\]
where \(w=x+iy\).
For the KT (105) the system of PDEs (23) - (25) gives the potential (see eq. (16) of [4])
\[V(x,y,z)=k_{1}w+k_{2}\left(3w^{2}+z\right)+k_{3}\left(4w^{3}+3wz+\frac{\bar{w} }{4}\right)+k_{4}\left(\frac{5}{2}w^{4}+\frac{r^{2}}{2}+3w^{2}z\right) \tag{106}\]
where \(k_{1},k_{2},k_{3},\) and \(k_{4}\) are arbitrary constants.
The associated QFI (20) consists of the following independent QFIs:
\[I_{1} = M_{3}\dot{w}-\left(M_{1}+iM_{2}\right)\dot{z}-\frac{1}{2}\dot{y} \dot{z}+\frac{ik_{1}}{2}\left(w^{2}-z\right)+ik_{2}\left(2w^{3}-zw+\frac{i}{2} y\right)-\frac{ik_{3}}{8}\left(w^{2}-z\right)+ \tag{107}\] \[+ik_{3}\left(3w^{4}-z^{2}+iyw\right)-\frac{k_{4}}{2}y\left(w^{2} +z\right)+ik_{4}w\left(2w^{4}+zw^{2}-z^{2}\right)\] \[I_{2} = M_{1}\left(2\dot{x}+i\dot{y}\right)+iM_{2}\dot{x}+M_{3}\dot{z}+ \frac{i}{4}\dot{z}^{2}+\frac{ik_{2}}{2}\left(z-w^{2}\right)+ik_{3}w\left(z-w^{ 2}\right)+\] (108) \[+\frac{ik_{4}}{4}\left(z^{2}+2zw^{2}-3w^{4}\right)\]
\[I_{3} = \left(M_{1}+iM_{2}\right)^{2}+\left(2iM_{1}-M_{2}\right)\dot{w}-iM_ {3}\dot{z}+\frac{1}{4}\left(\dot{y}^{2}-\dot{z}^{2}\right)+k_{1}zw+\frac{ik_{1} }{2}y+ \tag{109}\] \[+2k_{2}w\left(2zw+iy\right)+\frac{k_{2}}{2}\left(w^{2}-z\right)+k _{3}w^{3}(6z+1)+k_{3}zw\left(2z-\frac{1}{4}\right)+\] \[+ik_{3}y\left(3w^{2}-\frac{1}{8}\right)-k_{3}xz+k_{4}w^{4}\left(4 z+\frac{3}{4}\right)+2ik_{4}yw^{3}+\] \[+k_{4}z^{2}\left(3w^{2}-\frac{1}{4}\right)+\frac{k_{4}}{2}\left( \frac{y^{2}}{2}-zR^{2}\right).\]
We note that the parameter \(A\) produces the QFI \(I_{1}\), \(B\) the \(I_{2}\), and \(C\) the \(I_{3}\).
Moreover, the potential (106) admits the additional QFIs:
\[I_{4} = \dot{w}^{2}+k_{3}w+k_{4}w^{2} \tag{110}\] \[I_{5} = \dot{z}\dot{w}+\left(k_{4}w+\frac{k_{3}}{2}\right)z+k_{4}w^{3}+ \frac{3k_{3}}{2}w^{2}+k_{2}w \tag{111}\]
because it is of the form (28) for \(F_{1}=k_{1}w+k_{2}(3w^{2}+z)+k_{3}w(4w^{2}+3z)+k_{4}\left(\frac{5}{2}w^{4}+3w ^{2}z+\frac{z^{2}}{2}\right)\) and \(F_{2}=\frac{k_{4}}{2}w+\frac{k_{3}}{4}\); and of the form (34) for \(F_{2}=\frac{k_{4}}{2}w+\frac{k_{3}}{4}\), \(F_{3}=3k_{4}w^{2}+3k_{3}w+k_{2}\) and \(F_{4}=\frac{5k_{4}}{2}w^{4}+4k_{3}w^{3}+3k_{2}w^{2}+k_{1}w\).
We compute the PB \(\{I_{2},I_{4}\}=0\); therefore, the potential (106) is maximally superintegrable.
#### 5.1.5 Special superintegrable potentials
In this subsection, we construct potentials whose form belongs to two or more of the previous general results. We have the following cases:
1) Consider the potential (see Table I in [2])
\[V(x,y,z)=-\frac{c_{1}}{r}+\frac{c_{2}}{x^{2}}+\frac{c_{3}}{y^{2}} \tag{112}\]
where \(c_{1},c_{2}\), and \(c_{3}\) are arbitrary constants. This potential is of the general form (68) for \(F(r)=-\frac{c_{1}}{r}\), \(k_{1}=c_{2}\), \(k_{2}=c_{3}\) and \(k_{3}=0\); and of the form (56) for \(k_{1}=0\), \(k_{2}=c_{1}\) and \(F_{1}\left(\frac{y}{x}\right)=c_{2}\left[1+\left(\frac{y}{x}\right)^{2}\right]+ c_{3}\left[1+\left(\frac{x}{y}\right)^{2}\right]\). Therefore, it admits the additional QFIs:
\[I_{1} = \frac{1}{2}M_{1}^{2}+c_{3}\frac{z^{2}}{y^{2}} \tag{113}\] \[I_{2} = \frac{1}{2}M_{2}^{2}+c_{2}\frac{z^{2}}{x^{2}}\] (114) \[I_{3} = \frac{1}{2}M_{3}^{2}+c_{2}\frac{y^{2}}{x^{2}}+c_{3}\frac{x^{2}}{ y^{2}}\] (115) \[I_{4} = M_{2}\dot{x}-M_{1}\dot{y}-2z\left(\frac{c_{1}}{2r}-\frac{c_{2}}{ x^{2}}-\frac{c_{3}}{y^{2}}\right). \tag{116}\]
We conclude that the potential (112) is maximally superintegrable because the QFIs \(H,I_{3},I_{4}\) are in involution and the five QFIs \(H,I_{1},I_{2},I_{3},I_{4}\) are functionally independent.
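As an independent sanity check, the conservation of \(I_{4}\) of (116) can also be monitored numerically along an orbit of the potential (112). A sketch assuming numpy and scipy are available; the parameter values and initial conditions are arbitrary:

```python
# Numerical sanity check: I4 of Eq. (116) along a generic orbit of (112).
import numpy as np
from scipy.integrate import solve_ivp

c1, c2, c3 = 1.0, 0.3, 0.2                      # arbitrary sample parameters

def rhs(t, s):
    x, y, z, vx, vy, vz = s
    r3 = (x*x + y*y + z*z)**1.5
    return [vx, vy, vz,
            -c1*x/r3 + 2*c2/x**3,               # acceleration = -grad V
            -c1*y/r3 + 2*c3/y**3,
            -c1*z/r3]

def I4(s):
    x, y, z, vx, vy, vz = s
    r = np.sqrt(x*x + y*y + z*z)
    M1, M2 = y*vz - z*vy, z*vx - x*vz
    return M2*vx - M1*vy - 2*z*(c1/(2*r) - c2/x**2 - c3/y**2)

s0 = [1.0, 0.8, 0.5, 0.1, 0.4, -0.3]
sol = solve_ivp(rhs, (0.0, 50.0), s0, rtol=1e-10, atol=1e-12, max_step=0.05)
drift = np.max(np.abs([I4(sol.y[:, k]) - I4(s0) for k in range(sol.y.shape[1])]))
print(drift)    # stays at the level of the integration error
```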
2) Consider the potential (see Table I in [2])
\[V(x,y,z)=\frac{c_{1}y}{x^{2}R}+\frac{c_{2}}{x^{2}}+\frac{c_{3}}{z^{2}} \tag{117}\]
where \(c_{1},c_{2}\), and \(c_{3}\) are arbitrary constants. This potential is of the form \(V=\frac{k_{1}}{x^{2}}+\frac{k_{2}}{R}+\frac{k_{3}y}{Rx^{2}}+F(z)\) (see Table 2) for \(k_{1}=c_{2}\), \(k_{2}=0\), \(k_{3}=c_{1}\), and \(F(z)=\frac{c_{3}}{z^{2}}\); and of the form (78) for \(k_{1}=0\), \(k_{2}=c_{3}\), and \(F_{1}\left(\frac{y}{x}\right)=\left(\frac{c_{1}}{\sqrt{1+\frac{x^{2}}{y^{2}}}}+c_{2}\right)\left(1+\frac{y^{2}}{x^{2}}\right)\). Therefore, it admits the additional QFIs:
\[I_{1} = \frac{1}{2}\dot{z}^{2}+\frac{c_{3}}{z^{2}} \tag{118}\]
\[I_{2} = \frac{1}{2}M_{3}^{2}+c_{2}\frac{y^{2}}{x^{2}}+c_{1}\frac{yR}{x^{2}} \tag{119}\] \[I_{3} = M_{3}\dot{x}-2c_{2}\frac{y}{x^{2}}-c_{1}\frac{x^{2}+2y^{2}}{x^{2}R}\] (120) \[I_{4} = \frac{1}{2}\mathbf{M}^{2}+c_{1}\frac{yr^{2}}{Rx^{2}}+c_{2}\frac{ r^{2}}{x^{2}}+c_{3}\frac{R^{2}}{z^{2}} \tag{121}\]
and it is maximally superintegrable.
3) Another maximally superintegrable potential is the (see Table I in [2])
\[V(x,y,z)=\frac{c_{1}y}{x^{2}R}+\frac{c_{2}}{x^{2}}+c_{3}z \tag{122}\]
where \(c_{1},c_{2}\), and \(c_{3}\) are arbitrary constants. This potential is of the form \(V=\frac{k_{1}}{x^{2}}+\frac{k_{2}}{R}+\frac{k_{3}y}{Rx^{2}}+F(z)\) (see Table 2) for \(k_{1}=c_{2}\), \(k_{2}=0\), \(k_{3}=c_{1}\), and \(F(z)=c_{3}z\); and of the form (57) for \(k=c_{3}\), and \(F_{1}\left(\frac{y}{x}\right)=\left(\frac{c_{1}}{\sqrt{1+\frac{x^{2}}{y^{2}}} }+c_{2}\right)\left(1+\frac{y^{2}}{x^{2}}\right)\). Therefore, it admits the additional QFIs:
\[I_{1} = \frac{1}{2}\dot{z}^{2}+c_{3}z \tag{123}\] \[I_{2} = \frac{1}{2}M_{3}^{2}+c_{2}\frac{y^{2}}{x^{2}}+c_{1}\frac{yR}{x^{2}}\] (124) \[I_{3} = M_{3}\dot{x}-2c_{2}\frac{y}{x^{2}}-c_{1}\frac{x^{2}+2y^{2}}{x^{2 }R}\] (125) \[I_{4} = M_{2}\dot{x}-M_{1}\dot{y}+c_{1}\frac{2yz}{x^{2}R}+c_{2}\frac{2z}{ x^{2}}-c_{3}\frac{R^{2}}{2}. \tag{126}\]
4) Consider the potential (see eq. (11) of [4])
\[V(x,y,z)=k_{1}r^{2}+k_{2}\frac{\bar{w}}{w^{3}}+\frac{k_{3}}{w^{2}}+\frac{k_{4} }{z^{2}} \tag{127}\]
where \(k_{1},k_{2},k_{3},k_{4}\) are arbitrary constants, \(w=x+iy\) and \(\bar{w}=x-iy\).
This potential admits the additional QFIs:
\[I_{1} = \frac{1}{2}\left(M_{2}-iM_{1}\right)^{2}+k_{4}\frac{w^{2}}{z^{2}} -k_{2}\frac{z^{2}}{w^{2}}. \tag{128}\] \[I_{2} = \frac{1}{2}\left(\dot{x}+i\dot{y}\right)^{2}+k_{1}w^{2}-\frac{k_{2 }}{w^{2}}\] (129) \[I_{3} = \frac{1}{2}M_{3}^{2}+k_{2}e^{-4i\theta}+k_{3}e^{-2i\theta}=\frac {1}{2}M_{3}^{2}+k_{2}\left(\frac{\bar{w}}{w}\right)^{2}+k_{3}\frac{\bar{w}}{w}\] (130) \[I_{4} = \frac{1}{2}\dot{z}^{2}+k_{1}z^{2}+\frac{k_{4}}{z^{2}}\] (131) \[I_{5} = \frac{1}{2}\mathbf{M}^{2}+k_{2}\frac{r^{2}\bar{w}}{w^{3}}+k_{3} \frac{r^{2}}{w^{2}}+k_{4}\frac{r^{2}}{z^{2}} \tag{132}\]
because it is of the form (82) for \(F_{2}=k_{1}w+\frac{k_{2}}{w^{3}}\), \(F_{3}=\frac{k_{3}}{w^{2}}\) and \(F_{4}=-k_{2}\frac{z^{2}}{w^{2}}+k_{4}\frac{w^{2}}{z^{2}}\); of the form (28) for \(F_{1}=k_{1}z^{2}+\frac{k_{3}}{w^{2}}+\frac{k_{4}}{z^{2}}\) and \(F_{2}=k_{1}w+\frac{k_{2}}{w^{3}}\); of the form (74) for \(F_{1}=k_{2}e^{-4i\theta}+k_{3}e^{-2i\theta}\) and \(F_{2}=k_{1}r^{2}+\frac{k_{4}}{z^{2}}\); separable in the \(z\)-coordinate; and of the form (76) for \(m=2\), \(F_{1}=N_{2}=1\), \(N_{1}=k_{4}\frac{R^{2}}{z^{2}}\), \(F_{2}=k_{2}e^{-4i\theta}+k_{3}e^{-2i\theta}\) and \(F(r)=k_{1}r^{2}\).
The variable \(\theta=\tan^{-1}\left(\frac{y}{x}\right)\) and, hence, \(w=x+iy=Re^{i\theta}\). We recall that
\[w=Re^{i\theta}\implies e^{in\theta}=\left(\frac{w}{R}\right)^{n}\implies e^{in \theta}=\left(\frac{1+i\frac{y}{x}}{\sqrt{1+\left(\frac{y}{x}\right)^{2}}} \right)^{n}\]
where \(n\) is an arbitrary real constant. If \(n=2k\in\mathbb{R}\), then \(e^{2ik\theta}=\left(\frac{w}{\bar{w}}\right)^{k}\) because \(R^{2}=w\bar{w}\).
We conclude that the potential (127) is maximally superintegrable.
We collect the results of this section in Tables 4 - 7.
\begin{table}
\begin{tabular}{|l|l|} \hline Potential & LFIs and QFIs \\ \hline \(V=F_{1}\left(cz+by+(\sqrt{a^{2}+b^{2}+c^{2}}+a)x\right)+F_{2}\left(cz+by-(\sqrt{a^{2}+b^{2}+c^{2}}-a)x\right)\) & \(\begin{array}{l}I_{1}=(a\dot{x}+b\dot{y}+c\dot{z})\,\dot{x}+a(F_{1}+F_{2})+\\ +\sqrt{a^{2}+b^{2}+c^{2}}(F_{1}-F_{2})\end{array}\) \\ \hline \(V=(a^{2}+b^{2})x^{2}+4(az+by)^{2}+\frac{k_{1}}{x^{2}}+k_{2}(az+by)+F(ay-bz)\) & \(\begin{array}{l}I_{1}=aM_{2}\dot{x}-bM_{3}\dot{x}-\frac{k_{2}}{2}(a^{2}+b^{2})x^{2}-\\ -2(a^{2}+b^{2})(az+by)x^{2}+\frac{2k_{1}(az+by)}{x^{2}}\\ I_{2}=\frac{1}{2}\dot{x}^{2}+(a^{2}+b^{2})x^{2}+\frac{k_{1}}{x^{2}}\end{array}\) \\ \hline \(V=\frac{k}{\sqrt{(ax+by)^{2}+(a^{2}+b^{2})z^{2}}}\) & \(I_{1}=aM_{2}\dot{x}-bM_{1}\dot{x}+azV\) \\ \hline \(V=\frac{k_{1}}{x^{2}}+\frac{k_{2}}{y^{2}}+\frac{k_{3}}{z^{2}}+F(r)\) & \(\begin{array}{l}I_{2}=\frac{1}{2}M_{2}^{2}+k_{1}\frac{z^{2}}{x^{2}}+k_{3}\frac{x^{2}}{z^{2}}\\ I_{3}=\frac{1}{2}M_{3}^{2}+k_{1}\frac{y^{2}}{x^{2}}+k_{2}\frac{x^{2}}{y^{2}}\end{array}\) \\ \hline \(V=\frac{F_{1}\left(\frac{y}{x}\right)}{x^{2}+y^{2}}+F_{2}(x^{2}+y^{2},z)\) & \(\begin{array}{l}I_{1}=\frac{1}{2}M_{3}^{2}+F_{1}\left(\frac{y}{x}\right)\\ I=\frac{1}{2}\mathbf{M}^{2}+\sum_{j=1}^{m}\frac{r^{2}F_{j}\left(\frac{y}{x}\right)}{R^{2}}N_{j}\left(\frac{z}{R}\right)\end{array}\) \\ \hline \(V=F_{1}(w,z)+F_{2}(w)\bar{w}\) & \(I_{1}=(\dot{x}+i\dot{y})^{2}+4\int F_{2}(w)dw\) \\ \hline \(V=F_{1}(\bar{w},z)+F_{2}(\bar{w})w\) & \(I_{1}=(\dot{x}-i\dot{y})^{2}+4\int F_{2}(\bar{w})d\bar{w}\) \\ \hline \end{tabular}
\end{table}
Table 4: Possibly non-integrable potentials \(V(x,y,z)\) in \(E^{3}\) that admit one or more QFIs of the type \(I_{(1,1)}\) which are not in involution.
\begin{table}
\begin{tabular}{|l|l|} \hline Potential & LFIs and QFIs \\ \hline \(V=F_{2}(w)\bar{w}+F_{3}(w)+F_{4}(z)\) & \(\begin{array}{l}I_{1}=(\dot{x}+i\dot{y})^{2}+4\int F_{2}(w)dw\\ I_{2}=\frac{1}{2}\dot{z}^{2}+F_{4}(z)\end{array}\) \\ \hline \(V=F_{2}(\bar{w})w+F_{3}(\bar{w})+F_{4}(z)\) & \(\begin{array}{l}I_{1}=(\dot{x}-i\dot{y})^{2}+4\int F_{2}(\bar{w})d\bar{w}\\ I_{2}=\frac{1}{2}\dot{z}^{2}+F_{4}(z)\end{array}\) \\ \hline \(V=F_{2}^{\prime}z^{2}+F_{3}(w)z+F_{4}(w)+F_{2}(w)\bar{w}\) & \(\begin{array}{l}I_{1}=\frac{1}{2}\dot{z}(\dot{x}+i\dot{y})+F_{2}(w)z+\frac{1}{2}\int F_{3}(w)dw\\ I_{2}=(\dot{x}+i\dot{y})^{2}+4\int F_{2}(w)dw\end{array}\) \\ \hline \(V=F_{2}^{\prime}z^{2}+F_{3}(\bar{w})z+F_{4}(\bar{w})+F_{2}(\bar{w})w\) & \(\begin{array}{l}I_{1}=\frac{1}{2}\dot{z}(\dot{x}-i\dot{y})+F_{2}(\bar{w})z+\frac{1}{2}\int F_{3}(\bar{w})d\bar{w}\\ I_{2}=(\dot{x}-i\dot{y})^{2}+4\int F_{2}(\bar{w})d\bar{w}\end{array}\) \\ \hline \(V=F_{3}(w)+\frac{z^{2}}{w}F_{2}(w)+\frac{F_{4}\left(\frac{z}{w}\right)}{w^{2}}+F_{2}(w)\bar{w}\) & \(\begin{array}{l}I_{1}=\frac{1}{2}\left(M_{2}-iM_{1}\right)^{2}+F_{4}\left(\frac{z}{w}\right)\\ I_{2}=(\dot{x}+i\dot{y})^{2}+4\int F_{2}(w)dw\end{array}\) \\ \hline \(V=F_{3}(\bar{w})+\frac{z^{2}}{\bar{w}}F_{2}(\bar{w})+\frac{F_{4}\left(\frac{z}{\bar{w}}\right)}{\bar{w}^{2}}+F_{2}(\bar{w})w\) & \(\begin{array}{l}I_{1}=\frac{1}{2}\left(M_{2}+iM_{1}\right)^{2}+F_{4}\left(\frac{z}{\bar{w}}\right)\\ I_{2}=(\dot{x}-i\dot{y})^{2}+4\int F_{2}(\bar{w})d\bar{w}\end{array}\) \\ \hline \end{tabular}
\end{table}
Table 5: Integrable potentials \(V(x,y,z)\) in \(E^{3}\) that admit QFIs of the type \(I_{(1,1)}\).
\begin{table}
\begin{tabular}{|l|c|l|} \hline Potential & Ref [2] & LFIs and QFIs \\ \hline \(V=\frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+k_{1}(R^{2}+4z^{2})+k_{2}z\) & Table II & \(\begin{array}{l}I_{1}=\frac{1}{2}\dot{z}^{2}+4k_{1}z^{2}+k_{2}z\\ I_{2}=\frac{1}{2}M_{3}^{2}+F_{1}\left(\frac{y}{x}\right)\\ I_{3}=M_{2}\dot{x}-M_{1}\dot{y}+\frac{2zF_{1}\left(\frac{y}{x}\right)}{R^{2}}-2k_{1}zR^{2}-k_{2}\frac{R^{2}}{2}\end{array}\) \\ \hline \(V=\frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+\frac{k_{1}z}{rR^{2}}-\frac{k_{2}}{r}\) & Table II & \(\begin{array}{l}I_{1}=\frac{1}{2}M_{3}^{2}+F_{1}\left(\frac{y}{x}\right)\\ I_{3}=M_{2}\dot{x}-M_{1}\dot{y}+\frac{2zF_{1}\left(\frac{y}{x}\right)}{R^{2}}+\frac{2k_{1}z^{2}}{rR^{2}}+\frac{k_{1}}{r}-\frac{k_{2}z}{r}\end{array}\) \\ \hline \(V=\frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+k_{1}r^{2}+\frac{k_{2}}{z^{2}}\) & Table II & \(\begin{array}{l}I_{2}=\frac{1}{2}M_{3}^{2}+F_{1}\left(\frac{y}{x}\right)\\ I_{3}=\frac{1}{2}\mathbf{M}^{2}+\frac{r^{2}F_{1}\left(\frac{y}{x}\right)}{R^{2}}+\frac{k_{2}R^{2}}{z^{2}}\end{array}\) \\ \hline \(V=4k_{1}\left(R^{2}-\frac{\bar{w}^{3}}{2}\right)+k_{2}\left(2w-3\bar{w}^{2}\right)+k_{3}\bar{w}+F(z)\) & New & \(\begin{array}{l}I_{1}=\frac{1}{8}\left(\dot{x}-i\dot{y}\right)^{2}+k_{1}\bar{w}^{2}+k_{2}\bar{w}\\ I_{2}=\frac{1}{4}\left(\dot{x}+i\dot{y}\right)^{2}+iM_{3}\left(\dot{x}-i\dot{y}\right)-2k_{1}\left(\frac{3}{4}\bar{w}^{4}-w^{2}+R^{2}\bar{w}\right)-\\ -2k_{2}\left(\bar{w}^{3}+2R^{2}\right)+k_{3}\left(\frac{\bar{w}^{2}}{2}+w\right)\\ I_{3}=\frac{1}{2}\dot{z}^{2}+F(z)\end{array}\) \\ \hline \(V=k_{1}r^{2}+k_{2}\bar{w}+k_{3}z+F_{4}(w)\) & New & \(\begin{array}{l}I_{1}=\frac{1}{2}\dot{z}\left(\dot{x}+i\dot{y}\right)+k_{1}wz+k_{2}z+\frac{k_{3}}{2}w\\ I_{2}=\left(\dot{x}+i\dot{y}\right)^{2}+2k_{1}w^{2}+4k_{2}w\\ I_{3}=\frac{1}{2}\dot{z}^{2}+k_{1}z^{2}+k_{3}z\end{array}\) \\ \hline \(V=k_{1}r^{2}+\frac{k_{2}}{z^{2}}+F_{3}(w)\) & New & \(\begin{array}{l}I_{1}=\frac{1}{2}\left(M_{2}-iM_{1}\right)^{2}+k_{2}\frac{w^{2}}{z^{2}}\\ I_{2}=\left(\dot{x}+i\dot{y}\right)^{2}+2k_{1}w^{2}\\ I_{3}=\frac{1}{2}\dot{z}^{2}+k_{1}z^{2}+\frac{k_{2}}{z^{2}}\end{array}\) \\ \hline \(V=k_{1}r^{2}+\frac{k_{2}}{z^{2}}+F_{3}(\bar{w})\) & New & \(\begin{array}{l}I_{1}=\frac{1}{2}\left(M_{2}+iM_{1}\right)^{2}+k_{2}\frac{\bar{w}^{2}}{z^{2}}\\ I_{2}=\left(\dot{x}-i\dot{y}\right)^{2}+2k_{1}\bar{w}^{2}\\ I_{3}=\frac{1}{2}\dot{z}^{2}+k_{1}z^{2}+\frac{k_{2}}{z^{2}}\end{array}\) \\ \hline \end{tabular}
\end{table}
Table 6: Minimally superintegrable potentials \(V(x,y,z)\) in \(E^{3}\) that admit QFIs of the type \(I_{(1,1)}\).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Maximally superintegrable potentials} \\ \hline Potential & Ref [2] & Ref [4] & LFIs and QFIs \\ \hline \(V=k_{1}(R^{2}+4z^{2})+\frac{k_{2}}{x^{2}}+\frac{k_{3}}{y^{2}}+k_{4}z\) & Table I & eq. (13) & \(\begin{array}{l}I_{1}=\frac{1}{2}\dot{x}^{2}+k_{1}x^{2}+\frac{k_{2}}{x^{2}}\\ I_{2}=\frac{1}{2}\dot{y}^{2}+k_{1}y^{2}+\frac{k_{3}}{y^{2}}\\ I_{3}=\frac{1}{2}\dot{z}^{2}+4k_{1}z^{2}+k_{4}z\\ I_{4}=M_{2}\dot{x}+2z\left(\frac{k_{2}}{x^{2}}-k_{1}x^{2}\right)-\frac{k_{4}}{2}x^{2}\\ I_{5}=-M_{1}\dot{y}+2z\left(\frac{k_{3}}{y^{2}}-k_{1}y^{2}\right)-\frac{k_{4}}{2}y^{2}\\ I_{6}=\frac{1}{2}M_{3}^{2}+k_{2}\left(\frac{y}{x}\right)^{2}+k_{3}\left(\frac{x}{y}\right)^{2}\end{array}\) \\ \hline \(V=\frac{k_{1}}{4}\left(x^{2}+16y^{2}+4z^{2}\right)+\frac{k_{2}}{x^{2}}+k_{3}y\) & New & New & \(\begin{array}{l}I_{1}=\frac{1}{2}\dot{x}^{2}+\frac{k_{1}}{4}x^{2}+\frac{k_{2}}{x^{2}}\\ I_{2}=\frac{1}{2}\dot{y}^{2}+4k_{1}y^{2}+k_{3}y\\ I_{3}=\frac{1}{2}\dot{z}^{2}+k_{1}z^{2}\\ I_{4}=M_{2}\dot{x}+z\left(\frac{2k_{2}}{x^{2}}-\frac{k_{1}}{2}x^{2}\right)\\ I_{5}=M_{1}\dot{z}-z^{2}\left(2k_{1}y+\frac{k_{3}}{2}\right)\end{array}\) \\ \hline \(V=kr^{2}+\frac{k_{1}}{x^{2}}+\frac{k_{2}}{y^{2}}+\frac{k_{3}}{z^{2}}\) & Table I & eq. (10) & \(\begin{array}{l}I_{1}=\frac{1}{2}\dot{x}^{2}+kx^{2}+\frac{k_{1}}{x^{2}}\\ I_{2}=\frac{1}{2}\dot{y}^{2}+ky^{2}+\frac{k_{2}}{y^{2}}\\ I_{3}=\frac{1}{2}\dot{z}^{2}+kz^{2}+\frac{k_{3}}{z^{2}}\\ I_{4}=\frac{1}{2}M_{1}^{2}+k_{2}\frac{z^{2}}{y^{2}}+k_{3}\frac{y^{2}}{z^{2}}\\ I_{5}=\frac{1}{2}M_{2}^{2}+k_{1}\frac{z^{2}}{x^{2}}+k_{3}\frac{x^{2}}{z^{2}}\\ I_{6}=\frac{1}{2}M_{3}^{2}+k_{1}\frac{y^{2}}{x^{2}}+k_{2}\frac{x^{2}}{y^{2}}\end{array}\) \\ \hline \(V=-\frac{c_{1}}{r}+\frac{c_{2}}{x^{2}}+\frac{c_{3}}{y^{2}}\) & Table I & not included & \(\begin{array}{l}I_{1}=\frac{1}{2}M_{1}^{2}+c_{3}\frac{z^{2}}{y^{2}}\\ I_{2}=\frac{1}{2}M_{2}^{2}+c_{2}\frac{z^{2}}{x^{2}}\\ I_{3}=\frac{1}{2}M_{3}^{2}+c_{2}\frac{y^{2}}{x^{2}}+c_{3}\frac{x^{2}}{y^{2}}\\ I_{4}=M_{2}\dot{x}-M_{1}\dot{y}-2z\left(\frac{c_{1}}{2r}-\frac{c_{2}}{x^{2}}-\frac{c_{3}}{y^{2}}\right)\end{array}\) \\ \hline \end{tabular}
\end{table}
Table 7: Maximally superintegrable potentials \(V(x,y,z)\) in \(E^{3}\) that admit QFIs of the type \(I_{(1,1)}\).
**Remark:** The potential (68) admits the four independent QFIs \(H,I_{1},I_{2}\) and \(I_{3}\) (see Table 4); however, it is not second order integrable because the PBs \(\{I_{i},I_{j}\}\neq 0\) for \(i\neq j\). In Table II of [2], it is claimed that this potential is minimally superintegrable because in that paper superintegrability is defined without the requirement of the integrability (i.e. the vanishing of the PBs). Indeed, we have:
\[I_{4}\equiv\{I_{1},I_{3}\}=\{I_{2},I_{1}\}=\{I_{3},I_{2}\}=M_{1}\left(x^{2} \hat{y}\dot{z}+2k_{1}\frac{yz}{x^{2}}\right)+M_{2}\left(y^{2}\dot{x}\dot{z}+2k _{2}\frac{xz}{y^{2}}\right)+M_{3}\left(z^{2}\dot{x}\dot{y}+2k_{3}\frac{xy}{z^{ 2}}\right). \tag{133}\]
The third order (cubic) FI \(I_{4}\) cannot be used for establishing integrability because the PBs \(\{I_{i},I_{4}\}\neq 0\), where \(i=1,2,3\).
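The statement of the Remark is easy to reproduce with a short symbolic computation. A sketch, assuming sympy is available (the helper name is ours), shows that the bracket \(\{I_{1},I_{3}\}\) for the potential (68) is not identically zero:

```python
# Symbolic computation: {I1, I3} for the potential (68) is not zero.
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z px py pz', real=True)
k1, k2, k3 = sp.symbols('k1 k2 k3', real=True)
qs, ps = (x, y, z), (px, py, pz)

def poisson(A, B):
    return sp.expand(sum(sp.diff(A, q)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, q)
                         for q, p in zip(qs, ps)))

M1 = y*pz - z*py
M3 = x*py - y*px
I1 = M1**2/2 + k2*z**2/y**2 + k3*y**2/z**2      # Eq. (69)
I3 = M3**2/2 + k1*y**2/x**2 + k2*x**2/y**2      # Eq. (71)

bracket = sp.simplify(poisson(I1, I3))
print(bracket == 0)     # False: the bracket is the cubic FI discussed in (133)
```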
## 6 The QFI \(I_{(2,0)}\) where \(\ell=0\)
We set \(L_{(0)a}=L_{a}\) and the QFI \(I_{(2,\ell)}\) for \(\ell=0\) becomes
\[I_{(2,0)}=-tL_{(a;b)}\dot{q}^{a}\dot{q}^{b}+L_{a}\dot{q}^{a}+tL_{a}V^{.a} \tag{134}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Potential & Ref [2] & Ref [4] & LFIs and QFIs \\ \hline \(V=k_{1}(R^{2}+4z^{2})+k_{2}z+\frac{k_{3}}{w^{2}}+k_{4}\frac{\bar{w}}{w^{3}}\) & not included & eq. (14) & \(\begin{array}{l}I_{1}=\frac{1}{2}\left(\dot{x}+i\dot{y}\right)\left(M_{2}-iM_{1}\right)-k_{1}zw^{2}-\frac{k_{2}}{4}w^{2}-k_{4}\frac{z}{w^{2}}\\ I_{2}=\frac{1}{2}\left(\dot{x}+i\dot{y}\right)^{2}+k_{1}w^{2}-\frac{k_{4}}{w^{2}}\\ I_{3}=\frac{1}{2}\dot{z}^{2}+4k_{1}z^{2}+k_{2}z\\ I_{4}=\frac{1}{2}M_{3}^{2}+k_{3}e^{-2i\theta}+k_{4}e^{-4i\theta}\\ I_{5}=\frac{1}{2}\left(M_{2}\dot{x}-M_{1}\dot{y}\right)+k_{3}\frac{z}{w^{2}}+k_{4}\frac{z\bar{w}}{w^{3}}-k_{1}zR^{2}-k_{2}\frac{R^{2}}{4}\end{array}\) \\ \hline \(V=4k_{1}\left(R^{2}-\frac{\bar{w}^{3}}{2}+\frac{z^{2}}{4}\right)+k_{2}\left(2w-3\bar{w}^{2}\right)+k_{3}\bar{w}+\frac{k_{4}}{z^{2}}\) & not included & eq. (15) & \(\begin{array}{l}I_{3}=\frac{1}{2}\dot{z}^{2}+k_{1}z^{2}+\frac{k_{4}}{z^{2}}\\ I_{4}=\frac{1}{2}\left(M_{2}+iM_{1}\right)^{2}+2i\dot{z}M_{1}+k_{1}z^{2}\left(3\bar{w}^{2}-4iy\right)+2k_{2}z^{2}\left(2\bar{w}+1\right)-\\ -k_{3}z^{2}+\frac{k_{4}}{z^{2}}\left(\bar{w}^{2}+4iy\right)\\ I_{5}=\frac{1}{2}\dot{z}\left(M_{2}+iM_{1}\right)+k_{1}z^{2}\bar{w}+k_{2}z^{2}-k_{4}\frac{\bar{w}}{z^{2}}\end{array}\) \\ \hline \end{tabular}
\end{table}
Table 7: (continued) Maximally superintegrable potentials \(V(x,y,z)\) in \(E^{3}\) that admit QFIs of the type \(I_{(1,1)}\).
where the vector \(L_{a}\) is given by (15), the generated KT \(L_{(a;b)}\) is given by (16) and the following condition is satisfied
\[\left(L_{b}V^{,b}\right)_{,a}=-2L_{(a;b)}V^{,b}. \tag{135}\]
Condition (135) is a subcase of the general condition (22) in the case that the function \(G=-L_{a}V^{,a}\) and the general second order KT \(C_{ab}=L_{(a;b)}\) is reducible. In section 5.1, we have computed (not all) pairs of functions \((G,V)\) which satisfy the condition (22). Therefore, in order to find potentials \(V(x,y,z)\) that admit QFIs of the form (134), it is sufficient to solve the constraint
\[G=-L_{a}V^{,a} \tag{136}\]
for all pairs \((G,V)\) for which the KT \(C_{ab}\) is given by the reducible form (16). If the constraint (136) is not satisfied for some pairs \((G,V)\), then the corresponding potentials \(V\) of these pairs do not admit QFIs of the type \(I_{(2,0)}\).
Moreover, the QFI (134) is written as
\[I_{(2,0)}=-Jt+L_{a}\dot{q}^{a}\]
where \(J\) is the associated autonomous QFI (20). The PB \(\{H,I_{(2,0)}\}=\frac{\partial I_{(2,0)}}{\partial t}=-J\). Therefore:
_The time-dependent QFI \(I_{(2,0)}\) generates an autonomous QFI of the type \(I_{(1,0)}\)._
This is an interesting connection between (first degree) time-dependent and autonomous QFIs.
We consider the following cases.
1) In section 5.1.2, we determined the functions:
\[V(x,y,z) = (a^{2}+b^{2})x^{2}+4(az+by)^{2}+\frac{k_{1}}{x^{2}}+k_{2}(az+by)+ F(ay-bz) \tag{137}\] \[G(x,y,z) = -\frac{k_{2}}{2}(a^{2}+b^{2})x^{2}-2ab(ay+bz)x^{2}-2(a^{3}z+b^{3} y)x^{2}+\frac{2k_{1}(by+az)}{x^{2}}. \tag{138}\]
Then, the vector
\[L_{a}=\left(\begin{array}{c}bxy+axz+2b_{2}y+2b_{1}z+b_{3}\\ -bx^{2}-2b_{2}x+2b_{4}z+b_{6}\\ -ax^{2}-2b_{1}x-2b_{4}y+b_{5}\end{array}\right) \tag{139}\]
where \(b_{1},b_{2},...,b_{6}\) are arbitrary constants. Replacing (137), (138) and (139) in the condition (136), we find:
\[b_{1}=b_{2}=b_{3}=b_{4}=0,\,\,\,a=\pm ib,\,\,\,b_{5}=\pm b_{6}.\]
Therefore, the potential (137) becomes9 (see the potential \(V=F_{1}(y-bx)+F_{2}(z)\) in Table 2)
Footnote 9: The function \(F(iz\pm y)\) is either the \(F(y+iz)\) or the \(F(y-iz)\). Therefore, we can write \(F(y\pm iz)\).
\[V(x,y,z)=\frac{k_{1}}{x^{2}}+\underbrace{4b(y\pm iz)^{2}+bk_{2}(y\pm iz)+F(y \pm iz)}_{=F_{1}(y\pm iz)}=\frac{k_{1}}{x^{2}}+F_{1}(y\pm iz) \tag{140}\]
and the vector
\[L_{a}=\left(\begin{array}{c}bx(y\pm iz)\\ -bx^{2}+b_{6}\\ \pm i(-bx^{2}+b_{6})\end{array}\right). \tag{141}\]
The associated time-dependent QFI (134) is
\[I_{(2,0)} = -bt(y\pm iz)\dot{x}^{2}+btx\dot{x}(\dot{y}\pm i\dot{z})+b(y\pm iz)x\dot{x}-(bx^{2}-b_{6})\dot{y}\mp i(bx^{2}-b_{6})\dot{z}-\frac{2k_{1}bt(y\pm iz)}{x^{2}} \tag{142}\] \[= b_{6}J_{1}-bJ_{2}\]
which contains the independent FIs:
\[J_{1}=\dot{y}\pm i\dot{z},\,\,J_{2}=t\left(\dot{x}^{2}+\frac{2k_{1}}{x^{2}} \right)(y\pm iz)-x\dot{x}(y\pm iz)-J_{1}x(t\dot{x}-x).\]
From section 5.1.2 we have that the potential (140) admits also the autonomous QFIs:
\[J_{3}=(\pm iM_{2}-M_{3})\dot{x}+\frac{2k_{1}(y\pm iz)}{x^{2}},\ \ J_{4}=\frac{1}{2} \dot{x}^{2}+\frac{k_{1}}{x^{2}}.\]
We note that \(J_{2}=J_{3}t-x\dot{x}(y\pm iz)+J_{1}x^{2}\).
The potential (140) is maximally superintegrable due to the five linearly independent FIs \(H,J_{1},J_{2},J_{3}\), and \(J_{4}\). The autonomous FIs \(H,J_{1},J_{4}\) are in involution. This is a new result which could not be found in [2] because of the additional time-dependent QFI \(J_{2}\).
The PBs are:
\[\{H,J_{2}\}=\frac{\partial J_{2}}{\partial t}=J_{3},\ \ \{J_{1},J_{2}\}=\{J_{1},J_{ 3}\}=\{J_{1},J_{4}\}=0,\ \ \{J_{3},J_{4}\}=-J_{1}\left(\dot{x}^{2}+\frac{2k_{1}}{x^{2}}\right),\]
\[\{J_{2},J_{3}\}=-2(M_{3}\mp iM_{2})^{2}-\frac{4k_{1}}{x^{2}}(y\pm iz)^{2},\ \ \{J_{2},J_{4}\}=-\left(J_{1}t+y\pm iz\right)\left(\dot{x}^{2}+\frac{2k_{1}}{x^{2 }}\right)+2J_{1}x\dot{x}.\]
2) In section 5.1.2, we determined the functions:
\[V(x,y,z) = \frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+\frac{k_{1}z}{rR^{2} }-\frac{k_{2}}{r} \tag{143}\] \[G(x,y,z) = a_{2}\frac{2\hskip 1.0ptz\hskip 1.0ptF_{1}\left(\frac{y}{x}\right)}{R ^{2}}+a_{2}\frac{2k_{1}z^{2}}{rR^{2}}+a_{2}\frac{k_{1}}{r}-a_{2}\frac{k_{2}z }{r}. \tag{144}\]
Then, the vector
\[L_{a}=\left(\begin{array}{c}axz+2b_{2}y+2b_{1}z+b_{3}\\ ayz-2b_{2}x+2b_{4}z+b_{6}\\ -aR^{2}-2b_{1}x-2b_{4}y+b_{5}\end{array}\right). \tag{145}\]
Replacing (143), (144) and (145) in the condition (136), we get:
\[b_{1}=b_{2}=b_{3}=b_{4}=b_{5}=b_{6}=0,\ \ k_{2}=0.\]
Therefore, the potential (143) becomes
\[V(x,y,z)=\frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+\frac{k_{1}z}{rR^{2}}=R^{-2}\left[F_{1}\left(\frac{y}{x}\right)+\frac{k_{1}z}{r}\right] \tag{146}\]
and the vector \(L_{b}=a\left(xz,yz,-R^{2}\right)\).
The associated time-dependent QFI (134) is
\[I_{(2,0)}=-J_{1}t+z(x\dot{x}+y\dot{y})-(x^{2}+y^{2})\dot{z} \tag{147}\]
where \(J_{1}\) is the autonomous QFI
\[J_{1}=M_{2}\dot{x}-M_{1}\dot{y}+\frac{2zF_{1}\left(\frac{y}{x}\right)}{x^{2}+ y^{2}}+\frac{2k_{1}z^{2}}{r(x^{2}+y^{2})}+\frac{k_{1}}{r}.\]
From Table 6 we have that the potential (146) admits the additional autonomous QFIs:
\[J_{2}=\frac{1}{2}M_{3}^{2}+F_{1}\left(\frac{y}{x}\right),\ \ J_{3}=\frac{1}{2} \textbf{M}^{2}+\frac{r^{2}F_{1}\left(\frac{y}{x}\right)}{x^{2}+y^{2}}+\frac{k _{1}zr}{x^{2}+y^{2}}.\]
Therefore, (146) is a new maximally superintegrable potential due to the five independent QFIs \(H,J_{1},J_{2},J_{3}\), and (147). We note that this potential was considered to be minimally superintegrable (see e.g. [2]) because only autonomous QFIs were considered.
The PBs are \(\{H,I_{(2,0)}\}=-J_{1}\) and \(\{I_{(2,0)},J_{2}\}=0\).
3) In section 5.1.1, we determined the functions:
\[V(x,y,z) = F_{1}(x)+F_{2}(y)+F_{3}(z) \tag{148}\]
\[G(x,y,z) = 2a_{3}F_{1}(x)+2a_{13}F_{2}(y)+2a_{9}F_{3}(z). \tag{149}\]
Then, the vector
\[L_{a}=\left(\begin{array}{c}a_{3}x+2b_{2}y+2b_{1}z+b_{3}\\ a_{13}y-2b_{2}x+2b_{4}z+b_{6}\\ a_{9}z-2b_{1}x-2b_{4}y+b_{5}\end{array}\right). \tag{150}\]
Replacing (148), (149) and (150) in the condition (136), we obtain the following ordinary differential equation (ODE):
\[0 = a_{3}\left[xF_{1}^{\prime}+2F_{1}(x)\right]+b_{3}F_{1}^{\prime}+a _{13}\left[yF_{2}^{\prime}+2F_{2}(y)\right]+b_{6}F_{2}^{\prime}+a_{9}\left[zF_ {3}^{\prime}+2F_{3}(z)\right]+b_{5}F_{3}^{\prime}+ \tag{151}\] \[+2b_{2}\left(F_{1}^{\prime}y-F_{2}^{\prime}x\right)+2b_{1}\left( F_{1}^{\prime}z-F_{3}^{\prime}x\right)+2b_{4}\left(F_{2}^{\prime}z-F_{3}^{ \prime}y\right)\]
where \(F_{1}^{\prime}=\frac{dF_{1}}{dx}\), \(F_{2}^{\prime}=\frac{dF_{2}}{dy}\) and \(F_{3}^{\prime}=\frac{dF_{1}}{dz}\).
We consider the following subcases:
3.1. Subcase \(b_{1}=b_{2}=b_{4}=0\) and the pairs \((a_{3},b_{3})\), \((a_{13},b_{6})\), \((a_{9},b_{5})\) are not the origin \((0,0)\).
Then, the ODE (151) gives:
\[a_{3}\left[xF_{1}^{\prime}+2F_{1}(x)\right]+b_{3}F_{1}^{\prime} = \lambda_{1} \tag{152}\] \[a_{13}\left[yF_{2}^{\prime}+2F_{2}(y)\right]+b_{6}F_{2}^{\prime} = \lambda_{2}\] (153) \[a_{9}\left[zF_{3}^{\prime}+2F_{3}(z)\right]+b_{5}F_{3}^{\prime} = -\lambda_{1}-\lambda_{2} \tag{154}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are arbitrary constants, and the vector \(L_{a}=\left(\begin{array}{c}a_{3}x+b_{3}\\ a_{13}y+b_{6}\\ a_{9}z+b_{5}\end{array}\right)\).
Solving the system of ODEs (152) - (154), we find the functions:
\[F_{1}(x)=\frac{\lambda_{1}\left(\frac{a_{3}}{2}x^{2}+b_{3}x\right)+c_{1}}{(a_{ 3}x+b_{3})^{2}},\,\,\,F_{2}(y)=\frac{\lambda_{2}\left(\frac{a_{13}}{2}y^{2}+b_ {6}y\right)+c_{2}}{(a_{13}y+b_{6})^{2}},\,\,\,F_{3}(z)=-\frac{(\lambda_{1}+ \lambda_{2})\left(\frac{a_{9}}{2}z^{2}+b_{5}z\right)+c_{3}}{(a_{9}z+b_{5})^{2}}\]
where \(c_{1},c_{2}\), and \(c_{3}\) are arbitrary constants.
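A quick computer-algebra check, assuming sympy is available, confirms that the first of these functions solves the separated ODE (152); the other two are verified in the same way.

```python
# Check that F1(x) above satisfies the ODE (152):
#   a3*(x*F1' + 2*F1) + b3*F1' = lambda1.
import sympy as sp

x = sp.symbols('x', real=True)
a3, b3, c1, lam1 = sp.symbols('a3 b3 c1 lambda1', real=True)

F1 = (lam1*(a3*x**2/2 + b3*x) + c1) / (a3*x + b3)**2
residual = a3*(x*sp.diff(F1, x) + 2*F1) + b3*sp.diff(F1, x) - lam1
print(sp.simplify(residual))   # -> 0
```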
Then, the potential (148) becomes
\[V(x,y,z)=\frac{\lambda_{1}\left(\frac{a_{3}}{2}x^{2}+b_{3}x\right)+c_{1}}{(a_ {3}x+b_{3})^{2}}+\frac{\lambda_{2}\left(\frac{a_{13}}{2}y^{2}+b_{6}y\right)+c _{2}}{(a_{13}y+b_{6})^{2}}-\frac{(\lambda_{1}+\lambda_{2})\left(\frac{a_{9}}{ 2}z^{2}+b_{5}z\right)+c_{3}}{(a_{9}z+b_{5})^{2}}. \tag{155}\]
The associated time-dependent QFI (134) is
\[I_{(2,0)}=-Jt+(a_{3}x\dot{x}+a_{13}y\dot{y}+a_{9}z\dot{z})+b_{3}\dot{x}+b_{6} \dot{y}+b_{5}\dot{z} \tag{156}\]
where \(J=2a_{3}I_{1}+2a_{13}I_{2}+2a_{9}I_{3}\) is the sum of the three separated QFIs:
\[I_{1}=\frac{1}{2}\dot{x}^{2}+F_{1}(x),\,\,\,I_{2}=\frac{1}{2}\dot{y}^{2}+F_{2}( y),\,\,\,I_{3}=\frac{1}{2}\dot{z}^{2}+F_{3}(z). \tag{157}\]
Therefore, (155) is a new minimally superintegrable potential due to the four independent QFIs \(I_{1},I_{2},I_{3}\), and (156). We note that (155) depends on the eleven parameters \(a_{3},a_{9},a_{13},b_{3},b_{5},b_{6},c_{1},c_{2},c_{3},\lambda_{1}\) and \(\lambda_{2}\); hence, the time-dependent QFI (156) is irreducible.
- For \(\lambda_{1}=\lambda_{2}=0\) and \(a_{3}a_{13}a_{9}\neq 0\) we obtain the potential10
Footnote 10: Since \(a_{3}a_{13}a_{9}\neq 0\), we can set \(b_{3}=m_{1}a_{3}\), \(b_{6}=m_{2}a_{13}\), \(b_{5}=m_{3}a_{9}\), \(k_{1}=\frac{c_{1}}{a_{3}^{2}}\), \(k_{2}=\frac{c_{2}}{a_{13}^{2}}\) and \(k_{3}=\frac{c_{3}}{a_{9}^{2}}\).
\[V(x,y,z)=\frac{k_{1}}{(x+m_{1})^{2}}+\frac{k_{2}}{(y+m_{2})^{2}}+\frac{k_{3}}{(z +m_{3})^{2}} \tag{158}\]
where \(k_{1},k_{2},k_{3},m_{1},m_{2}\), and \(m_{3}\) are new arbitrary constants.
Then, the associated time-dependent QFI (156) consists of the independent QFIs:
\[I_{4}=-2I_{1}t+(x+m_{1})\dot{x},\,\,\,I_{5}=-2I_{2}t+(y+m_{2})\dot{y},\,\,\,I_{6 }=-2I_{3}t+(z+m_{3})\dot{z}.\]
Therefore, the potential (158) is maximally superintegrable due to the independent QFIs \(I_{1},I_{2},...,I_{6}\). Because time-dependent FIs are considered, the maximum number of independent FIs is six (i.e. greater than five).
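A rough numerical illustration of this statement can be obtained by integrating the equations of motion of (158) and monitoring the six quantities \(I_{1},\dots,I_{6}\) along the trajectory. The sketch below uses SciPy with arbitrarily chosen (positive) parameter values and initial data; each monitored quantity should stay constant up to the integration tolerance.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = np.array([1.0, 0.5, 2.0])        # k1, k2, k3 (chosen positive, so the motion is regular)
m = np.array([0.0, 0.3, -0.2])       # m1, m2, m3

def rhs(t, w):
    q, v = w[:3], w[3:]
    acc = 2.0*k/(q + m)**3           # qddot_i = -dV/dq_i for V = sum_i k_i/(q_i+m_i)^2
    return np.concatenate([v, acc])

def invariants(t, w):
    q, v = w[:3], w[3:]
    I123 = 0.5*v**2 + k/(q + m)**2   # the autonomous QFIs I1, I2, I3
    I456 = -2.0*I123*t + (q + m)*v   # the time-dependent FIs I4, I5, I6
    return np.concatenate([I123, I456])

w0 = [1.0, 1.2, 0.8, 0.3, -0.1, 0.2]
sol = solve_ivp(rhs, (0.0, 4.0), w0, dense_output=True, rtol=1e-11, atol=1e-13)
ts = np.linspace(0.0, 4.0, 9)
table = np.array([invariants(t, sol.sol(t)) for t in ts])
print(np.ptp(table, axis=0))         # each of the six spreads should be at the integration tolerance
```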
3.2. Subcase \(b_{1}=b_{2}=b_{4}=0\), \(a_{3}\neq 0\) and \(a_{9}=a_{13}=b_{5}=b_{6}=0\) (i.e. two pairs of parameters from subcase 3.1 vanish).
From the system of ODEs (152) - (154), we find that \(\lambda_{1}=\lambda_{2}=0\) and the potential (148) becomes
\[V(x,y,z)=\frac{k_{1}}{(x+m_{1})^{2}}+F_{2}(y)+F_{3}(z) \tag{159}\]
where \(k_{1},m_{1}\) are arbitrary constants and \(F_{2}(y)\), \(F_{3}(z)\) are arbitrary smooth functions.
The associated time-dependent QFI (134) is
\[I_{(2,0)}=-2I_{1}t+(x+m_{1})\dot{x} \tag{160}\]
where the QFI \(I_{1}=\frac{1}{2}\dot{x}^{2}+\frac{k_{1}}{(x+m_{1})^{2}}\). Therefore, the potential (159) is minimally superintegrable (see Table 2).
3.3. Subcase \(b_{1}=b_{2}=b_{4}=0\), \(a_{9}=b_{5}=0\) and the pairs \((a_{3},b_{3})\), \((a_{13},b_{6})\) are not the origin \((0,0)\).
From the system of ODEs (152) - (154), we find that \(\lambda_{2}=-\lambda_{1}\) and the potential (148) becomes
\[V(x,y,z)=\frac{\lambda_{1}\left(\frac{a_{3}}{2}x^{2}+b_{3}x\right)+c_{1}}{(a_{ 3}x+b_{3})^{2}}-\frac{\lambda_{1}\left(\frac{a_{13}}{2}y^{2}+b_{6}y\right)+c_{ 2}}{(a_{13}y+b_{6})^{2}}+F_{3}(z) \tag{161}\]
where \(F_{3}(z)\) is an arbitrary smooth function.
The associated time-dependent QFI (134) is
\[I_{(2,0)}=-2(a_{3}I_{1}+a_{13}I_{2})t+(a_{3}x+b_{3})\dot{x}+(a_{13}y+b_{6})\dot {y} \tag{162}\]
where the QFIs \(I_{1}\) and \(I_{2}\) are given by (157). We note that the potential (161) is a minimally superintegrable potential.
- For \(\lambda_{1}=0\) and \(a_{3}a_{13}\neq 0\) we obtain the maximally superintegrable potential (see Table 3)
\[V(x,y,z)=\frac{k_{1}}{(x+m_{1})^{2}}+\frac{k_{2}}{(y+m_{2})^{2}}+F_{3}(z) \tag{163}\]
which admits the additional time-dependent QFIs:
\[I_{4}=-2I_{1}t+(x+m_{1})\dot{x},\,\,\,I_{5}=-2I_{2}t+(y+m_{2})\dot{y}.\]
3.4. Subcase \(a_{3}=a_{9}=a_{13}=0\) (autonomous LFIs, \(L_{(a;b)}=0\) and \(G=0\)).
The ODE (151) becomes
\[2b_{2}\left(F_{1}^{\prime}y-F_{2}^{\prime}x\right)+2b_{1}\left(F_{1}^{\prime}z -F_{3}^{\prime}x\right)+2b_{4}\left(F_{2}^{\prime}z-F_{3}^{\prime}y\right)+b_{ 3}F_{1}^{\prime}+b_{6}F_{2}^{\prime}+b_{5}F_{3}^{\prime}=0 \tag{164}\]
and the vector
\[L_{a}=\left(\begin{array}{c}2b_{2}y+2b_{1}z+b_{3}\\ -2b_{2}x+2b_{4}z+b_{6}\\ -2b_{1}x-2b_{4}y+b_{5}\end{array}\right). \tag{165}\]
The ODE (164) admits solutions of the form:
\[F_{1}(x)=kx^{2}+k_{1}x,\,\,\,F_{2}(y)=ky^{2}+k_{2}y,\,\,\,F_{3}(z)=kz^{2}+k_{3}z \tag{166}\]
where \(k,k_{1},k_{2}\), and \(k_{3}\) are arbitrary constants. Then, we get the separable potential
\[V(x,y,z)=kr^{2}+k_{1}x+k_{2}y+k_{3}z. \tag{167}\]
Replacing (166) in (164), we find the following system of equations:
\[k_{1}b_{3}+k_{2}b_{6}+k_{3}b_{5} = 0 \tag{168}\] \[kb_{3}-k_{2}b_{2}-k_{3}b_{1} = 0 \tag{169}\]
\[kb_{6}+k_{1}b_{2}-k_{3}b_{4} = 0 \tag{170}\] \[kb_{5}+k_{1}b_{1}+k_{2}b_{4} = 0. \tag{171}\]
We consider the following cases.
- Case \(k=0\).
The potential (167) becomes
\[V(x,y,z)=k_{1}x+k_{2}y+k_{3}z \tag{172}\]
where \(k_{1}k_{2}k_{3}\neq 0\) in order to have a 3d potential.
Solving the system of equations (168) - (171) for \(k=0\), we find:
\[b_{1}=-\frac{k_{2}}{k_{1}}b_{4},\,\,\,b_{2}=\frac{k_{3}}{k_{1}}b_{4},\,\,\,b_{3 }=-\frac{k_{2}}{k_{1}}b_{6}-\frac{k_{3}}{k_{1}}b_{5}.\]
The associated QFI (134) reduces to the LFI
\[I=L_{a}\dot{q}^{a}=-2b_{4}\sum_{i=1}^{3}k_{i}M_{i}-b_{5}(k_{3}\dot{x}-k_{1} \dot{z})-b_{6}(k_{2}\dot{x}-k_{1}\dot{y})\]
which consists of the LFIs:
\[J_{1}=\sum_{i=1}^{3}k_{i}M_{i},\,\,\,J_{2}=k_{3}\dot{x}-k_{1}\dot{z},\,\,\,J_{ 3}=k_{2}\dot{x}-k_{1}\dot{y}.\]
Therefore, the separable potential (172) is maximally superintegrable.
- Case \(k\neq 0\).
The system of equations (168) - (171) implies that \(b_{3}=\frac{k_{2}}{k}b_{2}+\frac{k_{3}}{k}b_{1}\), \(b_{5}=-\frac{k_{1}}{k}b_{1}-\frac{k_{2}}{k}b_{4}\), and \(b_{6}=\frac{k_{3}}{k}b_{4}-\frac{k_{1}}{k}b_{2}\).
Similarly, we find the LFIs:
\[J_{1}=2kM_{1}+k_{2}\dot{z}-k_{3}\dot{y},\,\,\,J_{2}=2kM_{2}+k_{3}\dot{x}-k_{1} \dot{z},\,\,\,J_{3}=2kM_{3}+k_{1}\dot{y}-k_{2}\dot{x}.\]
Therefore, the separable potential (167) is maximally superintegrable. We note that the case \(k\neq 0\) introduces the oscillator term \(kr^{2}\); correspondingly, the FIs are modified by the addition of the components of the angular momentum.
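The LFIs \(J_{1},J_{2},J_{3}\) can be confirmed with a few lines of SymPy (an illustrative sketch; the velocity symbols stand for \(\dot{x},\dot{y},\dot{z}\)).

```python
import sympy as sp

x, y, z, vx, vy, vz, k, k1, k2, k3 = sp.symbols('x y z vx vy vz k k1 k2 k3')
V = k*(x**2 + y**2 + z**2) + k1*x + k2*y + k3*z        # the potential (167)
M1, M2, M3 = y*vz - z*vy, z*vx - x*vz, x*vy - y*vx     # angular momentum components
Js = [2*k*M1 + k2*vz - k3*vy,
      2*k*M2 + k3*vx - k1*vz,
      2*k*M3 + k1*vy - k2*vx]

def total_derivative(I):          # autonomous case: no explicit t-dependence
    return (vx*sp.diff(I, x) + vy*sp.diff(I, y) + vz*sp.diff(I, z)
            - sp.diff(V, x)*sp.diff(I, vx) - sp.diff(V, y)*sp.diff(I, vy)
            - sp.diff(V, z)*sp.diff(I, vz))

print([sp.simplify(total_derivative(J)) for J in Js])  # expected: [0, 0, 0]
```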
### Case \(L_{a}\) is a KV
We consider that \(L_{a}\) is a KV in \(E^{3}\). Then, \(L_{(a;b)}=0\) and the time-dependent QFI (134) becomes the time-dependent LFI
\[I=L_{a}\dot{q}^{a}+st \tag{173}\]
where the arbitrary constant \(s\) satisfies the condition
\[L_{a}V^{,a}=s. \tag{174}\]
Replacing the general KV \(L_{a}\) given by (13) in (174), we find the PDE
\[\left(b_{1}-b_{4}y+b_{5}z\right)\frac{\partial V}{\partial x}+\left(b_{2}+b_{ 4}x-b_{6}z\right)\frac{\partial V}{\partial y}+\left(b_{3}-b_{5}x+b_{6}y \right)\frac{\partial V}{\partial z}=s \tag{175}\]
where \(b_{1},...,b_{6}\) are arbitrary constants.
We consider the following cases.
1) Case \(b_{1}\neq 0\) and \(b_{4}=b_{5}=b_{6}=0\).
Then, the PDE (175) gives the potential
\[V=c_{1}x+F(y-c_{2}x,z-c_{3}x) \tag{176}\]
where \(c_{1}=\frac{s}{b_{1}}\), \(c_{2}=\frac{b_{2}}{b_{1}}\), \(c_{3}=\frac{b_{3}}{b_{1}}\) and \(F\) is an arbitrary smooth function of its arguments.
The associated LFI (173) is
\[I=\dot{x}+c_{2}\dot{y}+c_{3}\dot{z}+c_{1}t. \tag{177}\]
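A short symbolic check of this result, keeping \(F\) as an arbitrary function, is sketched below (illustrative only; vx, vy, vz stand for \(\dot{x},\dot{y},\dot{z}\)).

```python
import sympy as sp

t, x, y, z, vx, vy, vz, c1, c2, c3 = sp.symbols('t x y z vx vy vz c1 c2 c3')
F = sp.Function('F')
V = c1*x + F(y - c2*x, z - c3*x)                 # the potential (176)
I = vx + c2*vy + c3*vz + c1*t                    # the LFI (177)

Idot = (sp.diff(I, t) + vx*sp.diff(I, x) + vy*sp.diff(I, y) + vz*sp.diff(I, z)
        - sp.diff(V, x)*sp.diff(I, vx) - sp.diff(V, y)*sp.diff(I, vy)
        - sp.diff(V, z)*sp.diff(I, vz))
print(sp.simplify(Idot))                         # expected output: 0
```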
2) Case \(b_{2}\neq 0\), \(b_{1}=0\) and \(b_{4}=b_{5}=b_{6}=0\).
We find a subcase of the potential (176) for \(x\leftrightarrow y\) and \(c_{2}=0\).
3) Case \(b_{3}\neq 0\), \(b_{1}=b_{2}=0\) and \(b_{4}=b_{5}=b_{6}=0\).
We find a subcase of the potential (176) for \(c_{1}=c\), \(c_{2}=c_{3}=0\) and \(x\leftrightarrow z\).
4) Case \(b_{4}\neq 0\) and \(b_{5}=b_{6}=0\).
Then, the PDE (175) gives the potential
\[V=c_{0}\tan^{-1}\left(\frac{x+c_{2}}{y+c_{1}}\right)+F\left[\frac{1}{2}(x^{2} +y^{2})+c_{2}x+c_{1}y,z+c_{3}\tan^{-1}\left(\frac{x+c_{2}}{y+c_{1}}\right)\right] \tag{178}\]
where \(c_{0}=\frac{s}{b_{4}}\), \(c_{1}=-\frac{b_{1}}{b_{4}}\), \(c_{2}=\frac{b_{2}}{b_{4}}\), \(c_{3}=-\frac{b_{3}}{b_{4}}\) and \(F\) is an arbitrary function of its arguments.
The associated LFI (173) is
\[I=M_{3}-c_{1}\dot{x}+c_{2}\dot{y}-c_{3}\dot{z}+c_{0}t. \tag{179}\]
5) Case \(b_{4}\neq 0\), \(b_{6}=0\) and \(b_{2}=b_{3}=0\).
Then, the PDE (175) gives the potential
\[V=\frac{c_{0}}{\sqrt{1+c_{1}^{2}}}\tan^{-1}\left(\frac{y+c_{1}z+c_{2}}{\sqrt{1+c_{1}^{2}}\,x}\right)+F\left(z-c_{1}y,x^{2}+(1-c_{1}^{2})y^{2}+2c_{2}y+2c_{1}yz\right) \tag{180}\]
where \(c_{0}=\frac{s}{b_{4}}\), \(c_{1}=-\frac{b_{5}}{b_{4}}\), \(c_{2}=-\frac{b_{1}}{b_{4}}\) and \(F\) is an arbitrary function of its arguments.
The associated LFI (173) is
\[I=M_{3}-c_{1}M_{2}-c_{2}\dot{x}+c_{0}t. \tag{181}\]
6) Case \(b_{1}=b_{2}=b_{3}=s=0\) and \(b_{6}\neq 0\).
Then, the PDE (175) gives the potential
\[V=F(r,x-c_{1}y-c_{2}z) \tag{182}\]
where \(c_{1}=-\frac{b_{5}}{b_{6}}\), \(c_{2}=-\frac{b_{4}}{b_{6}}\) and \(F\) is an arbitrary function of its arguments.
The associated LFI (173) is
\[I=M_{1}-c_{1}M_{2}-c_{2}M_{3}. \tag{183}\]
We collect the results of section 6 in Tables 8 - 10.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multicolumn{2}{|c|}{Minimally superintegrable potentials} \\ \hline Potential & Ref [2] & LFIs and QFIs \\ \hline & & \(I=-Jt+(a_{3}x\dot{x}+a_{13}y\dot{y}+a_{9}z\dot{z})+\) \\ & & \(+b_{3}\dot{x}+b_{6}\dot{y}+b_{5}\dot{z}\) \\ \(V=\frac{\lambda_{1}\big{(}\frac{a_{3}}{2}x^{2}+b_{3}x^{2}\big{)}+c_{1}}{(a_{3}x +b_{3})^{2}}+\frac{\lambda_{2}\big{(}\frac{a_{13}}{2}y^{2}+b_{6}y\big{)}+c_{2}}{ (a_{13}y+b_{6})^{2}}-\) & New & \(I_{1}=\frac{1}{2}\dot{x}^{2}+\frac{\lambda_{1}\big{(}\frac{a_{3}}{2}x^{2}+b_{3 }x\big{)}+c_{1}}{(a_{3}x+b_{3})^{2}}\) \\ \(-\frac{(\lambda_{1}+\lambda_{2})\big{(}\frac{a_{3}}{2}z^{2}+b_{5}z\big{)}+c_{3 }}{(a_{9}z+b_{5})^{2}}\) & & \(I_{2}=\frac{1}{2}\dot{y}^{2}+\frac{\lambda_{2}\big{(}\frac{a_{3}}{2}y^{2}+b_{6 }y\big{)}+c_{2}}{(a_{13}y+b_{6})^{2}}\) \\ & & \(I_{3}=\frac{1}{2}\dot{z}^{2}-\frac{(\lambda_{1}+\lambda_{2})\big{(}\frac{a_{3}} {2}z^{2}+b_{5}z\big{)}+c_{3}}{(a_{9}z+b_{5})^{2}}\) \\ \hline \(V=\frac{k_{1}}{(x+m_{1})^{2}}+F_{2}(y)+F_{3}(z)\) & New & \(I_{1}=\frac{1}{2}\dot{x}^{2}+F_{2}(y)\) \\ & & \(I_{3}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ & & \(I_{4}=-2I_{1}t+(x+m_{1})\dot{x}\) \\ \hline \(V=\frac{\lambda_{1}\big{(}\frac{a_{3}}{2}x^{2}+b_{3}x\big{)}+c_{1}}{(a_{3}x+b _{3})^{2}}-\) & New & \(I_{1}=\frac{1}{2}\dot{x}^{2}+\frac{\lambda_{1}\big{(}\frac{a_{3}}{2}x^{2}+b_{ 3}x\big{)}+c_{1}}{(a_{3}x+b_{3})^{2}}\) \\ \(-\frac{\lambda_{1}\big{(}\frac{a_{13}}{2}y^{2}+b_{6}y\big{)}+c_{2}}{(a_{13}y+b _{6})^{2}}+F_{3}(z)\) & New & \(I_{3}=\frac{1}{2}\dot{z}^{2}+F_{3}(z)\) \\ & & \(I_{4}=-2(a_{3}I_{1}+a_{13}I_{2})t+(a_{3}x+b_{3})\dot{x}+\) \\ & & \(+(a_{13}y+b_{6})\dot{y}\) \\ \hline \end{tabular}
\end{table}
Table 8: Minimally superintegrable potentials \(V(x,y,z)\) in \(E^{3}\) that admit time-dependent QFIs of the form \(I_{(2,0)}\).
\begin{table}
\begin{tabular}{|l|l|l|} \hline Potential & LFIs and QFIs \\ \hline \(V=c_{1}x+F(y-c_{2}x,z-c_{3}x)\) & \(I=\dot{x}+c_{2}\dot{y}+c_{3}\dot{z}+c_{1}t\) \\ \hline \(V=c_{0}\tan^{-1}\left(\frac{x+c_{2}}{y+c_{1}}\right)+\) & \\ \(+F\left[\frac{1}{2}(x^{2}+y^{2})+c_{2}x+c_{1}y,z+c_{3}\tan^{-1}\left(\frac{x+c _{2}}{y+c_{1}}\right)\right]\) & \(I=M_{3}-c_{1}\dot{x}+c_{2}\dot{y}-c_{3}\dot{z}+c_{0}t\) \\ \hline \(V=\frac{c_{0}}{\sqrt{1+c_{1}^{2}}}\tan^{-1}\left(\frac{y+c_{1}z+c_{2}}{\sqrt{1 +c_{1}^{2}x}}\right)+\) & \\ \(+F\left(z-c_{1}y,x^{2}+(1-c_{1}^{2})y^{2}+2c_{2}y+2c_{1}yz\right)\) & \\ \hline \(V=F(r,x-c_{1}y-c_{2}z)\) & \(I=M_{1}-c_{1}M_{2}-c_{2}M_{3}\) \\ \hline \end{tabular}
\end{table}
Table 10: Possibly non-integrable potentials \(V(x,y,z)\) in \(E^{3}\) that admit LFIs of the form \(I=L_{a}\dot{q}^{a}+st\).
\begin{table}
\begin{tabular}{|l|l|l|} \hline Potential & Ref [2] & LFIs and QFIs \\ \hline \(V=\frac{k_{1}}{x^{2}}+F_{1}(y\pm iz)\) & New & \(\begin{array}{l}J_{1}=\dot{y}\pm iz\\ J_{2}=J_{3}t-x\dot{x}(y\pm iz)+J_{1}x^{2}\\ J_{3}=(\pm iM_{2}-M_{3})\dot{x}+\frac{2k_{1}(y\pm iz)}{x^{2}}\\ J_{4}=\frac{1}{2}\dot{x}^{2}+\frac{k_{1}}{x^{2}}\\ J_{5}=2kM_{2}+k_{3}\dot{x}-k_{1}\dot{z}\\ I_{6}=2kM_{3}+k_{1}\dot{y}-k_{2}\dot{x}\end{array}\) \\ \hline \end{tabular}
\end{table}
Table 9: Maximally superintegrable potentials \(V(x,y,z)\) in \(E^{3}\) that admit QFIs of the form \(I_{(2,0)}\).
## 7 The QFI \(I_{(3)}\)
In this section, we consider the QFI
\[I_{(3)}=e^{\lambda t}\left(-L_{(a;b)}\dot{q}^{a}\dot{q}^{b}+\lambda L_{a}\dot{q}^ {a}+L_{a}V^{,a}\right)\]
where the vector \(L_{a}\) is given by (15), the generated KT \(L_{(a;b)}\) is given by (16) and the following condition is satisfied
\[\left(L_{b}V^{,b}\right)_{,a}=-2L_{(a;b)}V^{,b}-\lambda^{2}L_{a}. \tag{184}\]
We consider several cases concerning the parameters \(a_{1},a_{2},...,a_{20}\) which define the vector \(L_{a}\) given in (15).
### Case containing KVs and the HV: parameters \(a_{1},a_{3},a_{4},a_{6},a_{7},a_{9},a_{10},a_{13},a_{14}\)
In this case, the vector \(L_{a}\) given in (15) has the general form
\[L_{a}=\left(\begin{array}{c}k_{1}x\\ k_{2}y\\ k_{3}z\end{array}\right)+\left(\begin{array}{c}b_{1}-b_{4}y+b_{5}z\\ b_{2}+b_{4}x-b_{6}z\\ b_{3}-b_{5}x+b_{6}y\end{array}\right) \tag{185}\]
where \(k_{1},...,k_{3},b_{1},...,b_{6}\) are arbitrary constants and the generated KT \(L_{(a;b)}=diag(k_{1},k_{2},k_{3})\).
We assume \(k_{1}=k_{2}=k_{3}=k\) is an arbitrary constant. Then, the vector (185) is the linear combination of the homothetic vector (HV) with the gradient and non-gradient KVs. The KT \(L_{(a;b)}=k\delta_{ab}\) and the time-dependent QFI \(I_{(3)}\) becomes
\[I=e^{\lambda t}\left(-k\dot{q}^{a}\dot{q}_{a}+\lambda L_{a}\dot{q}^{a}+L_{a}V^ {,a}\right). \tag{186}\]
The condition (184) is
\[\left(L_{b}V^{,b}+2kV\right)_{,a}+\lambda^{2}L_{a}=0. \tag{187}\]
From the integrability condition of (187), we get:
\[L_{a,b}-L_{b,a}=0\implies L_{a,b}=k\delta_{ab}\implies b_{4}=b_{5}=b_{6}=0.\]
This implies that only the HV and the gradient KVs survive, that is, the vector (185) becomes
\[L_{a}=\left(\begin{array}{c}kx+b_{1}\\ ky+b_{2}\\ kz+b_{3}\end{array}\right). \tag{188}\]
We consider the following special cases.
1) Case \(k=0\), \(b_{3}=0\) and \(b_{1}\neq 0\).
The vector \(L_{a}=(b_{1},b_{2},0)\). Then, equation (187) gives the potential
\[V(x,y,z)=\frac{\lambda^{2}}{2}\left(c_{1}^{2}-1\right)x^{2}+c_{2}x-c_{1} \lambda^{2}xy+F(y-c_{1}x,z) \tag{189}\]
where \(c_{1}\equiv\frac{b_{2}}{b_{1}}\), \(c_{2}\) are arbitrary constants and \(F\) is an arbitrary smooth function of its arguments.
The associated time-dependent LFI is
\[I=e^{\lambda t}\left(\lambda\dot{x}+c_{1}\lambda\dot{y}-\lambda^{2}x-c_{1} \lambda^{2}y+c_{2}\right). \tag{190}\]
We note that \(\{H,I\}=\frac{\partial I}{\partial t}=\lambda I\).
- For \(c_{1}=0\), the potential (189) becomes
\[V(x,y,z)=-\frac{\lambda^{2}}{2}x^{2}+c_{2}x+F(y,z) \tag{191}\]
and the associated LFI (190) is
\[I=e^{\lambda t}\left(\lambda\dot{x}-\lambda^{2}x+c_{2}\right). \tag{192}\]
In the case that \(F(y,z)=F_{1}(y)+F_{2}(z)\), the potential (191) is separable; therefore, it is minimally superintegrable due to the additional independent LFI (192).
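Before proceeding to the next case, the LFI (190) can be verified symbolically for an arbitrary smooth \(F\); the sketch below is illustrative, and the LFI (192) is recovered for \(c_{1}=0\).

```python
import sympy as sp

t, x, y, z, vx, vy, vz, c1, c2, lam = sp.symbols('t x y z vx vy vz c1 c2 lamda')
F = sp.Function('F')
V = lam**2*(c1**2 - 1)*x**2/2 + c2*x - c1*lam**2*x*y + F(y - c1*x, z)   # the potential (189)
I = sp.exp(lam*t)*(lam*vx + c1*lam*vy - lam**2*x - c1*lam**2*y + c2)    # the LFI (190)

Idot = (sp.diff(I, t) + vx*sp.diff(I, x) + vy*sp.diff(I, y) + vz*sp.diff(I, z)
        - sp.diff(V, x)*sp.diff(I, vx) - sp.diff(V, y)*sp.diff(I, vy)
        - sp.diff(V, z)*sp.diff(I, vz))
print(sp.simplify(Idot))   # expected output: 0
```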
2) Case \(k=0\) and \(b_{1}=b_{2}=b_{3}\).
We have \(L_{a}=(1,1,1)\).
The potential (after the transformation \(x\leftrightarrow z\))
\[V(x,y,z)=\frac{\lambda^{2}}{2}x^{2}+kx-\lambda^{2}(y+z)x+F(x-z,y-z) \tag{193}\]
where \(k\) is an arbitrary constant and \(F\) is an arbitrary smooth function of its arguments.
The associated LFI is
\[I=e^{\lambda t}\left[\lambda(\dot{x}+\dot{y}+\dot{z})-\lambda^{2}(x+y+z)+k \right]. \tag{194}\]
3) Case \(k\neq 0\).
We find the potential
\[V(x,y,z)=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}\left(\frac{b_{1}}{ k}x+\frac{b_{2}}{k}y+\frac{b_{3}}{k}z\right)+\frac{1}{\left(z+\frac{b_{3}}{k} \right)^{2}}F\left(\frac{y+\frac{b_{2}}{k}}{x+\frac{b_{1}}{k}},\frac{z+\frac{ b_{3}}{k}}{x+\frac{b_{1}}{k}}\right) \tag{195}\]
where \(F\) is an arbitrary function of its arguments.
The associated QFI is
\[I = e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}+ \left(\dot{y}-\frac{\lambda}{2}y\right)^{2}+\left(\dot{z}-\frac{\lambda}{2}z \right)^{2}-\lambda\left(\frac{b_{1}}{k}\dot{x}+\frac{b_{2}}{k}\dot{y}+\frac{ b_{3}}{k}\dot{z}\right)+\right. \tag{196}\] \[\left.+\frac{\lambda^{2}}{2}\left(\frac{b_{1}}{k}x+\frac{b_{2}}{k }y+\frac{b_{3}}{k}z+\frac{b_{1}^{2}}{2k^{2}}+\frac{b_{2}^{2}}{2k^{2}}+\frac{ b_{3}^{2}}{2k^{2}}\right)+\frac{2F}{\left(z+\frac{b_{3}}{k}\right)^{2}}\right].\]
3.1. For \(b_{1}=b_{2}=b_{3}=0\).
The potential
\[V(x,y,z)=-\frac{\lambda^{2}}{8}r^{2}+\frac{F\left(\frac{y}{x},\frac{\dot{z}}{ x}\right)}{z^{2}} \tag{197}\]
and the associated QFI is
\[I=e^{\lambda t}\left[\dot{x}^{2}+\dot{y}^{2}+\dot{z}^{2}-\lambda(x\dot{x}+y \dot{y}+z\dot{z})+\frac{\lambda^{2}}{4}r^{2}+\frac{2F\left(\frac{y}{x},\frac{ \dot{z}}{x}\right)}{z^{2}}\right]. \tag{198}\]
3.2. For \(b_{1}=k\) and \(b_{2}=b_{3}=0\).
The potential
\[V(x,y,z)=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}x+\frac{F\left( \frac{y}{x+1},\frac{z}{x+1}\right)}{z^{2}} \tag{199}\]
and the associated QFI is
\[I=e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}+\left(\dot{y }-\frac{\lambda}{2}y\right)^{2}+\left(\dot{z}-\frac{\lambda}{2}z\right)^{2}- \lambda\dot{x}+\frac{\lambda^{2}}{2}x+\frac{\lambda^{2}}{4}+\frac{2F}{z^{2}} \right]. \tag{200}\]
3.3. For \(b_{1}=b_{2}=k\) and \(b_{3}=0\).
The potential
\[V(x,y,z)=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}(x+y)+\frac{1}{z^{2 }}F\left(\frac{y+1}{x+1},\frac{z}{x+1}\right) \tag{201}\]
and the associated QFI is
\[I=e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}+\left(\dot{ y}-\frac{\lambda}{2}y\right)^{2}+\left(\dot{z}-\frac{\lambda}{2}z\right)^{2}- \lambda(\dot{x}+\dot{y})+\frac{\lambda^{2}}{2}(x+y+1)+\frac{2F}{z^{2}}\right]. \tag{202}\]
3.4. For \(b_{1}=b_{2}=b_{3}=k\).
The potential
\[V(x,y,z)=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}\left(x+y+z\right)+\frac {1}{\left(z+1\right)^{2}}F\left(\frac{y+1}{x+1},\frac{z+1}{x+1}\right) \tag{203}\]
and the associated QFI is
\[I = e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}+ \left(\dot{y}-\frac{\lambda}{2}y\right)^{2}+\left(\dot{z}-\frac{\lambda}{2}z \right)^{2}-\lambda\left(\dot{x}+\dot{y}+\dot{z}\right)+\right. \tag{204}\] \[\left.+\frac{\lambda^{2}}{2}\left(x+y+z+\frac{3}{2}\right)+\frac {2F}{\left(z+1\right)^{2}}\right].\]
3.5. For \(F\left(\frac{y+\frac{b_{2}}{k}}{x+\frac{b_{1}}{k}},\frac{z+\frac{b_{3}}{k}}{x+\frac{b_{1}}{k}}\right)=F_{1}\left(\frac{y+\frac{b_{2}}{k}}{z+\frac{b_{3}}{k}}\right)+c_{0}\left(\frac{z+\frac{b_{3}}{k}}{x+\frac{b_{1}}{k}}\right)^{2}\), where \(c_{0}\) is an arbitrary constant.
The potential
\[V(x,y,z)=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}\left(c_{1}x+c_{2}y+ c_{3}z\right)+\frac{c_{0}}{\left(x+c_{1}\right)^{2}}+\frac{1}{\left(z+c_{3} \right)^{2}}F_{1}\left(\frac{y+c_{2}}{z+c_{3}}\right) \tag{205}\]
where \(c_{i}=\frac{b_{i}}{k}\).
The associated QFI consists of the independent QFIs:
\[I_{1} = e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}- \lambda c_{1}\left(\dot{x}-\frac{\lambda}{2}x-\frac{\lambda c_{1}}{4}\right) +\frac{2c_{0}}{\left(x+c_{1}\right)^{2}}\right] \tag{206}\] \[I_{2} = e^{\lambda t}\left[\left(\dot{y}-\frac{\lambda}{2}y\right)^{2}+ \left(\dot{z}-\frac{\lambda}{2}z\right)^{2}-\lambda\left(c_{2}\dot{y}+c_{3} \dot{z}\right)+\frac{\lambda^{2}}{2}\left(c_{2}y+c_{3}z+\frac{c_{2}^{2}}{2}+ \frac{c_{3}^{2}}{2}\right)+\frac{2F_{1}}{\left(z+c_{3}\right)^{2}}\right]. \tag{207}\]
4) Case \(k_{1}=k_{2}=k_{3}=k\) and \(L_{a}=V_{,a}\).
We find the potential
\[V(x,y,z)=\frac{k}{2}r^{2}+b_{1}x+b_{2}y+b_{3}z. \tag{208}\]
Then, equation (187) gives \(k=-\frac{\lambda^{2}}{4}\) and the potential (208) becomes
\[V(x,y,z)=-\frac{\lambda^{2}}{8}r^{2}+b_{1}x+b_{2}y+b_{3}z. \tag{209}\]
The associated QFI is
\[I=e^{\lambda t}\left[\frac{\lambda^{2}}{4}\sum_{i=1}^{3}\left(\dot{q}^{i}- \frac{\lambda}{2}q^{i}\right)^{2}+\lambda(b_{i}\dot{q}^{i})-\frac{\lambda^{2} }{2}b_{i}q^{i}+\sum_{i=1}^{3}b_{i}^{2}\right]. \tag{210}\]
This QFI consists of the independent QFIs:
\[I_{1}=e^{\lambda t}\left[\frac{\lambda^{2}}{4}\left(\dot{x}-\frac{\lambda}{2} x\right)^{2}+\lambda b_{1}\dot{x}-\frac{\lambda^{2}}{2}b_{1}x+b_{1}^{2}\right],\,\,\,I_{2}=e^{ \lambda t}\left[\frac{\lambda^{2}}{4}\left(\dot{y}-\frac{\lambda}{2}y\right)^ {2}+\lambda b_{2}\dot{y}-\frac{\lambda^{2}}{2}b_{2}y+b_{2}^{2}\right],\]
\[I_{3}=e^{\lambda t}\left[\frac{\lambda^{2}}{4}\left(\dot{z}-\frac{\lambda}{2} z\right)^{2}+\lambda b_{3}\dot{z}-\frac{\lambda^{2}}{2}b_{3}z+b_{3}^{2}\right].\]
Therefore, the potential (209) is maximally superintegrable (see Table 3).
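Since (209) separates, it suffices to verify one coordinate. The following SymPy sketch (illustrative only) checks that the \(x\)-piece of (210) is a first integral of the \(x\)-part of the potential; the \(y\)- and \(z\)-pieces follow by relabeling.

```python
import sympy as sp

t, x, vx, b1, lam = sp.symbols('t x vx b1 lamda')
Vx = -lam**2*x**2/8 + b1*x                                     # x-part of (209)
I1 = sp.exp(lam*t)*(lam**2*(vx - lam*x/2)**2/4
                    + lam*b1*vx - lam**2*b1*x/2 + b1**2)       # x-piece of (210)

I1dot = sp.diff(I1, t) + vx*sp.diff(I1, x) - sp.diff(Vx, x)*sp.diff(I1, vx)
print(sp.simplify(I1dot))   # expected output: 0
```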
5) Case \(k_{1}k_{2}k_{3}\neq 0\) and \(b_{4}=b_{5}=b_{6}=0\).
The potential
\[V(x,y,z)=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}\left(\frac{b_{1}}{k_{1} }x+\frac{b_{2}}{k_{2}}y+\frac{b_{3}}{k_{3}}z\right)+\frac{c_{1}}{\left(x+\frac{b _{1}}{k_{1}}\right)^{2}}+\frac{c_{2}}{\left(y+\frac{b_{2}}{k_{2}}\right)^{2}}+ \frac{c_{3}}{\left(z+\frac{b_{3}}{k_{3}}\right)^{2}} \tag{211}\]
where \(c_{1},c_{2}\), and \(c_{3}\) are arbitrary constants.
The associated QFI gives the following three independent QFIs
\[I_{i}=e^{\lambda t}\left[\left(\dot{q}^{i}-\frac{\lambda}{2}q^{i}\right)^{2}- \lambda\frac{b_{i}}{k_{i}}\left(\dot{q}^{i}-\frac{\lambda}{2}q^{i}-\frac{ \lambda b_{i}}{4k_{i}}\right)+\frac{2c_{i}}{\left(q^{i}+\frac{b_{i}}{k_{i}} \right)^{2}}\right] \tag{212}\]
where \(i=1,2,3\) and \(q^{i}=(x,y,z)\).
Therefore, the separable potential (211) is maximally superintegrable (see Table 3).
We note that, as expected, for \(k_{1}=k_{2}=k_{3}=k\) the resulting potential (211) belongs to the family of potentials (195) if we set
\[F\left(\frac{y+\frac{b_{2}}{k}}{x+\frac{b_{1}}{k}},\frac{z+\frac{b_{3}}{k}}{x+\frac{b_{1}}{k}}\right)=c_{1}\left(\frac{z+\frac{b_{3}}{k}}{x+\frac{b_{1}}{k}}\right)^{2}+c_{2}\left(\frac{y+\frac{b_{2}}{k}}{x+\frac{b_{1}}{k}}\right)^{-2}\left(\frac{z+\frac{b_{3}}{k}}{x+\frac{b_{1}}{k}}\right)^{2}+c_{3}.\]
6) Case \(k_{1}b_{2}\neq 0\), \(k_{2}=k_{3}=0\) and \(b_{4}=b_{5}=b_{6}=0\).
The vector \(L_{a}=(k_{1}x+b_{1},b_{2},b_{3})\).
The potential
\[V(x,y,z)=-\frac{\lambda^{2}}{8}\left[x^{2}+4(1-c_{1}^{2})y^{2}\right]-\frac{ \lambda^{2}}{4}\left(c_{2}x+4c_{1}yz\right)+c_{3}y+\frac{c_{4}}{(x+c_{2})^{2} }+F(z-c_{1}y) \tag{213}\]
where \(c_{1}=\frac{b_{3}}{b_{2}}\), \(c_{2}=\frac{b_{1}}{k_{1}}\), \(c_{3}\), \(c_{4}\) are arbitrary constants and \(F\) is an arbitrary smooth function of its arguments.
We find the independent FIs:
\[I_{1} = e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}- \lambda c_{2}\left(\dot{x}-\frac{\lambda}{2}x-\frac{\lambda}{4}c_{2}\right)+ \frac{2c_{4}}{\left(x+c_{2}\right)^{2}}\right] \tag{214}\] \[I_{2} = e^{\lambda t}\left[\dot{y}+c_{1}\dot{z}-\lambda(y+c_{1}z)+\frac {c_{3}}{\lambda}\right]. \tag{215}\]
We note that for \(c_{1}=0\) we obtain the separable potential
\[V(x,y,z)=-\frac{\lambda^{2}}{8}\left(x^{2}+4y^{2}\right)-\frac{\lambda^{2}}{4 }c_{2}x+c_{3}y+\frac{c_{4}}{(x+c_{2})^{2}}+F(z) \tag{216}\]
which is a new maximally superintegrable potential due to the additional time-dependent FIs (214) and (215). The potential (see Table 7)
\[V(x,y,z)=-\frac{\lambda^{2}}{8}\left(R^{2}+4z^{2}\right)+\frac{c_{4}}{x^{2}}+ \frac{c_{0}}{y^{2}}+c_{3}z. \tag{217}\]
is a subcase of (216) for \(y\leftrightarrow z\), \(c_{2}=0\) and \(F(z)=-\frac{\lambda^{2}}{8}z^{2}+\frac{c_{0}}{z^{2}}\).
### Parameters \(a_{17},a_{19},a_{20}\): The components \(L_{(a;b)}\) are constant and non-diagonal
In the following cases, the only non-vanishing parameters are the \(a_{17},a_{19}\), and \(a_{20}\).
1) Case \(a_{17}\neq 0\), \(a_{20}=0\) and \(a_{19}\) is free.
The vector \(L_{a}=(0,2a_{17}x,2a_{19}x)\) and the KT \(L_{(a;b)}=\left(\begin{array}{ccc}0&a_{17}&a_{19}\\ a_{17}&0&0\\ a_{19}&0&0\end{array}\right)\).
Then, equation (184) gives the potential
\[V(x,y,z)=-\frac{\lambda^{2}}{2}x^{2}+F\left(z-cy\right) \tag{218}\]
where \(c=\frac{a_{19}}{a_{17}}\) and \(F\) is an arbitrary smooth function.
The associated QFI is
\[I=e^{\lambda t}(\dot{x}-\lambda x)(\dot{y}+c\dot{z}). \tag{219}\]
From Table 2, the potential (218) admits the additional autonomous FIs: \(I_{1}=\frac{1}{2}\dot{x}^{2}-\frac{\lambda^{2}}{2}x^{2}\) and \(I_{2}=\dot{y}+c\dot{z}\). Therefore, the QFI (219) contains the independent LFI \(I_{3}=e^{\lambda t}(\dot{x}-\lambda x)\).
We conclude that (218) is a new minimally superintegrable potential.
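This can be made explicit with a short symbolic check: the QFI (219) is the product of the factors \(e^{\lambda t}(\dot{x}-\lambda x)\) and \(\dot{y}+c\dot{z}\), each of which is separately conserved. The sketch below (illustrative, with \(F\) arbitrary) verifies the product directly.

```python
import sympy as sp

t, x, y, z, vx, vy, vz, c, lam = sp.symbols('t x y z vx vy vz c lamda')
F = sp.Function('F')
V = -lam**2*x**2/2 + F(z - c*y)                   # the potential (218)
I = sp.exp(lam*t)*(vx - lam*x)*(vy + c*vz)        # the QFI (219)

Idot = (sp.diff(I, t) + vx*sp.diff(I, x) + vy*sp.diff(I, y) + vz*sp.diff(I, z)
        - sp.diff(V, x)*sp.diff(I, vx) - sp.diff(V, y)*sp.diff(I, vy)
        - sp.diff(V, z)*sp.diff(I, vz))
print(sp.simplify(Idot))                          # expected output: 0
```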
2) Case \(a_{17}=\frac{\alpha}{2}\neq 0\), \(a_{19}=0\) and \(a_{20}=\frac{\beta}{2}\).
The vector \(L_{a}=(0,\alpha x,\beta y)\), where \(\alpha\) and \(\beta\) are arbitrary constants and the KT \(L_{(a;b)}=\left(\begin{array}{ccc}0&\frac{\alpha}{2}&0\\ \frac{\alpha}{2}&0&\frac{\beta}{2}\\ 0&\frac{\beta}{2}&0\end{array}\right)\).
The potential
\[V(x,y,z)=-\frac{\lambda^{2}}{2(1+c_{1}^{2})}\left(x^{2}+c_{1}^{2}y^{2}\right) -\frac{\lambda^{2}}{2(1+c_{1}^{2})}\left(z-2c_{1}x\right)^{2}+c_{2}\left(z-2 c_{1}x\right) \tag{220}\]
where \(c_{1}=\frac{\beta}{\alpha}\) and \(c_{2}\) are arbitrary constants.
The associated QFI is
\[I=e^{\lambda t}\left[(\dot{x}-\lambda x)\dot{y}+c_{1}(\dot{y}-\lambda y)\dot{ z}-\frac{\lambda^{2}c_{1}}{1+c_{1}^{2}}(c_{1}x-z)y-c_{1}c_{2}y\right]. \tag{221}\]
Moreover, the potential (220) admits the additional autonomous QFI \(I_{1}=\frac{1}{2}\dot{y}^{2}-\frac{\lambda^{2}c_{1}^{2}}{2(1+c_{1}^{2})}y^{2}\) because the \(y\)-coordinate is separated from the coordinates \(x\) and \(z\).
### Parameters \(a_{2},a_{5},a_{8},a_{11},a_{12},a_{15},a_{16},a_{18}\): The components \(L_{(a;b)}\) are linear in \(x,y,z\)
We consider the following cases:
1) \(\underline{a_{15}}\) is the only non-vanishing parameter.
The vector \(L_{a}=a_{15}(-y^{2},xy,0)\) and the KT \(L_{(a;b)}=a_{15}\left(\begin{array}{ccc}0&-\frac{y}{2}&0\\ -\frac{y}{2}&x&0\\ 0&0&0\end{array}\right)\).
The potential
\[V(x,y,z)=-\frac{\lambda^{2}}{2}R^{2}+\frac{c_{1}x}{y^{2}R}+\frac{c_{2}}{y^{2} }+F(z) \tag{222}\]
where \(c_{1},c_{2}\) are arbitrary constants and \(F(z)\) is an arbitrary smooth function.
The associated QFI is
\[I=e^{\lambda t}\left[M_{3}(\dot{y}-\lambda y)+\frac{2c_{2}x}{y^{2}}+\frac{c_{ 1}(y^{2}+2x^{2})}{y^{2}R}\right]. \tag{223}\]
We note that the potential (222) is of the integrable form (see Table 1) \(V=\frac{F_{1}\left(\frac{y}{x}\right)}{R^{2}}+F_{2}(R)+F_{3}(z)\) with
\[F_{1}\left(\frac{y}{x}\right)=\left(\frac{c_{1}}{\sqrt{1+\frac{y^{2}}{x^{2}}} }+c_{2}\right)\left(1+\frac{x^{2}}{y^{2}}\right),\,\,\,F_{2}(R)=-\frac{ \lambda^{2}}{2}R^{2}. \tag{224}\]
Therefore, it is a new minimally superintegrable potential due to the additional autonomous QFIs:
\[I_{1}=\frac{1}{2}\dot{z}^{2}+F_{3}(z),\,\,\,I_{2}=\frac{1}{2}M_{3}^{2}+\frac{ (c_{1}R+c_{2}x)x}{y^{2}}.\]
Moreover, for \(F(z)=-\frac{\lambda^{2}}{2}z^{2}+\frac{c_{3}}{z^{2}}\), where \(c_{3}\) is an arbitrary constant, the resulting potential
\[V(x,y,z)=-\frac{\lambda^{2}}{2}r^{2}+\frac{c_{1}x}{y^{2}R}+\frac{c_{2}}{y^{2}}+ \frac{c_{3}}{z^{2}} \tag{225}\]
is a subcase of the minimally superintegrable potential (78) with \(F_{1}\left(\frac{y}{x}\right)\) as given in (224). Hence, (225) is a new maximally superintegrable potential due to the additional autonomous QFI (see Table 6)
\[I_{3}=\frac{1}{2}{\bf M}^{2}+\frac{c_{1}xr^{2}}{y^{2}R}+c_{2}\frac{r^{2}}{y^{2 }}+\frac{c_{3}R^{2}}{z^{2}}. \tag{226}\]
2) \(\underline{a_{2}}\) and \(\underline{a_{12}}\) are the only non-vanishing parameters.
The vector \(L_{a}=(a_{2}xz,a_{12}yz,-a_{2}x^{2}-a_{12}y^{2})\) and the KT \(L_{(a;b)}=\left(\begin{array}{ccc}a_{2}z&0&-\frac{a_{2}}{2}x\\ 0&a_{12}z&-\frac{a_{12}}{2}y\\ -\frac{a_{2}}{2}x&-\frac{a_{12}}{2}y&0\end{array}\right)\).
The potential (see Table 3)
\[V(x,y,z)=-\frac{\lambda^{2}}{2}r^{2}+\frac{k_{1}}{x^{2}}+\frac{k_{2}}{y^{2}} \tag{227}\]
where \(k_{1}\) and \(k_{2}\) are arbitrary constants.
The associated QFI consists of the independent QFIs:
\[I_{1}=e^{\lambda t}\left[M_{2}(\dot{x}-\lambda x)+\frac{2k_{1}z}{x^{2}}\right],\,\,\,I_{2}=e^{\lambda t}\left[M_{1}(\dot{y}-\lambda y)-\frac{2k_{2}z}{y^{2}} \right].\]
Therefore, the separable potential (227) is maximally superintegrable.
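The two exponential QFIs can be checked directly with SymPy; the sketch below is illustrative and treats the velocities as independent symbols.

```python
import sympy as sp

t, x, y, z, vx, vy, vz, k1, k2, lam = sp.symbols('t x y z vx vy vz k1 k2 lamda')
V = -lam**2*(x**2 + y**2 + z**2)/2 + k1/x**2 + k2/y**2      # the potential (227)
M1, M2 = y*vz - z*vy, z*vx - x*vz
I1 = sp.exp(lam*t)*(M2*(vx - lam*x) + 2*k1*z/x**2)
I2 = sp.exp(lam*t)*(M1*(vy - lam*y) - 2*k2*z/y**2)

def total_derivative(I):
    return (sp.diff(I, t) + vx*sp.diff(I, x) + vy*sp.diff(I, y) + vz*sp.diff(I, z)
            - sp.diff(V, x)*sp.diff(I, vx) - sp.diff(V, y)*sp.diff(I, vy)
            - sp.diff(V, z)*sp.diff(I, vz))

print(sp.simplify(total_derivative(I1)), sp.simplify(total_derivative(I2)))  # expected: 0 0
```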
3) Case \(a_{2}=a_{12}\).
The vector \(L_{a}=a_{2}(xz,yz,-R^{2})\) and the KT \(L_{(a;b)}=a_{2}\left(\begin{array}{ccc}z&0&-\frac{x}{2}\\ 0&z&-\frac{y}{2}\\ -\frac{x}{2}&-\frac{y}{2}&0\end{array}\right)\).
The potential
\[V(x,y,z)=-\frac{\lambda^{2}}{2}r^{2}+\frac{c_{1}z}{rR^{2}}+\frac{F\left(\frac {y}{x}\right)}{R^{2}} \tag{228}\]
where \(c_{1}\) is an arbitrary constant and \(F\left(\frac{y}{x}\right)\) is an arbitrary smooth function.
The associated QFI is
\[I_{1}=e^{\lambda t}\left[M_{2}\left(\dot{x}-\lambda x\right)-M_{1}\left(\dot{y }-\lambda y\right)+\frac{c_{1}}{r}+\frac{2c_{1}z^{2}}{rR^{2}}+\frac{2zF\left( \frac{y}{x}\right)}{R^{2}}\right]. \tag{229}\]
We note that the potential (228) belongs to the general family of potentials (74); hence, it admits the additional autonomous QFI (see Table 4)
\[I_{2}=\frac{1}{2}M_{3}^{2}+F\left(\frac{y}{x}\right). \tag{230}\]
If \(c_{1}=0\), the resulting potential
\[V(x,y,z)=-\frac{\lambda^{2}}{2}r^{2}+\frac{F\left(\frac{y}{x}\right)}{R^{2}} \tag{231}\]
is a new maximally superintegrable potential due to the additional autonomous QFIs (see Table 6):
\[I_{3}=\frac{1}{2}\dot{z}^{2}-\frac{\lambda^{2}}{2}z^{2},\,\,\,I_{4}=\frac{1}{ 2}{\bf M}^{2}+\frac{r^{2}F\left(\frac{y}{x}\right)}{R^{2}}.\]
We note that the potential (231) is of the form (78) for \(k_{1}=-\frac{\lambda^{2}}{2}\) and \(k_{2}=0\).
4) \(\underline{a_{3}},a_{6},a_{10},a_{14}\) are non-vanishing and \(a_{2}a_{13}\neq 0\).
The vector \(L_{a}=\left(\begin{array}{c}a_{2}xz+a_{3}x+a_{6}\\ a_{13}y+a_{14}\\ -a_{2}x^{2}+a_{10}\end{array}\right)\) and the KT \(L_{(a;b)}=\left(\begin{array}{ccc}a_{2}z+a_{3}&0&-\frac{a_{2}}{2}x\\ 0&a_{13}&0\\ -\frac{a_{2}}{2}x&0&0\end{array}\right)\).
The potential (see Table 3)
\[V(x,y,z)=-\frac{\lambda^{2}}{8}(4x^{2}+4z^{2}+y^{2})-\lambda^{2}c_{1}z-\frac{ \lambda^{2}}{4}c_{2}y+\frac{k}{(y+c_{2})^{2}} \tag{232}\]
where \(k\), \(c_{1}=\frac{a_{3}}{a_{2}}\), and \(c_{2}=\frac{a_{14}}{a_{13}}\) are arbitrary constants.
The associated QFI consists of the independent FIs:
\[I_{1} = e^{\lambda t}\left(\dot{x}-\lambda x\right) \tag{233}\] \[I_{2} = e^{\lambda t}\left\{\left[\dot{y}-\frac{\lambda}{2}(y+c_{2}) \right]^{2}+\frac{2k}{(y+c_{2})^{2}}\right\}\] (234) \[I_{3} = e^{\lambda t}\left[\dot{z}-\lambda(z+c_{1})\right]\] (235) \[I_{4} = M_{2}+c_{1}\dot{x}. \tag{236}\]
We note that \(\{I_{2},I_{p}\}=0\) where \(p=1,2,3,4\), \(\{I_{1},I_{3}\}=0\), \(\{I_{1},I_{4}\}=I_{3}\) and \(\{I_{4},I_{3}\}=I_{1}\).
The potential (232) is integrable because the independent FIs \(I_{1},I_{2},I_{3}\) are in involution or, directly, because it is separable. It is also maximally superintegrable due to the additional independent FIs \(I_{4}\) and \(H\), where \(H\) is the Hamiltonian.
5) Case \(a_{2}\neq 0\) and \(a_{3},a_{13}\) are non-vanishing.
The vector \(L_{a}=\left(\begin{array}{c}a_{2}xz+a_{3}x\\ a_{13}y\\ -a_{2}x^{2}\end{array}\right)\) and the KT \(L_{(a;b)}=\left(\begin{array}{ccc}a_{2}z+a_{3}&0&-\frac{a_{2}}{2}x\\ 0&a_{13}&0\\ -\frac{a_{2}}{2}x&0&0\end{array}\right)\).
The potential
\[V(x,y,z)=-\frac{\lambda^{2}}{2}\left(x^{2}+z^{2}\right)-\frac{\lambda^{2}}{8} y^{2}+\frac{c_{1}}{x^{2}}+\frac{c_{2}}{y^{2}}-\lambda^{2}c_{3}z+\frac{k(z+c_{3})}{ x^{2}\sqrt{(z+c_{3})^{2}+x^{2}}} \tag{237}\]
where \(k,c_{1},c_{2}\), and \(c_{3}=\frac{a_{3}}{a_{2}}\) are arbitrary constants.
The associated QFI consists of the following independent QFIs:
\[I_{1} = e^{\lambda t}\left[\left(\dot{y}-\frac{\lambda}{2}y\right)^{2}+ \frac{2c_{2}}{y^{2}}\right] \tag{238}\] \[I_{2} = e^{\lambda t}\left[(M_{2}+c_{3}\dot{x})(\dot{x}-\lambda x)+ \frac{2c_{1}(z+c_{3})}{x^{2}}+k\frac{x^{2}+2(z+c_{3})^{2}}{x^{2}\sqrt{x^{2}+(z +c_{3})^{2}}}\right]. \tag{239}\]
It is well-known that the dynamical equations (and hence the associated FIs) of a regular Lagrangian system are preserved if:
a. We add an arbitrary constant \(c\) to the potential \(V\) of the system.
b. We apply a canonical transformation.
Then, the potential (237) is a subcase of the minimally superintegrable potential (222). Indeed, by adding the constant \(c=-\frac{\lambda^{2}}{2}c_{3}^{2}\) to (237), we obtain the equivalent potential
\[V(x,y,z)=-\frac{\lambda^{2}}{2}\left[x^{2}+(z+c_{3})^{2}\right]+\frac{k(z+c_{ 3})}{x^{2}\sqrt{(z+c_{3})^{2}+x^{2}}}+\frac{c_{1}}{x^{2}}-\frac{\lambda^{2}}{8 }y^{2}+\frac{c_{2}}{y^{2}}. \tag{240}\]
If we apply the canonical transformation \(x\to y\), \(y\to z\) and \(z\to x-c_{3}\), the potential (240) becomes
\[V(x,y,z)=-\frac{\lambda^{2}}{2}R^{2}+\frac{kx}{y^{2}R}+\frac{c_{1}}{y^{2}}- \frac{\lambda^{2}}{8}z^{2}+\frac{c_{2}}{z^{2}} \tag{241}\]
which is a subcase of (222) for \(F(z)=-\frac{\lambda^{2}}{8}z^{2}+\frac{c_{2}}{z^{2}}\).
The potential (241) is a new maximally superintegrable potential due to the following independent QFIs:
\[I_{1} = e^{\lambda t}\left[\left(\dot{z}-\frac{\lambda}{2}z\right)^{2}+ \frac{2c_{2}}{z^{2}}\right] \tag{242}\] \[I_{2} = e^{\lambda t}\left[M_{3}(\dot{y}-\lambda y)+\frac{2c_{1}x}{y^{2} }+\frac{k(y^{2}+2x^{2})}{y^{2}R}\right].\] (243) \[I_{3} = \frac{1}{2}\dot{z}^{2}-\frac{\lambda^{2}}{8}z^{2}+\frac{c_{2}}{z ^{2}}\] (244) \[I_{4} = \frac{1}{2}M_{3}^{2}+\frac{(kR+c_{1}x)x}{y^{2}}. \tag{245}\]
We recall that the potential (225) is another maximally superintegrable potential which is also a subcase of (222) but for a different choice of the function \(F(z)\). If we rename \(\lambda\to 2\lambda\), the QFI (242) is admitted also by (225) because the \(z\)-coordinate is separated from \(x\) and \(y\).
We collect the results of section 7 in Tables 11 - 13.
\begin{table}
\begin{tabular}{|l|l|} \hline Potential & LFIs and QFIs \\ \hline \hline \(V=\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}\left(c_{1}x+c_{2}y+c_{3}z \right)+\frac{c_{0}}{2}\) & \(I=e^{\lambda t}\left(\lambda\dot{x}+c_{1}\lambda\dot{y}-\lambda^{2}x-c_{1} \lambda^{2}y+c_{2}\right)\) \\ \hline \(V=\frac{\lambda^{2}}{2}x^{2}+kx-\lambda^{2}(y+z)x+\frac{c_{0}}{2}\) & \(I=e^{\lambda t}\left[\lambda(\dot{x}+\dot{y}+\dot{z})-\lambda^{2}(x+y+z)+k\right]\) \\ \hline \(V=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}x+\frac{F\left(\frac{y}{x+ 1},\frac{z+1}{x+1}\right)}{z^{2}}\) & \(I=e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}+\left(\dot{y} -\frac{\lambda}{2}y\right)^{2}+\left(\dot{z}-\frac{\lambda}{2}z\right)^{2}-\lambda \dot{x}+\right.\) \\ \hline \(V=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}(x+y)+\frac{1}{z^{2}}F \left(\frac{y+1}{x+1},\frac{z}{x+1}\right)\) & \(I=e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}+\left(\dot{y} -\frac{\lambda}{2}y\right)^{2}+\left(\dot{z}-\frac{\lambda}{2}z\right)^{2}-\) \\ \(\left.+\frac{1}{z^{2}}F\left(\frac{y+1}{x+1},\frac{z}{x+1}\right)\) & \(-\lambda(\dot{x}+\dot{y})+\frac{\lambda^{2}}{2}(x+y+1)+\frac{2F}{z^{2}}\right]\) \\ \hline \(V=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}(x+y+z)+\frac{1}{z^{2}}F \left(\frac{y+1}{x+1},\frac{z+1}{x+1}\right)\) & \(I=e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}+\left(\dot{y} -\frac{\lambda}{2}y\right)^{2}+\left(\dot{z}-\frac{\lambda}{2}z\right)^{2}- \lambda(\dot{x}+\dot{y}+\dot{z})+\right.\) \\ \(\left.+\frac{1}{(z+1)^{2}}F\left(\frac{y+1}{x+1},\frac{z+1}{x+1}\right)\) & \(+\frac{\lambda^{2}}{2}x+\frac{\lambda^{2}}{4}+\frac{2F}{z^{2}}\right]\) \\ \hline \(V=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}(x+y+z)+\frac{1}{z^{2}}F \left(\frac{y+1}{x+1},\frac{z+1}{x+1}\right)\) & \(I=e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}+\left(\dot{y} -\frac{\lambda}{2}y\right)^{2}+\left(\dot{z}-\frac{\lambda}{2}z\right)^{2}- \lambda(\dot{x}+\dot{y}+\dot{z})+\right.\) \\ \(\left.+\frac{1}{(z+1)^{2}}F\left(\frac{y+1}{x+1},\frac{z+1}{x+1}\right)\) & \(+\frac{\lambda^{2}}{2}\left(x+y+z+\frac{3}{2}\right)+\frac{2F}{(z+1)^{2}}\right]\) \\ \hline \(V=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}(c_{1}x+c_{2}y+c_{3}z)+ \frac{c_{0}}{(x+c_{1})^{2}}+\frac{1}{(z+c_{3})^{2}}F_{1}\left(\frac{y+c_{2}}{ z+c_{3}}\right)\) & \(I_{1}=e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}-\lambda c_{1} \left(\dot{x}-\frac{\lambda}{2}x-\frac{\lambda c_{1}}{4}\right)+\frac{2c_{0}}{ (x+c_{1})^{2}}\right]\) \\ \(\left.+\frac{c_{0}}{(x+c_{1})^{2}}+\frac{1}{(z+c_{3})^{2}}F_{1}\left(\frac{y+c_{ 2}}{z+c_{3}}\right)\) & \(I_{2}=e^{\lambda t}\left[\left(\dot{y}-\frac{\lambda}{2}y\right)^{2}+\left( \dot{z}-\frac{\lambda}{2}z\right)^{2}-\lambda\left(c_{2}\dot{y}+c_{3}\dot{z} \right)+\right.\) \\ & \(\left.+\frac{\lambda^{2}}{2}\left(c_{2}y+c_{3}z+\frac{c_{2}^{2}}{2}+\frac{c_{3} ^{2}}{2}\right)+\frac{2F_{1}}{(z+c_{3})^{2}}\right]\) \\ \hline \(V=-\frac{\lambda^{2}}{2}\left[x^{2}+4(1-c_{1}^{2})y^{2}\right]-\frac{c_{0}}{ \lambda}\left(c_{2}x+4c_{1}yz\right)+\frac{c_{0}}{(x+c_{2})^{2}}+F(z-c_{1}y)\) & \(I_{1}=e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}-\lambda c_{2} \left(\dot{x}-\frac{\lambda}{2}x-\frac{\lambda}{4}c_{2}\right)+\frac{2c_{1}}{ (x+c_{2})^{2}}\right]\) \\ \(\left.+c_{3}y+\frac{c_{4}}{(x+c_{2})^{2}}+F(z-c_{1}y)\) & \(I_{2}=e^{\lambda t}\left[\dot{y}+c_{1}\dot{z}-\lambda(y+c_{1}z)+\frac{c_{3}}{ \lambda}\right]\) \\ \hline 
\(V=-\frac{\lambda^{2}}{2(1+c_{1}^{2})}\left(x^{2}+c_{1}^{2}y^{2}\right)-\frac{ 3}{2(1+c_{1}^{2})}\left(z-2c_{1}x\right)^{2}+\frac{c_{0}}{2}\left[\left(\dot{x }-\lambda x\right)\dot{y}+c_{1}(\dot{y}-\lambda y)\dot{z}-\frac{\lambda^{2}c_{1} }{1+c_{1}^{2}}(c_{1}x-z)y-c_{1}c_{2}y\right]\) \\ \(\left.+c_{2}\left(z-2c_{1}x\right)\) & \(I_{2}=e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)-\frac{\lambda}{ 2}y\right]-\frac{\lambda^{2}c_{1}}{1+c_{1}^{2}}(c_{1}x-z)y-c_{1}c_{2}y\right]\) \\ \hline \(V=-\frac{\lambda^{2}}{2}r^{2}+\frac{c_{1}z}{rR^{2}}+\frac{F\left(\frac{y}{z} \right)}{R^{2}}\) & \(I_{1}=e^{\lambda t}\left[M_{2}\left(\dot{x}-\lambda x\right)-M_{1}\left(\dot{y}- \lambda y\right)+\frac{c_{1}}{r}+\frac{2c_{1}z^{2}}{rR^{2}}+\frac{2zF\left( \frac{y}{z}\right)}{R^{2}}\right]\) \\ & \(I_{2}=\frac{1}{2}M_{3}^{2}+F\left(\frac{y}{z}\right)\) \\ \hline \end{tabular}
\end{table}
Table 11: Possibly non-integrable potentials \(V(x,y,z)\) in \(E^{3}\) that admit time-dependent LFIs/QFIs of the form \(I_{(3)}\).
\begin{table}
\begin{tabular}{|l|c|c|} \hline \multicolumn{3}{|c|}{Minimally superintegrable potentials} \\ \hline Potential & Ref [2] & LFIs and QFIs \\ \hline \multirow{3}{*}{\(V=-\frac{\lambda^{2}}{2}x^{2}+c_{2}x+F_{1}(y)+F_{2}(z)\)} & & \(I_{1}=\frac{1}{2}\dot{x}^{2}-\frac{\lambda^{2}}{2}x^{2}+c_{2}x\) \\ & & \(I_{2}=\frac{1}{2}\dot{y}^{2}+F_{1}(y)\) \\ & & \(I_{3}=\frac{1}{2}\dot{z}^{2}+F_{2}(z)\) \\ & & \(I_{4}=e^{\lambda t}\left(\lambda\dot{x}-\lambda^{2}x+c_{2}\right)\) \\ \hline \multirow{3}{*}{\(V=-\frac{\lambda^{2}}{2}x^{2}+F\left(z-cy\right)\)} & \multirow{3}{*}{New} & \(I_{1}=\frac{1}{2}\dot{x}^{2}-\frac{\lambda^{2}}{2}x^{2}\) \\ & & \(I_{2}=\dot{y}+c\dot{z}\) \\ & & \(I_{3}=e^{\lambda t}(\dot{x}-\lambda x)\) \\ \hline \multirow{3}{*}{\(V=-\frac{\lambda^{2}}{2}R^{2}+\frac{c_{1}x}{y^{2}R}+\frac{c_{2}}{y^{2}}+F(z)\)} & \multirow{3}{*}{New} & \(I_{1}=\frac{1}{2}\dot{z}^{2}+F(z)\) \\ & & \(I_{2}=\frac{1}{2}M_{3}^{2}+\frac{(c_{1}R+c_{2}x)x}{y^{2}}\) \\ \cline{1-1} & & \(I_{3}=e^{\lambda t}\left[M_{3}(\dot{y}-\lambda y)+\frac{2c_{3}x}{y^{2}}+\frac {c_{1}(y^{2}+2x^{2})}{y^{2}R}\right]\) \\ \hline \end{tabular}
\end{table}
Table 12: Minimally superintegrable potentials \(V(x,y,z)\) in \(E^{3}\) that admit time-dependent LFIs/QFIs of the form \(I_{(3)}\).
## 8 Comparison with existing results
As we have remarked in section 1, the main review works in this topic are the works of Evans in [2] and Kalnins in [4]. Therefore, it is imperative to discuss how the present review is related to these.
\begin{table}
\begin{tabular}{|l|l|l|} \hline \multicolumn{3}{|c|}{Maximally superintegrable potentials} \\ \hline Potential & Ref [2] & LFIs and QFIs \\ \hline \(V=-\frac{\lambda^{2}}{8}r^{2}+b_{1}x+b_{2}y+b_{3}z\) & New & \(I_{i}=\frac{\dot{q}_{i}^{2}}{2}-\frac{\lambda^{2}}{8}q_{i}^{2}+b_{i}q_{i}\) \\ \(V=-\frac{\lambda^{2}}{8}r^{2}+b_{1}x+b_{2}y+b_{3}z\) & New & \(I_{i}=\frac{\dot{q}_{i}^{2}}{2}-\frac{\lambda^{2}}{8}q_{i}^{2}+\lambda b_{i} \dot{q}_{i}-\frac{\lambda^{2}}{2}b_{i}q_{i}+b_{i}^{2}\) \\ \(V=-\frac{\lambda^{2}}{8}r^{2}-\frac{\lambda^{2}}{4}\left(b_{1}x+b_{2}y+b_{3}z \right)+\frac{c_{1}}{\left(x+b_{1}\right)^{2}}+\frac{c_{2}}{\left(y+b_{2} \right)^{2}}+\frac{c_{3}}{\left(x+b_{3}\right)^{2}}\) & New & \(I_{i}=\frac{\dot{q}_{i}^{2}}{2}-\frac{\lambda^{2}}{8}q_{i}^{2}-\frac{\lambda^ {2}}{4}b_{i}q_{i}+\frac{c_{i}}{\left(x+b_{i}\right)^{2}}\) \\ \(+\frac{c_{1}}{\left(x+b_{1}\right)^{2}}+\frac{c_{2}}{\left(y+b_{2}\right)^{2}}+ \frac{c_{3}}{\left(x+b_{3}\right)^{2}}\) & \(J_{i}=e^{\lambda t}\left[\left(\dot{q}^{i}-\frac{\lambda}{2}q^{i}\right)^{2}- \lambda b_{i}\left(\dot{q}^{i}-\frac{\lambda}{2}q^{i}-\frac{\lambda b_{i}}{4} \right)+\frac{2c_{i}}{\left(q^{i}+b_{i}\right)^{2}}\right]\) \\ \hline \(V=-\frac{\lambda^{2}}{8}\left(x^{2}+4y^{2}\right)-\frac{\lambda^{2}}{4}c_{2}x+ c_{3}y+\) & New & \(I_{1}=\frac{\dot{x}^{2}}{2}-\frac{\lambda^{2}}{2}y^{2}+c_{3}y\) \\ \(+\frac{c_{4}}{\left(x+c_{2}\right)^{2}}+F(z)\) & New & \(I_{3}=\frac{\dot{x}^{2}}{2}+F(z)\) \\ & & \(I_{4}=e^{\lambda t}\left[\left(\dot{x}-\frac{\lambda}{2}x\right)^{2}-\lambda c _{2}\left(\dot{x}-\frac{\lambda}{2}x-\frac{\lambda}{4}c_{2}\right)+\frac{2c_ {4}}{\left(x+c_{2}\right)^{2}}\right]\) \\ & & \(I_{5}=e^{\lambda t}\left(\dot{y}-\lambda y+\frac{c_{3}}{\lambda}\right)\) \\ \hline \(V=-\frac{\lambda^{2}}{2}r^{2}+\frac{c_{1}x}{y^{2}R}+\frac{c_{2}}{y^{2}}\) & New & \(I_{1}=\frac{1}{2}\dot{z}^{2}-\frac{\lambda^{2}}{2}z^{2}+\frac{c_{3}}{z^{2}}\) \\ & & \(I_{2}=\frac{1}{2}M_{3}^{2}+\frac{(c_{1}R+c_{2}x)x}{y^{2}}\) \\ & & \(I_{3}=e^{\lambda t}\left[M_{3}(\dot{y}-\lambda y)+\frac{2c_{2}x}{y^{2}}+\frac{ c_{1}\left(y^{2}+2x^{2}\right)}{y^{2}R}\right]\) \\ & & \(I_{4}=\frac{1}{2}\mathbf{M}^{2}+\frac{c_{1}x^{2}R}{y^{2}R}+c_{2}\frac{r^{2}}{ y^{2}}+\frac{c_{3}R^{2}}{z^{2}}\) \\ \hline \(V=-\frac{\lambda^{2}}{2}r^{2}+\frac{F\left(\frac{x}{2}\right)}{R^{2}}\) & New & \(I_{1}=\frac{\dot{x}^{2}}{2}-\frac{\lambda^{2}}{2}x^{2}+\frac{k_{1}}{x^{2}}\) \\ & & \(I_{2}=\frac{\dot{y}^{2}}{2}-\frac{\lambda^{2}}{2}y^{2}+\frac{k_{2}}{y^{2}}\) \\ & & \(I_{3}=\frac{\dot{z}^{2}}{2}-\frac{\lambda^{2}}{2}z^{2}\) \\ & & \(I_{4}=e^{\lambda t}\left[M_{2}(\dot{x}-\lambda x)+\frac{2k_{1}z}{x^{2}}\right]\) \\ & & \(I_{5}=e^{\lambda t}\) & \(M_{1}(\dot{y}-\lambda y)-\frac{2k_{2}z}{y^{2}}\) \\ \hline \(V=-\frac{\lambda^{2}}{2}r^{2}+\frac{F\left(\frac{x}{2}\right)}{R^{2}}\) & New & \(I_{1}=e^{\lambda t}\) & \(M_{2}\left(\dot{x}-\lambda x\right)-M_{1}\left(\dot{y}-\lambda y\right)+\frac{2 zF\left(\frac{x}{2}\right)}{R^{2}}\) \\ \(V=-\frac{\lambda^{2}}{2}r^{2}+\frac{F\left(\frac{x}{2}\right)}{R^{2}}\) & New & \(I_{2}=\frac{1}{2}M_{3}^{2}+F\left(\frac{y}{x}\right)\) \\ & & \(I_{3}=\frac{1}{2}\dot{z}^{2}-\frac{\lambda^{2}}{2}z^{2}\) \\ & & \(I_{4}=\frac{1}{2}\mathbf{M}^{2}+\frac{r^{2}F\left(\frac{y}{x}\right)}{R^{2}}\) \\ \hline \(V=-\frac{\lambda^{2}}{8}(4x^{2}+4z^{2}+y^{2})-\) & New & \(I_{1}=e^{\lambda t}\left(\dot{x}-\lambda x\right)\) \\ \(-\lambda^{2}c_{1}z-\frac{\lambda^{2}}{4}c_{2}y+\frac{k}{\left(y+c_{2}\right)^{2}}\) & New & \(I_{2}=e^{\lambda 
t}\left[\left(\dot{y}-\frac{\lambda}{2}(y+c_{2})\right]^{2}+ \frac{2k}{\left(y+c_{2}\right)^{2}}\right\}\) \\ & & \(I_{3}=e^{\lambda t}\left[\dot{z}-\lambda(z+c_{1})\right]\) \\ & & \(I_{4}=M_{2}+c_{1}\dot{x}\) \\ \hline \(V=-\frac{\lambda^{2}}{2}R^{2}+\frac{kx}{y^{2}R}+\frac{c_{1}}{y^{2}}-\) & New & \(I_{1}=e^{\lambda t}\left[\left(\dot{z}-\frac{\lambda}{2}z\right)^{2}+\frac{2c_ {2}}{z^{2}}\right]\) \\ \(-\frac{\lambda^{2}}{8}z^{2}+\frac{c_{2}}{z^{2}}\) & New & \(I_{2}=e^{\lambda t}\left[M_{3}(\dot{y}-\lambda y)+\frac{2c_{1}x}{y^{2}}+\frac{k \left(y^{2}+2x^{2}\right)}{y^{2}R}\right]\) \\ & & \(I_{3}=\frac{1}{2}\dot{z}^{2}-\frac{\lambda^{2}}{8}z^{2}+\frac{c_{2}}{z^{2}}\) \\ & & \(I_{4}=\frac{1}{2}M_{3}^{2}+\frac{\left(kR+c_{1}x\right)^{2}x}{y^{2}}\) \\ \hline \end{tabular}
\end{table}
Table 13: Maximally superintegrable potentials \(V(x,y,z)\) in \(E^{3}\) that admit time-dependent LFIs/QFIs of the form \(I_{(3)}\).
### Evans work [2]
Evans in [2], using the separability of the Hamilton-Jacobi equation in \(E^{3}\), determined all minimally and maximally superintegrable potentials with autonomous QFIs of the form \(I=K_{ab}(q)\dot{q}^{a}\dot{q}^{b}+G(q)\). The author did not consider (autonomous or time-dependent) LFIs and time-dependent QFIs. In particular, in Table I of [2] are given five maximally superintegrable potentials and in Table II of [2] eight minimally superintegrable potentials.
As can be seen from Tables 1 - 13, all the results of [2] have been recovered, plus new ones. Therefore, the claim made in [2] that all second order superintegrable potentials in \(E^{3}\) have been determined is not valid.
Furthermore, it should be noted that there are misprints in some results of [2]. Indeed, we have:
1) In eq. (3.43) of [2], the leading term of the QFI \(I_{4}\) must be \(L_{2}P_{1}-P_{2}L_{1}\).
2) In Table II of [2], the leading part of the QFIs \(I_{3}\) associated with the potentials (55) and (56) should be \(L_{2}P_{1}-P_{2}L_{1}\).
3) The QFI \(I_{2}\) in eq. (3.57) of [2] should be replaced with the QFI (8).
4) The QFI \(I_{3}\) in eq. (3.57) of [2] should be replaced with the QFI (9).
### Kalnins et al. work [4]
In [4], the authors discussed classical 3d superintegrable nondegenerate (i.e. four-parameter) potentials on a conformally flat real or complex space. They proved that the quadratic algebra always closes at order six (the '\(5\implies 6\) Theorem'), that is, the space of autonomous QFIs is 6d. Moreover, using the Stackel transformation (an invertible conformal mapping between superintegrable structures on distinct spaces), they gave strong evidence (no proof) that all nondegenerate 3d superintegrable systems are Stackel transforms of constant curvature systems (i.e. the complex Euclidean space or the complex 3-sphere). This means that in order to obtain all nondegenerate conformally flat superintegrable systems, it is sufficient to classify those in the complex Euclidean space and on the complex 3-sphere. Finally, they found eight families of superintegrable systems that are separable in generic coordinates.
Comparing the results of [4] with the results of the present work, we note the following:
1) All seven maximally superintegrable Euclidean potentials given in eqs. (10) - (16) of [4] are recovered (see Table 7).
2) The potentials given in eqs. (10) and (13) of [4] have been found earlier in Table I of [2]. The potential (13) is more general from the one found by Evans.
3) It is proved in section 5 that the potentials (11), (12), (14), (16) of [4] are subcases of the more general potential (28) for specific forms of the arbitrary smooth functions \(F_{1}(w,z)\) and \(F_{2}(w)\). This justifies the fact that these potentials admit a QFI of the form \(I=\dot{w}^{2}+G(x,y,z)\) where \(w=x+iy\).
4) The potential (15) of [4] is a subcase of (31) and hence admits a QFI of the form \(I=\dot{\bar{w}}^{2}+G(x,y,z)\).
5) The potentials (12), (16) of [4] are of the integrable form (34); therefore, they admit a QFI of the form \(I=\dot{z}\dot{w}+G(x,y,z)\).
6) The potentials (11), (12) of [4] are of the integrable form (82); therefore, they admit a QFI of the form \(I=(M_{2}-iM_{1})^{2}+G(x,y,z)\).
7) The potential (15) of [4] is a subcase of the new minimally superintegrable potential (97) for \(F(z)=k_{1}\,z^{2}+\frac{k_{4}}{z^{2}}\). For this reason, it admits an additional QFI of the form \(I=\frac{1}{4}\dot{w}^{2}+iM_{3}\dot{\bar{w}}+G(x,y,z)\).
8) The two additional maximally superintegrable potentials given in eq. (17) of [4] are just subcases of the last maximally superintegrable potential in Table 3 for \(k_{1}=k_{2}=0\) when \(F(z)=-\frac{\lambda^{2}}{8}z^{2}+c_{3}z\) and \(F(z)=-\frac{\lambda^{2}}{32}z^{2}+\frac{c_{3}}{2^{3}}\).
Therefore, with the systematic application of Theorem 1, we have found all the results of [4] plus new ones; especially time-dependent QFIs.
## 9 Conclusions
The aim of the present work was twofold: a. To assess the second order integrability of autonomous conservative dynamical systems of the form \(\ddot{q}^{a}=-V^{,a}(q)\) where \(a=1,2,3\) in a systematic, i.e. algorithmic, way; and b. To enrich, if possible, the existing results of the main sources on this topic which are found in the review papers [2] and [4]. Therefore, the present work should be approached as an updated review of the integrable/superintegrable 3d Newtonian autonomous conservative dynamical systems that admit LFIs/QFIs.
We have considered two types of integrable and superintegrable 3d Newtonian potentials: potentials of the form \(\Phi(x,y)+F(z)\), which are \(2+1\) decomposable so that their QFIs follow from the QFIs of the 2d potentials \(\Phi(x,y)\); and non-decomposable potentials \(V(x,y,z)\) in \(E^{3}\), which cannot be treated in this way. These latter potentials we have determined using the algorithm of Theorem 1.
After a detailed study of the three types of QFIs \(I_{(1,\ell)},I_{(2,\ell)},I_{(3)}\) considered in Theorem 1, we have recovered all known integrable/superintegrable potentials together with new ones. It has also been shown that many of the existing results are in fact special cases of more general ones for specific values of the free parameters/functions. For convenience, the results in each case have been collected in tables which contain the known results with the appropriate reference and the new ones found in the present work. These results can be used in many ways in the study of dynamical systems and, especially, in the case of more complex systems. One such study will be given elsewhere.
|
2304.00017
|
A novel class of electro-mechanical metamaterials for stress reduction
through electric fields
|
While most previous developed metamaterials only consider a single physical
effect, we introduce a novel class of electro-mechanical metamaterials, which
allows a direct controllable reduction of the total stress by applying an
electric field counteracting the mechanical stress. The solution of the
resulting minimization problem yields a relation involving the eigenvalues of
the mechanical stress tensor. Additionally, we evaluate the constrained cases
allowing only tensile or compressive stresses, respectively, and consider the
plane stress problem. We show numerical results for all cases and discuss, to
what extent a stress reduction is possible.
|
Mischa Blaszczyk, Klaus Hackl
|
2023-03-30T14:41:39Z
|
http://arxiv.org/abs/2304.00017v1
|
# A novel class of electro-mechanical metamaterials for stress reduction through electric fields
###### Abstract
While most previous developed metamaterials only consider a single physical effect, we introduce a novel class of electro-mechanical metamaterials, which allows a direct controllable reduction of the total stress by applying an electric field counteracting the mechanical stress. The solution of the resulting minimization problem yields a relation involving the eigenvalues of the mechanical stress tensor. Additionally, we evaluate the constrained cases allowing only tensile or compressive stresses, respectively, and consider the plane stress problem. We show numerical results for all cases and discuss, to what extent a stress reduction is possible.
_Keywords--_ metamaterials, electro-mechanical coupling, stress reduction, Maxwell stress tensor
## 1 Introduction
Metamaterials are artificially created multiscale materials, which may possess properties not found in nature. Rather than being defined by specific length scales, the distinction between the scales is made by considering the microscale as consisting of an array of (periodic) unit cells, whose precise geometry and arrangement enable the often unusual physical effects observable in metamaterials, while the macroscale refers to the scale on which these effects emerge. The material properties of the metamaterial can differ significantly from those of the original components it is made from. An overview and the state of the art can be found e.g. in the review articles [1, 2].
Examples of electromagnetic metamaterials include substances with simultaneously negative permittivity and permeability [3], metamaterial absorbers [4], artificial magnetism [5], negative refraction index [6], topological insulators [7], electromagnetic waveguides [8] and performance augmented antennas [9]. Metamaterials are not limited to electromagnetism. Acoustical, optical and mechanical metamaterials are also of great research interest and oftentimes show analogies to electromagnetic metamaterials. Examples for proposed or observed phenomenons are acoustic waveguides [10], bandgaps [11], optical super lenses [12], seismic metamaterials [13, 14], materials with negative bulk modulus and negative mass density [15], auxetic materials, i.e. materials with negative Poisson's ratio [16], ultra lightweight but very strong materials [17] and metamaterial cloaking [18].
While most metamaterials only consider a single physical effect (e.g. acoustic or electric metamaterials), in this paper we propose a novel class of theoretical metamaterials considering mechanical and electric effects. The key idea is to combine insulating and conducting materials, with the aim to directly control the total stress of the insulating material by applying an electric field, generated by the conducting material, counteracting the mechanical stress. A unit cell of our material could be constructed e.g. by surrounding an insulating material cube by a (conducting) capacitor in each spatial direction. Figure 1 shows a possible periodic array in two dimensions. The electric field between two capacitor plates may be assumed constant and the combination of capacitors in each spatial direction ensures that the direction of the resulting electric field can be controlled arbitrarily. For sufficiently small unit cell sizes, the mechanical stress within the bulk material of a single unit cell may also assumed to be constant and a stress reduction would be possible.
Some materials show a very different behavior for tensile and compressive loads, e.g. concrete [19], which is important in many engineering applications. Therefore, we also investigate the minimization problem with the additional constraint that only tensile or compressive stresses, respectively, are allowed. Additionally, we discuss the two-dimensional case of plane stress. We obtain mappings related to the eigenvalues for the different problems. Lastly, we discuss our findings and to what extent the mechanical stress can be reduced.
## 2 Material model
### Stress minimization
We start by assuming the stress in our insulating bulk material depends on the mechanical stress tensor
\[\mathbf{\sigma}=\begin{bmatrix}\sigma_{xx}&\sigma_{xy}&\sigma_{xz}\\ \sigma_{xy}&\sigma_{yy}&\sigma_{yz}\\ \sigma_{xz}&\sigma_{yz}&\sigma_{zz}\end{bmatrix} \tag{1}\]
and the Maxwell stress tensor for linear materials [20]
\[\mathbf{\tau}^{m}=\varepsilon_{0}(\varepsilon_{r}\mathbf{E}\otimes\mathbf{E}- \frac{1}{2}(\mathbf{E}\cdot\mathbf{E})\;\mathbf{I}), \tag{2}\]
both written in tensor notation. Here, \(\varepsilon_{0}\) is the vacuum permittivity, \(\varepsilon_{r}\) is the relative permittivity of the material and \(\mathbf{E}\) is the electric field. The symbol \(\mathbf{I}\) denotes the identity tensor. Thus, the total stress \(\mathbf{\sigma}^{t}\) of our material can be written as
\[\mathbf{\sigma}^{t}=\mathbf{\sigma}+\mathbf{\tau}^{m}, \tag{3}\]
resulting in a symmetric tensor, as both \(\mathbf{\sigma}\) and \(\mathbf{\tau}^{m}\) are symmetric. In three dimensions, the mechanical stress tensor consists of six independent components, while the electric field only possesses three independent components. Therefore, it is immediately clear, that a perfect stress absorption (i.e. \(\mathbf{\sigma}^{t}=\mathbf{0}\)) is only possible for rare special cases. However, the best possible reduction - choosing \(\mathbf{E}\) such that \(||\mathbf{\sigma}^{t}||^{2}\) becomes minimal - can still be calculated. Here, \(||\cdot||\) is the Frobenius norm. The minimization problem reads
\[||\mathbf{\sigma}^{t}||^{2}=||\mathbf{\sigma}+\mathbf{\tau}^{m}||^{2}\rightarrow\min_{ \mathbf{E}}. \tag{4}\]
The necessary condition for the minimization is
\[\nabla_{\mathbf{E}}||\mathbf{\sigma}+\mathbf{\tau}^{m}||^{2}\overset{!}{=}\mathbf{0}, \tag{5}\]
where \(\nabla\) is the Nabla operator. To solve the problem, we write down at first the Maxwell stress tensor in index notation using the Einstein summation convention as
\[\tau^{m}_{ij}=\varepsilon_{0}(\varepsilon_{r}E_{i}E_{j}-\frac{1}{2}E_{k}E_{k }\delta_{ij}). \tag{6}\]
Then, we calculate its derivative with respect to the electric field
\[\frac{\partial\tau^{m}_{ij}}{\partial E_{e}}=\varepsilon_{0}(\varepsilon_{r} \delta_{ie}E_{j}+\varepsilon_{r}E_{i}\delta_{je}-\frac{1}{2}2\delta_{ke}E_{k }\delta_{ij}). \tag{7}\]
Using symmetry of the included tensors, Eq. (5) yields in index notation
\[\frac{\partial(||\sigma_{ij}+\tau^{m}_{ij}||^{2})}{\partial E_{e}}=2(\sigma_ {ij}+\tau^{m}_{ij})\frac{\partial\tau^{m}_{ij}}{\partial E_{e}}=0. \tag{8}\]
Inserting the derivative (Eq. (7)) and dividing by \(2\varepsilon_{0}\), we obtain
\[\varepsilon_{0}(\varepsilon_{r}E_{i}E_{j}-\frac{1}{2}E_{k}E_{k} \delta_{ij})(\varepsilon_{r}\delta_{ie}E_{j}+\varepsilon_{r}\delta_{je}E_{i}\] \[-\delta_{ij}E_{e})+\sigma_{ij}(\varepsilon_{r}\delta_{ie}E_{j}+ \varepsilon_{r}\delta_{je}E_{i}-\delta_{ij}E_{e})=0, \tag{9}\]
which can be simplified to
\[\varepsilon_{0}(2\varepsilon_{r}^{2}-\varepsilon_{r})E_{e}E_{j}E _{j}+\varepsilon_{0}(\frac{3}{2}-\varepsilon_{r})(E_{k}E_{k}E_{e})\] \[+\varepsilon_{r}\sigma_{ej}E_{j}+\varepsilon_{r}\sigma_{ie}E_{i} -\sigma_{ii}E_{e}=0. \tag{10}\]
In this form, the result can be transformed back into tensor notation:
\[\varepsilon_{0}|\mathbf{E}|^{2}\mathbf{E}(2\varepsilon_{r}^{2}-2\varepsilon_{ r}+\frac{3}{2})+2\varepsilon_{r}\mathbf{\sigma}\cdot\mathbf{E}-\text{tr}\mathbf{\sigma} \mathbf{E}=\mathbf{0}. \tag{11}\]
In Eq. (11), \(|\cdot|\) denotes the Euclidean norm. We rearrange the equation to obtain the eigenvalue problem for the mechanical stress tensor \(\mathbf{\sigma}\):
\[\mathbf{\sigma}\cdot\mathbf{E}=(\frac{1}{2\varepsilon_{r}}\text{tr}\mathbf{\sigma}-( \varepsilon_{r}-1+\frac{3}{4\varepsilon_{r}})\varepsilon_{0}|\mathbf{E}|^{2}) \;\mathbf{E}. \tag{12}\]
Thus it follows that in order to minimize the total stress, the electric field needs to have the form \(\mathbf{E}=\alpha\mathbf{N}\), where \(\mathbf{N}\) is a unit eigenvector of \(\mathbf{\sigma}\) and \(\alpha:=|\mathbf{E}|\) can be calculated from Eq. (12), denoting the stress tensor eigenvalues \(\lambda_{i}\), as
\[\lambda_{i}=\frac{1}{2\varepsilon_{r}}\text{tr}\mathbf{\sigma}-(\varepsilon_{r}-1+ \frac{3}{4\varepsilon_{r}})\varepsilon_{0}|\mathbf{E}|^{2}, \tag{13}\]
resulting in
\[\alpha=|\mathbf{E}|=\sqrt{\frac{\text{tr}\mathbf{\sigma}-2\varepsilon_{r}\lambda_ {i}}{2\varepsilon_{0}(\varepsilon_{r}^{2}-\varepsilon_{r}+\frac{3}{4})}}. \tag{14}\]
Figure 1: Conceptual periodic microscale arrangement (left) and unit cell (right) in two dimensions.
It should be noted here that, depending on the specific stress state and the parameter \(\varepsilon_{r}\), a real solution cannot always be found. In such cases, minimizing the problem would require \(|{\bf E}|\) to become imaginary, which is unphysical.
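To make the result above concrete, the following minimal numerical sketch evaluates Eqs. (12)-(14) for a given mechanical stress tensor and returns the field with the smallest residual Frobenius norm, skipping eigendirections for which no real solution exists. The helper name `optimal_field` and the example stress values are illustrative assumptions, not part of the original contribution.

```python
# Minimal sketch of the unconstrained minimization (Eqs. (12)-(14)).
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity

def optimal_field(sigma, eps_r=1.0):
    """Return the electric field E minimizing ||sigma + tau_m(E)||_F.

    For each unit eigenvector N_i of sigma, Eq. (14) gives the field magnitude;
    eigendirections with a negative radicand admit no real solution. The
    candidate with the smallest residual Frobenius norm is returned.
    """
    lam, vecs = np.linalg.eigh(sigma)          # eigenvalues/eigenvectors of sigma
    tr = np.trace(sigma)
    best_E, best_norm = np.zeros(3), np.linalg.norm(sigma)
    for i in range(3):
        radicand = (tr - 2.0 * eps_r * lam[i]) / (2.0 * EPS0 * (eps_r**2 - eps_r + 0.75))
        if radicand < 0.0:                     # |E| would be imaginary -> unphysical
            continue
        E = np.sqrt(radicand) * vecs[:, i]
        tau = EPS0 * (eps_r * np.outer(E, E) - 0.5 * np.dot(E, E) * np.eye(3))
        norm = np.linalg.norm(sigma + tau)
        if norm < best_norm:
            best_E, best_norm = E, norm
    return best_E, best_norm

sigma = np.diag([-2.0, 1.0, 3.0])              # example principal stresses
E, residual = optimal_field(sigma)
print(E, residual / np.linalg.norm(sigma))     # field and relative remaining stress
```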
In the remainder of this paper, we will restrict ourselves to the case of non-polarizable materials, i.e. \(\varepsilon_{r}=1\). While the influence of \(\varepsilon_{r}\) may be significant for the results, a parameter study would go beyond the scope of this contribution. It should be noted that the eigenvalues of the Maxwell stress tensor then always take the form [21]
\[\lambda_{\tau}=\{-\lambda_{m},-\lambda_{m},+\lambda_{m}\}, \tag{15}\]
with
\[\lambda_{m}=\varepsilon_{0}\frac{|{\bf E}|^{2}}{2}\geq 0. \tag{16}\]
In the following, we use the convention \(\lambda_{3}\geq\lambda_{2}\geq\lambda_{1}\). Using the results of the minimization, the absolute value of the Maxwell stress tensor eigenvalues depends on the eigenvalues of the mechanical stress as
\[\lambda_{m,1} = \frac{1}{3}(-\lambda_{1}+\lambda_{2}+\lambda_{3}),\] \[\lambda_{m,2} = \frac{1}{3}(\lambda_{1}-\lambda_{2}+\lambda_{3}),\] \[\lambda_{m,3} = \frac{1}{3}(\lambda_{1}+\lambda_{2}-\lambda_{3}), \tag{17}\]
which can be proven by calculating the corresponding value for \(\alpha\) from Eq. (14) and inserting the result into Eq. (16), using the principal axes form of \(\sigma\). The index number corresponds to the specific eigenvalue of the stress tensor. With respect to the original coordinate system, the Maxwell stress tensor then takes the form
\[\mathbf{\tau}^{m}=\varepsilon_{0}(\alpha^{2}{\bf N}\otimes{\bf N}- \frac{1}{2}\alpha^{2}{\bf I}). \tag{18}\]
Again considering the principal axes, we now calculate the relative stress reduction
\[\sigma_{\rm rel}:=\frac{||\mathbf{\sigma}+\mathbf{\tau}^{m }||}{||\mathbf{\sigma}||}, \tag{19}\]
depending on the eigenvalues of the mechanical stress tensor. Using the expression
\[||\mathbf{\sigma}+\mathbf{\tau}^{m}||^{2}= ||\mathbf{\sigma}||^{2}+2{\rm tr}(\mathbf{\sigma}\cdot\mathbf{\tau}^{m})+||\mathbf{\tau}^{m}||^{2}\] \[= (\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3}^{2})+2(\lambda_{1} \lambda_{m}-\lambda_{2}\lambda_{m}-\lambda_{3}\lambda_{m})\] \[+\frac{1}{3}(-\lambda_{1}+\lambda_{2}+\lambda_{3})^{2}, \tag{20}\]
we obtain
\[\sigma_{\rm rel,1} =\sqrt{\frac{2}{3}(1+\frac{\lambda_{1}\lambda_{2}+\lambda_{1} \lambda_{3}-\lambda_{2}\lambda_{3}}{\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3 }^{2}})}, \tag{21}\] \[\sigma_{\rm rel,2} =\sqrt{\frac{2}{3}(1+\frac{\lambda_{1}\lambda_{2}-\lambda_{1} \lambda_{3}+\lambda_{2}\lambda_{3}}{\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3 }^{2}})},\] (22) \[\sigma_{\rm rel,3} =\sqrt{\frac{2}{3}(1+\frac{-\lambda_{1}\lambda_{2}+\lambda_{1} \lambda_{3}+\lambda_{2}\lambda_{3}}{\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3 }^{2}})}. \tag{23}\]
From this result we are able to prove that choosing the smallest eigenvalue of \(\sigma\), \(\lambda_{1}\), yields the optimal solution, i.e. \(\sigma_{\rm rel}\) becomes minimal compared to the other eigenvalues. A detailed derivation can be found in the supplemental material [22], which also includes additional details regarding the constrained minimization and some simple numerical examples. Finally, we note that since our starting point (Eq. (4)) uses the Frobenius norm, which only attains non-negative values and approaches infinity for \(|{\bf E}|\rightarrow\infty\), the calculated critical point \(\lambda_{m,1}\) is the global minimum of our function.
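As an illustration, a short sketch of Eqs. (17) and (21)-(23) for \(\varepsilon_{r}=1\) is given below; it evaluates the relative remaining stress for each choice of eigendirection so that the optimality of \(\lambda_{1}\) can be checked numerically. The function name is a hypothetical choice for illustration.

```python
# Sketch of Eqs. (21)-(23), assuming eps_r = 1 and the ordering of Eq. (17).
import numpy as np

def relative_stress(lams):
    """lams = (lam1, lam2, lam3) with lam3 >= lam2 >= lam1."""
    l1, l2, l3 = lams
    denom = l1**2 + l2**2 + l3**2
    s1 = np.sqrt(2.0 / 3.0 * (1.0 + (l1*l2 + l1*l3 - l2*l3) / denom))
    s2 = np.sqrt(2.0 / 3.0 * (1.0 + (l1*l2 - l1*l3 + l2*l3) / denom))
    s3 = np.sqrt(2.0 / 3.0 * (1.0 + (-l1*l2 + l1*l3 + l2*l3) / denom))
    return s1, s2, s3

print(relative_stress((-1.0, 0.5, 2.0)))   # the first value should be the smallest
```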
### Tensile stress
In the remainder of this paper, we only consider the principal axes system, since, as we have shown previously, we are always able to transform our problem to a coordinate system with respect to the principal axes. Our aim here is to evaluate whether a stress minimization is possible when only tensile stresses are allowed to occur, and how the eigenvalues of the Maxwell stress tensor have to be calculated to obtain the minimal possible stress. Then, inserting this result into Eq. (16) yields the electric field necessary for this task.
Due to the structure of the Maxwell stress tensor and its eigenvalues (Eq. (15)), it is immediately clear that this task is impossible if \(\sigma\) possesses two or three negative eigenvalues. We will discuss the other cases in detail. For the case of tensile stresses, we show the complete calculations here once. A detailed derivation for all cases, following the same procedure, can be found in the supplemental material [22]. First, we assume
\[\mathbf{\sigma}=\begin{bmatrix}\lambda_{1}&0&0\\ 0&\lambda_{2}&0\\ 0&0&-\lambda_{3}\end{bmatrix}, \tag{24}\]
with \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq 0\). We obtain the minimization problem
\[||\mathbf{\sigma}+\mathbf{\tau}^{m}||^{2}\rightarrow\min_{ \lambda_{m}}, \tag{25}\]
with the inequality conditions
\[\lambda_{2}\geq\lambda_{m}\quad\mbox{and}\quad\lambda_{m}\geq\lambda_{3}. \tag{26}\]
Incorporating these into our minimization problem, we obtain the Lagrangian
\[L(\lambda_{m},\alpha,\beta)=(\lambda_{1}-\lambda_{m})^{2}+( \lambda_{2}-\lambda_{m})^{2}\] \[+(-\lambda_{3}+\lambda_{m})^{2}-\alpha g(s)-\beta h(t)\rightarrow \min_{\lambda_{m},\alpha,\beta}, \tag{27}\]
Here, \(s\) and \(t\) are slack variables and \(\alpha\) and \(\beta\) are Lagrange multipliers. The functions \(g(s)\) and \(h(t)\) contain
the constraint information of the problem:
\[g(s) = \lambda_{m}-\lambda_{2}+s^{2}\,\] \[h(t) = \lambda_{3}-\lambda_{m}+t^{2}\, \tag{28}\]
We obtain the derivatives
\[\frac{\partial L}{\partial\lambda_{m}}= -2(\lambda_{1}-\lambda_{m})-2(\lambda_{2}-\lambda_{m})+2(\lambda_ {m}-\lambda_{3})\] \[-\alpha+\beta=0, \tag{29}\] \[-\frac{\partial L}{\partial\alpha}= g(s)=\lambda_{m}-\lambda_{2}+s^{2}=0,\] (30) \[-\frac{\partial L}{\partial\beta}= h(t)=\lambda_{3}-\lambda_{m}+t^{2}=0, \tag{31}\]
and Karush-Kuhn-Tucker (KKT) conditions
\[\alpha s=0\quad\mbox{and}\quad\beta t=0, \tag{32}\]
from which the four cases
\[(1) \alpha=\beta=0, s^{2}>0,t^{2}>0,\] \[(2) \alpha\neq 0,\beta=0, s^{2}=0,t^{2}>0,\] \[(3) \alpha=0,\beta\neq 0, s^{2}>0,t^{2}=0,\] \[(4) \alpha\neq 0,\beta\neq 0, s^{2}=t^{2}=0. \tag{33}\]
follow. We use this notation for all following examples as well. We start with case (1). As both Lagrange multipliers vanish, from Eq. (29) we obtain the solution
\[\lambda_{m}=\frac{1}{3}(\lambda_{1}+\lambda_{2}+\lambda_{3}). \tag{34}\]
Incorporating Eqs. (30) and (31), our result is only valid if both of the following inequalities hold:
\[2\lambda_{2}-\lambda_{1}-\lambda_{3}\geq 0\quad\mbox{and}\quad\lambda_{1}+ \lambda_{2}-2\lambda_{3}\geq 0. \tag{35}\]
For case (2), from Eq. (30) we calculate
\[\lambda_{m}=\lambda_{2}. \tag{36}\]
Eq. (31) requires
\[t^{2}=\lambda_{2}-\lambda_{3}\geq 0, \tag{37}\]
which is automatically fulfilled, as we demanded \(\lambda_{2}\geq\lambda_{3}\). Similarly, for case (3) from Eq. (31) we obtain
\[\lambda_{m}=\lambda_{3}. \tag{38}\]
This solution is always equal to or worse than the previous one. Finally, for case (4), we obtain the limit case \(\lambda_{m}=\lambda_{2}=\lambda_{3}\). Thus, in conclusion, if possible (i.e. both inequalities are fulfilled), the solution \(\lambda_{m}=\frac{1}{3}(\lambda_{1}+\lambda_{2}+\lambda_{3})\) should always be chosen, otherwise \(\lambda_{m}=\lambda_{2}\). This can be seen by inserting both solutions into the Lagrangian and calculating the difference between the first and second solution, which yields the expression \(-\frac{1}{3}(\lambda_{1}-2\lambda_{2}+\lambda_{3})^{2}\leq 0\), implying a global minimum for the first solution. Analogously, this can also be done for the following examples.
Next, we consider the case
\[\mathbf{\sigma}=\begin{bmatrix}\lambda_{1}&0&0\\ 0&\lambda_{2}&0\\ 0&0&\lambda_{3}\end{bmatrix}, \tag{39}\]
with \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}>0\). Thus, the maximum allowed solution is \(\lambda_{m}=\lambda_{2}\), otherwise we would obtain at least one negative eigenvalue for the total stress (cf. Eq. (15)). The Lagrangian is then
\[L(\lambda_{m},\alpha) = (\lambda_{1}-\lambda_{m})^{2}+(\lambda_{2}-\lambda_{m})^{2}+( \lambda_{3}+\lambda_{m})^{2} \tag{40}\] \[-\alpha g(s)\rightarrow\min_{\lambda_{m},\alpha},\]
with derivatives
\[\frac{\partial L}{\partial\lambda_{m}}= -2(\lambda_{1}-\lambda_{m})-2(\lambda_{2}-\lambda_{m})+2( \lambda_{3}+\lambda_{m}) \tag{41}\] \[-\alpha=0,\] \[-\frac{\partial L}{\partial\alpha}= g(s)=\lambda_{m}-\lambda_{2}+s^{2}=0, \tag{42}\]
and the KKT-condition
\[\alpha s=0, \tag{43}\]
from which follow the two cases
\[(1) \alpha=0, s^{2}>0,\] \[(2) \alpha\neq 0, s^{2}=0. \tag{44}\]
From case (1) we calculate
\[\lambda_{m}=\frac{1}{3}(\lambda_{1}+\lambda_{2}-\lambda_{3}), \tag{45}\]
using Eq. (41), if \(\lambda_{m}\leq\lambda_{2}\) holds (Eq. (42)). Otherwise, case (2) yields
\[\lambda_{m}=\lambda_{2}. \tag{46}\]
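The case distinction derived above can be collected in a small sketch; the helper `lambda_m_tensile` is a hypothetical name, and the dispatch on the sign pattern is our simplification of the text.

```python
# Sketch of the tensile-constrained choice of lambda_m:
# Eqs. (34)/(36) for one negative principal stress, Eqs. (45)/(46) for none.
def lambda_m_tensile(l1, l2, l3, negative_third=True):
    """Principal stress magnitudes ordered l1 >= l2 >= l3 >= 0.

    negative_third=True corresponds to sigma = diag(l1, l2, -l3); otherwise
    sigma = diag(l1, l2, l3) with all tensile entries.
    """
    if negative_third:
        # unconstrained optimum, valid only if both KKT inequalities (Eq. (35)) hold
        lam_m = (l1 + l2 + l3) / 3.0
        if 2*l2 - l1 - l3 >= 0 and l1 + l2 - 2*l3 >= 0:
            return lam_m
        return l2                        # active constraint, Eq. (36)
    lam_m = (l1 + l2 - l3) / 3.0         # Eq. (45)
    return lam_m if lam_m <= l2 else l2  # Eq. (46)
```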
### Compressive stress
Analogously to the previous tensile minimization, we now consider the minimization problem with the constraint that only compressive stresses remain in the material, by evaluating our equations in the coordinate system with respect to the principal axes. If \(\sigma\) only contains positive eigenvalues, no solution can be found. To start, we assume
\[\mathbf{\sigma}=\begin{bmatrix}\lambda_{1}&0&0\\ 0&\lambda_{2}&0\\ 0&0&-\lambda_{3}\end{bmatrix}, \tag{47}\]
with \(\lambda_{1},\lambda_{2},\lambda_{3}>0\) and \(\lambda_{3}\geq\lambda_{1}\). The maximum allowed solution is \(\lambda_{m}=\lambda_{3}\). Additionally, we require \(\lambda_{m}\geq\lambda_{1}\). The Lagrangian is
\[L(\lambda_{m},\alpha,\beta)=(\lambda_{1}-\lambda_{m})^{2}+( \lambda_{2}-\lambda_{m})^{2}+\] \[(\lambda_{m}-\lambda_{3})^{2}-\alpha g(s)-\beta h(t)\to\min_{ \lambda_{m},\alpha,\beta}. \tag{48}\]
We calculate
\[\lambda_{m}=\frac{1}{3}(\lambda_{1}+\lambda_{2}+\lambda_{3}), \tag{49}\]
if both of the following inequalities hold
\[\lambda_{2}+\lambda_{3}-2\lambda_{1}\geq 0\quad\text{and}\quad 2\lambda_{3}- \lambda_{1}-\lambda_{2}\geq 0\;. \tag{50}\]
Otherwise, we obtain
\[\lambda_{m}=\lambda_{1}. \tag{51}\]
Again, the remaining cases are \(\lambda_{m}=\lambda_{3}\), which is never better than solution (2), and the limiting case \(\lambda_{1}=\lambda_{3}\).
As the second example for compression, we assume
\[\boldsymbol{\sigma}=\begin{bmatrix}\lambda_{1}&0&0\\ 0&-\lambda_{2}&0\\ 0&0&-\lambda_{3}\end{bmatrix}, \tag{52}\]
with \(\lambda_{1},\lambda_{2},\lambda_{3}>0\) and we require \(\lambda_{3}\geq\lambda_{1}\). The Lagrangian is
\[L(\lambda_{m},\alpha,\beta)=\left(\lambda_{1}-\lambda_{m}\right) ^{2}+\left(-\lambda_{2}-\lambda_{m}\right)^{2}+\] \[(\lambda_{m}-\lambda_{3})^{2}-\alpha g(s)-\beta h(t)\to\min_{ \lambda_{m},\alpha,\beta}. \tag{53}\]
We calculate
\[\lambda_{m}=\frac{1}{3}(\lambda_{1}-\lambda_{2}+\lambda_{3}), \tag{54}\]
if both of the following inequalities hold
\[\lambda_{3}-\lambda_{2}-2\lambda_{1}\geq 0\quad\text{and}\quad 2\lambda_{3}+ \lambda_{2}-\lambda_{1}\geq 0. \tag{55}\]
Otherwise, we calculate
\[\lambda_{m}=\lambda_{1}. \tag{56}\]
The remaining cases result in \(\lambda_{m}\leq\lambda_{3}\), which is never better than solution (2), and the limiting case \(\lambda_{1}=\lambda_{3}\).
Lastly, we assume
\[\boldsymbol{\sigma}=\begin{bmatrix}-\lambda_{1}&0&0\\ 0&-\lambda_{2}&0\\ 0&0&-\lambda_{3}\end{bmatrix}, \tag{57}\]
with \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}>0\). The Lagrangian is
\[L(\lambda_{m},\alpha)= \left(-\lambda_{1}+\lambda_{m}\right)^{2}+\left(-\lambda_{2}- \lambda_{m}\right)^{2}+\] \[(-\lambda_{3}-\lambda_{m})^{2}-\alpha g(s)\to\min_{\lambda_{m}, \alpha}. \tag{58}\]
We calculate
\[\lambda_{m}=\frac{1}{3}(\lambda_{1}-\lambda_{2}-\lambda_{3}), \tag{59}\]
if \(\lambda_{m}\leq\lambda_{1}\) holds. Otherwise, we obtain
\[\lambda_{m}=\lambda_{1}. \tag{60}\]
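Analogously, the compressive results can be collected in a short sketch; the helper name is hypothetical, the eigenvalues are entered as positive magnitudes as in Eqs. (47), (52) and (57), and the case conditions mirror the text above.

```python
# Sketch of the compressive-constrained results:
# Eqs. (49)-(51), (54)-(56) and (59)-(60), selected by the number of negative
# principal stresses.
def lambda_m_compressive(l1, l2, l3, n_negative):
    """n_negative = number of negative principal stresses (1, 2 or 3)."""
    if n_negative == 1:    # sigma = diag(l1, l2, -l3), with l3 >= l1
        lam_m = (l1 + l2 + l3) / 3.0
        return lam_m if (l2 + l3 - 2*l1 >= 0 and 2*l3 - l1 - l2 >= 0) else l1
    if n_negative == 2:    # sigma = diag(l1, -l2, -l3), with l3 >= l1
        lam_m = (l1 - l2 + l3) / 3.0
        return lam_m if (l3 - l2 - 2*l1 >= 0 and 2*l3 + l2 - l1 >= 0) else l1
    # n_negative == 3: sigma = diag(-l1, -l2, -l3), with l1 >= l2 >= l3
    lam_m = (l1 - l2 - l3) / 3.0
    return lam_m if lam_m <= l1 else l1
```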
### Plane stress
As a final problem, we consider a thin sheet of material under plane stress. Without loss of generality, we assume that all components of the total stress \(\boldsymbol{\sigma}^{t}\) involving the z-direction vanish. This implies that the elastic stress of the material in this direction automatically adjusts to counteract the Maxwell stress and preserve the plane stress state. For our optimization problem this opens up additional possibilities, as the remaining Maxwell stress in the xy-plane (indicated by the following subscript) now has the form
\[\boldsymbol{\tau}_{xy}^{m}=\begin{bmatrix}\pm\lambda_{m}&0\\ 0&-\lambda_{m}\end{bmatrix}, \tag{61}\]
where the sign of the first eigenvalue can now be chosen arbitrarily. Again we use a coordinate system with respect to the principal axes. For the elastic stress, we obtain
\[\boldsymbol{\sigma}_{xy}=\begin{bmatrix}\lambda_{1}&0\\ 0&\lambda_{2}\end{bmatrix}, \tag{62}\]
where we distinguish between the three cases (1) \(\lambda_{1}\leq\lambda_{2}\leq 0\), (2) \(\lambda_{1}\leq 0\), \(\lambda_{2}\geq 0\), (3) \(\lambda_{2}\geq\lambda_{1}\geq 0\). For all cases, we evaluate
\[||\boldsymbol{\sigma}^{t}||^{2}\to\min_{\lambda_{m}}, \tag{63}\]
with \(||\boldsymbol{\sigma}^{t}||^{2}=(\sigma_{xx}+\tau_{xx}^{m})^{2}+(\sigma_{yy}+ \tau_{yy}^{m})^{2}\). We obtain
\[\lambda_{m}= \tfrac{1}{2}(-\lambda_{1}+\lambda_{2})\qquad\text{ for case (1) and (2)}, \tag{64}\] \[\lambda_{m}= \tfrac{1}{2}(\lambda_{1}+\lambda_{2})\qquad\text{ for case (3)}. \tag{65}\]
Here, for case (1) and (2), we use the positive sign of the first eigenvalue of \(\boldsymbol{\tau}_{xy}^{m}\) and for case (3) we use the negative sign.
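The plane-stress result can be summarized as follows; the helper name and the explicit ordering assumption \(\lambda_{1}\leq\lambda_{2}\) (implied by the case definitions above) are ours.

```python
# Sketch of the plane-stress result (Eqs. (61)-(65)): given the in-plane
# principal stresses with l1 <= l2, return lambda_m and the chosen sign of the
# first Maxwell eigenvalue.
def lambda_m_plane(l1, l2):
    if l1 >= 0.0 and l2 >= 0.0:            # case (3): both tensile
        return 0.5 * (l1 + l2), -1         # tau_xy = diag(-lam_m, -lam_m)
    return 0.5 * (-l1 + l2), +1            # cases (1), (2): tau_xy = diag(+lam_m, -lam_m)
```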
## 3 Numerical results
In this section we show calculations and numerical examples for all cases. As our examples are for illustrative purposes only, we refrain from using specific units.
We start with the plane stress case, which is simpler to understand and visualize due to its two-dimensional nature. Figure 2 shows a map of the remaining stress \(\sigma_{\text{rel}}\) depending on the eigenvalues of the stress tensor.
We observe that, from our definition of the relative stress reduction \(\sigma_{\text{rel}}=||\boldsymbol{\sigma}+\boldsymbol{\tau}^{m}||/||\boldsymbol{\sigma}||\) (Eq. (19)), it only depends on the angle with respect to the polar axis (defined as usual as starting at \(O=(0,0)\) and moving horizontally to the right), but not on the distance to the origin. Therefore, scaling both eigenvalues with the same factor does not change the result. We investigate this effect further by introducing the polar coordinates \((r,\phi)\) with \(\lambda_{1}=r\cos\phi\), \(\lambda_{2}=r\sin\phi\), from which follows \(\phi=\arctan 2(\lambda_{2},\lambda_{1})\) and \(r=\sqrt{\lambda_{1}^{2}+\lambda_{2}^{2}}\). Using this definition, we calculate the remaining stress as
\[\sigma_{\rm rel}(\phi)=\begin{cases}\frac{\sqrt{2}}{2}\sqrt{1-\sin(2\phi)}& \text{for $\phi\in[0,\frac{\pi}{2}]$}\\ \frac{\sqrt{2}}{2}|\sin\phi+\cos\phi|&\text{for $\phi\in[\frac{\pi}{2},2\pi]$} \end{cases}, \tag{66}\]
which is independent of \(r\) as expected. Figure 3 shows the resulting plot of this function, which is in conformity with Figure 2. From Eq. (66), we are also able to compute the average remaining stress, which is defined as the mean integral
\[\bar{\sigma}_{\rm rel}=\frac{1}{2\pi}\biggl{(}\int\limits_{0}^{2\pi}\sigma_{ \rm rel}(\phi)\,\mathrm{d}\phi\biggr{)}. \tag{67}\]
We obtain a solution analytically as
\[\bar{\sigma}_{\rm rel} = \frac{1}{2\pi}\biggl{(}\int\limits_{0}^{\frac{\pi}{2}}\frac{ \sqrt{2}}{2}\sqrt{1-\sin(2\phi)}\,\mathrm{d}\phi \tag{68}\] \[+\int\limits_{\frac{\pi}{2}}^{2\pi}\frac{\sqrt{2}}{2}|\sin\phi+ \cos\phi|\,\mathrm{d}\phi\biggr{)}\] \[= \frac{1}{2\pi}(2-\sqrt{2}+4-\sqrt{2})\approx 0.505.\]
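The analytical average can be cross-checked with a few lines of numerical quadrature; the script below is only an illustrative sketch of Eqs. (66)-(68).

```python
# Numerical check of Eq. (68): integrating Eq. (66) over the full circle
# reproduces the analytical average remaining stress of about 0.505.
import numpy as np

def sigma_rel_plane(phi):
    if 0.0 <= phi <= np.pi / 2.0:
        return np.sqrt(2.0) / 2.0 * np.sqrt(1.0 - np.sin(2.0 * phi))
    return np.sqrt(2.0) / 2.0 * abs(np.sin(phi) + np.cos(phi))

phis = np.linspace(0.0, 2.0 * np.pi, 200001)
values = np.array([sigma_rel_plane(p) for p in phis])
print(np.trapz(values, phis) / (2.0 * np.pi))   # ~0.505
```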
As can be seen later, compared to the three-dimensional cases, this result is quite good, which can be explained by the signs of the eigenvalues in Eq. (61). Case (1) of the plane stress minimization is the only case where it is not possible to choose the signs in such a way that the absolute values of both \(\sigma_{xx}\) and \(\sigma_{yy}\) are reduced; instead, one value increases by the same amount the other one is reduced. This can also be seen in Figure 2, where the bottom-left quadrant yields the worst results compared to the other quadrants. For the other cases, it is very advantageous that one sign of the eigenvalues in the Maxwell stress tensor can be chosen arbitrarily. However, there is a high variance in these results, as there are some rare cases where the total stress completely vanishes, while in other cases nearly no stress minimization is possible. It should be noted that a uniform random creation of all eigenvalues would be biased both in two and three dimensions; therefore, a correct approach, e.g. as proposed in [23], should be used. A method for real applications with already known mean stresses was proposed in [24].
As the next step, we evaluate the three-dimensional problems. Analogously to the plane problem, we use the spherical coordinates \((r,\theta,\phi)\), with \(\lambda_{1}=r\sin\theta\cos\phi\), \(\lambda_{2}=r\sin\theta\sin\phi\) and \(\lambda_{3}=r\cos\theta\), from which follows \(\phi=\arctan 2(\lambda_{2},\lambda_{1})\) and \(\theta=\arccos(\frac{\lambda_{3}}{r})\), with \(r=\sqrt{\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3}^{2}}\). By inserting this into our calculation of the relative stress \(\sigma_{\rm rel}\) (Eq. (19)) we can again show that it is independent of the radius \(r\). Figure 4 shows the results for the three-dimensional minimization problems. For the unconstrained minimization (top row), we obtain three areas where nearly a complete stress reduction is possible. Here, exactly two of the three eigenvalues are positive and all eigenvalues have the same or similar absolute values. On the other hand, there is a significant void area in the section for three negative eigenvalues, where no solution can be found, as seen in Eq. (14). In contrast to the previous example, the resulting functions for the different cases do not have an analytical primitive function. Therefore, we solve the problem numerically by randomly creating a sample of 100000 uniformly distributed points on the unit sphere.
Figure 3: Remaining stress \(\sigma_{\rm rel}\) depending on the phase angle \(\phi\) for the plane stress case.
Figure 2: Remaining stress \(\sigma_{\rm rel}\) depending on the eigenvalues for the plane stress case.
The angles for the creation have the distributions \(\phi\sim 2\pi\ \mathcal{U}[0,1]\) and \(\theta\sim\arccos\left(1-2\ \mathcal{U}[0,1]\right)\) [25]. Then, averaging the result of our stress reduction gives \(\bar{\sigma}_{\rm rel}=0.70\), where we assumed \(\sigma_{\rm rel}=1\) for the cases without a real solution. We repeat the procedure for the tension minimization problem, whose results depending on the two angles are plotted in the middle row of Figure 4.
Again, we obtain the three areas with a very low remaining stress at the same positions, but the rate at which \(\sigma_{\rm rel}\) increases when moving away from these points is higher. This time, the size of the void area is half of the total size and contains all cases with at least two negative eigenvalues, for which a minimization under the tensile constraint is not possible. For the calculation of the average remaining stress, we exclude these areas and calculate \(\bar{\sigma}_{\rm rel}=0.66\) by again using our uniform distribution.
Finally, the bottom row of Figure 4 shows the results for the compression problem. Here, a solution cannot be found only in the part where all eigenvalues are positive. Similarly to the other examples, we obtain the three areas where a very good stress absorption is possible. For the compression problem, it is very noticeable that \(\sigma_{\rm rel}\) abruptly increases in the area with three negative eigenvalues. In order to fulfill the compression constraint, there are cases where the resulting stress is higher than before (\(\sigma_{\rm rel}>1\)). From the randomization, we calculate the average stress reduction \(\bar{\sigma}_{\rm rel}=0.87\), again excluding the area where no solution can be found.
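The sampling procedure described above can be sketched as follows for the unconstrained case with \(\varepsilon_{r}=1\); the vectorized residual follows Eq. (20) evaluated at the optimal (smallest) eigenvalue, and infeasible samples are counted as \(\sigma_{\rm rel}=1\) as in the text. This is an illustrative reconstruction, not the authors' code.

```python
# Monte Carlo sketch: uniform directions on the unit sphere define the principal
# stresses, and the unconstrained stress reduction is averaged.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
phi = 2.0 * np.pi * rng.random(n)
theta = np.arccos(1.0 - 2.0 * rng.random(n))
l1 = np.sin(theta) * np.cos(phi)
l2 = np.sin(theta) * np.sin(phi)
l3 = np.cos(theta)

lam_min = np.minimum(np.minimum(l1, l2), l3)     # optimal eigendirection (smallest eigenvalue)
trace = l1 + l2 + l3
lam_m = (trace - 2.0 * lam_min) / 3.0            # Eq. (17) for the chosen axis
feasible = trace - 2.0 * lam_min >= 0.0          # radicand of Eq. (14) with eps_r = 1

denom = l1**2 + l2**2 + l3**2                    # = 1 on the unit sphere
residual_sq = denom + 2.0 * lam_m * (2.0 * lam_min - trace) + 3.0 * lam_m**2
sigma_rel = np.where(feasible, np.sqrt(np.maximum(residual_sq, 0.0) / denom), 1.0)
print(sigma_rel.mean())                          # ~0.70, cf. the value reported above
```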
Figure 4: Left: remaining stress \(\sigma_{\rm rel}\) depending on the polar angle \(\theta\) and the azimuthal angle \(\phi\) for the three-dimensional cases. Middle/right: three-dimensional projection depending on the eigenvalues (\(x\widehat{=}\lambda_{1}\), \(y\widehat{=}\lambda_{2}\), \(z\widehat{=}\lambda_{3}\)), front side and back side, respectively. Top: unconstrained minimization, middle: minimization of tension, bottom: minimization of compression.
## 4 Discussion
While our calculations seem promising, some limitations occur. To put our approach into practice, the unit cells have to be small enough for the mechanical stress to be assumed constant within the cell. Here, the absolute size may differ depending on the application. We focused on microscale calculations, but the interference of the electric field from neighboring unit cells could be an important effect to consider. As shown by our calculations, a complete absorption of the mechanical stress is only possible for rare special cases; however, a significant stress reduction is possible in most cases. The variance of the stress reduction is very high and the result depends on the specific stress state. Furthermore, the maximum absolute values of reducible stresses depend on the technical possibilities of the used capacitors and the used bulk material, as a very high voltage may cause a dielectric breakdown. Therefore, possible applications could be found at smaller scales, e.g. robotic capsules for endoscopy or drug delivery in medical engineering [26]. To automate the process and to allow usage not only for static but also for dynamic problems, or for applications where the material load is not known beforehand, sensor technology plays an important role. In order to counteract the mechanical stress, the electric field has to be calculated and applied quickly from sensor measurements, especially if the load changes rapidly in time. For further development, we suggest experimental research on our proposed material. Finally, a parameter study evaluating the influence of the relative permittivity \(\varepsilon_{r}\) could be done.
|
2307.09205
|
Learning Dynamic Attribute-factored World Models for Efficient
Multi-object Reinforcement Learning
|
In many reinforcement learning tasks, the agent has to learn to interact with
many objects of different types and generalize to unseen combinations and
numbers of objects. Often a task is a composition of previously learned tasks
(e.g. block stacking). These are examples of compositional generalization, in
which we compose object-centric representations to solve complex tasks. Recent
works have shown the benefits of object-factored representations and
hierarchical abstractions for improving sample efficiency in these settings. On
the other hand, these methods do not fully exploit the benefits of
factorization in terms of object attributes. In this paper, we address this
opportunity and introduce the Dynamic Attribute FacTored RL (DAFT-RL)
framework. In DAFT-RL, we leverage object-centric representation learning to
extract objects from visual inputs. We learn to classify them in classes and
infer their latent parameters. For each class of object, we learn a class
template graph that describes how the dynamics and reward of an object of this
class factorize according to its attributes. We also learn an interaction
pattern graph that describes how objects of different classes interact with
each other at the attribute level. Through these graphs and a dynamic
interaction graph that models the interactions between objects, we can learn a
policy that can then be directly applied in a new environment by just
estimating the interactions and latent parameters. We evaluate DAFT-RL in three
benchmark datasets and show our framework outperforms the state-of-the-art in
generalizing across unseen objects with varying attributes and latent
parameters, as well as in the composition of previously learned tasks.
|
Fan Feng, Sara Magliacane
|
2023-07-18T12:41:28Z
|
http://arxiv.org/abs/2307.09205v1
|
# Learning Dynamic Attribute-factored World Models for Efficient Multi-object Reinforcement Learning
###### Abstract
In many reinforcement learning tasks, the agent has to learn to interact with many objects of different types and generalize to unseen combinations and numbers of objects. Often a task is a composition of previously learned tasks (e.g. block stacking). These are examples of _compositional generalization_, in which we compose object-centric representations to solve complex tasks. Recent works have shown the benefits of object-factored representations and hierarchical abstractions for improving sample efficiency in these settings. On the other hand, these methods do not fully exploit the benefits of factorization in terms of object attributes. In this paper, we address this opportunity and introduce the Dynamic Attribute FacTored RL (DAFT-RL) framework. In DAFT-RL, we leverage object-centric representation learning to extract objects from visual inputs. We learn to classify them in classes and infer their latent parameters. For each class of object, we learn a class template graph that describes how the dynamics and reward of an object of this class factorize according to its attributes. We also learn an interaction pattern graph that describes how objects of different classes interact with each other at the attribute level. Through these graphs and a dynamic interaction graph that models the interactions between objects, we can learn a policy that can then be directly applied in a new environment by just estimating the interactions and latent parameters. We evaluate DAFT-RL in three benchmark datasets and show our framework outperforms the state-of-the-art in generalizing across unseen objects with varying attributes and latent parameters, as well as in the composition of previously learned tasks.
## 1 Introduction
Model-based reinforcement learning (MBRL) and world models [1; 2; 3] have demonstrated improved performance in many RL tasks by providing better sample efficiency. However, most world models focus only on modeling a single object or holistic modeling over the environment, while in real-world tasks, we often have environments with multiple objects that interact, and we are interested to generalize to unseen combinations and numbers of objects. In recent years, there have been several studies exploring and learning object-oriented environment models or policy models [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17] and tackling the problem of _compositional_ or combinatorial generalization, in which we consider combining the modeling of multiple objects or tasks to solve a new task.
Although these methods have effectively leveraged object-centric and object-factored representations in RL, and thus improved the sample efficiency in multi-object settings, they have not fully exploited the benefits of factorization in terms of object attributes. Often an object's transition and reward functions are influenced only by a sparse subset of attributes, e.g. an object's position and reward are affected by its previous position, but not by its appearance or activation state. In these environments, interactions between objects are often sparse, both in time and in terms of which attributes are affected, e.g. the position of a box is affected by the position of another box at the timestep in which
they collide, but not directly by the other object's friction coefficient. Additionally, objects of the same type share similar factored dynamics, modulated by an object-specific latent parameter, while objects of different types might instead have different attributes, dynamics, and rewards.
In this paper, we propose Dynamic Attribute FacTored RL (DAFT-RL), a framework that learns a fine-grained attribute-factored representation across objects, including a dynamic graph for modeling interactions between objects. As part of this framework, we propose a model, DAFT-MDP, that builds on Factored (PO)MDPs [18; 19; 20; 21; 22; 23], Relational MDPs [24; 25; 26] and especially Object-Oriented (PO)MDPs [27; 28; 29], but focuses on a more fine-grained factorization at the attribute level and dynamic graphs. We implement our framework as a structured and sequential generative model by combining it with state-of-the-art object-centric representation learning [30; 31] for extracting objects and attributes from visual inputs, factored adaptation approaches inspired by the causality literature [32; 33] for estimating the factored dynamics and reward, soft attention networks [34] for action binding [35] and (dynamic) Neural Relational Inference [36; 37] for modeling interactions. Our framework allows us to learn a policy on a set of source environments that can successfully generalize to new environments with unseen combinations of objects with different latent parameters (possibly with unseen values) and types, as well as to combinations of previously learned tasks on different objects, without any further policy learning. We show the benefits of DAFT-RL in three benchmark datasets for compositional generalization, where it outperforms the baselines.
## 2 Dynamic Attribute-FacTored MDPs (DAFT-MDP)
We formalize our assumptions by introducing our DAFT-MDP model, which is an extension with class template graphs, interaction patterns, and interaction graphs of factored (PO)MDPs [18; 19; 20; 21; 22; 23]. This extension takes inspiration from Relational MDPs [24; 25; 26] and their literature, especially Object-Oriented (PO)MDPs [27; 28; 29], but we propose a more fine-grained factorization of the transition and reward at the object attribute level, based on estimating template and dynamic interaction graphs.
Intuitively, we will consider learning a policy that can generalize across different environments that vary in the number and characteristics of their objects. We will assume that each environment is composed of multiple _objects_, each of a specific type, or _class_. Each object has several observable _attributes_ (e.g. position, velocity) and some latent constant _parameters_ (e.g. an object-specific friction coefficient). Objects of the same class will have the same attributes, the same transition and reward functions, but can differ in the values of the attributes (e.g. they are at different positions) and in the value of the latent parameters (e.g. they have different friction coefficients). We will assume that the transition and reward functions can be _factored_ in terms of attributes and that for a given attribute only a sparse subset of other attributes influences these functions. The objects can interact with each other, which might influence their dynamics. We will assume that these _interactions_ are _sparse_, both in time and in terms of the effect on the attributes of each object, and that all objects in a class have the same _interaction pattern_ in terms of how the attributes interact with objects of another class. In each environment, we will assume that an action has only an effect on one object at a time.
We formalize these assumptions in the following. We start by defining our class system, then we describe three types of graphs (class template graphs, interaction patterns, and dynamic interaction graphs in Fig. 1) that describe how dynamics and reward factorize based on the classes, attributes, and interactions, and finally provide a formal definition of a Dynamic Attribute-FacTored MDP.
Class system, attributes, and objects.We assume a known set of classes \(\mathcal{C}=\{C_{1},\ldots,C_{k}\}\) of objects, where each class \(C_{j}\) describes a set of observable attributes \(\{C_{j}.s_{1},\ldots,C_{j}.s_{n}\}\), which we assume for simplicity are the same number in each class. We assume that each class has a set of latent constant parameters \(C_{j}.\boldsymbol{\theta}\), that represent physical properties of the object that can vary across different objects of the same type. For example, in one of the benchmarks [15], we consider two types of objects \(\mathcal{C}=\{\texttt{box},\texttt{sw}\}\), boxes and switches, with class attributes \(\{\texttt{box}.\texttt{pos},\texttt{box}.\texttt{vel},\texttt{box}.\boldsymbol{ \theta}\}\) representing the position, velocity, and friction coefficient of a box, and \(\{\texttt{sw}.\texttt{pos},\texttt{sw}.\texttt{active}\}\), representing the position and activation state of a switch. The class system specifies a template for a set of environments that can vary in the number and characteristics of the objects. Each environment has a fixed set of objects \(\mathcal{O}=\{o_{1},\ldots,o_{m}\}\), where each object is an instance of a class in \(\mathcal{C}\), which we denote as \(o_{i}\in C_{j}\) for \(i\in\{1,\ldots,m\},j\in\{1,\ldots,k\}\). We denote the class of an object \(o_{i}\) with \(C(i)\). For example, using the class system from the previous example, we can represent a source environment as \(\mathcal{O}^{\prime}=\{o_{1},o_{2}\}\) where \(o_{1}\in\texttt{box}\) and \(o_{2}\in\texttt{sw}\), and a target environment as
\(\mathcal{O}^{\prime\prime}=\{o_{1},o_{2},o_{3}\}\), where we add \(o_{3}\in\textsc{box}\) to the original objects. For each object \(o_{i}\in C_{j}\) and timestep \(t=\{1,\dots,T\}\), we denote its attributes at time \(t\) as \(\mathbf{o}_{i}^{t}=\{o_{i}.s_{1}^{t},\dots,o_{i}.s_{n}^{t}\}\), which are instantiations of the class attributes \(\{C_{j}.s_{1},\dots,C_{j}.s_{n}\}\), and its constant parameters as \(o_{i}.\boldsymbol{\theta}\). In our example, for box \(o_{1}\) the attributes \(o_{1}.s_{1}^{t}\), \(o_{1}.s_{2}^{t}\) and \(o_{1}.\boldsymbol{\theta}\) are its position and velocity at time \(t\) and its friction coefficient, while for the switch \(o_{2}\) the attributes \(o_{2}.s_{1}^{t}\) and \(o_{2}.s_{2}^{t}\) are its position and activation at time \(t\).
States, actions, transitions, and rewards.We define the state at time \(t\) as the collection of all the object states, i.e., \(\mathbf{s}^{t}=\{\mathbf{o}_{1}^{t},\dots,o_{m}^{t}\}\) with domain \(\mathcal{S}\). We collect all object-specific latent parameters in a global parameter \(\boldsymbol{\theta}=\{o_{1}.\boldsymbol{\theta},\dots,o_{m}.\boldsymbol{ \theta}\}\) with domain \(\Theta\). We define the space of actions \(\mathcal{A}\) and use \(\mathbf{a}^{t}\in\mathcal{A}\) to denote the action at time \(t\). We denote the transition probability as \(p(\mathbf{s}^{t+1}|\mathbf{s}^{t},\mathbf{a}^{t},\boldsymbol{\theta})\). The reward at time \(t\) is denoted as \(r^{t}\) and the reward probability is denoted as \(p(r^{t}|\mathbf{s}^{t},\mathbf{a}^{t},\boldsymbol{\theta})\). The transition and reward probabilities are factorized according to the _class template graphs_, _interaction pattern graphs_ and _interaction graphs_ that we can learn from data, and that we will introduce below.
Class template graphs (Fig. 1A).We assume that all objects within the same class share the same factorization in terms of how their own attributes, latent parameters and actions influence their dynamics and rewards. For example, all boxes will have a similar relationship between their position, velocity, and latent friction parameters, since they follow the same physical laws. Similarly, all switches will share similar dynamics, which will be different from the boxes. We describe these relationships as Dynamic Bayesian Networks (DBNs) [38], which are graphical models that describe the template for the relations between two contiguous timesteps, \(t\) and \(t+1\), and assume this structure is time-invariant. In particular, for each class \(C_{j}\), we learn a DBN \(\mathcal{G}_{C_{j}}\) over the nodes \(\{C_{j}.s_{1}^{t},\dots,C_{j}.s_{n}^{t},C_{j}.s_{1}^{t+1},\dots,C_{j}.s_{m}^{ t+1},C_{j}.\boldsymbol{\theta},\mathbf{a}^{t},r^{t}\}\) that represents a template for the instance graph between the attributes for each object of this class \(\mathcal{G}_{o_{i}}^{t}\) for \(o_{i}\in C_{j}\). In particular, the edges between the action \(a^{t}\) and the attributes at timestep \(t+1\) can be switched on or switched off at different timesteps, since as we will see in the description of the interaction graphs, they represent the interaction pattern of the agent with any specific object. We show an example of two DBNs for two classes of objects (boxes and switches) in Fig. 1a. In this example, \(C_{1}.s_{2}^{t}\) influences \(C_{1}.s_{2}^{t+1}\) and \(C_{1}.s_{3}^{t+1}\), but it's not influenced by \(C_{1}.s_{3}^{t}\). The reward \(r^{t}\) is only influenced by \(C_{1}.s_{3}^{t}\) and the action will only have an effect on \(C_{1}.s_{2}^{t+1}\) and \(C_{1}.s_{3}^{t+1}\). Moreover, \(C_{2}.s_{2}^{t}\) influences only \(C_{2}.s_{2}^{t+1}\) and \(r^{t}\).
Interaction pattern graphs (Fig. 1B). When two objects interact, it often happens that only some of the attributes of each object affect the other. For example, when two boxes collide, their positions and velocities might change, but their masses will not. Therefore, we will assume that, from an attribute perspective, the interplay between objects during an interaction is also factored and sparse. Additionally, we assume that the interactions between two objects follow patterns based on the classes of the objects. For example, the attributes of a box will always interact in the same way with the attributes of a switch, regardless of the specific object. In particular, for any pair of classes \(C_{i}\) and \(C_{j}\) (possibly also with \(i=j\)), we learn a DBN \(\mathcal{G}_{C_{i},C_{j}}\) over the nodes \(\{\{C_{i}.s_{l}^{t}\}_{l=1}^{n},\{C_{i}.s_{l}^{t+1}\}_{l=1}^{n},C_{i}.\boldsymbol{\theta},\{C_{j}.s_{l}^{t}\}_{l=1}^{n},\{C_{j}.s_{l}^{t+1}\}_{l=1}^{n},C_{j}.\boldsymbol{\theta}\}\). We show an example of a DBN describing how boxes interact with other boxes in Fig. 1b. In this case, the interaction is
Figure 1: The graphical representation of DAFT-MDP. The colors denote the attributes for an object or a class, the red dashed lines denote edges that can be switched on or off at different timesteps.
limited to \(C_{1}.s_{2}^{t}\) from one object influencing the \(C_{1}.s_{1}^{t+1}\) and \(C_{1}.s_{2}^{t+1}\) from the other object, and the latent parameters \(C_{1}.\mathbf{\theta}\) from each object influencing \(C_{1}.s_{3}^{t+1}\) from the other object. While the interaction patterns are time-invariant, we assume that for each pair of objects, the interactions are only switched on at some points in time, as we will now describe through the interaction graphs.
Dynamic interaction graph (Fig. 1C).The class template and interaction pattern graphs that we just described can model the general behavior of the classes of objects in a static, time-invariant way. On the other hand, in many multi-object environments object interactions occur sparsely, the pairs of interacted objects are not fixed, and the action has an effect only on a limited number of objects at any given time (e.g. we will assume only one for simplicity). We therefore propose to model these interactions between objects as a dynamic graph \(\mathcal{G}_{\mathrm{inter}}=\{\mathcal{G}_{\mathrm{inter}}^{t}\}_{t=1}^{T}\) at the object level, which is a sequence of graphs \(\mathcal{G}_{\mathrm{inter}}^{t}\) with edges from a subset of \(\{o_{1}^{t},\dots,o_{m},\mathbf{a}^{t}\}\) to a subset of \(\{o_{1}^{t+1},\dots,o_{m}^{t+1}\}\). Each edge \(o_{i}^{t}\to o_{j}^{t+1}\) represents an interaction between an object \(o_{i}\) and an object \(o_{j}\). This interaction is instantiated in an instance interaction graph \(\mathcal{G}_{o_{i},o_{j}}^{t}\), following the interaction pattern graph \(\mathcal{G}_{C_{i},C_{j}}\) for each pair of objects \(o_{i}\in C_{i}\) and \(o_{j}\in C_{j}\). We also learn an _object selector_\(\alpha^{t}\in\{1,\dots,m\}\) that selects the single object on which the action \(\mathbf{a}^{t}\) has an effect at timestep \(t\), which we represent with an edge from \(\mathbf{a}^{t}\) to the selected object \(o_{\alpha^{t}}^{t}\) in \(\mathcal{G}_{\mathrm{inter}}^{t}\). In particular, this selection (or _action binding_) activates the edges from \(\mathbf{a}^{t}\) to the object attributes described in the class template graph \(\mathcal{G}_{C_{j}}\) in the instantiated version for object \(o_{i}\in C_{j}\), which we denote \(\mathcal{G}_{o_{i}}^{t}\). The graph \(\mathcal{G}_{\mathrm{inter}}\) is dynamic because the edges of each graph in the sequence can change at each timestep \(t=1,\dots,T\). We show an example of interactions between three objects at different timesteps in Fig. 1c. In this example, at timestep \(t\) there is an interaction between \(o_{1}\) and \(o_{3}\), which will follow the interaction pattern presented in Fig. 1b, i.e. \(o_{1}.s_{2}^{t}\to\{o_{3}.s_{1}^{t+1},o_{3}.s_{2}^{t+1}\}\) and \(o_{1}.\mathbf{\theta}^{t}\to o_{3}.s_{3}^{t+1}\), and viceversa for \(o_{3}\) towards \(o_{1}\). Moreover, \(\mathbf{a}^{t}\) has an effect on \(o_{2}\), and specifically \(o_{2}.s_{3}^{t+1}\), following the class template graph presented in Fig. 1a. Instead, at timestep \(t+1\), there are no interactions between the objects and \(\mathbf{a}^{t+1}\) has an effect on \(o_{1}\), specifically \(o_{1}.s_{2}^{t+2}\) and \(o_{1}.s_{3}^{t+2}\). For completeness, in App. B we provide an example of this graph combined with the instantiated class template graphs for each object \(\mathcal{G}^{o_{1}}\), \(\mathcal{G}^{o_{2}}\) and \(\mathcal{G}^{o_{3}}\) for three timesteps \(t,t+1,t+2\), as well as the instantiated interaction pattern graph \(\mathcal{G}^{o_{1},o_{3}}\) that is switched on at timestep \(t\).
Our modeling assumptions.Now that we have introduced all of the graphical structures that we will need, we can describe our assumptions as a Dynamic Attribute-FacTored Markov Decision Process (DAFT-MDP). We will assume that the class system is fixed and that the objects can vary in each environment, as can their interaction graphs. Under this assumption, a DAFT-MDP defines a family of MDPs, that is parametrized in the objects \(\mathcal{O}\) and their dynamic interaction graph \(\mathcal{G}_{\mathrm{inter}}\).
**Definition 1** (Daft-MDP).: _A Dynamic Attribute-FacTored Markov Decision Process (DAFT-MDP) is a tuple \((\mathcal{C},\mathcal{O},\Theta,\mathcal{A},\mathcal{G},\mathbb{P}_{s}, \mathcal{R},\gamma)\), where \(\mathcal{C}\) is the set of classes, \(\mathcal{O}\) is the set of objects, \(\Theta\) is the space of the constant latent parameters, \(\mathcal{A}\) the action space, \(\mathbb{P}_{s}\) is the transition distribution, \(\mathcal{R}\) is the reward function and \(\mathcal{G}\) is a set of graphs that contains the collection of class template graphs for each class \(\{\mathcal{G}_{C_{j}}\}_{C_{j}\in\mathcal{C}}\), the collection of interaction pattern graphs for each pair of classes \(\{\mathcal{G}_{C_{i},C_{j}}\}_{C_{i},C_{j}\in\mathcal{C}}\) and the dynamic interaction graph \(\mathcal{G}_{\mathrm{inter}}\), as defined previously. These graphs define the factorization of the transition distribution per object and per attribute, as follows:_
\[\mathbb{P}_{s}(\mathbf{s}^{t+1}|\mathbf{s}^{t},\mathbf{\theta},\mathbf{a}^{t})=\prod_{i=1}^{m}\prod_{l=1}^{n}\mathbb{P}_{s}\left(o_{i}.s_{l}^{t+1}\,|\,\mathrm{pa}_{\mathcal{G}_{o_{i}}^{t}}(o_{i}.s_{l}^{t+1}),\{\mathrm{pa}_{\mathcal{G}_{o_{i},o_{k}}^{t}}(o_{i}.s_{l}^{t+1})\}_{o_{k}\to o_{i}\in\mathcal{G}_{\mathrm{inter}}^{t}}\right)\]
_where \(\mathbf{s}^{t}\) is the collection of all attributes of all objects at time \(t\), \(\mathbf{\theta}\in\Theta\) is the collection of all latent constant parameters for all objects, and \(\mathbf{a}^{t}\in\mathcal{A}\) is the action. In the right-hand term, \(o_{i}.s_{l}^{t+1}\) is attribute \(s_{l}\) of object \(o_{i}\) at time \(t+1\), while \(\mathrm{pa}_{\mathcal{G}_{o_{i}}^{t}}(o_{i}.s_{l}^{t+1})\) are the parents of the attribute \(l\) for object \(o_{i}\) based on the class template graph \(\mathcal{G}_{C(i)}\), where \(C(i)\) is the class of \(o_{i}\), and where the action binding \(\alpha^{t}\) activates any potential connections from \(\mathbf{a}^{t}\). In the second term of the conditioning, we iterate over the objects \(o_{k}\) that are interacting with \(o_{i}\) at time \(t\) in the dynamic interaction graph \(o_{k}\to o_{i}\in\mathcal{G}_{\mathrm{inter}}^{t}\). For each of these objects \(o_{k}\) we collect the attributes that interact with \(o_{i}.s_{l}\) in the instance interaction pattern \(\mathcal{G}_{o_{i},o_{k}}^{t}\) based on the interaction pattern graph \(\mathcal{G}_{C(i),C(k)}\) for the respective classes \(C(i)\) and \(C(k)\). Similarly, we define the factorization of the reward function per object and per attribute as \(\mathcal{R}(\mathbf{s}^{t},\mathbf{a}^{t},\mathbf{\theta})=\mathcal{R}(\{\mathrm{pa}_{\mathcal{G}_{o_{i}}^{t}}(r^{t})\}_{o_{i}\in\mathcal{O}})\), where for each object \(o_{i}\) we collect all the attributes that have an edge to the reward in the instantiation of the class template graph._
In the following, we assume that the classes \(\mathcal{C}\) are known and fixed across environments, while the objects \(\mathcal{O}\) can vary, as can the latent parameters \(\mathbf{\theta}\). In the training phase, we will learn how to classify objects, the transition and reward functions based on the class template graphs \(\{\mathcal{G}_{C_{j}}\}_{C_{j}\in\mathcal{C}}\) and the interaction patterns \(\{\mathcal{G}_{C_{i},C_{j}}\}_{C_{i},C_{j}\in\mathcal{C}}\). In the testing phase, we will infer the class and latent parameters of each object, as well as the interactions between the objects in the dynamic interaction graph \(\mathcal{G}_{\mathrm{inter}}^{t}\), which specify the transition and reward functions in the new environment.
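As a toy illustration of how this factorization can be evaluated once all graphs are known, the sketch below collects, for one attribute of one object, the parent values selected by boolean masks; all names and data structures here are hypothetical and not part of the paper.

```python
# Toy sketch: gather the attribute-level parents entering one factor of Definition 1.
import numpy as np

def factored_parents(i, l, states, class_graph, pattern_graphs, inter_edges):
    """Collect the parent values of attribute l of object i for one timestep.

    states[j]      : attribute vector of object j at time t
    class_graph    : (n x n) boolean mask, class_graph[a, l] = attribute a -> attribute l
    pattern_graphs : dict mapping (class_k, class_i) to an (n x n) boolean mask
    inter_edges    : list of (k, class_k, class_i) interactions k -> i active at time t
    """
    parents = [states[i][class_graph[:, l]]]                  # own-attribute parents
    for k, ck, ci in inter_edges:
        mask = pattern_graphs[(ck, ci)][:, l]                 # attribute-level interaction
        parents.append(states[k][mask])
    return np.concatenate(parents)
```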
## 3 The DAFT-RL framework
In the previous section, we introduced our model, DAFT-MDP. In this section we provide a framework for estimating DAFT-MDPs, leveraging them for policy learning in a set of source environments, and adapting the policy to a new target environment with different objects, without any additional policy learning. Our framework is divided into four steps: (i) offline class learning in single-object environments, (ii) offline interaction learning and latent parameter inference in multi-object environments, (iii) policy learning and imagination in multi-object environments, and finally (iv) adaptation to a new multi-object environment. We present each step and its implementation in the following.
In all steps and all environments, if the input is an image, we extract the objects and their attributes \(\mathbf{s}^{t}=\{\mathbf{o}_{1}^{t},\mathbf{o}_{2}^{t},\ldots,\mathbf{o}_{m}^ {t}\}\) from sequences of images \(\mathbf{x}^{t}\) with pre-trained object-centric methods, e.g. SA [30] and AIR [31]. For symbolic inputs, we directly access the attributes of all objects \(\mathbf{s}^{t}=\{\mathbf{o}_{1}^{t},\mathbf{o}_{2}^{t},\ldots,\mathbf{o}_{m}^ {t}\}\). For each set of objects \(\mathcal{O}\), we learn to classify the objects into their classes \(\mathcal{C}\) with supervised learning, which we describe in detail in App. C.
### Step 1: Class learning in single-object environments.
In this step, we consider data from \(m\) single-object environments with different objects and no agent interaction. In particular, for each class \(C_{j}\) we collect the transitions for several objects \(o_{i}\in C_{j}\) as \(\{\mathbf{o}_{i}^{t},\mathbf{a}_{i}^{t},r_{i}^{t}\}_{t=1}^{T}\), in environments in which there is only object \(o_{i}\), and a random policy is used to generate actions. We denote these data as \(\mathcal{D}^{\mathrm{single}}=\{\{\mathbf{o}_{i}^{t},\mathbf{a}_{i}^{t},r_{i} ^{t}\}_{t=1}^{T}\}_{i\in C_{j},\forall j=1,\ldots,k,i=1,\ldots,m}\).
We initialize the class template graph \(\mathcal{G}_{C_{j}}\) for each class \(C_{j}\) randomly, and then use \(\mathcal{D}^{\mathrm{single}}\) to learn it, except for the contribution of the latent parameters, which is learned in the next phase. In particular, we learn the class template graph by maximizing the log-likelihood \(\log p_{\lambda}\left(\mathbf{o}_{i}^{t+1},r_{i}^{t}\mid\mathbf{o}_{i}^{t},\mathbf{a}_{i}^{t},\mathcal{G}^{C_{j}}\right)\), where \(\lambda=\{\lambda_{s},\lambda_{r}\}\) are the parameters of the dynamics and reward models. For the implementation, we use Gated Recurrent Units (GRU) [39] to learn the dynamics and reward models jointly with the class template graphs. At time step \(t\), for each object \(i\) with class \(C_{j}\), the inputs to the GRU are \(\{\mathrm{pa}_{\mathcal{G}_{C_{j}}}(\mathbf{o}_{i}^{t+1})\}_{i=1}^{m}\) and \(\{\mathrm{pa}_{\mathcal{G}_{C_{j}}}(r_{i}^{t})\}_{i=1}^{m}\), and the GRU outputs \(\mathbf{o}_{i}^{t+1}\) and \(r_{i}^{t}\). The learning objective of this step is given below: we maximize the log-likelihoods of the dynamics and reward models and regularize the graph to be sparse:
\[\operatorname*{argmax}_{\lambda,\{\mathcal{G}^{C_{j}}\}_{j=1}^{k}}\sum_{t=1}^{T}\sum_{i=1}^{m}\sum_{l=1}^{n}\log p_{\lambda}\left(o_{i}.s_{l}^{t+1},r_{i}^{t}\mid\mathrm{pa}_{\mathcal{G}_{C(i)}}(o_{i}.s_{l}^{t+1}),\mathrm{pa}_{\mathcal{G}_{C(i)}}(r_{i}^{t})\right)-\sum_{j=1}^{k}\left\|\mathcal{G}^{C_{j}}\right\|_{1}\]
where \(m\) and \(k\) indicate the number of single-object environments and object type classes, respectively, and \(\mathrm{pa}_{\mathcal{G}_{C(i)}}\) denotes the parents of a variable in the template class graph for the class \(C(i)\) of object \(o_{i}\). After this step, we fix the learned \(\{\mathcal{G}^{C_{1}},\mathcal{G}^{C_{2}},\ldots,\mathcal{G}^{C_{k}}\}\), with the exception of the edges from the latent parameters \(\theta\), which here we assume are disabled and which we will learn in the next step. In later stages, we will reuse the learned reward model \(\lambda_{r}\) and the class template graphs.
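A minimal PyTorch sketch of this step is given below, under simplifying assumptions that are ours: a single class, a Gaussian likelihood (so the log-likelihood reduces to a mean-squared error), and a soft adjacency obtained by averaging edge logits instead of masking the inputs separately for each predicted attribute.

```python
# Illustrative reconstruction of the Step-1 objective, not the authors' code.
import torch
import torch.nn as nn

class ClassTemplateDynamics(nn.Module):
    def __init__(self, n_attr, act_dim, hidden=64):
        super().__init__()
        # one logit per (parent attribute or action) -> child attribute edge
        self.edge_logits = nn.Parameter(torch.zeros(n_attr + act_dim, n_attr))
        self.gru = nn.GRU(n_attr + act_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_attr + 1)         # next attributes + reward

    def forward(self, attrs, actions):
        # attrs: (B, T, n_attr), actions: (B, T, act_dim)
        mask = torch.sigmoid(self.edge_logits)             # soft adjacency in [0, 1]
        x = torch.cat([attrs, actions], dim=-1)
        x = x * mask.mean(dim=1)                           # simplified input gating
        h, _ = self.gru(x)
        out = self.head(h)
        return out[..., :-1], out[..., -1]                 # predicted attrs, reward

    def loss(self, attrs, actions, rewards, l1=1e-3):
        pred_attrs, pred_r = self.forward(attrs[:, :-1], actions[:, :-1])
        mse = ((pred_attrs - attrs[:, 1:]) ** 2).mean() + ((pred_r - rewards[:, :-1]) ** 2).mean()
        return mse + l1 * torch.sigmoid(self.edge_logits).sum()   # sparsity on the graph
```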
### Step 2: Interaction learning and latent parameter inference in multi-object environments.
In this step, we consider data \(\mathcal{D}^{\mathrm{multi}}\) from \(N\) multi-object environments with different object configurations in which the objects can have varying latent parameters. Formally, we define \(\mathcal{D}^{\mathrm{multi}}=\{\{\mathbf{o}_{1}^{t},\mathbf{a}_{1}^{t},r_{1}^{t },\mathbf{o}_{2}^{t},\mathbf{a}_{2}^{t},r_{2}^{t},\ldots,\mathbf{o}_{m}^{t}, \mathbf{a}_{m}^{t},r_{m}^{t}\}_{t=1}^{T}\}_{i=1}^{N}\). In each of these environments, we assume the agent can interact only with one object at a time. On these data, we again extract the objects and their attributes from a sequence of images with pre-trained object-centric methods and classify the objects using the object classifier. We use these data to learn the interaction pattern graphs \(\mathcal{G}_{C_{i},C_{j}}\) for each pair of classes \(C_{i}\) and \(C_{j}\) and the dynamic interaction graph \(\mathcal{G}_{\mathrm{inter}}\) by exploiting the
previously learned class template graphs. In particular, we first learn the action binding \(\alpha^{t}\), and at a second stage, we jointly learn the rest of the dynamic interaction graph \(\mathcal{G}_{\mathrm{inter}}\), the interaction patterns \(\mathcal{G}_{C_{i},C_{j}}\) for each pair of classes, the object-specific latent parameters \(o_{i}\). \(\mathbf{\theta}\) and their edges to the other attributes in \(\mathcal{G}_{C_{j}}\). We describe these two stages in detail in the following.
#### 3.2.1 Step 2.1: Learning the action binding
Motivated by [35], we learn the dynamic action binding \(\alpha=\{\alpha^{1},\alpha^{2},\ldots,\alpha^{T}\}\) using soft attention networks, which are adapted from the single-head self-attention module of the Transformer model [34]. Specifically, we perform non-linear transformations on the states and actions using multi-layer perceptrons (MLPs) to derive the keys \(\mathbf{k}^{t}=\langle f_{k}(\mathbf{o}_{1}^{t}),f_{k}(\mathbf{o}_{2}^{t}),\ldots,f_{k}(\mathbf{o}_{m}^{t})\rangle\), query \(\mathbf{q}^{t}=f_{q}\left(\mathbf{a}^{t}\right)\), and value \(\mathbf{v}^{t}=f_{v}\left(\mathbf{a}^{t}\right)\), respectively. We then compute the attention weights \(\alpha^{t}=\texttt{softmax}\left((\mathbf{k}_{1}^{t})^{\intercal}\mathbf{q}^{t},(\mathbf{k}_{2}^{t})^{\intercal}\mathbf{q}^{t},\ldots,(\mathbf{k}_{m}^{t})^{\intercal}\mathbf{q}^{t}\right)\). We use the learned attention weights \(\alpha\) as the action binding selector, as they provide an estimation of the binding affinity from the action to the objects at each time step. The soft attention mechanism assigns weights from the action to the objects by multiplying the value vector \(\mathbf{v}^{t}\) with the attention weights \(\alpha^{t}\), and then embeds the weighted actions into the dynamics of each object. We maintain a fixed structure for the class template graphs and focus on learning the action binding selector by updating \(f_{k}\), \(f_{q}\), and \(f_{v}\).
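The action-binding module can be sketched compactly as follows; layer sizes and module names are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch of the soft action binding: object embeddings act as keys, the action as
# query/value, and the attention weights alpha give per-object binding affinities.
import torch
import torch.nn as nn

class ActionBinding(nn.Module):
    def __init__(self, obj_dim, act_dim, d=32):
        super().__init__()
        self.f_k = nn.Sequential(nn.Linear(obj_dim, d), nn.ReLU(), nn.Linear(d, d))
        self.f_q = nn.Sequential(nn.Linear(act_dim, d), nn.ReLU(), nn.Linear(d, d))
        self.f_v = nn.Sequential(nn.Linear(act_dim, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, objects, action):
        # objects: (B, m, obj_dim), action: (B, act_dim)
        k = self.f_k(objects)                              # (B, m, d)
        q = self.f_q(action).unsqueeze(-1)                 # (B, d, 1)
        v = self.f_v(action)                               # (B, d)
        alpha = torch.softmax(torch.bmm(k, q).squeeze(-1), dim=-1)   # (B, m)
        weighted_action = alpha.unsqueeze(-1) * v.unsqueeze(1)       # (B, m, d)
        return alpha, weighted_action
```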
#### 3.2.2 Step 2.2: Learning dynamic interaction graph
As noted in Section 2, the interaction among objects may change over time and usually occurs in a sparse manner. To learn this dynamic graph, we leverage a sequential latent variable model to infer the object interaction graph. Following the neural relational inference (NRI) works [36, 37], we use an encoder to generate the latent variables and subsequently sample the interaction graph based on these variables. Specifically, we use graph neural networks (GNN) as the encoder module, where the nodes represent the states of each object, and the predicted edges denote the temporal interactions between the objects. In line with the dynamic NRI (dNRI) [37], we use recurrent units to model the temporal relations of the interaction graph. We outline the key components of the model in the following and provide a detailed description in App. C.
**Encoder and prior.** During training, at each time step \(t\), we use GNN layers to generate hidden embeddings \(\mathbf{h}^{t}=\texttt{GNN}\left(\mathbf{o}_{1}^{t},\mathbf{o}_{2}^{t}, \ldots,\mathbf{o}_{m}^{t}\right)\). For each object pair \(o_{i}\) and \(o_{j}\), we then obtain \(\mathbf{h}_{(i,j)}^{t}\). For the encoder, we use a Gated Recurrent Unit (GRU) [39] to model the temporal dependency of the interaction graphs. The inputs to the GRU include the future embeddings of the states, which then generate the updated embeddings: \(\mathbf{h}_{(i,j),\mathrm{enc}}^{t}=\texttt{GRU}_{\mathrm{enc}}(\mathbf{h}_{( i,j)}^{t},\mathbf{h}_{(i,j),\mathrm{enc}}^{t+1})\). For the prior, we also use a GRU, but in this context, we append \(\mathbf{h}_{(i,j)}^{t}\) with the output of the GRU from the previous step \(\mathbf{h}_{(i,j),\mathrm{prior}}^{t-1}\) as the input. In specific terms, we have \(\mathbf{h}_{(i,j),\mathrm{prior}}^{t}=\texttt{GRU}_{\mathrm{prior}}(\mathbf{h}_ {(i,j)}^{t},\mathbf{h}_{(i,j),\mathrm{prior}}^{t-1})\). For both the encoder and the prior, we feed the output embedding from the GRUs to an MLP layer to derive the distribution of the latent variables \(q_{\phi}(\mathbf{z}^{t}\mid\mathbf{s}^{1:T})\), where \(\mathbf{s}^{1:T}=\{\mathbf{o}_{1}^{t},\ldots,\mathbf{o}_{m}^{t}\}_{t=1}^{T}\), and prior distribution \(p_{\phi}(\mathbf{z}^{t}\mid\mathbf{s}^{1:t},\mathbf{z}^{1:t-1})\), respectively. We assume that the encoder outputs a Bernoulli distribution for each edge and the graph \(\mathcal{G}_{\mathrm{inter}}\) is sampled using the Gumbel-Softmax trick [40].
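A compact, illustrative sketch of the per-edge prior is given below; the pairwise embedding stands in for the GNN output \(\mathbf{h}_{(i,j)}^{t}\), and all dimensions and names are assumptions made for this sketch.

```python
# Sketch of a per-edge latent model: pairwise embeddings feed a recurrent prior,
# an MLP produces edge logits, and a Gumbel-Softmax sample gives the edge.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgePrior(nn.Module):
    def __init__(self, obj_dim, d=32):
        super().__init__()
        self.embed = nn.Linear(2 * obj_dim, d)     # stand-in for the GNN embedding h_(i,j)
        self.gru = nn.GRUCell(d, d)
        self.to_logits = nn.Linear(d, 2)           # edge absent / present

    def forward(self, obj_i, obj_j, h_prev, tau=0.5):
        # obj_i, obj_j: (B, obj_dim), h_prev: (B, d) recurrent state of this pair
        pair = torch.relu(self.embed(torch.cat([obj_i, obj_j], dim=-1)))
        h = self.gru(pair, h_prev)
        logits = self.to_logits(h)
        edge = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1]   # (B,)
        return edge, logits, h
```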
**Decoder.** We perform one-step prediction to generate \(\hat{\mathbf{o}}_{i}^{t+1}\) for each object \(o_{i}\), predicting the state dynamics with the learned graphs, including \(\mathcal{G}_{\mathrm{inter}}\), as well as the interaction pattern graph \(\mathcal{G}_{C_{i},C_{j}}\) for two objects with classes \(C_{i}\) and \(C_{j}\). At the same time, our goal is also to learn the interaction pattern graph with sparsity regularization. We also learn the latent parameters \(\mathbf{\theta}\) at this stage. Specifically, we also incorporate \(C_{j}\).\(\mathbf{\theta}\) and the graph \(\mathcal{G}_{C_{j},\mathbf{\theta},\mathbf{o}_{i}}\) for each object \(o_{i}\) with class \(C_{j}\) into the dynamics model and use the sparsity regularization for \(\mathcal{G}_{C_{j},\mathbf{\theta},\mathbf{o}_{i}}\).
Therefore, the learning objectives include maximizing the likelihood of the dynamics model, minimizing the KL divergence between \(p_{\phi}\) and \(q_{\phi}\) to estimate \(\mathbf{\theta}\), and encouraging the sparsity of the interaction pattern graph \(\mathcal{G}_{C_{i},C_{j}}\) and the subgraph from latent parameters to states \(\mathcal{G}_{C_{i},\mathbf{\theta},\mathbf{o}_{i}}\):
\[\operatorname*{argmax}_{\lambda_{s},\phi,v,k,q,\theta,\mathcal{G}} \sum_{t=1}^{T}\sum_{i=1}^{m}\sum_{l=1}^{n}\ \log p_{\lambda_{s}}\Big{(}o_{i}.s_{l}^{t+1}\mid\ \mathrm{pa}_{\mathcal{G}_{o_{i}}^{t}}(o_{i}.s_{l}^{t+1}),\{\mathrm{pa}_{ \mathcal{G}_{o_{i}}^{t},o_{k}}^{t}(o_{i}.s_{l}^{t+1})\}_{o_{k}\to o_{i}\in \mathcal{G}_{\mathrm{inter}}^{t}}\Big{)}\] \[-\sum_{j=1}^{k}\sum_{i=1}^{k}\big{\|}\mathcal{G}_{C_{i},C_{j}} \big{\|}_{1}-\sum_{i=1}^{m}\sum_{j=1}^{k}\big{\|}\mathcal{G}_{C_{j},\mathbf{ \theta},\mathbf{o}_{i}}\big{\|}_{1}-\sum_{t=2}^{T}\mathrm{KL}\left(q_{\phi} \left(\mathbf{z}^{t}\mid\mathbf{s}^{1:T}\right)\|p_{\phi}\left(\mathbf{z}^{t}\mid \mathbf{s}^{1:t},\mathbf{z}^{1:t-1}\right)\right)\]
where \(\mathcal{G}\) denotes the dynamic interaction graph \(\mathcal{G}_{\mathrm{inter}}\), the interaction pattern graphs \(\{\mathcal{G}_{C_{i},C_{j}}\mid i\in\{1,2,\ldots,k\},j\in\{1,2,\ldots,k\}\}\), and the subgraphs from latent parameters to states \(\{\mathcal{G}_{C_{j},\mathbf{\theta},\mathbf{o}_{i}}\mid i\in\{1,2,\ldots,m\},j\in\{1,2,\ldots,k\}\}\). Similarly to Definition 1, \(\mathrm{pa}_{\mathcal{G}_{o_{i}}^{t}}(o_{i}.s_{l}^{t+1})\) indicates the parents of the attribute \(s_{l}\) for object \(o_{i}\) based on the class template graph \(\mathcal{G}_{C(i)}\), where \(C(i)\) is the class of \(o_{i}\), and where the action binding \(\alpha^{t}\) activates or deactivates any potential connections from \(\mathbf{a}^{t}\). In the second term of the conditioning, we iterate over the objects \(o_{k}\) that interact with \(o_{i}\) at time \(t\) in the dynamic interaction graph, i.e., \(o_{k}\to o_{i}\in\mathcal{G}_{\mathrm{inter}}^{t}\). For each of these objects \(o_{k}\), we collect the attributes that interact with \(o_{i}.s_{l}\) in the instance interaction pattern \(\mathcal{G}_{o_{i},o_{k}}^{t}\), based on the interaction pattern graph \(\mathcal{G}_{C(i),C(k)}\) for the respective classes \(C(i)\) and \(C(k)\). \(\lambda_{s}\) and \(\phi\) denote the parameters of the dynamics model and the encoder, while \(v,k,q\) are the parameters of the MLPs for learning the attention model. After this step, we have learned all the graphs, dynamics, and reward models that estimate the DAFT-MDP.
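The sketch below illustrates how the three terms of this objective (dynamics likelihood, L1 sparsity on the interaction pattern and latent-to-state graphs, and the KL divergence between encoder and prior) could be combined in code. The function signature, argument names, and weighting coefficients are illustrative assumptions, not the actual training code.

```python
import torch

def daft_training_loss(nll_dynamics, q_logits, p_logits, pattern_graphs, latent_graphs,
                       l1_weight=1.0, kl_weight=1.0):
    """Illustrative combination of the three objective terms (names are ours).

    nll_dynamics: negative log-likelihood of one-step predictions under the
                  factored dynamics model (summed over time, objects, attributes).
    q_logits / p_logits: edge logits from the encoder q_phi and the prior p_phi,
                  shape (T-1, n_edges, 2).
    pattern_graphs / latent_graphs: soft adjacency tensors whose L1 norms
                  encourage sparse interaction-pattern and theta-to-state graphs.
    """
    q = torch.distributions.Categorical(logits=q_logits)
    p = torch.distributions.Categorical(logits=p_logits)
    kl = torch.distributions.kl_divergence(q, p).sum()
    l1 = sum(g.abs().sum() for g in pattern_graphs) + sum(g.abs().sum() for g in latent_graphs)
    return nll_dynamics + l1_weight * l1 + kl_weight * kl
```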
### Step 3: Policy learning and imagination in multi-object environments.
In the first two phases, we have learned the template for our world model, which we can now finetune to new multi-object domains by inferring the environment-specific latent parameters \(\mathbf{\theta}\) and the interaction graph \(\mathcal{G}_{\mathrm{inter}}\). We again consider several multi-object environments with different object configurations in which the objects can have varying latent parameters. For each environment, we then use the finetuned environment-specific world model to create a set of imagined trajectories. Finally, we can learn a policy \(\pi^{*}(\mathbf{a}^{t}|\mathbf{s}^{t},\mathbf{\theta},\mathcal{G}_{\mathrm{inter}})\) across different environments, based on the real and imagined trajectories. We can apply policy learning or planning methods using any RL algorithms. To take full advantage of the estimated models, we use MBRL or planning methods such as model predictive control (MPC) [41] or proximal policy optimization (PPO) [42] to learn \(\pi^{*}\). Detailed information about the domain parameters is provided in App. C.
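A schematic of the imagination step is sketched below; `world_model.step` and the policy interface are assumed placeholders for illustration rather than the actual APIs used in this work.

```python
def imagine_trajectories(world_model, policy, start_states, theta, g_inter, horizon):
    """Sketch of generating imagined rollouts with the finetuned world model.

    The policy is conditioned on the state, the latent parameters theta, and the
    inferred interaction graph g_inter; the world model predicts the next state
    and reward for each imagined step.
    """
    trajectories = []
    for s in start_states:
        rollout = []
        for _ in range(horizon):
            a = policy(s, theta, g_inter)
            s_next, r = world_model.step(s, a, theta, g_inter)
            rollout.append((s, a, r, s_next))
            s = s_next
        trajectories.append(rollout)
    return trajectories
```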
### Step 4: Adaptation to a new multi-object environment.
In a new environment, we apply the policy \(\pi^{*}(\mathbf{a}^{t}|\mathbf{s}^{t},\mathbf{\theta},\mathcal{G}_{\mathrm{ inter}})\) by inferring latent parameters \(\mathbf{\theta}\) and dynamic interaction graphs \(\mathcal{G}_{\mathrm{inter}}\) based on a few trajectories, without any policy learning.
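As an illustration of this adaptation step, the sketch below fits only the latent parameters \(\mathbf{\theta}\) to a few observed trajectories by gradient descent on the frozen dynamics model's negative log-likelihood; the interaction graph \(\mathcal{G}_{\mathrm{inter}}\) would be produced by the trained encoder. `world_model.log_prob` is an assumed interface, not the paper's API.

```python
import torch

def infer_latents(world_model, trajectories, d_theta, steps=200, lr=1e-2):
    """Sketch of test-time adaptation: fit only theta to a few trajectories,
    keeping all learned graphs and model parameters frozen."""
    theta = torch.zeros(d_theta, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        loss = -sum(world_model.log_prob(traj, theta) for traj in trajectories)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return theta.detach()
```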
## 4 Related work
We briefly summarize the related work in this section and provide a more detailed discussion in App. A, including a description of each method and a comparison of their features. Recent work in object-oriented and relational RL has incorporated various inductive biases for modeling object relations into both model-based and model-free RL frameworks. Zhou et al. [15] investigate the benefits of deep learning architectures, such as MLPs, self-attention, and deep sets, in goal-conditioned RL with factorized object states. Likewise, Yoon et al. [13] provide a comprehensive empirical evaluation of pre-trained object-oriented models, such as SA [30] and SLATE [43], for model and policy learning in multi-object RL. Mambelli et al. [7] use linear relational networks to model object interactions and learn the policy. Another line of research focuses on learning structured representations of objects and their interactions [9; 5; 6; 8; 12; 17; 16; 44]. Most of these approaches aim to learn an object-wise factorization model, with either structured symbolic input or high-dimensional raw pixels as input. NCS [12] and STEDIE [17] go further by disentangling action/control-relevant and -irrelevant features for each object. Unlike these works, we propose a more fine-grained factored world model that captures the structure among all attributes of the objects as well as the dynamic interactions among all objects.
## 5 Experimental evaluation
We consider a diverse set of RL benchmarks and setups, including modified OpenAI Fetch environments [15; 45; 46] (symbolic inputs), Spriteworld [9] (visual inputs), and the Block-stacking benchmark [47] (visual inputs). We compare our approach with several baseline models. These include methods using DeepSets, GNNs, and Self-Attention as the inductive bias [15; 4], such as SRICS [6], STOVE [8], SMORL [5], NCS [12], LRN [7], and COBRA [9]. To ensure a fair comparison, we modify these baselines so that methods that originally support only symbolic input can also handle image-based input by using visual encoders to obtain the symbolic states,
and add an imagination component. We provide the details of these modifications in App. D.1. We provide descriptions of all setups, training and testing domains for each benchmark in App. D.2. Here we describe the most representative results, but we provide the complete results in App. D.3.
**Symbolic benchmark: OpenAI Fetch - Push and Switch.** Following [15], we modify the OpenAI Gym Fetch environment [45] to create the \(N\)-push and \(N\)-switch benchmarks, where \(N\) denotes the number of objects. In the \(N\)-push task, the agent is trained to push all cubes to their corresponding target positions. Similarly, for the \(N\)-switch task, the agent is required to flip all switches in the environment. In this benchmark, all inputs are symbolic. As a sanity check, we show in App. D.3 that our method has comparable results with the baselines in the _single-task mode_, in which we train the model estimation and policy learning individually on the \(2\)-Push, \(3\)-Push, \(2\)-Switch, and \(3\)-Switch tasks. To evaluate _compositional generalization_, we train on the set of tasks {\(1\)-Push, \(1\)-Switch, \(2\)-Push, \(2\)-Switch}, while during the test phase, we test the model in different settings as shown in Table 1. We consider combinations of skills (denoted by \(S\)), e.g. \(2\)-Push+\(2\)-Switch (\(S\)), which combines the training tasks \(2\)-Push and \(2\)-Switch. We also consider changes in the number of objects (denoted by \(O\)) and skills, e.g. \(3\)-Push+\(3\)-Switch (\(S\)+\(O\)), which combines the training tasks but also varies the number of objects. We also test whether our model can achieve efficient transfer during testing when the objects' latent parameters differ from the training samples (denoted by \(L\)). Generally, during training, we consider objects with masses and friction coefficients uniformly sampled from a set of values, and during testing, we test the model with two different sets of values. For example, \(2\)-Switch (\(L\)) considers this case. Finally, we consider the challenging setting in which we combine all of these changes. As seen in Table 1, in most cases DAFT-RL outperforms the baselines, with a bigger gain in the difficult settings. In this setting, we do not compare with COBRA [9], since it expects pixel inputs. As a representative example, we show the smoothed learning curve for \(2\)-Push+\(2\)-Switch (L+S) in Fig. 2A, in which we display the top three methods in terms of success rate.
**Image benchmark: Spriteworld.** We consider four tasks in Spriteworld [9]: object goal, interaction, comparison, and property comparison. Given that the observations are in pixel space, we use pre-trained SA [30] or AIR [31] encoders to obtain the object-factored states. Following [13], we generate the datasets with varying numbers of objects placed randomly for pre-training the object-centric model. We consider cases in which we vary the number of objects, or have unseen combinations of colors or unseen combinations of shapes for the objects in the target task, and provide the results in Table 2, showing that DAFT-RL with Slot Attention outperforms all baselines, followed closely by DAFT-RL with AIR. In this setting, we do not show the results from DeepSets or Self-Attention due to their poor performance, but we provide the results in App. D.3. We show the smoothed learning curve for the object comparison task in Fig. 2B, in which we only display the top three methods.
\begin{table}
\begin{tabular}{l l l l l l l l l l l l} \hline \hline
**Experiment** & \multicolumn{6}{c}{**Method**} \\ \cline{2-13}
**Setting** & DAFT-RL & DeepSets & Self-attention & SRICS & GNN & STOVE & SMORL & NCS & LRN \\ \hline
2-Push + 2 Switch (S) & 0.851 \(\pm\)0.135 & 0.498 \(\pm\)0.024 & 0.367 \(\pm\)0.027 & 0.717 \(\pm\)0.039 & 0.633 \(\pm\)0.027 & 0.888 \(\pm\)0.021 & 0.788 \(\pm\)0.021 & **0.912 \(\pm\)0.022** & 0.813 \(\pm\)0.027 \\ \(N\)-Push + 3 Switch (S+U) & **0.985 \(\pm\)0.024** & 0.244 \(\pm\)0.012 & 0.238 \(\pm\)0.011 & 0.618 \(\pm\)0.019 & 0.551 \(\pm\)0.017 & 0.614 \(\pm\)0.017 & 0.614 \(\pm\)0.017 & 0.711 \(\pm\)0.015 \\ \(N\)-Push (L) + 3 Switch (S+U) & **0.985 \(\pm\)0.023** & 0.940 \(\pm\)0.023 & 0.941 \(\pm\)0.046 & 0.639 \(\pm\)0.023 & 0.538 \(\pm\)0.037 & 0.931 \(\pm\)0.027 & 0.964 \(\pm\)0.023 & 0.944 \(\pm\)0.023 & 0.946 \(\pm\)0.029 & 0.939 \(\pm\)0.038 \\ \(2\)-Sighth (L) & **0.923 \(\pm\)0.065** & 0.987 \(\pm\)0.043 & 0.921 \(\pm\)0.054 & 0.978 \(\pm\)0.022 & 0.873 \(\pm\)0.024 & 0.847 \(\pm\)0.029 & 0.927 \(\pm\)0.025 & 0.955 \(\pm\)0.035 & 0.864 \(\pm\)0.022 & 0.907 \(\pm\)0.008 \\ \(N\)-Push (L+O) & **0.921 \(\pm\)0.037** & 0.982 \(\pm\)0.036 & 0.957 \(\pm\)0.036 & 0.920 \(\pm\)0.036 & 0.822 \(\pm\)0.036 & 0.528 \(\pm\)0.036 & 0.915 \(\pm\)0.036 & 0.928 \(\pm\)0.035 & 0.859 \(\pm\)0.036 & 0.868 \(\pm\)0.036 \\ \(N\)-Push (L+O) & **0.938 \(\pm\)0.033** & 0.451 \(\pm\)0.075 & 0.320 \(\pm\)0.045 & 0.500 \(\pm\)0.020 & 0.784 \(\pm\)0.047 & 0.727 \(\pm\)0.027 & 0.855 \(\pm\)0.027 & 0.855 \(\pm\)0.020 & 0.518 \(\pm\)0.047 & 0.857 \(\pm\)0.021 \\ \(N\)-Push + 2 Switch (L+S) & **0.703 \(\pm\)0.026** & 0.354 \(\pm\)0.077 & 0.351 \(\pm\)0.013 & 0.520 \(\pm\)0.049 & 0.498 \(\pm\)0.022 & 0.555 \(\pm\)0.043 & 0.463 \(\pm\)0.027 & 0.501 \(\pm\)0.018 & 0.613 \(\pm\)0.044 \\ \(N\)-Push + 3 Switch (L+O+S) & **0.783 \(\pm\)0.025** & 0.256 \(\pm\)0.045 & 0.115 \(\pm\)0.029 & 0.531 \(\pm\)0.028 & 0.302 \(\pm\)0.035 & 0.531 \(\pm\)0.024 & 0.525 \(\pm\)0.051 & 0.529 \(\pm\)0.027 & 0.418 \(\pm\)0.062 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average success rate over 3 random seeds for Push & Switch compositional generalization in terms of the combination of skills (S), changing the number of objects (O), and changing latent parameters (L) with respect to training. The numbers in bold highlight the top-performing method.
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline
**Experiment** & \multicolumn{6}{c}{**Method**} \\ \cline{2-13}
**Setting** & DAFT-RL (S/NR): & DAFT-RL (AIR) & SMORL & SRCS & Gym & STOVE & COBRA & NCS & LRN \\ \hline Object Goal & **0.897 \(\pm\)0.034** & 0.591 \(\pm\)0.029 & 0.658 \(\pm\)0.073 & 0.710 \(\pm\)0.060 & 0.230 \(\pm\)0.011 & 0.536 \(\pm\)0.048 & 0.663 \(\pm\)0.051 & 0.547 \(\pm\)0.022 & 0.759 \(\pm\)0.007 \\ Object Instruction & **0.900 \(\pm\)0.070** & 0.574 \(\pm\)0.0053 & 0.658 \(\pm\)0.035 & 0.742 \(\pm\)0.047 & 0.230 \(\pm\)0.038 & 0.506 \(\pm\)0.048 & 0.663 \(\pm\)0.057 & 0.574 \(\pm\)0.022 & 0.769 \(\pm\)0.015 \\ Object Generation & **0.930 \(\pm\)0.067** & 0.879 \(\pm\)0.061 & 0.834 \(\pm\)0.050 & 0.735 \(\pm\)0.029 & 0.290 \(\pm\)0.021 & 0.724 \(\pm\)0.008 & 0.707 \(\pm\)0.057 & 0.716 \(\pm\)0.054 & 0.789 \(\pm\)0.008 \\ Property Comparison & **0.907 \(\pm\)0.066** & 0.879 \(\pm\)0.073 & 0.836 \(\pm\)
**Image benchmark: Block-stacking.** In the Block-stacking benchmark [47], the task is to stack all the blocks into the target position. We use the pre-trained encoders to obtain the object attributes and MPC [41] to learn the optimal policy. We follow the experimental setup in the multi-step planning task configuration in [17], which is specified in App. D.2. As a sanity check, we show in App. D.3 that DAFT-RL provides comparable performance to the baselines in the _single-task mode_. We train on a varying number of objects and objects with different masses, and then use the trained model to test on domains with an unseen number of objects and unseen combinations of masses. We show the results in Table 3 and Fig. 2C for varying numbers of blocks. Our approaches consistently outperform the baselines, with bigger gains for more objects.
**Ablation studies.** To evaluate the effectiveness of each component in DAFT, we conduct the following ablation studies: a) DAFT w/o latent parameters, b) DAFT w/o class template graphs, c) DAFT w/o dynamic interaction graph, and d) DAFT w/o interaction pattern graphs. As a representative result, we show the results of \(2\)-Push+\(2\)-Switch (L+S) in Fig. 2D. The results further indicate the contribution of each part to the effectiveness of final policy learning. We also provide visualizations of the learned graphs in App. D.3, which show that our model is capable of learning the true causal graph of a single object in symbolic cases. For pixel inputs, we do not have the ground-truth causal graphs.
## 6 Conclusions and Future work
We proposed Dynamic Attribute FacTored RL (DAFT-RL), a framework that leverages learned attribute-factored representations with dynamic graphs. For each class of object, we learned a class template graph that describes how the dynamics and reward of an object of this class factorize according to its attributes, as well as an interaction pattern graph that describes how it interacts with objects of different classes at the attribute level. We also learned the interactions between objects, and between objects and the agent, using a dynamic graph. Through this template world model, we learned a policy that can then be directly applied in a new environment by simply estimating the interactions and latent parameters. We showed that DAFT-RL outperforms the state-of-the-art in three compositional generalization benchmarks. In future work, we plan to investigate jointly learning the representations from pixels, the class template graphs, the interaction pattern graphs, the dynamic interaction graphs, and the policy.
Figure 2: A. The smoothed learning curve for \(2\)-Push + \(2\)-Switch (L+S) with different friction coefficients for each object (for clarity, we show only the top three methods in terms of the success rate); B. The smoothed learning curve for the object comparison task in Spriteworld with unseen object numbers, combinations of colors and shapes (for clarity, we show only the top three methods in terms of the success rate); C. Success rate versus number of blocks in the stacking task, where each block has distinct mass; D. Ablation study on the \(2\)-Push+\(2\)-Switch task: I. DAFT-RL w/o latent parameters; II. DAFT-RL w/o factored class template graph; III. DAFT-RL w/o dynamic interaction graph; IV. DAFT-RL w/o factored interaction pattern; V. Original DAFT.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multicolumn{1}{c}{**Experiment**} & \multicolumn{5}{c}{**Method**} \\ \cline{2-10}
**Settings** & DAFT-RL (SA) & DAFT-RL (AIR) & SMORL & SRICS & GNN & STOVE & NCS & LRN \\ \hline \hline
2 Blocks & 0.809 \(\pm\)0.019 & **0.838 \(\pm\)0.030** & 0.658 \(\pm\)0.028 & \(0.704\pm 0.016\) & 0.549 \(\pm\)0.016 & 0.728 \(\pm\)0.044 & 0.797 \(\pm\)0.035 & 0.649 \(\pm\)0.026 \\
4 Blocks & **0.738 \(\pm\)0.032** & 0.698 \(\pm\)0.022 & 0.605 \(\pm\)0.020 & 0.591 \(\pm\)0.049 & 0.526 \(\pm\)0.041 & 0.498 \(\pm\)0.013 & 0.571 \(\pm\)0.026 & 0.461 \(\pm\)0.028 \\
6 blocks & 0.591 \(\pm\)0.025 & **0.664 \(\pm\)0.017** & 0.536 \(\pm\)0.040 & 0.509 \(\pm\)0.043 & 0.461 \(\pm\)0.088 & 0.475 \(\pm\)0.023 & 0.521 \(\pm\)0.049 & 0.602 \(\pm\)0.097 \\
8 blocks & 0.506 \(\pm\)0.083 & **0.571 \(\pm\)0.039** & 0.386 \(\pm\)0.002 & 0.420 \(\pm\)0.061 & 0.334 \(\pm\)0.047 & 0.278 \(\pm\)0.086 & 0.397 \(\pm\)0.052 & 0.463 \(\pm\)0.077 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average success rate over 3 random seeds for Block-stacking with unseen object numbers and mass combinations. The numbers in bold highlight the top-performing method.
|
2309.16689
|
A New 1-mg Fast Unimorph SMA-Based Actuator for Microrobotics
|
We present a new unimorph actuator for micro-robotics, which is driven by
thin shape-memory alloy (SMA) wires. Using a passive-capillary-alignment
technique and existing SMA-microsystem fabrication methods, we developed an
actuator that is 7 mm long, has a volume of 0.45 mm^3, weighs 0.96 mg, and can
achieve operation frequencies of up to 40 Hz as well as lift 155 times its own
weight. To demonstrate the capabilities of the proposed actuator, we created an
8-mg crawler, the MiniBug, and a bioinspired 56-mg controllable
water-surface-tension crawler, the WaterStrider. The MiniBug is 8.5 mm long,
can locomote at speeds as high as 0.76 BL/s (body-lengths per second), and is
the lightest fully-functional crawling microrobot of its type ever created. The
WaterStrider is 22 mm long, and can locomote at speeds of up to 0.28 BL/s as
well as execute turning maneuvers at angular rates on the order of 0.144 rad/s.
The WaterStrider is the lightest controllable SMA-driven water-surface-tension
crawler developed to date.
|
Conor K. Trygstad, Xuan-Truc Nguyen, Nestor O. Perez-Arancibia
|
2023-08-03T21:02:12Z
|
http://arxiv.org/abs/2309.16689v1
|
# A New \(1\)-mg Fast Unimorph SMA-Based Actuator for Microrobotics
###### Abstract
We present a new unimorph actuator for microrobotics, which is driven by thin _shape-memory alloy_ (SMA) wires. Using a passive-capillary-alignment technique and existing SMA-microsystem fabrication methods, we developed an actuator that is \(7\,\mathrm{mm}\) long, has a volume of \(0.45\,\mathrm{mm}^{3}\), weighs \(0.96\,\mathrm{mg}\), and can achieve operation frequencies of up to \(40\,\mathrm{Hz}\) as well as lift \(155\) times its own weight. To demonstrate the capabilities of the proposed actuator, we created an \(8\)-mg crawler, the MiniBug, and a bioinspired \(56\)-mg controllable water-surface-tension crawler, the WaterStrider. The MiniBug is \(8.5\,\mathrm{mm}\) long, can locomote at speeds as high as \(0.76\,\mathrm{BL/s}\) (_body-lengths per second_), and is the lightest fully-functional crawling microrobot of its type ever created. The WaterStrider is \(22\,\mathrm{mm}\) long, and can locomote at speeds of up to \(0.28\,\mathrm{BL/s}\) as well as execute turning maneuvers at angular rates on the order of \(0.144\,\mathrm{rad/s}\). The WaterStrider is the lightest controllable SMA-driven water-surface-tension crawler developed to date.
## I Introduction
We envision the creation of autonomous millimeter-scale robots that can work together in swarms to execute tasks such as artificial pollination, insect-plague control, agricultural surveying, search and rescue, environmental monitoring, microfabrication, and robotic-assisted surgeries. In order for millimeter-scale robots to complete complex assignments and function autonomously, they must efficiently generate large forces relative to their size and weight. The most common actuators used in microrobotics are based on piezoelectric [1, 2, 3, 4], electromagnetic [5, 6, 7, 8, 9, 10], or _shape-memory alloy_ (SMA) [11, 12, 13, 14, 15] technologies. Because of their wider frequency bandwidths, piezoelectric and electromagnetic microactuators are generally preferred over SMA-based systems. However, compared to SMA-based methods, these two actuation technologies exhibit significantly lower work densities and, additionally, electromagnetic microactuators have been shown to be difficult to operate outside laboratory settings. In [13], we introduced a \(6\)-mg fast SMA-based microactuator; here, we present an actuator of the same type which is significantly lighter, smaller, and stronger for its size. In fact, this is the lightest and fastest SMA-based actuator reported to date; it weighs only \(0.96\,\mathrm{mg}\), has a length of \(7\,\mathrm{mm}\), has a volume of \(0.45\,\mathrm{mm}^{3}\), can produce functional displacements at frequencies of up to \(40\,\mathrm{Hz}\), and can lift \(155\) times its own weight. This achieved performance is a consequence of the actuator's mechanical design and structure, which is essentially composed of two parallel nitinol (\(56\,\%\) Nickel & \(44\,\%\) Titanium) SMA wires with a diameter of \(25.4\,\mathrm{\SIUnitSymbolMicro m}\) and a leaf spring used to facilitate the transition between the _twinned_ and _detwinned martensite_ phases of the SMA material during an actuation cycle [13]. This configuration is the key element that enables the actuator to operate at high frequencies (up to \(40\,\mathrm{Hz}\)).
To test and demonstrate the capabilities of the proposed microactuator, we developed an \(8\)-mg terrestrial crawler, the MiniBug, and a bioinspired \(56\)-mg water-surface-tension crawler, the WaterStrider; these two robots are shown in Fig. 1. Inspired by the design introduced in [13], the MiniBug is the lightest fully-functional SMA-driven crawler ever created and represents significant progress toward miniaturization with respect to the crawling platforms in [13] and [14]. Specifically, the MiniBug has a length of \(8.5\,\mathrm{mm}\) and can locomote at speeds of up to \(0.76\,\mathrm{BL}\)/s (_body-lengths per second_). We envision that robots of this type can be equipped with micromanipulators and be used in swarms to execute tasks in structured environments with smooth surfaces; for example, to provide assistance in small-scale production lines. Inspired by the locomotion modes of common water-stride insects (_Gerris lacustri_), the WaterStrider prototype (see Fig. 1) has a lightweight structure and elliptical supporting feet with relatively large surface areas, which ensure that the generated surface-tension forces are strong enough to compensate gravity and allow the robot to stably stand on water. The two fin-like propulsors of this platform are independently driven by two of the proposed high-work-density actuators through four-bar transmissions designed and fabricated using the _smart composite microstructure_ (SCM) method. During operation, the typical resulting stroke angles are on the order of \(30\,\mathrm{\SIUnitSymbolDegree}\) and directional control of the robot is achieved by changing their amplitudes in real time (see Section IV). In this case, through simple experiments and heuristic considerations, we
Fig. 1: **Two microrobots driven by the proposed SMA-based actuator.** The WaterStrider (left) is a \(56\)-mg controllable robot with a body length of \(22\,\mathrm{mm}\) that crawls on water by taking advantage of surface-tension phenomena. The MiniBug (right) is an \(8\)-mg microrobot with a body length of \(8.5\,\mathrm{mm}\) that crawls on land. This robot is the lightest fully-functional SMA-driven terrestrial crawler developed to date.
aimed to maximize the hydrodynamic force output by taking advantage of fluid-structure-interaction phenomena, a matter of current and further research. The WaterStrider prototype in Fig. 1 can locomote at speeds of up to \(0.28\,\)BL/s and complete open-loop turning maneuvers at angular rates of up to \(0.144\,\)rad/s. We anticipate that in the near future, water-surface-tension crawlers of the WaterStrider type will be employed for geographical surveying and continuous water quality monitoring in lakes, dams, and rivers; for example, to quickly detect toxic spills or changes in hydrology.
The rest of the paper is organized as follows. Section II describes the design and fabrication of the proposed unimorph SMA-based actuator. Section III discusses the experimental characterization of the actuator. Section IV describes the development and functionality of the MiniBug and WaterStrider platforms, which we created to test and demonstrate the capabilities of the actuator. Lastly, Section V draws some conclusions regarding the presented research.
## II Design and Fabrication
The dynamic behavior of an SMA wire is characterized by the _shape memory effect_ (SME) and the _superelasticity_ property [16]. These phenomena are observable when the SMA material composing the wire transitions between three distinct crystal-structure phases: _detwinned martensite_, _twinned martensite_, and _austenite_. As shown in Fig. 2(a), an actuation cycle can be induced through sequential heating, cooling, and application of stress. The proposed design can be driven using _Joule_ heating [17], or other methods such as catalytic combustion [15], and passive cooling (free convection); stress is continuously applied using a leaf spring made of _carbon fiber_ (CF), as shown in Fig. 2(b). In this specific case, the SMA material (nitinol), under a stress of \(172\,\)MPa, has a nominal transition temperature from martensite to austenite of \(90\,\)\({}^{\circ}\)C. Using basic beam theory, it can be shown that the force applied by the leaf spring is approximately constant for contractions of the actuator's SMA wires greater than \(18\,\)um; therefore, for design purposes, we assumed that the stress experienced by the SMA material remains constant during an entire actuation cycle. As seen in Fig. 3, from a fabrication viewpoint, the proposed SMA-based actuator is composed of three main types of components: (i) two parallel \(25.4\)-um-diameter SMA wires (Dynalloy Flexinol HT SMA Wire); (ii) a \(90\)-um-thick CF beam, made of Tenax 112 prepreg, that functions as a leaf spring; and, (iii) two cross-shaped plates, made of copper-clad FR4 (Cu-FR4), used for electrical and mechanical connection.
Using data reported in [13] and simple mechanical tests, we chose a thickness of \(90\,\)um, a width of \(0.5\,\)mm, and a length of \(6\,\)mm for the CF beam in order to heuristically minimize weight and maximize actuator output at high frequencies. Similarly, we chose the smallest commercially-available diameter of \(25.4\,\)um for the SMA wires in order to minimize the volume with respect to the surface of the SMA material, thus maximizing the free-convection rate of cooling. In this case, the design trade-off is that the force produced by an SMA wire decreases with its diameter. To compensate, we can simply increase the number of SMA wires used for actuation; as already mentioned, the presented design uses two in parallel. All the steps in the simultaneous fabrication process of four actuators are detailed in Fig. 3(a). In Step 1, a Cu-FR4 frame is cut using a \(3\)-W \(355\)-nm DPSS laser (Photonics Industries DCH-355-3). In Step 2, SMA wires are looped through holes at two opposite sides of the Cu-FR4 frame and tied under tension using a simple knot; then, they are secured using a small amount of _cyanoacrylate_ (CA) glue (Loctite 401). In Step 3, CF beams are installed on protrusions (alignment tabs) of the frame, using the passive capillary self-alignment phenomena described in [18, 19]; at this scale, the properties of the chosen CA glue produce the
Fig. 3: **Fabrication of the proposed SMA-based actuator.** **(a)** Steps of the SCM-based fabrication procedure. In Step 1, a \(3\)-W \(355\)-nm DPSS laser (Photonics Industries DCH-355-3) is used to micromachine the Cu-FR4 supporting frame employed during fabrication. In Step 2, SMA wires are looped through holes in the frame, and secured under tension with a simple knot and a small amount of CA glue (Loctite 401). In Step 3, we use capillary self-alignment to accurately place four CF beams onto premachined alignment tabs of the Cu-FR4 frame. In Step 4, the actuator is released from the Cu-FR4 frame using a \(3\)-W \(355\)-nm DPSS laser. **(b)** Capillary alignment used in Step 3 of the actuator fabrication process. In Step 1, droplets of CA glue are precisely placed onto alignment tabs of the Cu-FR4 frame, and four CF beams are immediately placed on top of the droplets. In Step 2, each beam is pulled, without external intervention, in the desired direction of alignment due to the surface tension of the glue and capillary action at the beam-glue interface. In Step 3, the CA glue cures and the CF beams become precisely aligned over the tabs of the Cu-FR4 frame. **(c)** Photos showing two cured CA glue droplets and two precisely aligned beams after completion of Step 3 in the actuator fabrication process.
Fig. 2: **Design and functionality of the proposed SMA-based actuator.** **(a)** Depiction of the molecular crystal structure of an SMA material during cycles of heating and cooling, assuming the completion of major hysteretic loops. In the case of an SMA wire, starting at the elongated _detwinned martensite_ phase, the SMA material reaches the _austenite_ phase after the application of sufficient heat to surpass the SMA transition temperature, which forces the contraction of the wire according to the _shape-memory effect_ (SME). Then, after sufficient cooling, the material transitions to the _twinned martensite_ phase. As shown, the application of an external stress detwins the SMA material and the wire elongates until reaching its initial state. **(b)** Depiction of a complete SME-based actuation cycle during operation. Heat is applied to the SMA wire using _Joule_ heating; cooling of the SMA material occurs passively through unforced convection; simultaneously, the SMA material is detwinned using a CF leaf spring.
capillary effects necessary for passive self-alignment during the short period of time before curing.
The mechanism of the capillary-alignment technique is depicted in Fig. 3(b). First, droplets of CA glue are precisely deposited on the alignment tabs of the Cu-FR4 frame; then, the CF beams are placed on top of the droplets; lastly, with proper droplet placement, the capillary forces of the CA glue _pull_ the CF beams perfectly over the Cu-FR4 alignment tabs before the glue cures. Because of the quick cure time of CA glue, the CF beams must be placed immediately after droplet deposition to ensure proper capillary alignment. Specifically, the beams are passively aligned on the Cu-FR4 tabs with respect to their transverse direction while slight manipulations with tweezers are required to center them with respect to their axial direction. In Step 4, as depicted in Fig. 3(a), the actuators are released using DPSS laser cutting. The photograph in Fig. 3(c) shows a close-up of an actuator-fabrication frame after the installation of CF beams; here, the precise alignment of two beams on their Cu-FR4 tabs can be seen as well as the cured CA glue droplets holding them in place.
## III Actuator Characterization
### _Experimental Setup_
For the experimental characterization of the proposed actuator dynamics, we used the setup shown in Fig. 4. As seen in the signals-and-systems diagram of Fig. 4(a), a MathWorks Simulink Real-Time system is used to generate the _pulse-width-modulation_ (PWM) voltage signal with pre-specified characteristics--frequency, _on_-voltage height, and _duty cycle_ (DC)--that drives the actuator. The power of this PWM signal is amplified with a MOSFET-based circuit (YYNMOS-4) to provide sufficient current to Joule heat the SMA wires during actuation. Throughout characterization, the tested actuator is mounted on the stand shown in Fig. 4(b) and depicted in Fig. 4(c). In this configuration, one end of the actuator is attached to a 3D-printed mount while the other end is precisely aligned below a laser displacement sensor (Keyence LK-G32) to measure the instantaneous displacement output of the actuator. During the tests, signals are digitally generated, measured variables are read and recorded, and information is processed at a rate of \(10\,\mathrm{kHz}\).
During one PWM period, the voltage applied across an SMA wire of the actuator is first _on_, then _off_; here, _on_ corresponds to \(15\,\mathrm{V}\) and _off_ corresponds to \(0\,\mathrm{V}\). The contraction rate of an SMA wire directly depends on the amount of current running through it [13, 17], and the DC of the driving PWM signal--defined as the fraction of the signal period for which the signal is in the _on_ state--determines the fraction of time an actuator is allowed to heat during a PWM cycle [13]. At low operation frequencies (\(\sim\)\(1\,\mathrm{Hz}\)), a large DC might result in the SMA material becoming overheated and damaged. Therefore, using information from simple tests, we selected a set of DC values for each considered frequency. We chose an _on_-voltage height of \(15\,\mathrm{V}\) to limit the current passing through the SMA wires of the actuator to \(200\,\mathrm{mA}\). This limitation is critical to avoid overheating and damaging the SMA material and thin copper wires (\(52\,\mathrm{AWG}\)) used to connect the actuator to the driving power circuit.
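For reference, a short sketch of how such a drive signal can be synthesized numerically is given below; the function is ours and only reproduces the square-wave PWM logic described above (frequency, duty cycle, on-voltage, \(10\,\mathrm{kHz}\) sampling), not the actual Simulink Real-Time implementation.

```python
import numpy as np

def pwm_signal(freq_hz, duty_cycle, v_on=15.0, duration_s=1.0, fs=10_000):
    """Square-wave PWM drive: v_on for the first duty_cycle fraction of each
    period, 0 V for the remainder, sampled at fs samples per second."""
    t = np.arange(0, duration_s, 1.0 / fs)
    phase = (t * freq_hz) % 1.0                  # position within each PWM period
    return t, np.where(phase < duty_cycle, v_on, 0.0)

# Example: 5 Hz excitation with an 11 % duty cycle, as in the characterization tests.
t, v = pwm_signal(freq_hz=5.0, duty_cycle=0.11)
print(f"mean drive voltage: {v.mean():.2f} V, on-fraction: {(v > 0).mean():.2%}")
```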
To empirically evaluate the performance of an actuator regarding force and work, we load and operate it with several
Fig. 4: **Experimental setup used to dynamically characterize the proposed actuator.** **(a)** Signals-and-systems diagram of the setup. A MathWorks target-and-host Simulink Real-Time system with a National Instruments PCI-6229 AD/DA board is used to digitally generate signals, read and record measured variables, and process information at a rate of \(10\,\mathrm{kHz}\). Accordingly, the PWM signal required to excite the tested actuator is first generated and then its power is amplified with a MOSFET-based circuit (YYNMOS-4) to provide sufficient current while _Joule_ heating the SMA wires of the tested actuator. A laser displacement sensor (Keyence LK-G32) measures the instantaneous deflection of the actuator’s tip, the output. An oscilloscope monitors signals in real time and records the PWM data coming from the MOSFET-based circuit. **(b)** Photograph of the test stand with an actuator mounted. **(c)** Schematic of the weight-loading method and fixture used to hold an actuator during the characterization tests. The tested actuator is fixed on one end with the distal free end precisely aligned, using a 3-axis optomechanical stage, under the laser displacement sensor shown in (b). We use a copper hook to attach a piece of monofilament thread to the actuator; we secure it with a small amount of CA glue. We increase the loading acting on the actuator by crimping additional brass beads, weighing \(0.18\,\mathrm{mN}\) each, to the thread. After they are crimped, the beads are secured using CA glue to prevent shaking during operation. To compensate for the \(0.015\,\mathrm{mN}\) weight of the hook and thread, the first crimped bead weighs only \(0.165\,\mathrm{mN}\). **(d)** Photo of a tested actuator mounted on the experimental stand with a \(0.72\,\mathrm{mN}\) load hanging from it.
different weights for several different PWM parameters. Figs. 4(b)-(d) graphically describe the method to load the actuator. As seen, a short piece of monofilament thread is connected to the distal end of the actuator through a copper hook; then, brass beads weighing \(0.18\,\)mN each are incrementally crimped onto the thread until the actuator can no longer lift the load. To compensate for the combined \(0.015\)-mN weight of the hook and thread, the first bead was chosen to weigh only \(0.165\,\)mN. After data collection, the displacement data measured with the laser sensor is processed offline using a zero-phase low-pass filter--designed with MATLAB's digital-filter-design tool--to reduce sensor noise. We then employ simple algorithms run in MATLAB to compute figures of merit to evaluate the performance of the proposed actuator.
### _Characterization Results_
Actuator responses for PWM exciting frequencies of \(1\), \(5\), \(10\), and \(15\,\)Hz are shown in Figs. 5(a)-(d), respectively. As indicated in the plots, the associated DC values are \(6\), \(11\), \(10\), and \(10\,\)%, which were empirically determined to maximize actuator output. Sections of these experiments can be seen in the accompanying supplementary movie. At \(1\,\)Hz, the measured steady-state deflection at the actuator's tip oscillates between about \(0.1\) and \(1.75\,\)mm, which approximately corresponds to a major hysteretic loop [16]. At higher frequencies, the amplitude of oscillation decreases and a steady-state deflection offset occurs. These phenomena result from the hysteretic behavior of the SMA material during heating-and-cooling cycles, and the limited time available for the SMA wires to cool down and reach ambient temperature. To evaluate actuator performance, we define the _maximum actuator displacement output_ (MADO) for an actuation cycle as the difference between the maximum and minimum beam deflection at the actuator's tip during a PWM period, measured using the laser sensor shown in Fig. 4(b). Furthermore, for a test defined by its PWM exciting frequency and PWM DC value, we define the _average_ MADO (AMADO) as the mean of the MADO sequence for a test, computed across \(15\,\)s of steady-state data. Fig. 6(a) shows the AMADO values corresponding to all sixty experimental cases defined by the exciting PWM frequencies in the set \(\{1,5,10,15\}\) Hz and PWM DC values in the set \(\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\}\) %. For plotting and analysis purposes, for each PWM frequency, we normalized the AMADO data by dividing them by the maximum computed AMADO value among all the tested DC values. As mentioned above, for the elements in the chosen frequency set, these maxima respectively occur when the DC values are chosen to be \(6\), \(11\), \(10\), and \(10\,\)%; correspondingly, the specific maximum raw AMADO values are \(1.625\), \(1.15\), \(0.48\), and \(0.14\,\)mm. As seen, the AMADO value for an experiment significantly decreases as the exciting PWM frequency increases; however, at \(15\,\)Hz, the AMADO value of \(140\,\)um is still comparable to the displacement a piezoelectric actuator of this scale can produce [20].
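The AMADO figure of merit defined above can be computed from the recorded deflection signal as in the following sketch; the function is illustrative and assumes an already low-pass-filtered, steady-state deflection record aligned with the PWM period.

```python
import numpy as np

def amado(deflection_mm, fs=10_000, pwm_freq_hz=1.0, window_s=15.0):
    """Average MADO: split the last window_s seconds of the deflection record
    into PWM periods, take (max - min) within each period (the MADO), and
    average over all periods in the window."""
    n = int(window_s * fs)
    x = np.asarray(deflection_mm)[-n:]                  # steady-state segment
    samples_per_period = int(round(fs / pwm_freq_hz))
    n_periods = len(x) // samples_per_period
    x = x[:n_periods * samples_per_period].reshape(n_periods, samples_per_period)
    mado = x.max(axis=1) - x.min(axis=1)                # per-cycle displacement output
    return mado.mean()
```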
During force-output characterization experiments, for each PWM exciting frequency, we employ the DC value corresponding to the largest load-free AMADO value. In the cases presented here, for each load in the set \(\{0,0.18,0.36,0.54,0.72,0.90,1.08,1.26,1.44\}\,\)mN and each frequency in the set \(\{1,5,10,15\}\,\)Hz, we measured the instantaneous loaded deflection, \(d_{\text{max}}(f_{\text{load}})\), of the actuator's tip and compute the _average loaded_ MADO (ALMADO) value across \(15\,\)s, \(\tilde{d}_{\text{max}}(f_{\text{load}})\), where \(f_{\text{load}}\) is the corresponding load. The chosen load set is a reflection of the way we increment the weight of the test load in each sequential experiment. Specifically, in a first set of experiments, the actuator is excited unloaded; then, in a second set of experiments, we add a weight of \(0.18\,\)mN, corresponding to the hook, the monofilament thread, and a \(0.165\)-mN brass bead; then, in a third set of experiments, we crimp a \(0.18\)-mN bead to the thread, and so forth, as depicted in Figs. 4(c)-(d). We determined empirically that eight brass
beads (corresponding to \(1.44\,\)mN) can be added before an actuator of the considered type fails.
The mean and _standard error of the mean_ (SEM) of the ALMADO values obtained through five different experiments, for each considered frequency and DC value, are shown in Fig. 6(b). Using these data, we can estimate the _average maximum actuator work output_ (AMAWO) as a function of the loading force for each frequency by computing \(\overline{W}_{\text{max}}(f_{\text{load}})=f_{\text{load}}\cdot\bar{d}_{\text{max}}(f_{\text{load}})\). For a frequency of \(1\,\)Hz, the best observed AMAWO value is \(1.4\,\)\(\upmu\)J, which corresponds to a load of \(1.26\,\)mN. Clearly, for each tested load, the ALMADO value significantly decreases as the exciting frequency increases. Also, despite the existence of several outliers, for the first three frequencies, the data in Fig. 6(b) indicate a decreasing trend of the ALMADO value as the load increases. Specifically, for the frequency-DC pair \(\{1\,\text{Hz},6\,\%\}\), the mean of this figure of merit decreases from \(1.625\) to \(0.994\,\)mm. Similarly, for the pair \(\{5\,\text{Hz},11\,\%\}\), it decreases from \(1.15\) to \(0.655\,\)mm; and, for the pair \(\{10\,\text{Hz},10\,\%\}\), from \(0.48\) to \(0.315\,\)mm. In contrast, for the pair \(\{15\,\text{Hz},10\,\%\}\), the ALMADO value remains approximately constant as the load increases; it is not clear why this phenomenon occurs, but it might be related to the amount of kinetic energy in the system as a whole. Note that both the relatively large SEM values and output variations due to load increases highlight the need for feedback control in real-time applications of actuators of this type. Actuator failure typically occurs at a load of about \(1.6\,\)mN and, using simple microscopic analyses, we determined that this is caused by the fracture of the SMA material under mechanical stress. Also, the cause of failure provides empirical evidence of the structural integrity and high functionality of the proposed actuator. Consistently, its outstanding strength and work density are also evidenced by its ability to lift \(155\) times its own weight for all the tested frequencies. To our best knowledge, no other microactuation technology compares to the presented method regarding work density. To further demonstrate these capabilities, in Section IV, we present two mobile microrobots driven by this actuation method.
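The work-output estimate \(\overline{W}_{\text{max}}(f_{\text{load}})=f_{\text{load}}\cdot\bar{d}_{\text{max}}(f_{\text{load}})\) reduces to a simple product of load and ALMADO; the snippet below only illustrates the unit bookkeeping, with ALMADO values chosen by us to roughly match the reported \(1\)-Hz numbers rather than taken from the measured data.

```python
# Work output in microjoules: load in mN times ALMADO in mm gives uJ directly,
# since 1 mN * 1 mm = 1e-3 N * 1e-3 m = 1e-6 J.
loads_mN = [0.18, 0.72, 1.26]
almado_mm = [1.55, 1.30, 1.11]      # illustrative ALMADO readings at 1 Hz (not measured data)

for f_load, d_max in zip(loads_mN, almado_mm):
    w_uJ = f_load * d_max
    print(f"load {f_load:.2f} mN -> estimated work output {w_uJ:.2f} uJ")
```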
## IV Applications in Microrobotics
### _The MiniBug_
With a total weight of \(8\,\)mg and a length of \(8.5\,\)mm, the MiniBug (see Fig. 1) is the lightest fully-functional SMA-driven crawler reported to date. This platform compellingly demonstrates the high-frequency capabilities of the proposed \(0.96\)-mg SMA-based actuator and its suitability to be integrated into microrobots. The MiniBug's base design was inspired by the SMALLBug presented in [13]. However, a new slot-and-pin alignment method, along with a tuning procedure for its final physical configuration, enabled us to use \(90\)-\(\upmu\)m-thick CF material, which resulted in a significantly smaller and lighter robotic structure. We designed the feet of the MiniBug to cyclically and coordinately produce anisotropic friction and, as a consequence, forward locomotion. Each of the four feet has a sharp and a smooth face of contact with the supporting ground. Fig. 7 depicts the locomotion mechanisms during operation. Theoretically, as depicted in Fig. 7(a), during heating, the actuator contracts
Fig. 6: **Normalized _average_ MADO (AMADO) and _average loaded_ MADO (ALMADO) values measured experimentally.** **(a)** AMADO values corresponding to all sixty cases defined by the exciting PWM frequencies in the set \(\{1,5,10,15\}\,\)Hz and PWM DC values in the set \(\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\}\,\%\). At \(1\,\)Hz, the largest AMADO value of \(1.625\,\)mm corresponds to a DC of \(6\,\)%. At \(5\,\)Hz, the largest AMADO value of \(1.15\,\)mm corresponds to a DC of \(11\,\)%. At \(10\,\)Hz, the largest AMADO value of \(0.48\,\)mm corresponds to a DC of \(10\,\)%. At \(15\,\)Hz, the largest AMADO value of \(0.14\,\)mm corresponds to a DC of \(10\,\)%. During the performance of these experiments, the measured ambient temperature oscillated approximately two degrees above \(22\,\)\({}^{\circ}\)C. **(b)** ALMADO data corresponding to all thirty-six cases defined by the exciting PWM frequency-DC pairs in the set \(\{1\,\text{Hz},6\,\%\}\), \(\{5\,\text{Hz},11\,\%\}\), \(\{10\,\text{Hz},10\,\%\}\), \(\{15\,\text{Hz},10\,\%\}\) and the loads in the set \(\{0,0.18,0.36,0.54,0.72,0.90,1.08,1.26,1.44\}\,\)mN. Each data point in the plot denotes the mean of the ALMADO values obtained through five different back-to-back experiments. The associated _standard error of the mean_ (SEM) values are indicated with vertical bars. For each tested load, the ALMADO value significantly decreases as the exciting frequency increases. Also, for the first three frequencies, a decreasing trend of the ALMADO value, as the load increases, can be observed. Namely, for the pair \(\{1\,\text{Hz},6\,\%\}\), the mean of this figure of merit decreases from \(1.625\) to \(0.994\,\)mm. Similarly, for the pair \(\{5\,\text{Hz},11\,\%\}\), it decreases from \(1.15\) to \(0.655\,\)mm, and, for the pair \(\{10\,\text{Hz},10\,\%\}\), from \(0.48\) to \(0.315\,\)mm. In contrast, for the pair \(\{15\,\text{Hz},10\,\%\}\), the ALMADO value remains approximately constant as the load increases.
Fig. 7: **Design and functionality of the MiniBug. (a)** The MiniBug in its expanded state. The feet of this robot were designed with a sharp face and a smooth face to generate anisotropic friction and, as a consequence, forward locomotion during cyclic operation of the driving actuator. As the SMA wires of the driving actuator contract during heating, the sharp faces of the robot’s front feet anchor to the supporting surface while the rear feet slide forward. **(b)** The MiniBug in its contracted state. As the SMA wires of the driving actuator elongate during cooling, the sharp faces of the robot’s rear feet anchor to the supporting surface while the front feet slide forward.
and the front feet anchor to the ground as the stick lines of their sharp faces cling to the supporting surface while the back feet slide toward the right on their smooth faces. This contraction also moves the robot's _center of mass_ (COM) closer to the back legs, thus increasing the normal force acting on the back feet while anchored to the ground and facilitating the forward sliding of the front feet as the actuator expands during cooling, as shown in Fig. 7(b). In reality, during crawling, the robot's feet slightly slide when they are supposed to be completely anchored; this phenomenon is more prominent at low locomotion frequencies (\(1\,\mathrm{Hz}\)). This issue is mitigated at higher frequencies because the deflection drift of the driving actuator keeps the COM closer to the rear legs, which reduces undesired back sliding and, as a consequence, increases crawling efficiency.
To test and demonstrate the locomotion capabilities of the robot, we simply excited the driving actuator, in open loop, using a PWM signal with constant parameters during operation. As shown in Fig. 8, we tested six cases corresponding to the frequency-DC pairs: \(\{1\,\mathrm{Hz},6\,\%\}\), \(\{5\,\mathrm{Hz},11\,\%\}\), \(\{10\,\mathrm{Hz},10\,\%\}\), \(\{15\,\mathrm{Hz},10\,\%\}\), \(\{20\,\mathrm{Hz},10\,\%\}\), and \(\{40\,\mathrm{Hz},10\,\%\}\). In all these tests, we kept the PWM _on_-voltage height at \(18\,\mathrm{V}\), which we calculated using the total resistance of the actuator system in order to limit the resulting current to \(200\,\mathrm{mA}\). The photo sequences in Fig. 8 show \(12\,\mathrm{s}\) of locomotion corresponding to the six experiments. Video footage of these tests can be seen in the accompanying supplementary movie. The MiniBug exhibits the same locomotion modes of the SMALLBug discussed in [13]; however, at the highest actuation frequency of \(40\,\mathrm{Hz}\), a new locomotion mode is observed, which we dubbed as _gliding_. In this mode, the driving actuator does not display noticeable actuator deflections; we speculate that the generated high-frequency vibrations induce the robot to slide forward. Because of the manner locomotion is produced at high frequencies, in this mode, traction is not significant and the MiniBug can be easily pushed around by minor disturbances. Everything considered, the best locomotion performance is achieved at the actuation frequency of \(15\,\mathrm{Hz}\) because the robot simultaneously generates significant traction and locomotes at its fastest speed of \(0.76\,\mathrm{BL/s}\). In all tested modes, due to its small mass, the MiniBug is subjected to relatively large forces from the tether wires, which heavily affect locomotion speed. To remove these disturbances, we envision a tetherless power solution based on either directed-energy transmission, or catalytic combustion.
### _The WaterStrider_
The extraordinary physical abilities exhibited by water-surface-tension locomoting insects have inspired
Fig. 8: **The MiniBug locomoting at six different actuation frequencies.** The photographic sequences show the distance traveled by the MiniBug at intervals of \(4\,\mathrm{s}\), for the operation frequencies in the set \(\{1,5,10,15,20,40\}\,\mathrm{Hz}\); the DC values are the empirical optima determined according to the method discussed in Section III. The fastest relative speed of \(0.76\,\mathrm{BL/s}\) occurs at \(15\,\mathrm{Hz}\). Depending on the frequency of operation, the robot exhibits four locomotion modes: (i) _crawling_, (ii) _shuffling_, (iii) _galloping_, and (iv) _gliding_. At \(1\,\mathrm{Hz}\), the MiniBug slowly crawls, achieving an average speed of \(0.10\,\mathrm{BL/s}\); at \(5\,\mathrm{Hz}\), the locomotion mode changes to shuffling, during which the MiniBug achieves an average speed of \(0.46\,\mathrm{BL/s}\); at \(10\,\mathrm{Hz}\), \(15\,\mathrm{Hz}\), and \(20\,\mathrm{Hz}\), the locomotion mode switches to galloping, during which the MiniBug achieves average speeds of \(0.69\,\mathrm{BL/s}\), \(0.76\,\mathrm{BL/s}\), and \(0.75\,\mathrm{BL/s}\), respectively. At the highest frequency of \(40\,\mathrm{Hz}\), the MiniBug _glides_ and hardly any actuator output displacement can be noticed as high-frequency vibrations allow the robot to _virtually_ float forward at an average speed of \(0.61\,\mathrm{BL/s}\). Video footage of these tests can be seen in the accompanying supplementary movie.
scientific research [21, 22] and the development of new robots, such as those presented in [23] and [24]. The robot in [23] weighs \(1\,\mathrm{g}\), stays afloat on hydrophobic wires, and uses sophisticated actuation mechanisms to generate elliptical stroke patterns for locomotion. However, its weight is much larger than that of _Aquarius paludum_ specimens used for inspiration, which have a mass of only about \(20\,\mathrm{mg}\). The robot in [24] weighs \(68\,\mathrm{mg}\), stays afloat on hydrophobic wires, and can jump vertically \(142\,\mathrm{mm}\). In this case, SMA wires provide the actuation forces needed to jump off the water; however, this robot was not reported capable of locomotion. With a weight of \(56\,\mathrm{mg}\) and a length of \(22\,\mathrm{mm}\), the WaterStrider (see Fig. 1) is the lightest controllable water-surface-tension locomoting robot reported to date. We designed this robot to efficiently utilize the large force outputs produced by the unimorph SMA-based actuator presented in Section III. The weight of the robot is supported by elliptical feet designed with large surface areas to exploit surface-tension forces while standing on water. We also designed the highly-flexible fin-like propulsor depicted in Fig. 9(a) to take advantage of fluid-structure-interaction phenomena and thus increase hydrodynamic efficiency; actuating each propulsor independently enables locomotion and turning capabilities.
We used the modified SCM method in [15] to fabricate all the structural and functional components of the WaterStrider, including two four-bar transmissions, four elliptical feet, a body frame, two fin-like propulsors (see Fig. 1), and two SMA-based actuators of the type discussed in Section III. Consistently, all these multi-layer parts were made from \(90\)-\(\mathrm{\SIUnitSymbolMicro m}\)-thick CF sheets and Kapton film. For the final assembly, we used the method presented in [15] and the capillary alignment technique discussed in Section II. A key structural element of the WaterStrider's body frame are twisting reinforcement bars, installed to prevent actuation-induced body warping during operation. The main locomotion mode designed for the WaterStrider is depicted in Figs. 9(b)-(d). Here, the two four-bar transmission mechanisms receive as inputs the displacement outputs generated by the two SMA-based actuators and amplify them into large stroke-angle outputs that drive the fin-like bending propulsors of the system. As seen, during a locomotion period, the two actuators contract and then relax, enabling the propulsors to provide the force necessary to push the WaterStrider forward. During forward locomotion, both actuators are operated symmetrically, in open loop, to generate a large straight propulsion force. Accordingly, to execute turning maneuvers, one actuator-propulsor pair is actuated with a \(5\)-Hz PWM signal while the other actuator-propulsor pair is left inactive, which produces a drag-based turning torque on the robot.
The photographic sequences in Fig. 10 summarize the experiments performed to test and demonstrate the locomotion capabilities of the WaterStrider. Video footage of these tests can be seen in the accompanying supplementary movie. The sequence in Fig. 10(a) shows the WaterStrider during forward locomotion in open loop. In this case, both driving actuators were excited using \(1\)-Hz PWM signals with the optimal DC of \(6\,\mathrm{\char 37}\), empirically determined through the characterization experiments discussed in Section III. We selected an _on_-voltage height of \(12\,\mathrm{V}\) to ensure that no more than \(200\,\mathrm{mA}\) of current passed through the SMA wires of the actuators; the same _on_-voltage height was kept in all the experiments shown in Fig. 10. With these excitation parameters, the WaterStrider achieved an average speed of \(0.26\,\mathrm{BL}\)/s. The sequence in Fig. 10(b) also shows the WaterStrider during forward locomotion in open loop. In this case, however, both driving actuators were excited using \(2\)-Hz PWM signals with a DC of \(7.5\,\mathrm{\char 37}\). With these excitation parameters, the WaterStrider achieved an average speed of \(0.28\,\mathrm{BL}\)/s.
To execute turning maneuvers, such as those shown in Figs. 10(c)-(d), one actuator-propulsor pair is open-loop excited using a \(5\)-Hz PWM signal with the optimal DC of \(11\,\mathrm{\char 37}\), determined through the characterization tests described in Section III, while the other actuator-propulsor pair is left inactive to produce a drag-induced body torque, as already explained. Specifically, in the test shown in Fig. 10(c), the right actuator-propulsor pair is excited to make the robot turn left; similarly, in the test shown in Fig. 10(d), the left actuator-propulsor pair is excited to make the robot turn right. During these left and right turning maneuvers, the WaterStrider achieved rates of \(0.144\) and \(0.073\,\mathrm{rad}\)/s, respectively. It is important to mention that the forces exerted by the tether wires on the WaterStrider during operation significantly affect its locomotion behavior because the friction induced by the water surface is almost negligible. For the same reason, the locomotion trajectory of its body can be easily and heavily disturbed by other external forces; this fact explains the observed difference in angular speed when the robot turns right and left. The ability of the WaterStrider to overcome these disturbances, while executing forward locomotion and turning maneuvers, demonstrates the capacity of the proposed SMA-based actuators to produce high output forces. Furthermore, these results indicate that once new onboard, or tetherless, technologies become available to power microrobots of the WaterStrider type, a wide gamut of applications useful for humans will become a reality. The most promising possibilities are high-density batteries [2], catalytic combustion [15], and directed transmission of electromagnetic energy [12].
Fig. 9: **Design and functionality of the WaterStrider.****(a)** The fin of the propulsor was designed to be flexible with a structure composed of a series of hinges, made of CF and Kapton, and fabricated using the SCM method. The robot was designed with its two propulsors biased towards the back of its body in order to reduce drag during forward locomotion. This characteristic was achieved using rigid \(30\)-degree angled couplers that connect, through CF bars, the transmissions and fins of the two propulsors. The couplers can be replaced to change the inclination angles of the propulsors. **(b)** To generate forward thrust and, as a consequence, forward locomotion, the WaterStrider symmetrically flaps its two propulsors cyclically. **(c)** To turn left, the WaterStrider flaps its right fin-like propulsor at a frequency of \(5\,\mathrm{Hz}\) while its left propulsor remains inactive. The asymmetrical production of thrust plus the drag force acting on the inactive left propulsor generate a functional counter-clockwise torque on the robot's body. **(d)** To turn right, the WaterStrider flaps its left fin-like propulsor at a frequency of \(5\,\mathrm{Hz}\) while its right propulsor remains inactive.
## V Conclusions
We presented a new \(0.96\)-mg (\(\sim\)\(1\) mg) fast unimorph SMA-based actuator that is capable of high-frequency operation (up to \(40\) Hz) as well as lifting \(155\) times its own weight. This development is the result of using the modified SCM method in [15, 20] and the introduction of a new alignment technique for microfabrication based on the use of passive capillary forces. Through dynamic characterization experiments, we tested and demonstrated the high-frequency operation and high-force output capabilities of the proposed SMA-based actuator. To show the suitability of the actuator in microrobotic applications, we designed and built two locomoting microrobots: (i) the MiniBug, which, with a weight of \(8\) mg and a length of \(8.5\) mm, is the lightest fully-functional SMA-driven terrestrial crawler reported to date; and, (ii) the \(56\)-mg \(22\)-mm-long WaterStrider, which is the first subgram controllable SMA-driven crawler capable of locomoting on water by taking advantage of surface-tension effects. Through the discussion of several tests, we demonstrated the locomotion behavior and performance of the MiniBug during operation, which can function at actuation frequencies of up to \(40\) Hz and achieve an average speed of \(0.76\) BL/s. Similarly, we demonstrated the locomotion behavior and performance of the WaterStrider during operation, which can achieve an average speed of \(0.28\) BL/s and execute turning maneuvers at angular rates of up to \(0.144\) rad/s. To achieve autonomy, we envision the deployment of MiniBug and WaterStrider platforms in swarms that collectively would be capable of carrying enough power to complete missions.
|
2310.03481
|
Personalized Transformer-based Ranking for e-Commerce at Yandex
|
Personalizing user experience with high-quality recommendations based on user
activity is vital for e-commerce platforms. This is particularly important in
scenarios where the user's intent is not explicit, such as on the homepage.
Recently, personalized embedding-based systems have significantly improved the
quality of recommendations and search in the e-commerce domain. However, most
of these works focus on enhancing the retrieval stage. In this paper, we
demonstrate that features produced by retrieval-focused deep learning models
are sub-optimal for ranking stage in e-commerce recommendations. To address
this issue, we propose a two-stage training process that fine-tunes two-tower
models to achieve optimal ranking performance. We provide a detailed
description of our transformer-based two-tower model architecture, which is
specifically designed for personalization in e-commerce. Additionally, we
introduce a novel technique for debiasing context in offline models and report
significant improvements in ranking performance when using web-search queries
for e-commerce recommendations. Our model has been successfully deployed at
Yandex, serves millions of users daily, and has delivered strong performance in
online A/B testing.
|
Kirill Khrylchenko, Alexander Fritzler
|
2023-10-05T11:46:39Z
|
http://arxiv.org/abs/2310.03481v2
|
# Personalized Transformer-based Ranking for e-Commerce at Yandex
###### Abstract.
Personalizing user experience with high-quality recommendations based on user activity is vital for e-commerce platforms. This is particularly important in scenarios where the user's intent is not explicit, such as on the homepage. Recently, personalized embedding-based systems have significantly improved the quality of recommendations and search in the e-commerce domain. However, most of these works focus on enhancing the retrieval stage.
In this paper, we demonstrate that features produced by retrieval-focused deep learning models are sub-optimal for ranking stage in e-commerce recommendations. To address this issue, we propose a two-stage training process that fine-tunes two-tower models to achieve optimal ranking performance. We provide a detailed description of our transformer-based two-tower model architecture, which is specifically designed for personalization in e-commerce.
Additionally, we introduce a novel technique for debiasing context in offline models and report significant improvements in ranking performance when using web-search queries for e-commerce recommendations. Our model has been successfully deployed at Yandex, serves millions of users daily, and has delivered strong performance in online A/B testing.
deep learning, personalization, recommender systems, e-commerce, learning-to-rank, debiasing
Footnote †: Information systems: Information retrieval; Learning to rank; Personalization; Recommender systems. Computing methodologies: Learning from implicit feedback; Neural networks.
## 2. Related Works
To our knowledge, the first two-tower model was proposed by Microsoft: DSSM (Cheng et al., 2017) generated query and document embeddings for the web-search domain. Many companies have since adopted two-tower models for various information retrieval tasks such as recommendations, search, and ads. Information retrieval involves a multi-stage process of information filtering: billion-sized catalogs are filtered down to a few candidates to be presented to the user. The first stage, called retrieval, is where two-tower models are most commonly used. EBR systems usually constitute one of multiple retrieval channels (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). Another popular choice is the inverted index with boolean matching based on text or visual signals (Zhou et al., 2017; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). While inverted indices have good relevance due to exact term matching, EBR is tolerant to misspellings and allows for personalization and context awareness. EBR models are typically trained to optimize softmax loss with in-batch or uniformly sampled negatives. Yi et al. (Yi et al., 2019) used importance sampling to correct sampling bias of in-batch negatives for Youtube. Yang et al. (Yang et al., 2019) introduced further improvements for retrieval using mixed negative sampling for Google App Store. Recently, there has been a trend to increase consistency between the ranking and retrieval stages with auxiliary losses (Zhou et al., 2017; Wang et al., 2019; Wang et al., 2019).
The ranking stage is usually one of the last stages in information filtering, where compute-intensive models are used on hundreds or thousands of candidates. Deep learning models for personalized ranking mostly follow the Embedding&MLP paradigm proposed by Cheng et al. (Cheng et al., 2017). Such models tend to abandon two-tower architecture in favor of fusing user, context, and target item information. YoutubeDNN (Chen et al., 2017) formed user video-watching and searching activity vectors by averaging historical user event embeddings. DIN (Zhou et al., 2018) used a convex combination of historical user event embeddings, essentially an attention layer with the target item as query and historical events as keys and values. DIEN (Zhou et al., 2018) and BST (Zhou et al., 2018) improved on DIN with GRU and transformers. However, these models use an early fusion of the target item and user history items, which is very costly in production even for the re-ranking stage and limits our ability to use batch serving. This motivates us to explore two-tower models. There are works that employ transformer-based two-tower models to produce similarity-based features for ranking in search (Zhou et al., 2017; Wang et al., 2019) and ads (Wang et al., 2019). However, these models are not personalized and only use query and document information.
Personalization with sequential recommenders based on user histories is largely associated with the next-item-prediction (NIP) task, which essentially turns recommendation into a language modeling problem. First works in this field were motivated by the progress in natural language processing, e.g. ELMo (Zhou et al., 2018), Transformer (Zhou et al., 2019), GPT (Zhou et al., 2019), BERT (Cheng et al., 2017). GRU4Rec (Zhou et al., 2019), Caser (Zhou et al., 2019) and SASRec (Zhou et al., 2019) were the pioneer works to model the NIP task with GRU, CNNs, and transformers respectively. Bert4Rec (Bert4Rec, 2019) trained a recommender in a BERT-like bidirectional fashion with MLM. CL4Rec (Bert et al., 2019) used contrastive learning for pre-training, and CARCA (Zhou et al., 2019) promoted inductive context-aware models with content-based item embeddings. However, most of these works conduct evaluation on academic datasets with questionable practices such as sampled negatives (Li et al., 2019) and absence of a time-based train-test split (Zhou et al., 2019; Wang et al., 2019). Furthermore, the NIP task is subject to selection bias which encourages models to imitate the logging policy (Chen et al., 2017). Additionally, due to the scale of academic datasets, these works are prone to unnecessary or even harmful inductive bias: customized model architectures, auxiliary losses, and special pre-training regimes that do not benefit real-world web-scale recommendations. Large companies also conduct research and develop sequential recommenders: Pinterest (Pinterest, 2018), Alibaba (Bah et al., 2018; Wang et al., 2019; Wang et al., 2019), eBay (Nakamura et al., 2019; Wang et al., 2019; Wang et al., 2019), NAVER (Zhou et al., 2019; Wang et al., 2019), Etsy (Chen et al., 2017), Spotify (Spotify, 2018), etc. They usually provide correct evaluation schemes, use web-scale datasets, and sometimes provide results of online A/B testing.
To combat the feedback loop, it's important to mitigate the effects of various recommendation biases like position, selection, examination, trust, and popularity bias. There are several debiasing approaches: (1) reweighting samples based on inversed propensity scores (Bah et al., 2018; Wang et al., 2019); and (2) modeling bias explicitly with a separate model (Zhou et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), detaching the bias model during deployment. Motivated by the second approach, we introduce the concept of context bias and propose a context debiasing technique to improve the ranking quality of offline models.
Although enriching user e-commerce history with web-search queries is a case of cross-domain recommendations (CDR), we do not delve into CDR research because it is not a goal of our work to explore and improve upon various CDR methods. To the best of our knowledge, the most notable works that improve the quality of e-commerce recommendations with web-search data are from NAVER (Zhou et al., 2019; Wang et al., 2019).
## 3. Modeling
In this section, we describe our two-tower model architecture, shown in Figure 1. We discuss the item tower, user tower, and similarity function separately, before introducing an additional context tower for context debiasing. Finally, we present our two-stage training regime, which involves pre-training the model for retrieval-oriented tasks and then fine-tuning it for ranking-focused tasks.
### Item Tower
In the e-commerce domain we are tasked with recommending products to the users. Unlike some domains such as web-search, products provide a rich source of information including category, title, description, structured attributes (e.g. brand, color, flavor, material), images, and reviews. Product catalogs are very dynamic with some items available in a single quantity and new items arriving daily, making it important to consider the cold-start and distribution shift problems. To increase generalization, avoid memorization and alleviate cold-start we do not use identifier-based learnable embeddings. Instead, we employ content-based item representations using only titles.
Product titles are processed as CBOW (Pinterest, 2018) using a wordpiece (Pinterest, 2018) tokenizer built on web-search data with a vocabulary of 103295 tokens. Token embeddings are summed which does not degrade performance compared to averaging.
The item tower architecture was motivated by our experience with personalization transformer-based models for web search and display advertising. Initially, in web-search we used a cross-encoder model with user and item features fused in the same transformer encoder. Later on we switched to two-tower models to handle
millions of RPS in display advertising. The quality of the model decreased significantly when we used a Multi-Layer Perceptron (MLP) as an item tower. However, using transformer encoder as an item tower on top of products represented as a single CBOW title embedding produced good results.
Simplifying the transformer encoder with a single input embedding led to the architecture presented on the right in figure 1. This architecture differs from the MLP in that it includes residual connections and layer normalizations, similar to those found in a transformer:
\[x_{k+1}=\text{LayerNorm}_{k}(\text{ReLU}(\text{Linear}_{k}(x_{k}))+x_{k})\]
where \(k\) is a layer number and \(x_{1}\in\mathbb{R}^{d}\) is an input title embedding.
With such architecture, we were able to preserve most of the initial cross-encoder quality gains. Incorporating contrastive learning for pre-training further improved the model quality compared to the cross-encoder.
Following the item tower, the item embedding is \(l_{2}\) normalized to unit length.
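As a concrete illustration, the sketch below gives one possible PyTorch reading of this item tower: a summed CBOW title embedding passed through residual blocks of the form \(x_{k+1}=\text{LayerNorm}_{k}(\text{ReLU}(\text{Linear}_{k}(x_{k}))+x_{k})\), followed by \(l_{2}\) normalization. The class name and default sizes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ItemTower(nn.Module):
    """Residual MLP item tower (illustrative sketch; sizes are assumptions)."""
    def __init__(self, dim: int = 1024, num_layers: int = 4):
        super().__init__()
        self.linears = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(num_layers)])

    def forward(self, title_token_embs: torch.Tensor) -> torch.Tensor:
        # title_token_embs: (batch, num_tokens, dim) wordpiece embeddings of the title.
        x = title_token_embs.sum(dim=1)          # CBOW: sum the token embeddings
        for linear, norm in zip(self.linears, self.norms):
            x = norm(F.relu(linear(x)) + x)      # residual block with post-layernorm
        return F.normalize(x, dim=-1)            # l2-normalize the item embedding
```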
### User Tower
To our knowledge, when choosing which types of user events to use in a sequence encoded by a transformer, a natural first choice is to use the same types of events you are trying to predict. We form a chronologically ordered sequence from the user's clicks, add-to-carts, add-to-favorites, and purchases.
We also observed quality gains from enriching user histories with web-search queries. Interestingly, using web-search document clicks did not lead to any further improvements. We explore two ways of incorporating web-search history: using a separate transformer encoder on a fixed number of the most recent user's web-search queries and fusing web-search information with e-commerce activity into a single event sequence. Moreover, we believe that combining diverse data types, such as queries and products, yields even better results in an inductive setting that utilizes content-based event representations.
Three types of embeddings are summed up to form an event embedding:
* **Content embedding.** Queries and product titles are tokenized using the wordpiece tokenizer from section 3.1. The embedding matrix is shared across queries and products.
* **Positional embedding.** Each event is assigned a chronologically reversed absolute position: the latest event gets position 0, the second latest gets position 1, and so on. Position embeddings are learnable.
* **Event type embedding.** Every event type, including click, add-to-cart, add-to-favourites, purchase, web-search query, has a learnable embedding.
Together with learnable [CLS] embedding, event embeddings are fed to a bi-directional transformer encoder. A post-transformer contextualized [CLS] embedding is \(l_{2}\) normalized and used as a final user representation. We use a standard transformer encoder architecture [10, 43] with post-normalization. All embeddings, except for [CLS], are layer-normalized prior to the transformer.
Similar to Pinnerformer [29], we opt to use a single user embedding to simplify maintenance and deployment. Our model is used in a batch serving scenario with user and item embeddings being recalculated daily. Multiple user embeddings would result in a linearly scaled memory cost for KV-storage and complicate integration into downstream applications. However, using multiple user embeddings usually produces better quality and helps to enforce diversity, which is why we usually employ multiple embeddings for real-time models.
Figure 1. Model architecture. Add+ln denotes layernorm\((x+y)\).
We use a bi-directional transformer encoder without causal masking or teacher forcing; only the [CLS] representation is used to produce scores.
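The sketch below shows one possible PyTorch implementation of this user tower: per-event content, positional, and event-type embeddings are summed, a learnable [CLS] token is prepended, and a post-norm bi-directional transformer encoder produces the \(l_{2}\)-normalized [CLS] embedding. Hyperparameters and names are illustrative, and padding masks are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UserTower(nn.Module):
    """Transformer user tower over mixed e-commerce and web-search events (sketch)."""
    def __init__(self, vocab_size: int, dim: int = 256, max_events: int = 1024,
                 num_event_types: int = 5, num_layers: int = 4, num_heads: int = 4):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)      # shared by queries and titles
        self.pos_emb = nn.Embedding(max_events, dim)        # reversed chronological position
        self.type_emb = nn.Embedding(num_event_types, dim)  # click, cart, fav, purchase, query
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pre_norm = nn.LayerNorm(dim)
        layer = nn.TransformerEncoderLayer(dim, num_heads, dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, event_tokens, positions, event_types):
        # event_tokens: (B, L, T) wordpiece ids per event; positions, event_types: (B, L).
        content = self.token_emb(event_tokens).sum(dim=2)   # CBOW content embedding per event
        x = self.pre_norm(content + self.pos_emb(positions) + self.type_emb(event_types))
        x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1)
        h = self.encoder(x)
        return F.normalize(h[:, 0], dim=-1)                 # l2-normalized [CLS] user embedding
```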
### Similarity function
We use inner product as a user-item similarity function. Since we \(l_{2}\) normalize embeddings, it is equivalent to cosine similarity:
\[r_{ui}=\frac{\langle v_{u},v_{i}\rangle}{\parallel v_{u}\parallel\parallel v_{i}\parallel}\]
where \(v_{u},v_{i}\in\mathbb{R}^{d}\) are user \(u\) and item \(i\) embeddings. In our experiments, we found that normalizing embeddings usually results in a more stable training process.
### Context Debiasing
When a recommendation request comes, typically a variety of contextual data, such as device, operating system, time of day, seed item, and search query, is available along with user and item information. Using context information in real-time models is paramount and usually brings significant improvements.
We will discuss deployment in more detail in Section 4.4, but it's worth noting that we deploy our model in batch-serving mode, i.e. user and item embeddings are recalculated on a daily basis. As a result, we do not have access to context information during model inference. Since the context has very useful signal, our models tend to implicitly infer current user context from history. However, it is unnecessary to spend model and user embedding capacity for implicit context information. We expect that our gradient boosting trees ranker model (e.g. Catboost (Cheng et al., 2019)), for which we are constructing a similarity-based feature, already utilizes context information efficiently. Additionally, our data is skewed towards certain user contexts that are more prevalent in real-world scenarios, which can impact our performance for less popular contexts.
Inspired by debiasing techniques aimed at mitigating effects from various biases, we introduce context debiasing technique for alleviating redundancy of context information in our ranker. Without context debiasing, our model estimates the following probability:
\[\mathbb{P}(i|u)=\sum_{ctx\in C}\mathbb{P}(i|u,ctx)\mathbb{P}(ctx|u)\]
where \(\mathbb{P}(i|u)\) is a probability that the user \(u\) clicks on item \(i\), and \(C\) is a set of all possible contexts.
Instead, during training we learn a separate context tower using actual user context:
\[\mathbb{P}(i|u,ctx)=\sigma(r_{ui}+r_{ctx})\]
where \(r_{ui}\) is learned similarity between user \(u\) and item \(i\), and \(r_{ctx}\) is a learnable scalar based on a given context \(ctx\).
During deployment, we detach the context tower and use only user-item similarity. A possible modification to this technique would be to fuse context with item and user, for example, by forming a context embedding and calculating the inner product between context and item. However, in our experiments, we didn't see any improvements with this approach.
Similar to Zhang et al. (Zhang et al., 2019), we attempted to apply dropout to the output of the context tower but it did not lead to any quality improvements. This could be related to the fact that our context tower has the simplest possible form. We do not exclude the possibility that dropout will be useful with a more complex context tower.
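The snippet below sketches this simplest-form context tower: a learned scalar per context feature (recommendation surface and device, as used later in the experiments) is added to the user-item similarity during training and simply dropped at serving time. Module and argument names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContextTower(nn.Module):
    """Learned scalar bias per (surface, device) context; illustrative sketch."""
    def __init__(self, num_surfaces: int, num_devices: int):
        super().__init__()
        self.surface_bias = nn.Embedding(num_surfaces, 1)
        self.device_bias = nn.Embedding(num_devices, 1)

    def forward(self, surface_id: torch.Tensor, device_id: torch.Tensor) -> torch.Tensor:
        return (self.surface_bias(surface_id) + self.device_bias(device_id)).squeeze(-1)

def training_probability(r_ui, r_ctx):
    # During training: P(i | u, ctx) = sigmoid(r_ui + r_ctx).
    return torch.sigmoid(r_ui + r_ctx)

def serving_score(r_ui):
    # At deployment the context tower is detached; only user-item similarity remains.
    return r_ui
```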
### Two-stage training
The most common loss function for deep learning two-tower personalization models is a sampled softmax loss with random and in-batch negatives, which essentially makes the model retrieval-focused. Negative implicit feedback based on impressions is rarely used, because: (1) using logs with non-positive impressions dramatically increases pipeline complexity; (2) such models perform poorly on retrieval tasks due to sample selection bias; (3) all impressed items are very similar (Zhang et al., 2019), which, combined with examination bias, leads to very noisy data with large amounts of false negatives.
However, the sample selection bias problem goes both ways. When trained with sampled negatives, models underperform on ranking scenarios which present much harder impressed negatives. We demonstrate that tuning the model directly on a ranking scenario with pairwise ranking loss and impressed negatives results in significant gains.
Still, due to the data sparsity of positive feedback (such as purchases), training large two-tower models on ranking from scratch yields almost zero results. This issue can be mitigated with transfer learning. We propose a two-stage training scheme for training ranking-oriented deep learning two-tower models.
#### 3.5.1. Pre-training stage
In the pre-training stage, the model is trained in a standard retrieval-oriented regime with sampled softmax loss function:
\[\mathcal{L}_{pretrain}(u,p,N)=-\log\frac{\exp(\tau\cdot r_{up})}{\exp(\tau\cdot r_{up})+\sum_{n\in N}\exp(\tau\cdot r_{un})}\]
where \(\tau\) represents the temperature parameter, \(r_{ui}\) denotes the similarity between user \(u\) and item \(i\), \(p\) corresponds to a positive item, and \(N\) is a collection of in-batch negatives. Along with on-device in-batch negatives, we utilize item embeddings from all other GPU workers as negative samples. User clicks, add-to-carts, add-to-favourites and purchases are used as positive item interactions.
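A minimal PyTorch version of this in-batch sampled softmax is shown below; each user's positive item serves as a negative for every other user in the batch, and the cross-GPU gathering of additional negatives is omitted. The function name is ours.

```python
import torch
import torch.nn.functional as F

def sampled_softmax_loss(user_emb: torch.Tensor, item_emb: torch.Tensor,
                         temperature: torch.Tensor) -> torch.Tensor:
    """In-batch sampled softmax pre-training loss (sketch).

    user_emb, item_emb: (B, d), l2-normalized; item_emb[i] is the positive for user i.
    """
    logits = temperature * user_emb @ item_emb.t()          # (B, B) similarity matrix
    labels = torch.arange(user_emb.size(0), device=user_emb.device)
    return F.cross_entropy(logits, labels)                  # positives lie on the diagonal
```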
#### 3.5.2. Fine-tuning stage
During the fine-tuning stage, we use a separate loss for each type of the positive impressed signal, including clicks, add-to-cart, add-to-favorites, and purchases. To ensure that our similarity scores remain well-calibrated during continuous training, we employ a combination of pairwise and pointwise losses.
Initially, we used a separate pointwise loss for each type of the positive signal, but later we switched to a single pointwise loss based on clicks:
\[\text{BCE}_{\text{click}}(u,i)=-y_{i}\log f_{ui}-(1-y_{i})\log(1-f_{ui})\]
where \(y_{i}\) indicates whether item \(i\) is clicked, \(f_{ui}:=\sigma(\alpha_{cl}r_{ui}+\alpha_{ctx}r_{ctx}+\beta_{cl})\) is the predicted probability of item \(i\) being clicked
by user \(u\), and \(\alpha_{cl},\alpha_{ctx},\beta_{cl}\) are learned scalar parameters. \(r_{ctx}\) is the context tower prediction.
The initial pairwise loss for each type of positive signal had a form:
\[\text{BPR}_{k}(u,p,n)=-\log\sigma(\gamma_{k}(r_{up}-r_{un}))=-\log\frac{e^{\gamma_{k}r_{up}}}{e^{\gamma_{k}r_{up}}+e^{\gamma_{k}r_{un}}} \tag{1}\]
where \(k\) denotes a type of positive feedback, \(u,p,n\) represent user, positive and negative items respectively, and \(\gamma_{k}\) is a learned scalar.
We adopted a solution from Bai et al. (Bai et al., 2017) to pairwise losses, replacing the exponents with sigmoids in equation 1. With this approach, we achieve well-aligned ranking and regression objectives. Furthermore, sigmoids allow us to integrate context debiasing into pairwise losses. Our final pairwise ranking loss is:
\[\text{BPR}_{k}(u,p,n)=-\log\frac{f_{up}}{f_{up}+f_{un}} \tag{2}\]
where \(f_{ui}:=\sigma\left(\gamma_{k}r_{ui}+\gamma_{k,ctx}r_{ctx}+\beta_{k}\right)\), with additional learned scalars \(\gamma_{k,ctx},\beta_{k}\).
Our final objective for fine-tuning the model is shown below. For brevity, we omit specific arguments of each loss component:
\[\mathcal{L}_{finetune}=\text{BPR}_{click}+\text{BPR}_{cart}+\text{BPR}_{fav}+\text{BPR}_{prch}+\text{BCE}_{\text{click}} \tag{3}\]
where the pairwise loss is calculated for each target-wise ordered item pair within a recommendation request and the pointwise loss is calculated for each impressed item.
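The sketch below spells out the sigmoid-based pairwise loss of equation (2) together with the pointwise click BCE; in practice one such pairwise loss is instantiated per feedback type with its own learned scalars, which is simplified here. Function names are illustrative.

```python
import torch

def calibrated_bpr(r_up, r_un, r_ctx, gamma, gamma_ctx, beta):
    # Pairwise loss (2): -log(f_up / (f_up + f_un)),
    # with f_ui = sigmoid(gamma * r_ui + gamma_ctx * r_ctx + beta).
    f_up = torch.sigmoid(gamma * r_up + gamma_ctx * r_ctx + beta)
    f_un = torch.sigmoid(gamma * r_un + gamma_ctx * r_ctx + beta)
    return -torch.log(f_up / (f_up + f_un))

def pointwise_click_bce(r_ui, r_ctx, y, alpha_cl, alpha_ctx, beta_cl):
    # Pointwise click BCE that keeps similarity scores calibrated.
    f_ui = torch.sigmoid(alpha_cl * r_ui + alpha_ctx * r_ctx + beta_cl)
    return -(y * torch.log(f_ui) + (1 - y) * torch.log(1 - f_ui))
```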
## 4. Experiments
### Dataset
We gather one year of logs from Yandex e-commerce platform Yandex Market for training, including clicks, add-to-cart, add-to-favorites, purchases, and web queries. For fine-tuning, we collect impressions from all recommendation surfaces at Yandex Market. Due to the proprietary nature of the data, we don't disclose dataset sizes.
For each positive user interaction, a pre-training sample is formed from the user history up to that interaction with a one-day delay. Such delay for history events is important because (1) it is consistent with the production daily batch serving job scenario, and (2) setting a low history delay may lead to an overly easy pre-training task. A single item usually generates a sequence of positive interactions within a short time period, e.g. click \(\rightarrow\) add-to-cart \(\rightarrow\) purchase, which is why it becomes easy to predict add-to-carts based on clicks and predict purchases based on add-to-carts with in-batch negatives and short history delay. Unlike the fine-tuning stage, pre-training also utilizes organic positive interactions produced by the user outside of recommendations.
For fine-tuning, we utilize impressions from all recommendation surfaces on Yandex Market. We group items impressed together and filter out groups without any positive interactions. An impression is considered positive if the impressed item had a positive interaction with the user within a short period following the recommendation. To speed up training and simplify the implementation of pairwise losses, we pack together all impressed items from the same recommendation surface into a single dataset record.
Although almost every recommendation surface has its own GBT ranker, for brevity we provide results of offline evaluation only for the most user-centric surfaces: retargeting and discovery. Retargeting is an unconstrained form of personalization akin to eBay's Recently Viewed Items (Zhu et al., 2017) module. Discovery limits recommended items to previously undiscovered ones. We use the next ten days after the training period to evaluate our model as a feature in the GBT ranker. As a metric, we report a test relative nDCG gain from including our similarity feature in the GBT feature set.
Context debiasing tower uses recommendation surface identifier and user device as input features. The context tower produces a sum of two learned scalar values, for surface and device respectively.
### Implementation Details
We use quite a large number of user events: the 1024 latest user events. The transformer encoder has four layers with hidden size 256 and four attention heads. The candidate tower has four layers with hidden size 1024. The temperature parameter for softmax loss is learnable. We split embedding matrices, transformer, candidate tower, and loss parameters into separate parameter groups with groupwise gradient norm clipping and differently tuned learning rates. The learning rate is warmed up for 2500 steps and then linearly decays till the end of training. Pre-training is done on 16 A100 40g hosts with an effective batch size of 2048, while fine-tuning is done on 8 A100 40g hosts with an effective batch size of 4096. Pre-training is done for three epochs and fine-tuning takes a single epoch due to overfitting. Training is done in a multi-host distributed setting with PyTorch (Paszasz et al., 2017) and Deepspeed (Zhu et al., 2017).
We continuously fine-tune the model on new chunks of data, weekly. During continuous training, we freeze the sigmoid inner parameters (\(\alpha_{k},\beta_{k}\)). Also, we use a constant learning rate for all parameter groups. Only the second training stage, presented in equation 2, is employed. Optimizer state is not reused across iterations.
### Offline Experimental Results
The main experimental results are presented in Table 1 as a sequence of incremental improvements. The baseline (L1) is only trained on the retrieval task. Fine-tuning on the pairwise ranking task (L2) brings significant improvements. Calibrated ranking (L3) bridges the gap between pointwise and pairwise losses. Removing all pointwise losses, with the exception of clickwise loss (L4),
\begin{table}
\begin{tabular}{l l l l} \hline \hline & Model & Retargeting & Discovery \\ \hline
1 & Sampled Softmax Loss & +0.233\% & +0.882\% \\
2 & Two-stage Training & +0.476\% & +1.377\% \\
3 & Calibrated Pairwise Ranking & +0.501\% & +1.367\% \\
4 & Retaining Single Pointwise Loss & +0.521\% & +1.405\% \\
5 & History Length (256 to 512) & +0.541\% & +1.588\% \\
6 & Pointwise Loss Weight (1.0 to 0.1) & +0.566\% & +1.631\% \\
7 & Context Debiasing & +0.586\% & +1.645\% \\
8 & Web-Search Queries & +0.677\% & **+2.061\%** \\
9 & History Length (512 to 1024) & **+0.755\%** & +1.981\% \\ \hline \hline \end{tabular}
\end{table}
Table 1. Incremental Model Improvements (Relative nDCG Improvement).
demonstrates an uplift in quality in both retargeting and discovery scenarios. Scaling history length to 512 (L5) greatly improves discovery. We then reduce the importance of pointwise loss (L6) and apply context debiasing (L7). Finally, we enrich user histories with web-search queries. First, we keep the same maximum history length as in previous measurements (L8) and then we improve on retargeting by expanding the history length to 1024 events (L9).
The benefits of two-stage training are illustrated in Table 2. The sparsity of positive feedback in the pairwise ranking scenario renders the one-stage training approach ineffective, whereas pre-training on the retrieval task mitigates this effect.
Enhancing user history with web-search queries works better with an early fusion of e-commerce and web-search data, which is demonstrated in Table 3. When merging history (L2, L3), we take the chronologically latest user events across both web-search and e-commerce activity. Without fusion (L1), we take the 512 latest e-commerce events and the 512 latest web-search events and train separate transformer encoders to form two embeddings for each user. A single scalar prediction value is calculated by summing up the scaled inner products of each of the user embeddings with an item embedding. During evaluation, we form two distinct features, separately calculating user-item similarities for web-search and e-commerce user representations. Although comparing a single feature to multiple features is unfair, fusing information still works better.
### Online Deployment
User and item embeddings are recalculated daily with a batch serving job. Once recomputed, the user embeddings are uploaded to kv-storage. During serving, the item embeddings are stored in RAM as a hash table and embedding lookup is performed when necessary.
We report the number of billed orders that are attributed to recommendations. If an item has been impressed to a user within a short period before the item's order, the item's order is attributed to recommendations. An order as a whole is attributed to recommendations if at least one item in the order is attributed to recommendations. An order is marked as new if it has items that the user has never encountered on our platform before the recommendation.
Table 4 shows that our model increased the number of orders attributed to recommendations by 6% on the homepage and by 5% on the cart page. New orders attributed to recommendations increase by +10% on the homepage and by +7% on the cart page. Furthermore, we increased the time spent on the next visited page after the homepage by +1.5%.
Our online A/B test revealed an effect similar to Pinterest (Pinterest, 2018): the uplift from our model was substantially higher in the first few days compared to the rest of the A/B test.
## 5. Conclusion
In this paper, we demonstrated the importance of ranking-focused fine-tuning of deep learning embedding-based personalization models. We proposed a two-stage training procedure with retrieval-oriented pre-training and ranking-oriented fine-tuning and introduced context debiasing to improve offline models. Enrichment of user e-commerce histories with web-search data demonstrated further improvements. Lastly, we described in detail our two-tower transformer-based personalization model for e-commerce which was validated through offline and online experiments. The described model has been deployed at the Yandex e-commerce platform Yandex Market to serve main app traffic.
We are currently working on a multi-domain real-time context-aware two-tower transformer-based model for both e-commerce recommendations and search at Yandex Market. We also explore ways to improve item embeddings with (1) item transformers, (2) additional product data like structured attributes, descriptions, and product images, and (3) graph neural networks.
## Acknowledgments
We would like to express our gratitude to the team at Yandex Market for their support during the research and deployment of this model. In particular, we would like to thank Mikhail Denisenko for his significant contribution to this project. We are also grateful to Artur Ilichev and Ivan Lapitsky for their assistance. Additionally, we want to acknowledge the rest of our Recsys team, including Ivan Guschenko-Cheverda, Vsevolod Svetlov, and Sergey Ovcharenko. Without their hard work and dedication, this research would not have been possible.
|
2302.03690
|
Storing a Trie with Compact and Predictable Space
|
This paper proposed a storing approach for trie structures, called coordinate
hash trie. The basic idea is using a global hash table with a special hash
function to store all edges of a trie. For a trie with $n$ nodes and an
alphabet with size $m$, the execution time of finding, inserting and deleting a
child node, is $O(1)$ for the average case, $O(m)$ for the worst case. The
space used by this approach is $O(n)$, unrelated to $m$. The constant of space
consumption is predictable, with no need for reallocation or resizing. In
addition, this approach is very easy to implement.
|
Yuxuan Dong
|
2023-02-06T20:38:56Z
|
http://arxiv.org/abs/2302.03690v4
|
# Storing a Trie with Compact and Predictable Space
###### Abstract
This paper proposed a storing approach for trie structures, called _coordinate hash trie_. The basic idea is using a global hash table with a special hash function to store all edges of a trie. For a trie with \(n\) nodes and an alphabet with size \(m\), the execution time of finding, inserting and deleting a child node, is _O(1)_ for the average case, _O(m)_ for the worst case. The space used by this approach is _O(n)_, unrelated to \(m\). The constant of space consumption is predictable, with no need for reallocation or resizing. In addition, this approach is very easy to implement.
## 1 Introduction
The problem of searching, inserting, and deleting a string in a string set, arises frequently in programming. Trie [4] is a widely used data structure for this problem.
A trie is a tree for storing a set of strings. Each edge of a trie is labeled with a symbol in the alphabet. Figure 1 shows a trie of strings {he,she,his,hers}, which were inserted in order. Node 0 is the root node. A double circle node denotes a _terminal_ node where a string terminates.
A trie has three basic operations.
* **Node walking:** Given a node \(\mathbf{x}\) and symbol \(\mathbf{y}\), find the child node \(\mathbf{z}\) of \(\mathbf{x}\) which is reached from \(\mathbf{x}\) by the symbol \(\mathbf{y}\);
* **Node insertion:** Given a node \(\mathbf{x}\) and symbol \(\mathbf{y}\), create a new child node \(\mathbf{z}\) of \(\mathbf{x}\) and connect \(\mathbf{x}\) and \(\mathbf{z}\) with the symbol \(\mathbf{y}\);
* **Node deletion:** Given a node \(\mathbf{x}\) and symbol \(\mathbf{y}\), delete the child leaf node \(\mathbf{z}\) of \(\mathbf{x}\) which is connected with the symbol \(\mathbf{y}\).
Suppose there are \(\mathbf{n}\) nodes in a trie, and the size of the alphabet is \(\mathbf{m}\). If the execution time of basic operations is \(\mathbf{f(n,m)}\), then the execution time of searching, inserting, and deleting a string with length \(\mathbf{l}\) would all be \(\mathbf{O(l\mathbf{f(n,m)})}\).
The most straightforward and time-efficient implementation of trie is using a direct-mapped array for each node to store its children. This approach is called the _direct-mapped trie_ in this paper. A direct-mapped trie can be represented by a two-dimension array \(\mathbf{A[n][m]}\). If there is an edge from \(\mathbf{i}\) to \(\mathbf{k}\), labeled with the symbol \(\mathbf{j}\), we will have \(\mathbf{A[i][j]=k}\). If there is no such edge, \(\mathbf{A[i][j]}\) is set to \(0\). This works because there is no edge pointing to the root node.
The direct-mapped approach has \(\mathbf{f(n,m)=O(1)}\), thus it's very efficient. The matrix \(\mathbf{A}\), however, takes \(\mathbf{O(nm)}\) space. This makes the direct-mapped trie unpractical for large alphabets or devices with a restricted primary memory. The matrix \(\mathbf{A}\) is usually very sparse. For example, in figure 1 with the alphabet {a,b,c,...,z}, there are 26 columns in each row in \(\mathbf{A}\), but a row contains at most two valid elements. The sparse nature of \(\mathbf{A}\) gives us opportunities for compression.
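For illustration, a minimal sketch of a direct-mapped trie is shown below; class and method names are ours, only the three basic operations are implemented, and each node simply stores a full row of \(m\) child slots, which is what makes the space cost _O(nm)_.

```python
class DirectMappedTrie:
    """Direct-mapped trie: children[i][j] = k means node i reaches child k via symbol j;
    0 means "no such child" (node 0 is the root, and nothing points to it)."""
    def __init__(self, m: int):
        self.m = m
        self.children = [[0] * m]   # node 0 is the root
        self.terminal = [False]

    def walk(self, x: int, y: int) -> int:
        return self.children[x][y]            # 0 if there is no such child

    def insert_child(self, x: int, y: int) -> int:
        z = len(self.children)                # index of the new node
        self.children.append([0] * self.m)
        self.terminal.append(False)
        self.children[x][y] = z
        return z

    def delete_child(self, x: int, y: int) -> None:
        self.children[x][y] = 0               # assumes the child is a leaf
```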
Several efforts were made to compress a trie. However, some of these approaches significantly increase the execution time of one or more basic operations. Others make the actual space consumption hard to estimate.
This paper proposed an approach for storing a trie. The execution time of basic operations is _O(1)_ for the average case, _O(m)_ for the worst case. The space consumption is _O(n)_. Because our approach requires no resizing or reallocation, the actual space consumption is predictable. Thus it's practical for large alphabets and devices with a restricted primary memory.
## 2 Our Approach
For a trie with \(n\) nodes and an alphabet of size \(m\), we use an integer \(\mathbf{x}\in[0,n)\) to denote a node, and an integer \(\mathbf{y}\in[0,m)\) to denote a symbol in the alphabet.
Our approach, called the _coordinate hash trie_, uses a global hash table, called the _edge table_, to store all edges in the trie. The edge table represents a key-value dictionary. The edge from node \(\mathbf{x}\) to node \(\mathbf{z}\), labeled with \(\mathbf{y}\), is represented by a dictionary item \((\mathbf{x},\mathbf{y})\to\mathbf{z}\) in the edge table. \((\mathbf{x},\mathbf{y})\) is called the _edge key_. \(\mathbf{z}\) is called the _edge value_.
The hash function used in our approach is
\[\mathbf{h}(\mathbf{x},\mathbf{y})=(\mathbf{x}m+\mathbf{y})\bmod\mathbf{H},\]
where \(\mathbf{H}\) is the number of slots in the edge table.
The trie contains \(\mathbf{n-1}\) edges. Thus we could take \(\mathbf{H}=(\mathbf{n-1})/\mathbf{\alpha}\), where \(\mathbf{\alpha}\) is a positive real number constant, known as the load factor of the hash table.
Basic operations of trie thus become search, insertion, and deletion operations of hash table.
At first glance, accessing an item in the edge table may require searching the whole table in the worst case. However, we will prove that, with the above hash function, the execution time can be bounded by _O(m)_ in the worst case.
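The scheme can be sketched in a few lines of Python; the following is only an illustration of the edge-table idea using a chained hash table (the load factor, table sizing, and names are our own choices, and the referenced C implementation may differ in detail):

```python
# Minimal sketch of a coordinate hash trie: all edges live in one chained hash
# table keyed by (parent node x, symbol y); the value is the child node z.
class CoordinateHashTrie:
    def __init__(self, m, max_nodes, alpha=0.5):
        self.m = m
        self.H = max(1, int((max_nodes - 1) / alpha))   # number of slots
        self.slots = [[] for _ in range(self.H)]        # chaining
        self.terminal = [False] * max_nodes
        self.n = 1                                      # node 0 is the root

    def _h(self, x, y):
        return (x * self.m + y) % self.H                # the hash function above

    def walk(self, x, y):
        for (kx, ky, z) in self.slots[self._h(x, y)]:
            if kx == x and ky == y:
                return z
        return 0                                        # no such child

    def insert_edge(self, x, y):
        z = self.n
        self.n += 1
        self.slots[self._h(x, y)].append((x, y, z))
        return z

    def insert(self, word):
        x = 0
        for y in word:
            z = self.walk(x, y)
            x = z if z else self.insert_edge(x, y)
        self.terminal[x] = True

    def search(self, word):
        x = 0
        for y in word:
            x = self.walk(x, y)
            if x == 0:
                return False
        return self.terminal[x]

sym = lambda c: ord(c) - ord('a')
t = CoordinateHashTrie(m=26, max_nodes=64)
for w in ["he", "she", "his", "hers"]:
    t.insert([sym(c) for c in w])
print(t.search([sym(c) for c in "his"]), t.search([sym(c) for c in "hi"]))  # True False
```

Node deletion would simply remove the matching triple from its slot.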
## Complexity Analysis
The space consumption of a coordinate hash trie is clearly _O(n)_. The execution time of the basic operations of a coordinate hash trie depends on how the hash table is implemented. We assume an implementation that meets the following conditions.
**Assumption 1**: The average execution time of searching, inserting, and deleting an item in the hash table is _O(1)_;
**Assumption 2**: The worst execution time of searching, inserting, and deleting an item keyed with \(k\) in the hash table, is at most proportional to the number of keys, which have the same hash value as \(k\) has.
Most implementations of hash tables meet or approximately meet assumptions 1 and 2.
**Theorem 1** The average execution time of node walking, node insertion, and node deletion of a coordinate hash trie is \(O(1)\).
**Proof** Straightforward from assumption 1. \(\square\)
To give the worst execution time, we need to figure out, for a given edge key \((x,y)\), how many edge keys have the same hash value as \((x,y)\) has.
We give a definition for convenience first.
**Definition 1** The _GCD coordinate_ of an edge key \((x,y)\) is a tuple \((x^{\prime},y^{\prime})\) which meets:
\[\begin{cases}\boldsymbol{x^{\prime}}=\lfloor(\boldsymbol{x}m+\boldsymbol{y}) /\boldsymbol{gcd(H,m)}\rfloor,\\ \boldsymbol{y^{\prime}}=(\boldsymbol{x}m+\boldsymbol{y})\bmod\boldsymbol{ gcd(H,m)},\end{cases}\]
where we use \(gcd(a,b)\) to denote the greatest common divisor of non-negative integers \(a\) and \(b\).
Edge keys and their GCD coordinates are one-to-one mapped. In addition, for each edge key \((x,y)\) and its GCD coordinate \((x^{\prime},y^{\prime})\), we have
\[\begin{cases}\boldsymbol{x}m+\boldsymbol{y}=\boldsymbol{x^{\prime}} \boldsymbol{gcd(H,m)}+\boldsymbol{y^{\prime}},\\ \boldsymbol{0}\leq\boldsymbol{x^{\prime}}<\boldsymbol{n}\boldsymbol{m}/ \boldsymbol{gcd(H,m)},\\ \boldsymbol{0}\leq\boldsymbol{y^{\prime}}<\boldsymbol{gcd(H,m)}.\end{cases}\]
Using the concept of GCD coordinates, we can give the condition under which two edge keys have the same hash value.
**Lemma 1** Given two edge keys \((x_{1},y_{1})\) and \((x_{2},y_{2})\), \(h(x_{1},y_{1})=h(x_{2},y_{2})\) if and only if
\[\begin{cases}\boldsymbol{x^{\prime}_{1}}\equiv\boldsymbol{x^{\prime}_{2}}& \boldsymbol{(mod\;H/\boldsymbol{gcd(H,m)})},\\ \boldsymbol{y^{\prime}_{1}}=\boldsymbol{y^{\prime}_{2}},\end{cases}\]
where \((x^{\prime}_{i},y^{\prime}_{i})\) is the GCD coordinate of \((x_{i},y_{i})\), for \(i=1,2\).
**Proof** Take \(G=gcd(H,m)\).
\[\begin{array}{rl}&h(x_{1},y_{1})=h(x_{2},y_{2})\\ \implies&x_{1}m+y_{1}\equiv x_{2}m+y_{2}\pmod{H}\\ \implies&x_{1}^{\prime}G+y_{1}^{\prime}\equiv x_{2}^{\prime}G+y_{2}^{\prime}\pmod{H}\\ \implies&(x_{1}^{\prime}-x_{2}^{\prime})G+(y_{1}^{\prime}-y_{2}^{\prime})\equiv 0\pmod{H}\\ \implies&(x_{1}^{\prime}-x_{2}^{\prime})G+(y_{1}^{\prime}-y_{2}^{\prime})\ \text{is a multiple of}\ H\\ \implies&(x_{1}^{\prime}-x_{2}^{\prime})G+(y_{1}^{\prime}-y_{2}^{\prime})\ \text{is a multiple of}\ G\quad(\text{since }G\mid H)\\ \implies&y_{1}^{\prime}-y_{2}^{\prime}\ \text{is a multiple of}\ G.\end{array}\]
Because \(|y_{1}^{\prime}-y_{2}^{\prime}|<G\), we have \(y_{1}^{\prime}-y_{2}^{\prime}=0\). Thus we have \(y_{1}^{\prime}=y_{2}^{\prime}\).
Then we get
\[\begin{array}{ccc}&x_{1}^{\prime}G\equiv x_{2}^{\prime}G&\pmod{H}\\ \implies&x_{1}^{\prime}\equiv x_{2}^{\prime}&\pmod{H/G}.\end{array}\]
Another direction of the proof is similar.
\(\square\)
**Theorem 2** The worst execution time of node walking, node insertion, and node deletion of a coordinate hash trie is at most proportional to \(\lceil\alpha m\rceil\).
**Proof** For a given edge key \((x_{0},y_{0})\) with the GCD coordinate \((x_{0}^{\prime},y_{0}^{\prime})\), we denote by \(t\) the number of edge keys which have the same hash value as \((x_{0},y_{0})\).
According to assumption 2, the worst execution time of node walking, node insertion, and node deletion in the coordinate hash trie is at most proportional to \(t\).
According to lemma 1, \(t\) is equal to the number of \(x^{\prime}\) which meet
\[x^{\prime}\equiv x_{0}^{\prime}\pmod{\frac{H}{gcd(H,m)}},\]
where \(x^{\prime}\) is the x-component of the GCD coordinate of an edge key.
Because \(0\leq x^{\prime}<nm/gcd(H,m)\), there are at most \(nm/gcd(H,m)\) possible values of \(x^{\prime}\). Thus we have
\[t\leq\lceil\frac{nm/gcd(H,m)}{H/gcd(H,m)}\rceil=\lceil\alpha m\rceil.\]
\(\square\)
## Related Work
A general approach for compressing a sparse matrix, called _row displacement_, was proposed by [1], [6], and analyzed by [5].
An approach based on row displacement, known as the _double-array trie_, was proposed by [2]. The double-array trie keeps _O(1)_ worst-case execution time for node walking and deletion. However, a tight bound on the space used by a double-array trie is hard to estimate from the parameters \(n\) and \(m\). In addition, the worst execution time of node insertion is significantly increased. Under the assumption that a double-array trie uses _O(n+cm)_ space, where \(c\) is a constant, the worst execution time of node insertion is _O(nm+cm²)_.
Another approach is using \(n\) binary search trees, one for each node, to store the children of the node. This approach reduces the space of a trie to _O(n)_. However, the execution time of basic operations is increased to _O(log m)_.
We could also use \(n\) hash tables, each for a node, to store the children of the node. This approach reduces the space of a trie to _O(n)_. This approach also gives _O(1)_ execution time of the basic operations for the average case, and _O(m)_ for the worst case. However, this approach installs \(n\) hash tables with different sizes. Each hash table requires an initial capacity. If the initial capacities are too large, there will be space waste. If the capacities are insufficient, resizing and reallocation must be made. This makes the actual space used by a trie hard to estimate by parameters \(n\) and \(m\). In addition, the resizing and reallocation can significantly affect the execution time if they occur frequently.
## Remarks
A C implementation of the coordinate hash trie is provided online: <[https://github.com/dongyx/chtrie](https://github.com/dongyx/chtrie)>.
Our approach can be generalized to store any sparse matrix. An approach for sparse matrix storage using a similar idea was proposed by [3], but it didn't give a theoretical analysis.
|
2302.12945
|
Individual bias and fluctuations in collective decision making: from
algorithms to Hamiltonians
|
In this paper, we reconsider the spin model suggested recently to understand
some features of collective decision making among higher organisms [A.T.
Hartnett et al., Phys. Rev. Lett. 116 (2016) 038701]. Within the model, the
state of an agent $i$ is described by the pair of variables corresponding to
its opinion $S_i=\pm 1$ and a bias $\omega_i$ towards any of the opposing
values of $S_i$. Collective decision making is interpreted as an approach to
the equilibrium state within the non-linear voter model subject to a social
pressure and a probabilistic algorithm. Here, we push such physical analogy
further and give the statistical physics interpretation of the model,
describing it in terms of the Hamiltonian of interaction and looking for the
equilibrium state via explicit calculation of its partition function. We show
that depending on the assumptions about the nature of social interactions two
different Hamiltonians can be formulated, which can be solved with different
methods. In such an interpretation the temperature serves as a measure of
fluctuations, not considered before in the original model. We find exact
solutions for the thermodynamics of the model on the complete graph. The
general analytical predictions are confirmed using individual-based
simulations. The simulations allow us also to study the impact of system size
and initial conditions in the collective decision making in finite-sized
systems, in particular with respect to convergence to metastable states.
|
Petro Sarkanych, Mariana Krasnytska, Luis Gómez-Nava, Pawel Romanczuk, Yurij Holovatch
|
2023-02-25T00:55:05Z
|
http://arxiv.org/abs/2302.12945v1
|
# Individual bias and fluctuations in collective decision making: from algorithms to Hamiltonians
###### Abstract
In this paper, we reconsider the spin model suggested recently to understand some features of collective decision making among higher organisms [A.T. Hartnett _et al._, Phys. Rev. Lett. **116** (2016) 038701]. Within the model, the state of an agent \(i\) is described by the pair of variables corresponding to its opinion \(S_{i}=\pm 1\) and a bias \(\omega_{i}\) towards any of the opposing values of \(S_{i}\). Collective decision making is interpreted as an approach to the equilibrium state within the non-linear voter model subject to a social pressure and a probabilistic algorithm. Here, we push such physical analogy further and give the statistical physics interpretation of the model, describing it in terms of the Hamiltonian of interaction and looking for the equilibrium state via explicit calculation of its partition function. We show that depending on the assumptions about the nature of social interactions two different Hamiltonians can be formulated, which can be solved with different methods. In such an interpretation the temperature serves as a measure of fluctuations, not considered before in the original model. We find exact solutions for the thermodynamics of the model on the complete graph. The general analytical predictions are confirmed using individual-based simulations. The simulations allow us also to study the impact of system size and initial conditions in the collective decision making in finite-sized systems, in particular with respect to convergence to metastable states.
## 1 Introduction
Collective decision making is omnipresent in biological systems. It can be observed across a wide range of scales ranging from cellular ensembles [1], via groups of social animals [2, 3] to entire societies or colonies [4, 5]. The ability of biological collectives to make accurate collective decisions, even when individuals have limited information about the state of the group and of the environment, has inspired researchers across many disciplines, including physicists studying complex systems and self-organization [6, 7], and engineers interested in bio-inspired collective decision algorithms for artificial, distributed multi-agent systems [8, 9].
While there have been significant advances in our understanding of collective decision making over the past decades, providing insights for example on the role of different interaction networks [10], correlated information [11] or agent heterogeneity [12], many fundamental questions still remain open. On the theoretical side, idealized physics-inspired models provide a very valuable tool to investigate universal properties of collective decision making in very large systems and at large time-scales, where the microscopic details of the individual deliberation process and the interactions between agents can be ignored. On the one hand such models can be simulated numerically very efficiently; on the other hand, even more importantly, by relying on analogies to classical spin models in physics they allow one to employ analytical methods from statistical physics to deepen our understanding of the role of various factors. Recently, spin models have been used for example to model the decision making of animal groups on the move, and allowed to establish a bridge to neuronal decision-making within a single individual [13, 14].
Recently, Hartnett _et al._[15] proposed a lattice spin model for a binary collective decision making task. Their main aim was to investigate the role of heterogeneity in preferences as well as the role of unbiased individuals in collective decision making. The model and the corresponding study were motivated by previous empirical and theoretical work highlighting the unexpected impact of unbiased individuals on collective decision making in groups featuring individuals with conflicting biases [16].
In this work, we want to push the physics analogy further: whereas Hartnett _et al._ defined their model at the algorithmic level, by formulating an update rule for individual agents (spins) including coupling between agents via a social field, we follow a more classical statistical physics approach and first formulate Hamiltonians for the system, which in turn enables us to calculate partition functions and analyze the free energy landscape to identify steady-state solutions. A core difference of many collective decision making models from the classical spin models in physics is that the social coupling between agents (or spins) is not given by a (linear) superposition of pair-wise interactions, but typically is assumed to follow some non-linear response of the focal agent to an effective (local) social field, e.g. a threshold-like response. In the derivation of macroscopic theories this typically leads to the emergence of infinite hierarchies of coupled multi-agent (multi-spin) terms, requiring some sort of closure. We show that this is also the case here, and that depending on the type of approximation made, we arrive at different Hamiltonians.
In contrast to Hartnett _et al._, for the sake of analytical tractability we will focus here on the case of fully-connected graph. This allows the derivation of exact solutions for the free energy from a given Hamiltonian. However, we will compare selected analytical results with agent-based simulations, and our general approach also sets the stage for future investigation of different graph structures. The rest of the paper is arranged as follows: in the next Section 2 we describe the algorithm of Ref. [15] and suggest possible Hamiltonians of many-agent models that adhere certain features of this algorithm. In Section 3 we derive exact expressions for the partition functions of the suggested models and discuss their equilibrium thermodynamic properties. These are compared and confirmed by our numerical simulations presented in Section 4. We end by conclusions and outlook in Section 5.
## 2 Models: From the algorithm to Hamiltonians
In this section we briefly describe an algorithm of collective decision making in a group of biased individuals (subsection 2.1) and suggest two spin Hamiltonians (subsections 2.2, 2.3, correspondingly) that share certain features of this algorithm. Let us note from the very beginning that the algorithm described in subsection 2.1 defines the local dynamical update that should lead the multi-agent system to the stable state. In the subsequent two subsections we will be interested in the equilibrium properties of the stable state, leaving aside the way the system approaches equilibrium. In turn, this enables one to introduce different static models, as we discuss in subsections 2.2 and 2.3.
### The model and its algorithmic interpretation
The model suggested by Hartnett _et al._ [15] describes collective decision making in an inhomogeneous population of \(N\) individuals that consists of three groups: two groups that favour conflicting opinions (the informed or biased individuals) and one group of uninformed individuals that do not have any bias towards a preferred outcome. The opinion of an individual is described by a binary 'spin' variable \(S_{i}=\pm 1\), \(i=1,...,N\). Each individual may or may not exhibit a bias regarding its preferred state. The bias of the \(i\)th individual is described by a variable \(\omega_{i}\) that may attain three values \(\{\omega_{0},\omega_{+},\omega_{-}\}\). These values correspond to an unbiased (\(\omega_{i}=\omega_{0}=1\)), biased to +1 (\(\omega_{i}=\omega_{+}\)), and biased to -1 (\(\omega_{i}=\omega_{-}\)) individual. It is assumed that individual biases \(\omega_{0}\), \(\omega_{+}\), and \(\omega_{-}\) are randomly and uniformly distributed with densities \(\rho_{0}\), \((1-\rho_{0})\rho_{+}\), and \((1-\rho_{0})\rho_{-}\), correspondingly. The approach to equilibrium is described within a variant of a discrete-time nonlinear voter model: at each time step an individual is subject to a local social field \(h_{i}\) that originates from its nearest neighbours and is distorted by the individual's bias \(\omega_{i}\):
\[h_{i}=\frac{\omega_{i}n_{i}^{+}-n_{i}^{-}}{\omega_{i}n_{i}^{+}+n_{i}^{-}}\,, \tag{1}\]
with \(n_{i}^{\pm}\) being a number of the \(i\)th individual nearest neighbours with opinion \(+1\) or \(-1\), correspondingly. In turn, the social field exerted on the individual at a time instance \(t\) probabilistically defines its state at time \(t+1\): an individual in state \(-1\) at time \(t\) switches to the state \(+1\) at time \(t+1\) with the probability \(G_{i}\), whereas an individual in state \(+1\) at time \(t\) switches to the state \(-1\) at time \(t+1\) with the probability \(1-G_{i}\). The probability function is chosen to be:
\[G_{i}=\frac{1}{2}\Big{(}1+\frac{\tanh(bh_{i})}{\tanh(b)}\Big{)}\,, \tag{2}\]
and involves a non-linearity parameter \(0\leq b\leq\infty\) (see footnote 1). For the limiting values of \(b\), when the bias is absent (all \(\omega_{i}=1\)), the probability function leads to the classical voter model [17, 18] (at \(b=0\)) or to the majority-rule model [19] (at \(b=\infty\)), which describes, in particular, a zero-temperature discrete-time Ising model dynamics [20]. Choosing intermediate values of \(b\) allows one to interpolate between these two familiar types of dynamics. Summarizing the above description, the model of Ref. [15] is implemented by the following algorithm (a minimal simulation sketch is given after the list):
Footnote 1: In the original formulation of Ref. [15] this parameter is denoted as \(\beta\). Here, we use a different notation to avoid confusion with the inverse temperature.
1. choose an initial configuration of variables \(S_{i}\) and \(\omega_{i}\) for all sites \(i=1,\ldots,N\);
2. calculate a local social field \(h_{i}\), Eq. (1) and probability function \(G_{i}\) (2) for all sites \(i=1,\ldots,N\) ;
3. change states \(S_{i}=-1\) to \(S_{i}=1\) with probability \(G_{i}\), change states \(S_{i}=1\) to \(S_{i}=-1\) with probability \(1-G_{i}\);
4. repeat steps 2 and 3 until an equilibrium state is reached.
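A minimal Python sketch of steps 1–4 on a complete graph reads as follows (the parameter values are illustrative; on the complete graph a node's neighbour counts are the global counts minus its own contribution, while a general-graph version would count each node's actual neighbours):

```python
# Minimal sketch of the update algorithm of Ref. [15] on a complete graph.
# Parameter values are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
N, b, sweeps = 1000, 1.0, 200
rho0, rho_plus, w_plus, w_minus = 0.7, 0.6, 1.5, 0.5

# step 1: initial configuration of opinions S_i and biases omega_i
S = rng.choice([-1, 1], size=N)
r = rng.random(N)
omega = np.where(r < rho0, 1.0,
                 np.where(r < rho0 + (1 - rho0) * rho_plus, w_plus, w_minus))

for _ in range(sweeps):
    Np, Nm = np.sum(S == 1), np.sum(S == -1)
    n_plus = Np - (S == 1)            # step 2: neighbour counts on the complete graph
    n_minus = Nm - (S == -1)
    h = (omega * n_plus - n_minus) / (omega * n_plus + n_minus)   # Eq. (1)
    G = 0.5 * (1.0 + np.tanh(b * h) / np.tanh(b))                 # Eq. (2)
    u = rng.random(N)
    # step 3: -1 -> +1 with probability G, +1 -> -1 with probability 1 - G
    S = np.where(S == -1, np.where(u < G, 1, -1), np.where(u < 1 - G, -1, 1))

# step 4 is approximated here by a fixed number of sweeps
print("final magnetization m =", S.mean())
```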
So far, the model has been analysed by extensive computer simulations on a square 2D lattice. Although the principal goal of these studies was to understand the collective behaviour that arises in animal groups and is influenced by many factors, the model description deliberately concentrated on an impact of underlying global factors making it similar to those used in statistical physics. To make this analogy even closer, below we will analyse several Ising-like models that describe spin systems with inhomogeneities that mimic the above described bias. As it will become evident, algorithmic formulation of the original model may have different counterparts when formulated in terms of many-particle Hamiltonians. Moreover, such an approach will allow us to study influence of thermal fluctuations on collective behaviour in the spin systems under consideration, which may reveal an impact of noise on collective information processing in large groups. Although the models we will consider below can be analyzed for any spatial arrangements of spins, we will concentrate on the case of a complete graph, when each node is connected to all other nodes. Such a choice may correspond to the situations when agents are able to be in contact independently of their proximity in space, also it will enable us to get exact solutions for thermodynamics.
### Biased Ising model (bi-model)
An explicit assumption of the algorithm described in section 2.1 is that the opinion states are shared with neighbours via interaction. Let us proceed by looking on an equilibrium state of the system of interacting agents, each being characterized by a pair of variables \(S_{i},\omega_{i}\) that describe individual opinion (state) and bias towards this state. As it was already mentioned above, we will consider the case, when all individuals interact pair-wise, irrespective on what is the distance between them. To this end, let us consider the following biased Ising model (bi-model) on a complete graph with the Hamiltonian:
\[H_{bi}=-\frac{2}{N}\sum_{i<j}\omega_{i}\omega_{j}S_{i}S_{j}\,. \tag{3}\]
Here and below, the sums span over all \(N\) nodes of the graph and the Ising spins \(S_{i}=\pm 1\) correspond to the opinion states. Hamiltonian (3) generalizes Ising model on a complete graph (Kac model [21, 22]) incorporating dependence on random variables [23, 24]. Although within the static model Hamiltonian considered here one should not expect the one-to-one correspondence with the dynamic algorithm of subsection 2.1, we aim to give further conceptual background to the notion of bias that plays central role in the algorithm. In line with the Hartnett algorithm, we will assume \(\omega_{i}\) to be a function of the opinion state, \(\omega_{i}=\omega(S_{i})\), in such a way that \(S_{i}\) is preferred, provided this state coincides with the bias of the individual \(i\). On contrary, when the value of the opinion state variable \(S_{i}\) does not coincide with the individual's bias, the value of \(\omega_{i}\) disfavours such state. This can be achieved assuming the following dependence:
\[\omega(S_{i})=1+k_{i}S_{i}\,, \tag{4}\]
where \(k_{i}\) attains one of three values:
\[k_{i}=\left\{\begin{array}{ll}\epsilon_{+}\,,&\mbox{biased to +1},\\ \epsilon_{0}\,,&\mbox{unbiased},\\ \epsilon_{-}\,,&\mbox{biased to -1}\,,\end{array}\right. \tag{5}\]
and \(\epsilon_{+}>0\), \(\epsilon_{0}=0\), and \(\epsilon_{-}<0\), are model parameters that govern the strength of a bias. In the model of section 2.1 it is assumed that individual biases \(\omega_{0}\), \(\omega_{+}\), and \(\omega_{-}\) are randomly and uniformly distributed with respective densities \(\rho_{0}\), \((1-\rho_{0})\rho_{+}\), and \((1-\rho_{0})\rho_{-}\). This corresponds to the case when \(k_{i}\) are i.i.d. random variables with a distribution function:
\[P(k)=\rho_{0}\delta(k-\epsilon_{0})+(1-\rho_{0})\rho_{+}\delta(k-\epsilon_{+ })+(1-\rho_{0})\rho_{-}\delta(k-\epsilon_{-})\,. \tag{6}\]
Furthermore, we will assume that \(\{k_{i}\}\) are randomly distributed and fixed in a certain configuration. This assumption is quite natural and mimics the fact that individual bias does not depend on individual location and does not change in time. Such situation corresponds to the so-called 'quenched disorder' [25]. The bi-model will be further analyzed below, in section 3.1.
### Non-interacting spins in a social field (sf-model)
To account for a bias on an agent state we have introduced in the Hamiltonian (3) a pair interaction between biased individuals. Another approach to the model described in subsection 2.1 is to consider a system of non-interacting spins \(S_{i}\) each being a subject of an inhomogeneous local social (magnetic) field \(h_{i}\) (sf-model) with the Hamiltonian:
\[H_{sf}=-\sum_{i=1}^{N}h_{i}S_{i}\,, \tag{7}\]
where the local field \(h_{i}\) is given by Eq. (1). However, the caution expressed above about the correspondence between the model Hamiltonian and the algorithm of section 2.1 is to place here too. Indeed, the notion of the'social field' (1) is implemented in the algorithm via the dynamic update rule (2). Therefore, strictly speaking there is no one-to-one correspondence between the'social fields' of both cases. To proceed further with an explicit expression for \(h_{i}\), since the model is considered on a complete graph, one makes use of the following relations
\[n_{i}^{\pm}=\sum_{j\neq i}\delta_{S_{j},\pm 1}=\sum_{j=1}^{N}\delta_{S_{j},\pm 1 }-\delta_{S_{i},\pm 1}=N_{\pm}-\delta_{S_{i},\pm 1}, \tag{8}\]
where \(N_{\pm}\) denote the number of spins up or down respectively and \(\delta\) is the Kronecker symbol. There is an obvious normalization condition \(N_{+}+N_{-}=N\). The order parameter (mean magnetization per site) reads:
\[m=\frac{1}{N}(N_{+}-N_{-})=\frac{1}{N}\sum_{j=1}^{N}S_{j}\in[-1,1]. \tag{9}\]
Hence
\[N_{\pm}=N\frac{1\pm m}{2}. \tag{10}\]
In terms of the order parameter \(m\), one can rewrite the local magnetic field in a more compact way:
\[h_{i}=\frac{m(\omega_{i}+1)+\omega_{i}-1-\frac{2}{N}(\omega_{i}\delta_{S_{i}, 1}-\delta_{S_{i},-1})}{m(\omega_{i}-1)+\omega_{i}+1-\frac{2}{N}(\omega_{i} \delta_{S_{i},1}+\delta_{S_{i},-1})}\,. \tag{11}\]
In the thermodynamic limit \(N\rightarrow\infty\), terms proportional to \(\frac{1}{N}\) can be neglected leading to the local magnetic field
\[h_{i}=\frac{m(\omega_{i}+1)+\omega_{i}-1}{m(\omega_{i}-1)+\omega_{i}+1}\,. \tag{12}\]
Accordingly, Eq. (7) is the Hamiltonian of a system of non-interacting spins in random local magnetic fields \(h_{i}\) (12). The fields are functions of random variables \(\omega_{i}\) that attain values \(\{\omega_{+},\,\omega_{0},\,\omega_{-}\}\) with given distribution function (6). Depending on the bias, the fields can attain one of three values:
\[h_{+}=\frac{(\omega_{+}+1)m+\omega_{+}-1}{(\omega_{+}-1)m+\omega_{+}+1},\quad h _{0}=m,\quad h_{-}=\frac{(\omega_{-}+1)m+\omega_{-}-1}{(\omega_{-}-1)m+\omega_ {-}+1}\,. \tag{13}\]
Thermodynamics of the sf-model will be considered below in subsection 3.2.
## 3 Equilibrium state and macroscopic observables
In this section we obtain exact solutions for equilibrium thermodynamic properties of models with Hamiltonians suggested above in subsections 2.2 and 2.3.
### Exact solution for the bi-model
Substituting (4) into (3) we rewrite the Hamiltonian as:
\[H_{bi}=-\frac{2}{N}\sum_{i<j}\omega_{i}\omega_{j}S_{i}S_{j}=-\frac {1}{N}\sum_{i\neq j}\omega_{i}\omega_{j}S_{i}S_{j}=\] \[-\frac{1}{N}\sum_{i,j}\omega_{i}\omega_{j}S_{i}S_{j}+\frac{1}{N} \sum_{i}\omega_{i}^{2}\,. \tag{14}\]
Note that there are no restrictions on the sums over \(i,j\) in the first term of the last expression and we used that \(S_{i}^{2}=1\) to derive (14). This property of the spin variable leads to further simplifications of the Hamiltonian. Indeed, as long as \(\omega_{i}\) linearly depends on \(S_{i}\), initially the Hamiltonian (14) contains three- and four-spin interactions: terms proportional to products of three and four \(S_{i}\), correspondingly. However, taking into account the above mentioned property (\(S_{i}^{2}=1\)) one arrives at the following representation of the bi-model Hamiltonian (14):
\[H_{bi}=-\frac{1}{N}\sum_{i,j}S_{i}S_{j}-2\langle k\rangle\sum_{i}S_{i}-N \langle k\rangle^{2}+1+\frac{2}{N}\sum_{i}k_{i}S_{i}+\langle k^{2}\rangle\,, \tag{15}\]
where \(\langle k\rangle=\frac{1}{N}\sum_{i}k_{i}\) and \(\langle k^{2}\rangle=\frac{1}{N}\sum_{i}k_{i}^{2}\) are the mean and mean square of the random variable \(k\). Note that the Hamiltonian (15) is that of the Ising model in a local external field.
To analyse the thermodynamic properties, one defines the partition function for a given configuration of random variables \(\{k\}\):
\[Z_{bi}(\{k\})=\mathrm{Sp}\,e^{-\beta H_{bi}}\,,\qquad\mathrm{Sp}(\ldots)=\prod _{i}\sum_{S_{i}=\pm 1}(\ldots)\,, \tag{16}\]
and \(\beta=1/(k_{B}T)\). In a standard setting, the next step is to define the configuration-dependent free energy \(G_{bi}(\{k\})=-\beta^{-1}\ln Z_{bi}(\{k\})\) and only then to perform an averaging with distribution function (6). However, as we will see below, considering model on a complete graph essentially facilitates the problem. To proceed further, one makes use of the Stratonovich-Hubbard transformation writing for the first term in the Hamiltonian (15):
\[e^{\frac{\beta}{N}\sum_{i,j}S_{i}S_{j}}=e^{\frac{\beta}{N}\left(\sum_{i}S_{i} \right)^{2}}=\sqrt{\frac{N}{4\pi\beta}}\int_{-\infty}^{+\infty}\mathrm{d}x\,e ^{\frac{-N}{4\beta}x^{2}+x\sum_{i}S_{i}}\,, \tag{17}\]
whereas for the whole Hamiltonian one gets:
\[e^{-\beta H_{bi}}=\sqrt{\frac{N}{4\pi\beta}}e^{\beta N\langle k\rangle^{2}-\beta\langle k^{2}\rangle-\beta}\int_{-\infty}^{+\infty}\mathrm{d}x\,e^{\frac{-N}{4\beta}x^{2}}\prod_{i}e^{f(x,k_{i})S_{i}}\,, \tag{18}\]
with
\[f(x,k_{i})=x+2\beta\langle k\rangle-\frac{2\beta}{N}k_{i}\,. \tag{19}\]
Now it is straightforward to take trace (16) and to get for the partition function:
\[Z_{bi}\simeq\int_{-\infty}^{+\infty}\mathrm{d}x\,e^{\frac{-N}{4\beta}x^{2}}e^{ \sum_{i}\ln\cosh f(x,k_{i})}\,. \tag{20}\]
Here and below we omit factors irrelevant for the subsequent analysis. Substituting for \(N\to\infty\) the sum over all sites by the sum over all values of \(k\):
\[\sum_{i=1}^{N}\ln\cosh f(x,k_{i})=N\sum_{\{k\}}P(k)\ln\cosh f(x,k)\,,\]
(in our case \(k\) spans three values (5) and \(P(k)\) is given by (6)) we arrive at the following expression for the partition function:
\[Z_{bi}\simeq\int_{-\infty}^{+\infty}\mathrm{d}x\,e^{-Ng(x)}\,, \tag{21}\]
where
\[g(x) = \frac{1}{4\beta}x^{2}-\rho_{0}\ln\cosh(x+2\beta\langle k\rangle) -(1-\rho_{0})\rho_{+}\ln\cosh(x+ \tag{22}\] \[2\beta\langle k\rangle-\frac{2\beta}{N}\epsilon_{+})-(1-\rho_{0 })\rho_{-}\ln\cosh(x+2\beta\langle k\rangle-\frac{2\beta}{N}\epsilon_{-})\,,\]
and
\[\langle k\rangle=(1-\rho_{0})(\epsilon_{+}\rho_{+}+\epsilon_{-}\rho_{-})\,. \tag{23}\]
This expression gives an exact solution for the partition function. Note that although the partition function was calculated for the fixed (quenched) sequence of random variables \(\{k\}\) the resulting expression does not depend on a particular sequence, but rather of their mean values. This is a result of self-averaging, typical for random spin models on a complete graph [23, 24]. In the thermodynamic limit \(N\to\infty\) keeping the leading terms and taking into account that \(\rho_{+}+\rho_{-}=1\) one gets for the function (22) :
\[g(x)=\frac{x^{2}}{4\beta}-\ln\cosh(x+2\beta\langle k\rangle)\,. \tag{24}\]
With function (24), the integral (21) has the usual form of the partition function of the Ising model in an external field on the complete graph. It is a textbook exercise to take the integral by the steepest descent method getting the following expression for the Gibbs free energy per spin:
\[\beta g(x_{0})=-\lim_{N\to\infty}\frac{\ln Z_{bi}}{N}=\frac{(x_{0}-2\beta \langle k\rangle)^{2}}{4\beta}-\ln\cosh(x_{0}) \tag{25}\]
with \(x_{0}=x_{0}(\beta,\langle k\rangle)\) being the coordinate of \(g(x)\) minimum:
\[\frac{\mathrm{d}\,g(x)}{\mathrm{d}\,x}|_{x=x_{0}}=0,\qquad\quad\frac{\mathrm{ d}^{2}\,g(x)}{\mathrm{d}\,x^{2}}|_{x=x_{0}}>0\,. \tag{26}\]
Typical behaviour of function (25) is shown in Fig. 1. There, we plot \(\beta g(x)\) for \(\langle k\rangle=-0.3\), \(\langle k\rangle=0\), and \(\langle k\rangle=0.3\) at different values of \(T\). Obviously the second case corresponds to the absence of an external field (no bias), whereas the first and the third one are symmetric counterparts. The critical temperature, separating two regimes in Fig. 1b readily follows from (24) at \(\langle k\rangle=0\): \(\beta_{c}^{-1}=k_{B}T_{c}=2\). The first obvious observation in terms of the problem considered here is that any non-zero value of \(\langle k\rangle\) (any bias) leads to non-vanishing value of \(x_{0}\) at any finite temperature \(T\). As it follows from Eq. (23), the value \(\langle k\rangle=0\) is achieved either when all individuals are unbiased (\(\rho_{0}=1\)) or for equal mean strengths of oppositely biased individuals (\(\epsilon_{+}\rho_{+}=-\epsilon_{-}\rho_{-}\)). Another observation is that two minima are present at low temperatures \(0\leq T\leq T_{1}\) (dotted red curves in Figs. 1a,c). Since the integral in (21) is evaluated by the steepest descent method, only the global minimum contributes to the free energy (25) in the thermodynamic limit \(N\to\infty\). For the finite system size however, the local minimum contributes too and corresponds to the metastable state. In particular, such metastable states influence crossover to the stable state, see [26] and references therein for further discussions. For the temperature \(T_{1}\) at which the local minimum disappears one gets:
\[T_{1}=T_{c}\Big{(}1-\Big{[}\frac{9}{4}\langle k\rangle^{2}\Big{]}^{1/3}\Big{)}\,. \tag{27}\]
Relation between \(x_{0}\) and magnetization \(m\) is given by the equation of state:
\[m(h,\beta)=-\Big{(}\frac{\partial g(\beta,h)}{\partial h}\Big{)}_{\beta}\,, \tag{28}\]
where the Gibbs free energy density at the presence of an external magnetic field \(h\) readily follows from Eq. (25)
\[g(\beta,h)=\frac{[x_{0}(\beta,h)-\beta(2\langle k\rangle+h)]^{2}}{4\beta^{2}} -\frac{\ln\cosh(x_{0}(\beta,h))}{\beta}\,, \tag{29}\]
with \(x_{0}(\beta,h)\) being the solution of
\[\frac{x}{2\beta}-\langle k\rangle-\frac{h}{2}-\tanh x=0\,. \tag{30}\]
Substituting (29) into (28) one arrives at the following relation between \(m\) and \(x_{0}\):
\[m(h,\beta)=\frac{x_{0}(\beta,h)}{2\beta}-\langle k\rangle-\frac{h}{2}\,, \tag{31}\]
and the following expression for the free energy:
\[g(\beta,h)=m^{2}-\frac{1}{\beta}\ln\cosh\left(2\beta(m+\langle k\rangle+\frac{h}{2})\right)\,. \tag{32}\]
The magnetization \(m\equiv m(h,\beta)\) is found from the equation for the extremum (26):
\[m=\tanh\left\{2\beta(m+\langle k\rangle+\frac{h}{2})\right\}. \tag{33}\]
Note that the zero temperature solution of Eq. (30) reads
\[\lim_{\beta\rightarrow\infty}\frac{x_{0}(\beta,h)}{2\beta}=1+\langle k\rangle +\frac{h}{2}\,, \tag{34}\]
and leads to a proper normalization of the magnetization given by Eq. (31): \(m(\beta\rightarrow\infty,h)=1\).
In Fig. 2 we show the spontaneous magnetization \(m(\beta,0)\) (33) for different values of \(\langle k\rangle\) ranging from -1 to 1 with a step of 0.2. We will compare other features of the bi-model considered here with those of the algorithmic model in section 4.
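For illustration, the self-consistency equation (33) can be solved numerically by simple fixed-point iteration; in the sketch below the temperatures, the value of \(\langle k\rangle\), and the tolerance are arbitrary choices, and starting from \(m_{0}=\pm 1\) picks up the two low-temperature branches (of which only the global free-energy minimum survives in the thermodynamic limit):

```python
# Sketch: solve m = tanh(2*beta*(m + <k> + h/2)), Eq. (33), by fixed-point iteration.
import numpy as np

def magnetization(beta, k_mean, h=0.0, m0=1.0, tol=1e-12, max_iter=100000):
    m = m0
    for _ in range(max_iter):
        m_new = np.tanh(2.0 * beta * (m + k_mean + 0.5 * h))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

for T in [0.5, 1.0, 1.9, 2.5, 4.0]:          # recall k_B * T_c = 2 for <k> = 0
    beta = 1.0 / T
    print("T =", T,
          "m(+) =", round(magnetization(beta, k_mean=0.3), 4),
          "m(-) =", round(magnetization(beta, k_mean=0.3, m0=-1.0), 4))
```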
### Exact solution for the sf-model
The sf-model partition function \(Z_{sf}(\{\omega\})\) is related to the Hamiltonian \(H_{sf}\) (7) of the non-interacting spins in random social field by taking trace over spins as in Eq. (16). Similar as it was shown in the former subsection for \(Z_{bi}(\{k\})\), the partition function \(Z_{sf}(\{\omega\})\) is self-averaging with respect to the random variables \(\omega\) leading to
\[Z_{sf}=2^{N}\exp\Big{\{}\sum_{i=1}^{N}\ln\cosh(\beta(h_{i}+h))\Big{\}}=2^{N}\exp\Big{\{}N\sum_{\omega}P(\omega)\ln\cosh(\beta(h(\omega)+h))\Big{\}}\,, \tag{35}\]
Figure 2: Temperature behaviour of the spontaneous magnetization \(m(T,0)\) (33) at \(-1\leq\langle k\rangle\leq 1\).
where similar as in section 3.1 we accounted for the homogeneous external field \(h\). Here, the distribution function \(P(\omega)\) is given by Eq. (6), the summation in the last expression spans three values \(\omega=\{\omega_{0},\omega_{+},\omega_{-}\}\), and the corresponding random fields \(h(\omega)\) are given by Eq. (13). In turn, the Gibbs free energy per site reads:
\[-\beta g(\beta,h)=\rho_{0}\ln\cosh(\beta(m+h))+(1-\rho_{0})\rho_{+}\ln\cosh(\beta(h_{+}+h))+(1-\rho_{0})\rho_{-}\ln\cosh(\beta(h_{-}+h))\,, \tag{36}\]
and the magnetization \(m(\beta,h)\) is found from the self-consistency relation. The latter relates the mean magnetization that appears in the Hamiltonian (7) with the mean spin value via:
\[m=\frac{1}{N}\Big{\langle}\sum_{i=1}^{N}S_{i}\Big{\rangle} \tag{37}\]
where the averaging is performed over the Gibbs distribution with the Hamiltonian (7). Substituting (7) into (37) one arrives at:
\[m(\beta,h)=\rho_{0}\tanh(\beta(m+h))+(1-\rho_{0})\rho_{+}\tanh( \beta(h_{+}+h))+\] \[(1-\rho_{0})\rho_{-}\tanh(\beta(h_{-}+h))\,. \tag{38}\]
For the zero-temperature magnetization with no external field \(h=0\), all functions \(\tanh(x)\) in (38) can be replaced by \(\mathrm{sign}(m)\), leading to the equation \(m=\mathrm{sign}(m)\) that has two solutions \(m(\beta\rightarrow\infty)=\pm 1\). Typical behaviour of the solutions of Eq. (38) as functions of temperature is shown in Fig. 3 for \(\rho_{0}=0.7,\rho_{+}=0.6,\rho_{-}=0.4,\omega_{+}=1.5,\omega_{-}=0.5\). Depending on \(T\) there might be up to 3 solutions. Each of them is shown with a different colour in the figure. The stable state corresponds to the solution giving the minimal value of the free energy. For the chosen values of model parameters, it appears to be described by the lowermost curve (red online) in Fig. 3. The value of the free energy for the uppermost blue curve is only slightly higher than for the red one. Therefore, it is reasonable to assume that the blue curve describes the metastable state. For the case illustrated in Fig. 3 the free energy for the metastable state is only about 1% higher than in the stable state. This increases the probability that the system remains in the metastable state. With an increase of temperature the values of all three solutions get closer, with the limiting value \(m(T\rightarrow\infty)=0\).
Figure 3: Solutions of the equation (38) at fixed values of model parameters \(\rho_{0}=0.7,\rho_{+}=0.6,\rho_{-}=0.4,\omega_{+}=1.5,\omega_{-}=0.5\). Different colours represent three branches. The lowermost red branch corresponds to the stable state with minimal free energy.
In Fig. 4 we show the stable state magnetization as a function of temperature keeping the same set of parameters as in Fig. 3 and choosing different bias strengths \(\omega_{-}\). Similarly to the bi-model we discussed in the previous subsection, in the sf-model the magnetization remains non-zero at any finite temperature (no transition is observed) and its sign and value depend on the parameters \((\rho_{0},\rho_{+},\rho_{-},\omega_{+},\omega_{-})\).
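Numerically, the branches of Eq. (38) can be recovered by scanning \(F(m)=\mathrm{RHS}(m)-m\) for sign changes on a grid and refining each root by bisection; the sketch below uses the parameter values quoted for Fig. 3, while the grid size and number of bisection steps are arbitrary choices:

```python
# Sketch: find all solutions of the self-consistency relation (38) at fixed T by
# scanning F(m) = RHS(m) - m for sign changes on a grid and refining by bisection.
import numpy as np

rho0, rho_p, rho_m, w_p, w_m = 0.7, 0.6, 0.4, 1.5, 0.5   # values quoted for Fig. 3

def field(w, m):
    # local field h(omega) of Eq. (13)
    return ((w + 1) * m + w - 1) / ((w - 1) * m + w + 1)

def F(m, beta, h=0.0):
    rhs = (rho0 * np.tanh(beta * (m + h))
           + (1 - rho0) * rho_p * np.tanh(beta * (field(w_p, m) + h))
           + (1 - rho0) * rho_m * np.tanh(beta * (field(w_m, m) + h)))
    return rhs - m

def branches(beta, grid=2000):
    roots = []
    ms = np.linspace(-1.0, 1.0, grid + 1)
    for a, b in zip(ms[:-1], ms[1:]):
        if F(a, beta) * F(b, beta) <= 0:          # sign change: a root lies in [a, b]
            for _ in range(60):                    # bisection
                c = 0.5 * (a + b)
                if F(a, beta) * F(c, beta) <= 0:
                    b = c
                else:
                    a = c
            roots.append(round(0.5 * (a + b), 4))
    return roots

for T in [0.2, 0.6, 1.0, 1.5]:
    print("T =", T, "solutions m =", branches(1.0 / T))
```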
## 4 Numerical simulations
In order to test the theoretical predictions, we performed numerical simulations of the individual-based model on the complete graph. To account for the finite temperature assumed in the theoretical analysis, we extend the Hartnett model by an additional random process that may induce state changes of individual agents: At each time step, irrespective of the social field exhibited, an agent will switch its current state from \(\pm 1\) to \(\mp 1\) with the probability \(p_{noise}\). For \(p_{noise}=0\), we recover the original Hartnett model on a complete graph.
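The additional noise step is a one-line modification; a minimal illustration (with arbitrary values for \(p_{noise}\) and the system size) is:

```python
# Sketch of the additional noise process: each agent flips its opinion with
# probability p_noise, independently of the social field (illustrative values).
import numpy as np

rng = np.random.default_rng(1)
p_noise = 0.05
S = rng.choice([-1, 1], size=1000)            # current opinions after the field update

flip = rng.random(S.size) < p_noise
S = np.where(flip, -S, S)
print("fraction of agents flipped:", flip.mean())
```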
Please note that there exist different possibilities to introduce noise into the system, and our choice was guided by simplicity and numerical convenience. However, there is no direct correspondence between the "microscopic" noise parameter \(p_{noise}\) and the thermodynamic quantity \(T\). Finally, for the extreme choice of \(p_{noise}=1\), we will observe permanent switching of all agent states at each time step. Depending on the initial condition, e.g. an initial high consensus state, this may result in a spuriously synchronized collective switching of the entire system, while maintaining consensus. Therefore, we restrict our analysis to \(p_{noise}\leq 0.8\), which ensures randomization and vanishing consensus irrespective of initial conditions.
Figure 4: Stable state magnetization as a function of temperature at different values of \(\omega_{-}\). The rest of the model parameters are the same as in Fig. 3.
Whereas the dynamical evolution of the system certainly depends on the nonlinearity parameter \(b\) in the Hartnett model, see Eq. (2), the actual stationary states can be assumed in first approximation to be independent of \(b\). Therefore, we fix in our numerical simulation the nonlinearity parameter to \(b=1\).
In Fig. 5, we show typical time courses of the average opinion (magnetization) of single simulations of the Hartnett model with noise on a complete graph. In general, for small \(p_{noise}\) individual runs converge to a state of high consensus \(|m|=1\) but not necessarily to the same average opinion \(m\). Thus, averaged over many simulation runs we observe a bimodal distribution of final collective opinion states. For an unbiased system, e.g. with \(\rho_{+}=\rho_{-}=0\) or \(\rho_{+}=\rho_{-}\), and \(\omega_{+}\omega_{-}=1\), if we start from a fully disordered initial condition \(m=0\), the average probability to observe the final state \(\pm 1\) is \(0.5\).
In a biased system, the probability distribution of the steady state at small \(p_{noise}\) becomes asymmetric, with the opinion favored by the bias being more likely to be observed. However, finite-size fluctuations may always make individual simulation runs converge to the steady state counter to the bias, in particular if the initial condition of the system is a perfectly disordered state (\(|m|(t=0)=0\)).
With increasing noise \(p_{noise}\) the consensus decreases, and we observe an approach towards \(|m|=0\) in the limit of large \(p_{noise}\) (see Fig. 5). Due to finite fluctuations at large noise it is difficult to distinguish in numerical simulations a continuous phase transition predicted from the theory for the unbiased case from a finite, yet arbitrarily small magnetization predicted in the biased case. In general, we observe a transition-like behavior with finite magnetization for small \(p_{noise}\) and effectively vanishing magnetization at large \(p_{noise}\).
Overall, the simulation results confirm qualitatively the analytical predictions but with important differences. The main deviation is the clear bimodality of the stationary numerical solutions in the presence of bias and for small noise (Fig. 6). While the theoretical prediction for spontaneous magnetization in the thermodynamic limit (33) presented in Fig. 1 shows only a single solution corresponding to the global minimum of the free energy, numerical simulations may also converge due to finite-size fluctuations towards metastable states, where at small noise the system dynamics becomes "trapped" (see Fig. 5 and 6a). Corresponding metastable states are consistent with the results obtained from solution of the self consistency equation (Fig. 3). When we interpret the different branches of solutions in Fig. 3 in the sense of dynamical fixed points of the systems behavior, then red and blue branches correspond to globally and locally stable points, respectively, while the yellow branch should correspond to an unstable point. This interpretation predicts that the numerically obtained distribution of final consensus values must depend on the initial conditions of the system, which is indeed
the case: For an initially unbiased system with \(m(t=0)=0\), we observe a bimodal distribution with only a minority of runs converging to the metastable state (\(+1\) in Fig. 6a). On the other hand, for the same parameters and a system initialized in a consensus state \(m=+1\), we observe that at low noise all the simulations remain trapped in the metastable state. With increasing noise the metastable state is predicted to vanish (Fig. 3). Indeed, at some critical noise value, we observe a jump from a finite, positive average opinion state to a negative opinion state. For a further increase of \(p_{noise}\), we then observe an approach to the undecided state \(m=0\) from below (Fig. 6c,d).
Finally, we test the assumption that the observed deviations reported above are indeed due to finite-size fluctuations preventing the system from converging to the global minimum of free energy. We simulate a system with a bias to the negative opinion for increasing system sizes with the \(m=0\) initial condition. As can be seen in Fig. 7a for small systems (\(N=1000\)) we observe a relatively high probability for the simulations to converge to the metastable state (positive opinion for \(p_{noise}<0.1\)), which however decreases with system size, and eventually for \(N=20000\) practically vanishes. To make it more quantitative, we define the ratio \(\mathcal{R}=\mathcal{N}_{+}/\mathcal{N}_{-}\), where \(\mathcal{N}_{+}\) is the number of realizations where the steady-state magnetization is positive (metastable state) and \(\mathcal{N}_{-}\) is the number of realizations where the steady-state magnetization is negative (global minimum of the free energy). For a system biased to the negative opinion, as discussed above, a value of \(\mathcal{R}=0\) corresponds to all simulations converging to the (global) minimum of the free energy, while a diverging ratio (\(\mathcal{R}\rightarrow\infty\)) would correspond to all simulations being trapped in the metastable state. In Fig. 7b we show \(\mathcal{R}\) as a function of the inverse system size \(1/N\), and we observe that the curve approaches the origin for decreasing \(1/N\) (increasing system size), which shows that in the thermodynamic limit (\(N\rightarrow\infty\)), we will observe only the results corresponding to the global minimum of the free energy as predicted by theory in Fig 2.
Figure 6: **a**. Results of the numerical simulations using the same parameters as Figure 3. The results are presented as a two dimensional histogram (or heat-map) where, for each value of the noise strength, we computed 200 realizations and then computed the probability to observe a given value of the steady-state magnetization. The value of this probability is given in a black and white scale. The initial conditions were such that half of the nodes were in state \(+1\) and the rest in state \(-1\). **b**. Average values of the steady-state magnetization for different values of the parameter \(\omega_{-}\) computed over 200 realizations. The rest of the parameters are the same as Figure 4. Note that the heat-map on the left is complementary to the purple curve [\(\omega_{-}=0.5\)]. Thus, although we observe that the mean magnetization for \(\omega_{-}=0.5\) is close to zero, these average values arise from [almost] symmetric values observed in the heat-map. The initial conditions were such that half of the nodes were in state \(+1\) and the rest in state \(-1\). **c**. Numerical results using the same parameters as subplot **a** but an initial condition where all the nodes in the network are in state \(+1\). **d**. Numerical results for parameter \(\omega_{-}=0.5\) using the same parameters as subplot **b** but an initial condition where all the nodes in the network are in state \(+1\). For all plots in this figure we used a system of size \(N=1000\).
## 5 Conclusions and outlook
The typical approach in statistical physics is to formulate a model in terms of a Hamiltonian, derive for it analytical results in the thermodynamic limit, and then to test the analytical predictions with numerical simulations by implementing a dynamic algorithm consistent with the initially formed Hamiltonian. However, when physics inspired spin models are used to study collective decision making and opinion dynamics, they are typically formulated in terms of dynamical models, agent-based models [27, 15, 28]. Thus, here we follow partly a "reverse" approach: Starting from an agent-based model for collective decision making of agents with heterogeneous preferences, originally introduced by Hartnett _et al._ and previously studied numerically on lattices [15], our aim was to formulate a many-particle Hamiltonian and partition functions and investigate possible analytical solutions.
We consider two different descriptions of social interactions between individuals with a bias: First, the biased Ising model (bi-model), a superposition of pairwise interactions with the individual bias increasing or decreasing the interaction strength with neighbors holding preferred or disliked opinion, respectively (Sec. 2.2). Second, biased agents responding to a local social field (sf-model) generated by their neighbors (Sec. 2.3). While these approaches are intuitive and straightforward to justify, they are certainly not the only two ways that can be taken. However, both models that we consider are exactly solvable in the case of a complete graph topology of the agent interaction network. With this we are able to single out effects inherent to an all-to-all coupling from those induced by a specific network structure such as the 2D lattice considered in the original paper by Hartnett _et al._. Analytically, we restrict ourselves here to the discussion of stationary states, leaving the questions of dynamics and relaxation towards the steady state for future work.
Figure 7: **a-c**. Numerical results obtained using the same parameters as the heat-map in Figure 6 for three different system sizes \(N\in[1000,10000,20000]\). **d**. Numerical results of the ratio \(\mathcal{R}\) as a function of the inverse of the system size \(1/N\) for eight different system sizes \(N\in[100,200,400,800,1600,3200,6400,12800]\). The rest of the parameters are the same as the ones of the right panel of Figure 6 for only one noise strength value of \(0.005\).
The bi-model (Sec. 2.2) does not exhibit metastable states in the thermodynamic limit (\(N\rightarrow\infty\)). By the steepest descent calculations they vanish in this limit. In contrast to that, in numerical simulations with finite \(N\), we can observe a finite probability of individual simulation runs converging to a metastable state from a zero-magnetization initial condition. However, in agreement with the theoretical prediction, the probability of observing metastable states decreases with increasing system size \(N\), and eventually vanishes for sufficiently large \(N\) (Fig. 7).
In the sf-model (Sec. 2.3) the steady state of the magnetization can be obtained from the self-consistency relation. Here, we can also identify solutions corresponding to the metastable states which can be observed in the numerical simulations at finite \(N\). The theory predicts for example the disappearance of the metastable state with increasing temperature corresponding to a saddle-node bifurcation, Fig.3, which are linked to the possibility of sudden jumps (discontinuous) in the average magnetization. We were able to directly confirm these predictions in our numerical simulations. In Fig. 6c,d, we show the result for the steady state (average) magnetization for initial conditions strongly favoring relaxation to the metastable state. Here, despite an overall bias towards the negative option at low noise (low temperature) we observe an average positive magnetization, corresponding to a metastable state. However, at a critical noise value we see a discontinuous jump of the magnetization from the positive (metastable) average opinion to a negative one, which corresponds to the globally stable solution. This phenomenon could be potentially relevant for opinion dynamics in real-world social systems. For example one can imagine a population of agents that were initially unbiased, or biased towards option \(+1\), reaching steady state consensus with \(m>0\). As long as the perturbations (noise) are small, even if the preferences of the agents shift slowly towards an overall bias to the negative option \(-1\), e.g. by previously unbiased individuals assuming a negative bias, the average opinion will remain "locked" to the positive one. However, a change in noise level, or a sufficiently large perturbation, can then exhibit a sudden shift of the average opinion towards the negative consensus opinion, aligned with the underlying negative bias of the agent population. Such tipping-points in social and socio-ecological systems are being widely discussed [27, 29], and have received attention for example in the context of climate action and sustainability transition [30, 31].
Considering the complete graph allows for exact analytical solutions, but it cannot account for the effects of a particular network structure. The most important difference
with respect to the original model on a 2D lattice [15] is that on a complete graph we always observe full consensus: the lack of spatial structure means that (random) local aggregations of individuals with the same bias cannot self-reinforce and "shield" themselves against a majority opinion, which would otherwise prevent full consensus. On the 2D lattice, the density of unbiased individuals was shown to be crucially important for facilitating consensus by breaking up locked-in spatial domains of opposite opinion. On a complete graph, the density of unbiased individuals modulates the overall bias in the system and thus the equilibrium magnetization. However, it also controls the structure and stability of metastable solutions, as determined via the self-consistency approach of the social field model.
In this study, we have demonstrated how two different Hamiltonians can be formulated based on reasonable assumptions about the nature of social interactions, for a given behavioral algorithm of collective decision making. We have obtained exact steady-state solutions for both Hamiltonians on an "all-to-all" interaction network, using different analytical methods. Our work highlights the power of analytic methods rooted in statistical physics to provide a deep understanding of complex social dynamics. While the results obtained on a complete graph can be expected to be similar to the steady state of the system on random networks (Erdos-Renyi graphs), other network topologies may yield different results. Therefore, our results provide a solid starting point for future investigations of the dynamical behavior of the system, such as convergence to a steady state, and the impact of complex network topologies, including lattices, small-world, or scale-free networks, resembling real-world cases.
## Acknowledgements
We acknowledge support by Abel Jonen in piloting/testing individual-based model simulations. This work was supported by the BMBF Bridge2ERA program, projekt 01DK20044 ('Complex networks: self-organization and collective information processing'); Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy--EXC 2002/1 'Science of Intelligence', project 390523135 (LG-N and PR); National Academy of Sciences of Ukraine, project KPKBK 6541030 (PS, MK, and YuH). YuH acknowledges useful discussions with Yuri Kozitsky (Lublin). MK and YuH acknowledge the hospitality of members of Pawel Romanczuk lab when staying at the Humboldt University Berlin.
|
2301.05969
|
The Role of Heuristics and Biases During Complex Choices with an AI
Teammate
|
Behavioral scientists have classically documented aversion to algorithmic
decision aids, from simple linear models to AI. Sentiment, however, is changing
and possibly accelerating AI helper usage. AI assistance is, arguably, most
valuable when humans must make complex choices. We argue that classic
experimental methods used to study heuristics and biases are insufficient for
studying complex choices made with AI helpers. We adapted an experimental
paradigm designed for studying complex choices in such contexts. We show that
framing and anchoring effects impact how people work with an AI helper and are
predictive of choice outcomes. The evidence suggests that some participants,
particularly those in a loss frame, put too much faith in the AI helper and
experienced worse choice outcomes by doing so. The paradigm also generates
computational modeling-friendly data allowing future studies of human-AI
decision making.
|
Nikolos Gurney, John H. Miller, David V. Pynadath
|
2023-01-14T20:06:43Z
|
http://arxiv.org/abs/2301.05969v1
|
# The Role of Heuristics and Biases During Complex Choices with an AI Teammate
###### Abstract
Behavioral scientists have classically documented aversion to algorithmic decision aids, from simple linear models to AI. Sentiment, however, is changing and possibly accelerating AI helper usage. AI assistance is, arguably, most valuable when humans must make complex choices. We argue that classic experimental methods used to study heuristics and biases are insufficient for studying complex choices made with AI helpers. We adapted an experimental paradigm designed for studying complex choices in such contexts. We show that framing and anchoring effects impact how people work with an AI helper and are predictive of choice outcomes. The evidence suggests that some participants, particularly those in a loss frame, put too much faith in the AI helper and experienced worse choice outcomes by doing so. The paradigm also generates computational modeling-friendly data allowing future studies of human-AI decision making.
## Introduction
The superiority of passive, algorithmic decision making is dogma in the behavioral sciences [13, 14]. This is not a surprise given that even poorly-tuned models outperform humans [13]. Nevertheless, researchers have documented a strong aversion to relying on algorithms [12, 15]. There is increasing evidence, however, that people are more open to input from algorithmic decision aids [10]. This shift towards a more trusting stance aligns with intuition: combining the proliferation of algorithmic decision aids and the insight that repeated exposure to stimuli can alter the ways in which people respond to them [15, 1] creates a scenario in which it would be surprising if people _did not_ exhibit more trust.
Increasingly, decision aids rely on adaptive rather than passive algorithms. These artificial intelligence (AI) helpers are optimized to function in settings where passive algorithms, such as linear models, underperform relative to human decision making. Although these technologies function and behave in fundamentally different ways from more passive predecessors, researchers largely still study them by relying on traditional empirical approaches: either simple choice experiments (e.g. hogma2019, in which the authors use their evidence to motivate the need for further research, particularly in settings where AI are in use) or _in situ_ (in which there are entire sub-fields, such as human-robot interaction). We argue that a middle ground is needed where the experimental setting matches the technological capabilities of AI but avoids the contextual challenges of _in situ_ experimentation.
We present results from just such an experiment in which participants work with an AI helper to search through complex choice options. We find classic behavioral biases, specifically anchoring and framing effects, are still present in the choice behavior. The biases are predictive of choice outcomes and highlight opportunities to develop AI helpers that can improve outcomes by accounting for these biases. Critically, we engineered the experimental environment such that it generates AI-friendly data. This feature creates future opportunities for training AI helpers using data from an experimental paradigm in which known artifacts of human cognition are present.
## Related Works
### Complex Choices
A blossoming corner of behavioral science is working to understand the nuance of human judgment and decision making during _complex choice_. A complex choice is one in which the relationships between the variables under consideration are nonlinear, which means that even when a choice is characterized by a small number of variables, their interactions can render optimization intractable [16]. Obvious examples of complex choices are home buying, career moves, and mate selection. In any of these examples, changing a single choice variable can dramatically impact how other variables affect the choice. In the case of buying a home, for example, the opportunity to work remotely changes how commuting distance, home size, and a host of other variables interact. It is not just monumental choices that suffer from complexity. Filling a shopping cart given a budget constraint, something that most adults regularly do, can be complex. A shopper must, for example, consider how the myriad of items they buy combine into meals, when they will perish, whether they will satisfy their taste
preferences in the future, etc.
Behavioral scientists have not ignored the unique nature of complex choices. Herbert Simon, for example, noted that many choices are simply intractable due to the interaction of variables (1956). Despite this recognition, complex choice experiments have only recently emerged. Behavioral science historically relied on experimental designs that remove complexity from choices because it allows researchers to easily isolate behavioral nuances. These experiments usually take one of two forms: descriptive, in which all the information needed to make an informed choice is available, or experiential, in which participants must learn about the choice through repeated experience (Jessup et al., 2022).
In descriptive experiments, researchers typically manipulate a single choice feature in a between-subjects design. A classic example from studies of the anchoring heuristic is to ask people about the population density of a city after asking them if the population is more or less than a reference value (Jacowitz and Kahneman, 1995). When the reference is low (high), people report that the city is more sparsely (densely) populated. Such experiments are crucial to dissecting human behavior; however, they are hamstrung by their rigidity. For example, it is well documented that human judgment and decision making is highly dependent on experience, including a person's own thoughts about such experiences (Schwarz, 2004). The very nature of complex choices, however, renders descriptive experimentation near impossible. Moreover, there is a growing body of work that suggests the instances in which it has been adapted may be undermined by other artifacts of human cognition (such as preference reversals) once people gain more experience (Hertwig et al., 2004; Tsetsos, Chater, and Usher, 2012; Erev et al., 2017; Jessup et al., 2022).
The intractability of complex choice has contributed to the development of experiential study paradigms that adapt complex systems models. Two popular complex systems models are the "multi-arm bandit" (Berry and Fristedt, 1985) and the \(NK\) (Levinthal, 1997) paradigms. One experimental paradigm, for example, asks participants to combine \(n\) shapes that interact \(k\) ways to create art for aliens. This paradigm enabled the researchers to investigate how people search, explore, and work together during complex choices (Billinger, Stieglitz, and Schumacher, 2014; Wu et al., 2018; Billinger et al., 2021, 2022). Although these experiments did not directly control for the bias, Billinger et al. (2014) did demonstrate that anchoring on past outcomes, using prior experience as an endogenous variable, can inform future complex choices. Study participants exerted more (less) effort when underperforming (overperforming) relative to their previous performance.
### Complex Choice in Teams
Teaming adds another element of complexity to any choice. In the alien art task, human dyads coordinated their search efforts during a complex choice without external guidance or resources through a mix of turn-taking and simultaneous moves (Billinger et al., 2022). Teams can also facilitate better complex choices by expanding the sampled regions of a problem space. Each team member brings unique knowledge, preferences, and beliefs to the task, which their behavior may reflect. To illustrate, consider an experiment that tasked teams of participants with a simulated entrepreneurship task: managing a lemonade stand (Sommer, Bendoly, and Kavadias, 2020). The teams worked together to learn about and combine the interacting variables, such as different ingredients, price levels, stand locations, etc., to make the lemonade stand profitable. When the task complexity was low, i.e. few interactions and relatively low stochasticity, interactive teams did better than teams of individual performers. This result flipped, however, as complexity increased.
Behavioral scientists are not the only researchers working towards a deeper understanding of teaming during complex choices. The promise of human-machine teams capable of superior performance is one chased by researchers across computer science disciplines. The earliest examples from the literature are likely better described as cooperative efforts--such as airplane piloting and air-traffic control (Hoc, 2000). Early success in these and related domains birthed a sizable, ever-evolving sub-discipline that studies the dynamics of trust in automated systems (Lee and See, 2004). This research area now spans settings from games that offer insights into the belief states of human teammates (Siu et al., 2021; Chong et al., 2022) to military scenarios that challenge common models of trust dynamics (Gurney, Pynadath, and Wang, 2022). The expansive literature on human-AI teams now encompasses topics as diverse as what expectations people hold of AI teammates (Zhang et al., 2021) to endowing AI with the ability to represent the belief states of their human counterparts (Gurney and Pynadath, 2022).
Conspicuously missing, however, from research on human-AI teaming is an investigation of what are arguably the most salient features of human judgment and decision making: heuristics and biases (Gigerenzer and Todd, 1999; Kahneman, 2011). The obvious argument is that for an AI teammate to accurately model its human counterpart it needs to take into consideration the cognitive tools used by the human during choice. Some prior work on complex choice has endogenously tested the role of heuristics and biases, such as Billinger et al. (2014) who demonstrated that anchoring on previous outcomes informs future search behavior. Building on this and other related research, our work exogenously controls for anchoring and framing effects, which are two of the most studied phenomena in the heuristics and biases literature.
### Anchoring and Framing Effects
Anchoring in human cognition is the tendency of final choices to reflect starting information, even if that information is uninformative (Tversky and Kahneman, 1974; Chapman and Johnson, 1999; Epley and Gilovich, 2006). To illustrate, imagine walking down a store aisle and seeing a sign above a sale item proclaiming a given limit per customer. The anchoring effect suggests that you will buy more or less of the sale item depending on the stated limit despite your intentions or preferences (Wansink, Kent, and Hoch, 1998). Similar effects are documented in a host of domains as diverse as retirement savings (Madrian and Shea, 2001), courtroom
sentencing [11], and even in mate selection [12].
Framing effects are similarly robust. The basic idea of framing is that for any prospect, a person has some reference value. Changes are evaluated relative to that value and differently if they are viewed as losses versus gains [10]. The classic empirical demonstration of framing asks participants to consider two interventions to stop the spread of a deadly disease. All participants receive a probability-based option and an alternative. Half receive an alternative framed as a gain (lives saved), while the others receive one framed as a loss (lives lost) but that maintains the same ratio. The observation is a reversal in the number of participants selecting the probability-based option when the choice is framed as a loss [13].
## Experimental Paradigm
We adapted a rugged-landscape search metaphor, which was first introduced in biology to describe evolutionary selection [12], into an interactive, complex choice task [11]. The fundamental idea of this metaphor is that evolutionary pressure, over many generations, guides organisms in a search for an optimal genotype across a genetic landscape where elevation on the landscape represents the fitness of a genotype. This metaphor was generalized in the \(NK\) model [15, 16] and researchers have applied it to an array of different settings, perhaps most famously in the management literature [17]. Our task asks study participants to search such a landscape for its global peak (optimum) by tuning dials, each of which accesses a different landscape dimension. The complex choice that participants face is when to stop searching, i.e., what dial setting to submit as the best for a given landscape. Participants received a bonus payment for making better choices, i.e. submitting higher elevations. Each participant completed four dial tuning tasks, the first two on their own and then two more with an AI helper. Importantly, this task lacks strong contextual cues, unlike prior tasks (making artwork for aliens, managing a lemonade stand, farming, etc.).
### Landscapes as Choice Sets
Landscapes are procedurally generated and unique to each participant. They vary in ruggedness, just like mountain ranges, so even a three-dimensional task can be challenging for a person to solve. The only way that a person can know the value of a given location in the landscape is by visiting it. Depending on the smoothness of a landscape, a person may, however, make inferences about different regions based on information they previously uncovered. Smoothness is determined by the range of possible slopes between neighboring points in the landscape. For the present experiment, participants completed tasks with one (simple) or four (rugged) peaks and a slope setting such that two neighboring points never differed in value by more than 10% of the absolute range of values, which was 33 units. The order of the tasks was randomized, but participants always did a simple and rugged task alone and with the AI helper.
Landscape topologies are created by randomly placing the global peak in a plane, which for this experiment encompassed 24 \(\times\) 24 units. For multi-peaked landscapes, additional peaks are randomly placed by generating a list of candidate locations where they maintain a given level of prominence such that valleys separate peaks. The size and dimensionality of the landscape constrain the number of peaks and their elevations. Once all the peaks are placed, a smoothing algorithm ensures that the slopes between the peaks and valleys meet the needed constraints. Our experimental design implemented an elevation boundary of \([0,32]\) and peaks ranging in the \([26,32]\) interval with the tallest fixed at 32.
The landscapes are rendered so that they are continuous over the edges, as in the right panel of figure 1. This means that rolling the east and west edges of a landscape together forms a smooth transition. The same is true of the north and south edges, thus each landscape forms a toroidal world characterized by peaks and valleys. Shifting the landscapes off of the \([0,32]\) interval facilitates studying the impact of earlier trials on future behavior. The incentivized task is to find, via exploration, the tallest peak in a given world. In the right panel of figure 1, the global peak is the light yellow point in the upper center of the image. A dark blue valley is below it and surrounded by other peaks.
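For concreteness, a minimal sketch of how such a toroidal landscape could be generated is given below; the peak-placement and smoothing logic shown (linear decay from each peak under a 10%-of-range slope cap) is an illustrative assumption, not the authors' exact procedure.

```python
import numpy as np

def toroidal_landscape(size=24, n_peaks=4, peak_range=(26, 32), top=32, seed=0):
    """Generate a toy toroidal landscape: each cell takes the largest value
    contributed by any peak, where a peak's contribution decays linearly with
    wrap-around distance, so neighbors never differ by more than ~10% of the
    value range."""
    rng = np.random.default_rng(seed)
    locs = rng.integers(0, size, size=(n_peaks, 2))      # random peak positions
    heights = rng.uniform(*peak_range, size=n_peaks)
    heights[0] = top                                      # tallest peak fixed at 32

    xs, ys = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    landscape = np.zeros((size, size))
    max_step = 0.10 * top                 # slope cap: <=10% of the range per unit

    for (px, py), h in zip(locs, heights):
        # Wrap-around (toroidal) Chebyshev distance to this peak.
        dx = np.minimum(np.abs(xs - px), size - np.abs(xs - px))
        dy = np.minimum(np.abs(ys - py), size - np.abs(ys - py))
        landscape = np.maximum(landscape, h - max_step * np.maximum(dx, dy))

    return np.clip(landscape, 0, top)

grid = toroidal_landscape()
print(grid.max(), grid.min())             # global peak value and lowest valley
```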
### User Interface
Participants cannot see the landscapes. Instead, they uncover information about a landscape by "tuning" on-screen dials, as shown in the left panel of figure 1. Each dial is associated with a different dimension of the task. Participants submit tuned settings by clicking the "Evaluate Dial Settings" button. A two-dial experiment, for example, allows a person to explore a three-dimensional world, which is what we implemented for the present study. In this case, conceptually, one dial moves along the east-west and the other moves along the north-south dimension of the landscape. The feedback given when a setting is submitted is the elevation, or choice quality, of that dial combination. Importantly, the landscape dimensions, thus the dials, are interdependent and their relationships are nonlinear.
Tuning happens by clicking on a dial handle and dragging it around the underlying ring. As the dial is moved, the letter in the center of the dial changes. Each position on a dial is associated with a letter, which we implemented to simplify feedback. Feedback about the historical dial settings is available in a numbered list below the dials. Participants are able to scroll through their history, see what settings they previously checked, and decide when they are ready to make their choice by clicking the "Finalize Choice" button.
We implemented dials with 24 discrete settings. This matches the coordinate plane used to generate the landscapes. Our decision to use a plane of this size with fixed steps was based on pretesting that suggested such a landscape would yield a search space (576 unique combinations) that participants are unwilling to exhaustively search given the incentive structure. In other words, for most participants, uncertainty would remain around their choice.
We define two broad types of dial usage: exploring and exploiting. Exploration entails looking at regions of the
landscape more than two units from any previously observed location. For example, if a participant observed the dial setting [A,A] on their first move, submitting [A,D] or [X,C] on the next move would constitute an explore. Exploitation is when a participant observes a new location less than three units from any previously observed setting.
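To make the classification concrete, a minimal sketch of this rule is given below; the choice of Chebyshev distance on the raw dial positions is our assumption, picked so that the [A,A] to [A,D] and [A,A] to [X,C] examples above both count as explores.

```python
def is_explore(new_setting, history, threshold=2):
    """Classify a dial submission as an explore (True) or exploit (False).

    A submission is an explore if it lies more than `threshold` units from
    every previously observed setting; the distance metric (Chebyshev on the
    dial positions) is an assumption, not taken from the paper.
    """
    nx, ny = new_setting
    for hx, hy in history:
        if max(abs(nx - hx), abs(ny - hy)) <= threshold:
            return False  # within two units of an earlier setting -> exploit
    return True

history = [(0, 0)]                    # dial setting [A, A]
print(is_explore((0, 3), history))    # [A, D] -> True (explore)
print(is_explore((23, 2), history))   # [X, C] -> True (explore)
print(is_explore((1, 2), history))    # within two units -> False (exploit)
```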
### AI Helper
After completing two tasks, an AI helper took control of the right dial. It, like study participants, did not have direct access to the landscape, thus it learned about the topology through sampling. Importantly, participants knew that the AI would help but not how it functions.
The collaborative search happened by the human first adjusting their dial (including leaving it alone) and then clicking the evaluation button. This initiated the AI helper that took the new participant dial setting and compared two combinations, one using the previous setting of its dial and the other using a new one. The AI was more likely to accept worse values, as well as search distant locations, early on. As the interaction progressed, however, it decreased the likelihood of accepting worse outcomes and its search distance. After evaluating the combinations and selecting one, the new setting was pushed to the feedback window. Control of the interface was then returned to the study participant who had the option of continuing the search or finalizing the choice.
Even though the task is challenging and relatively time-consuming for humans, it is rather simple for most AI. Our research questions are about human behavior when working with an AI helper, not developing better helpers, so we chose to implement a simple agent. The helper uses a stochastic model to determine the likelihood of accepting a worse setting combination and whether to make a big or small leap across the landscape. This is an adaptation of a simulated annealing algorithm. This simple AI ensured richer interactions with the participants and eliminated unwanted variance in the interactions, thus reducing the needed sample size. Lastly, because it could do the task almost instantaneously, we implemented a brief feedback delay to give the sense that the helper was "working."
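A rough sketch of such an annealing-style helper is given below. The cooling schedule, jump sizes, and acceptance rule are illustrative assumptions, not the parameters used in the experiment; `evaluate` stands in for the hidden landscape oracle.

```python
import math
import random

class AnnealingHelper:
    """Sketch of an annealing-style dial policy for the AI helper; the
    schedules and parameters below are assumptions, not the experiment's."""

    def __init__(self, size=24, temperature=5.0, cooling=0.9, radius=8):
        self.size = size
        self.temperature = temperature
        self.cooling = cooling
        self.radius = radius
        self.setting = random.randrange(size)   # current position of the AI's dial

    def step(self, human_setting, evaluate):
        """Pick the AI's next dial setting given the human's dial and an
        elevation oracle `evaluate(human_setting, ai_setting)`."""
        # Candidate jump: larger early on, shrinking as the temperature drops.
        max_jump = max(1, round(self.radius * self.temperature / 5.0))
        candidate = (self.setting + random.randint(-max_jump, max_jump)) % self.size

        delta = evaluate(human_setting, candidate) - evaluate(human_setting, self.setting)
        # Always accept improvements; accept worse settings with a probability
        # that decays with the deficit and the current temperature.
        if delta >= 0 or random.random() < math.exp(delta / self.temperature):
            self.setting = candidate

        self.temperature = max(0.1, self.temperature * self.cooling)
        return self.setting

# Usage sketch: `elevation(h, a)` would query the hidden landscape.
# helper = AnnealingHelper()
# ai_dial = helper.step(human_setting=5, evaluate=elevation)
```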
### Experimental Manipulations
The two-by-two design crosses gain-loss framing with anchoring. Participants in the gain frame attempt to gather as many points as possible, which go towards their bonus payment, thus their feedback is positive. In the loss frame, participants attempt to reduce the number of points lost, thus the feedback is negative. The left panel of figure 1 depicts a gain frame. The task is to maximize the value in both gain and loss framing. Recall that the landscapes are randomly shifted away from the \([0,32]\) interval on which they were drawn. Values cover a 33-unit interval in the gain frame on \([0,100]\), while loss frame values are on \([-100,0]\). Participants in the anchoring treatment received a message immediately above the dials that informed them of the best possible dial setting's value, i.e., as good as they could do on a given task. Participants without an anchor had to discover what constituted a "good" outcome for each task.
Summarizing the present study: each participant completed four incentivized complex choice tasks, two alone and two with an AI helper, in which they tuned dials to find optimal settings. We did not inform participants of the underlying
Figure 1: A screenshot from the experiment is on the left, and a landscape rendering is on the right. All participants saw the same dial and feedback interface, but participants in the anchoring treatment saw the best value possible directly above the dials. A two-by-two image of the treatment conditions is available in the online supplement. Participants did not see a rendering of the landscape.
choice landscapes, which were procedurally drawn and unique to each task. Instead, they were left to uncover insights about the space through experience. Participants did the solo tasks first, but in both the solo and team tasks the order of the one- and four-peaked landscapes was random. Participants were also randomized into a loss or gain framing treatment condition. The tuning values of the loss treatment were negative, and participants worked to diminish the number of lost points. Conversely, tuning values of the gain treatment were positive, and participants worked to gain points. The gain-loss framing treatment was crossed with an anchoring treatment in which half of the participants were informed of the best they could find on each task. All participants worked with the same AI helper, but it was stochastic, meaning that even if two participants made the same choice under similar contexts, the helper may have made different choices. Together, these experimental manipulations facilitate studying human-AI teams when the person is operating under a cognitive heuristic or bias.
## Data and Analyses
We recruited 400 participants via Prolific Academic to complete a study on decision making using dials. We only recruited participants with an approval rate of 100 and who could complete the study on a desktop computer. In the months leading up to data collection, there was an influx of new workers on the platform that significantly skewed the composition of the worker population. Thus, we also restricted participation to workers who joined the platform prior to the influx.1 Funding and IRB constraints restricted us to recruiting English-speaking American citizens. According to Prolific Academic, this left just over 20,000 eligible workers, roughly \(\frac{1}{6}\) of the research pool at recruitment time. The study took about 15 minutes to complete and paid $2.00 plus a bonus of up to $2.00 based on effort. The resulting payments yielded an average pay of $18.40 per hour.
Footnote 1: [https://www.prolific.co/blog/we-recently-went-viral-on-tiktok-heres-what-we-learned](https://www.prolific.co/blog/we-recently-went-viral-on-tiktok-heres-what-we-learned) (We also added this blog to the Wayback Machine at web.archive.org).
Of the 400 participants who completed the experiment, 172 participants identified as male, 218 as female, and eight as other. The average age of a participant was 32 years. 203 participants indicated they were college graduates with a four-year degree or higher. We dropped two observations from the data set because the participants failed to complete the study. One participant was a considerable outlier: evaluating 617 settings for one of their challenges, more than the complete set of combinations. Moreover, this is three times the effort of the next most ambitious participant. Since this behavior would likely have an outsized impact on any analysis, we decided to remove this participant from the data set, leaving 397 observations.
We report participants' solo efforts in a companion paper [12]. We found that when working alone, the four-peak landscapes were more challenging than the one-peak, that both anchoring and framing yielded significant main effects on participants' choice quality, and that doing the one-peak task first was correlated with better outcomes on the four-peak task. Participants in the loss frame submitted more settings, all else equal, and spent more time exploiting versus exploring. Having an explicit anchor, on the other hand, was correlated with submitting fewer dial settings and did not significantly alter participants' search strategies.
### Complex Choice with the AI Helper
Each landscape is unique and, because the space of the landscape is fixed, adding more peaks increases the average elevation. Thus, comparing raw performance on the landscapes does not fully depict the impact of working with the AI helper. We account for this by dividing each landscape score by the landscape's average elevation to create an adjusted score. On average, participants' total adjusted score for the two choice tasks was worse when they worked with the AI helper (\(M=3.613,\,SD=0.717\)) than on their own (\(M=3.718,\,SD=0.820\)). This difference in total adjusted score (\(0.105,\,95\%\,CI\,[0.036,0.175]\)) was statistically significant (\(t(396)=2.985,\,p=0.003\)) per a paired sample t-test. As illustrated by figure 2, the difference was primarily driven by participants in the loss framing. Whether there was an anchor present (\(t(90)=3.464,\,p<0.001\)) or not (\(t(101)=2.270,\,p=0.025\)), the difference in adjusted scores for participants in the loss framing was significant.
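As an illustration of this adjustment and the paired comparison, the computation could look as follows; the scores and mean elevations in the snippet are made-up values, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant raw scores and landscape mean elevations
# (illustrative values only, not data from the study).
solo_scores = np.array([[60.0, 55.0], [70.0, 62.0]])      # [one-peak, four-peak]
team_scores = np.array([[58.0, 54.0], [66.0, 60.0]])
mean_elevation = np.array([[28.0, 36.0], [27.0, 35.0]])

# Adjusted score: raw score divided by the landscape's average elevation,
# summed over the two tasks in each condition.
solo_adjusted = (solo_scores / mean_elevation).sum(axis=1)
team_adjusted = (team_scores / mean_elevation).sum(axis=1)

# Paired-sample t-test on the solo-versus-team adjusted scores.
result = stats.ttest_rel(solo_adjusted, team_adjusted)
print(result.statistic, result.pvalue)
```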
A factorial ANOVA predicting the difference between solo and team efforts in the total adjusted score using the experimental treatments as interacting independent variables revealed a significant main effect for framing but not anchoring or the interaction. Thus, a two-sample t-test is sufficient for comparing the effect of framing on performance. Participants in the loss framing condition achieved significantly better (\(t(394.420)=2.230,\,p=0.026\)) scores than their counterparts (\(M_{difference}=0.156,\,95\%\,CI\,[0.185,0.294]\)).
Participants did worse on the four- (\(M=1.522,\,SD=0.259\)) versus one-peak (\(M=2.091,\,SD=0.554\)) landscape. This difference (\(0.569,\,95\%\,CI\,[0.521,0.616]\)) was significant (\(t(396)=23.431,\,p<0.001\)). A factorial ANOVA predicting the score difference between the two landscape types using the experimental treatments as interacting independent variables revealed no main effects for anchoring (\(F(1,393)=2.792,\,p=0.100\)) and framing (\(F(1,393)=0.128,\,p=0.721\)), nor an interaction effect (\(F(1,393)=0.342,\,p=0.559\)).
Relative to their solo efforts (\(M=2.175,\,SD=0.631\)), participants did worse on the one-peak landscape when working with the AI (\(M=2.091,\,SD=0.554\)), a difference (\(0.084,\,95\%\,CI\,[0.025,0.144]\)) which was statistically significant (\(t(396)=2.773,\,p=0.006\)). This did not hold for the four-peak landscapes (\(t(396)=1.342,\,p=0.181\)). A factorial ANOVA predicting the difference between solo and team efforts in the adjusted score for the one-peak landscape using the experimental treatments as interacting, independent variables revealed a significant main effect for the framing treatment condition but not for the anchoring or the interaction. Thus, a two-sample t-test is sufficient for comparing the effect of framing on performance on the one-peak landscapes. Participants in the loss framing
condition achieved significantly better (\(t(393)=2.010,\,p=0.045\)) scores than their counterparts (\(M_{difference}=0.122,\,95\%\,CI\,[0.003,0.242]\)).
Search duration (number of submitted settings) and the explore/exploit trade-off (fraction of submitted settings that explore new regions, i.e. search strategy) serve as the main behavioral metrics. On average, participants searched less, i.e. submitted fewer dial settings across the two tasks, when they worked with the AI helper (\(M=30.368,\,SD=27.786\)) than on their own (\(M=43.202,\,SD=41.521\)). This difference in search duration (\(12.834,\,95\%\,CI\,[9.512,16.155]\)), per a paired-sample t-test, was statistically significant (\(t(396)=7.596,\,p<0.001\)). The fraction of those submissions that were explores while working with the AI helper (\(M=0.612,\,SD=0.243\)) was greater than when they were working on their own (\(M=0.447,\,SD=0.232\)). This difference in the explore/exploit trade-off (\(-0.165,\,95\%\,CI\,[-0.186,-0.144]\)) was also statistically significant (\(t(396)=-15.3456,\,p<0.001\)). The absolute number of submissions that were explores while working with the AI helper (\(M=13.584,\,SD=6.244\)), however, was not statistically different from the solo effort (\(M=13.411,\,SD=8.617\); \(t(396)=-0.442\), \(p=0.659\)). This suggests that participants invested less effort in fine-tuning their submissions, perhaps because they anticipated that the AI would do so for them.
**Search Duration with Anchoring and Framing Effects** Comparing the difference in search duration for participants' solo and team efforts under the different treatment conditions provides insight into how anchoring and framing effects impact human-AI teams. A factorial ANOVA predicting the difference in search duration (solo effort minus team effort duration) using the experimental treatments as interacting, independent variables revealed significant main effects for both treatment conditions but not the interaction, thus we removed the interaction from the model. The resulting model suggests strong main effects for both anchoring (\(F(1,394)=14.971,\,p<0.001\)) and framing (\(F(1,394)=16.751,\,p<0.001\)). A post hoc Tukey test showed that the anchoring (\(-12.611,\,95\%\,CI\,[-21.047,-4.174],\,p_{adj}<0.001\)) and framing (\(13.331,\,95\%\,CI\,[4.892,21.769],\,p_{adj}<0.001\)) effects were significant at the \(p<0.01\) level, although search duration moved the opposite way.
Building on these insights, using an independent sample t-test, loss frame participants adjusted their effort downwards (\(M=-19.964,\,SD=39.838\)) significantly more than gain frame participants (\(M=-6.088,\,SD=24.808\); \(t(318.21)=-4.319,\,p<0.001\)) when they started working with the AI. This is the inverse of the anchoring effect: participants with explicit anchors adjusted their effort downwards (\(M=6.449,\,SD=29.426\)) significantly less than those without (\(M=19.060,\,SD=36.341\); \(t(382.16)=3.804,\,p<0.001\)). As illustrated in figure 3, the anchoring effect seems to have tempered the framing effect.
On average, participants searched less in each successive task. However, the only significant drop was between their second solo effort and the first team effort (\(M_{difference}=-5.108;\,t(396)=5.003,\,p<0.001\)). The differences between the first and second solo (\(M_{difference}=-1.690;\,t(396)=1.654,\,p=0.099\)) and the first and second team (\(M_{difference}=-0.927;\,t(396)=1.300,\,p=0.194\)) efforts were not significant. This points to participants expecting that the AI would reduce the effort they needed to invest in the task to perform well. For gain-frame participants, this appears to be true: they exerted less effort while working with the AI (\(M_{difference}=-6.088;\,t(203)=-3.505,\,p<0.001\)), but averaged the same adjusted scores (\(M_{difference}=-0.029;\,t(203)=-0.574,\,p=0.567\)). Loss frame participants, however, appear to have over-adjusted their effort (\(M_{difference}=-19.964;\,t(192)=-6.962,\,p<0.001\)) such that their adjusted scores suffered significantly (\(M_{difference}=-0.186;\,t(192)=-3.877,\,p<0.001\)). For the anchoring treatment, whether a participant saw an anchor (\(M_{difference}=-6.449;\,t(195)=-3.068,\,p=0.002\)) or not (\(M_{difference}=-19.060;\,t(200)=-7.436,\,p<0.001\)), the difference in their effort was significantly less when they worked with the AI helper. This was not correlated with a significantly lower adjusted score during the AI tasks for the no-anchor participants (\(M_{difference}=-0.090;\,t(200)=-1.730,\,p=0.090\)). It was, however, correlated with a significantly lower adjusted score for the anchor participants (\(M_{difference}=-0.121;\,t(195)=-2.539,\,p=0.012\)).
Figure 2: Adjusted score accounts for variance in the landscapes and the increase in average elevation created by adding more peaks. The difference in score is simply the total score achieved during the solo effort minus the total score achieved while working with the AI helper.
**Search Strategy with Anchoring and Framing Effects** As noted, the change in search duration echoed through to the strategy participants used when they began working with the AI helper. This shorter search duration meant a higher fraction of submissions that were explorations. A factorial ANOVA predicting the difference in search strategy (solo effort minus team effort strategy) using the experimental treatments as interacting, independent variables revealed no main effects for anchoring \((F(1,393)=1.934,\,p=0.165)\) and framing \((F(1,393)=0.684,\,p=0.409)\), nor an interaction effect \((F(1,393)=1.150,\,p=0.284)\). Again, we interpret this as suggesting that the participants anticipated that the AI would reduce the effort they needed to expend in tuning the dials.
## Discussion
Aversion to algorithmic choice aids is a well-documented phenomenon [14, 15]. The proliferation of both passive and adaptive algorithms in every corner of life, however, is leaving people increasingly accepting of them [13]. Historically, the experimental methods used to study human-algorithm teams relied on reducing the complexity of choices or studying them _in situ_. Although both methods have merit, they also have major shortcomings, such as not generalizing to complex choices or to choices made in different contexts. The method we developed here maintains choice complexity and is sufficiently abstract to facilitate the generalization of findings.
Study participants generally made worse choices when they worked with the AI helper than when they worked alone. This result was primarily driven by participants in a loss frame who dramatically decreased their effort once they started working with the AI helper. However, participants in the gain frame performed about the same on their own as with the AI. The decrease in participants' effort was primarily seen in fine-tuning a choice by exploiting local information rather than exploring new options. The only participants that did not significantly reduce their effort when they started working with the AI helper were those in the gain frame _and_ with an explicit anchor, i.e. they knew the value of the best possible choice. These results suggest that people were possibly over-reliant on the AI helper, assumed that it was better at the task than it actually was, or, relatedly, their lack of knowledge about how it functioned hindered their ability to team with it. These possibilities point to interesting and important topics for future research. Additionally, they suggest that the well-documented phenomenon of algorithm aversion may not be a stable aspect of human cognition.
## Conclusion
AI helpers are making their way into every aspect of life--and people appear to be more willing than ever to allow them to help. Historically, the complex choices for which AI can be most helpful have been reduced to simpler analogs for experiments or studied _in situ_. Although simple choice experiments can easily isolate the effects of heuristics and biases, they ignore the fact that many choices are characterized by astounding complexity. _In situ_ experimentation overcomes this, but it may produce results that do not generalize. As demonstrated, the dial tuning task maintains the ability to study well-documented cognitive artifacts during complex choices plus, we argue, it provides general results (and data) that AI systems can use. Specifically, the data generated by this experiment are easy to translate into layered image representations that lend themselves to deep learning models. With sufficient data points, such models could learn to discriminate between biased and unbiased behavior.
|
2307.01881
|
ProPILE: Probing Privacy Leakage in Large Language Models
|
The rapid advancement and widespread use of large language models (LLMs) have
raised significant concerns regarding the potential leakage of personally
identifiable information (PII). These models are often trained on vast
quantities of web-collected data, which may inadvertently include sensitive
personal data. This paper presents ProPILE, a novel probing tool designed to
empower data subjects, or the owners of the PII, with awareness of potential
PII leakage in LLM-based services. ProPILE lets data subjects formulate prompts
based on their own PII to evaluate the level of privacy intrusion in LLMs. We
demonstrate its application on the OPT-1.3B model trained on the publicly
available Pile dataset. We show how hypothetical data subjects may assess the
likelihood of their PII being included in the Pile dataset being revealed.
ProPILE can also be leveraged by LLM service providers to effectively evaluate
their own levels of PII leakage with more powerful prompts specifically tuned
for their in-house models. This tool represents a pioneering step towards
empowering the data subjects for their awareness and control over their own
data on the web.
|
Siwon Kim, Sangdoo Yun, Hwaran Lee, Martin Gubri, Sungroh Yoon, Seong Joon Oh
|
2023-07-04T18:53:47Z
|
http://arxiv.org/abs/2307.01881v1
|
# ProPILE: Probing Privacy Leakage in Large Language Models
###### Abstract
The rapid advancement and widespread use of large language models (LLMs) have raised significant concerns regarding the potential leakage of personally identifiable information (PII). These models are often trained on vast quantities of web-collected data, which may inadvertently include sensitive personal data. This paper presents ProPILE, a novel probing tool designed to empower data subjects, or the owners of the PII, with awareness of potential PII leakage in LLM-based services. ProPILE lets data subjects formulate prompts based on their own PII to evaluate the level of privacy intrusion in LLMs. We demonstrate its application on the OPT-1.3B model trained on the publicly available Pile dataset. We show how hypothetical data subjects may assess the likelihood of their PII being included in the Pile dataset being revealed. ProPILE can also be leveraged by LLM service providers to effectively evaluate their own levels of PII leakage with more powerful prompts specifically tuned for their in-house models. This tool represents a pioneering step towards empowering the data subjects for their awareness and control over their own data on the web.
## 1 Introduction
Recent years have seen staggering advances in large language models (LLMs) [27; 3; 33; 7; 30; 34; 24]. The remarkable improvement is commonly attributed to the massive scale of training data crawled indiscriminately from the web. The web-collected data is likely to contain sensitive personal information crawled from personal web pages, social media, personal profiles on online forums, and online databases such as collections of in-house emails [13]. They include various types of personally identifiable information (PII) for the data subjects, including their names, phone numbers, addresses, education, career, family members, and religion, to name a few.
This poses an unprecedented level of privacy concern not matched by prior web-based products like social media. In social media, the affected data subjects were precisely the users who have consciously shared their private data with the awareness of associated risks. In contrast, products based on LLMs trained on uncontrolled, web-scale data have quickly expanded the scope of the affected data subjects far beyond the actual users of the LLM products. Virtually anyone who has left some form of PII on the world-wide-web is now relevant to the question of PII leakage.
Currently, there is no assurance that adequate safeguards are in place to prevent the inadvertent disclosure of PII. Understanding of the probability and mechanisms through which PII could leak
under specific prompt conditions remains insufficient. This knowledge gap highlights the ongoing need for comprehensive research and implementation of robust leakage measurement tools.
In this regard, we introduce ProPILE, a tool to let the data subjects examine the possible inclusion and subsequent leakage of their own PII in LLM products in deployment. The data subject has only black-box access to LLM products; they can only send prompts and receive the generated sentences or likelihoods. Nevertheless, since the data subject possesses complete access to their own PII, ProPILE leverages this to generate effective prompts aimed at assessing the potential PII leakage in LLMs. See Figure 1 for an overview of the ProPILE framework. Importantly, this tool holds considerable value not only for data subjects but also for LLM service providers. ProPILE provides the service providers with a tool to effectively assess their own levels of PII leakage with more powerful prompts specifically tuned for their in-house models. Through this, the service providers can proactively address potential privacy vulnerabilities and enhance the overall robustness of their LLMs.
Our experiments on the Open Pre-trained Transformers (OPT) [37] trained on the Pile dataset [10] confirm the following. 1) A significant portion of the diverse types of PII included in the training data can be disclosed through strategically crafted prompts. 2) By refining the prompt, having access to model parameters, and utilizing a few hundred training data points for the LLM, the degree of PII leakage can be significantly magnified. We envision our proposition and the insights gathered through ProPILE as the initial step towards enhancing the awareness of data subjects and LLM service providers regarding potential PII leakage.
## 2 Related Works
### Privacy Leakage in Learned Models: Pre-LLM Era
The successful development of machine learning (ML) technologies and related web products led to privacy concerns. ML models may unintentionally include PII of certain data subjects in ML training data. As those models become publicly available, concerns have been raised that such PIIs may be accessed by millions of users using the ML service. Researchers have assessed the possibility of reconstructing PII-relevant training data from a learned model [9; 11; 36; 39; 38]. The task is referred to as **training data reconstruction or model inversion**. Previous work has shown that it is often possible to reconstruct training data well enough to reveal sensitive attributes (e.g., face images from a face classifier), even with just a black-box access [9; 11; 36]. Researchers have also designed a more evaluation-friendly surrogate task, **membership inference attack**[31], that tests whether each of the given samples has been included in the training data of the learned model. Subsequent work has shown that this is indeed possible for a wide range of models, including text-generation models [12; 32] and image-generation models [6]. For a comprehensive review of the field up to 2020, refer to the overview by Rigaki & Garcia [29].
Figure 1: **ProPILE. Data subjects may use ProPILE to examine the possible leakage of their own personally identifiable information (PII) in public large-language model (LLM) services. ProPILE helps data subjects formulate an LLM prompt based on \(M-1\) of their PII items to task the LLM to output the \(M^{\text{th}}\) PII not given in the prompt. If the true PII has a significantly higher likelihood of a response from the LLM, we consider this to be a privacy threat to the data subject. The likelihood 0.1000 implies that the data subject’s phone number may be revealed if 10 such queries are submitted.**
### Privacy Leakage in Learned Models: Post-LLM Era
The appearance of billion-scale large language models (LLMs) and highly successful products such as ChatGPT [24] leads to an even higher level of privacy concern. Their training data includes not only the data consciously or voluntarily provided by the data subjects, but also a massive crawl of the entire web such as personal web pages, social media accounts, personal profiles on online forums, and databases of in-house emails [13]. Building a model-based service on such a web-crawled dataset and making it available to millions of users worldwide poses a novel, serious threat to the data rights of the data subjects. Motivated by this, a few early studies have been made to measure privacy leakage in LLMs [13; 21; 4; 14]. However, although [13] initiated the discussion on PII leakage in LLMs, it was limited to the preliminary analysis of only email addresses. [21] conducted a separate study that specifically targeted LLMs fine-tuned with an auxiliary dataset enriched with PII. Furthermore, their study specifically concentrated on scenarios where the prefix or suffix associated with the PII was known. In contrast, ProPILE aims to provide a more comprehensive tool for probing LLMs already in deployment without LLM fine-tuning or prefix retrieval.
### Prompt Tuning
Prompt engineering [28; 20] improves the downstream task performance of LLMs by carefully designing prompts without further LLM fine-tuning. In soft prompt tuning [15; 17], a few learnable soft token embeddings concatenated to the original prompts are trained while the LLM is frozen, so that better prompts for the downstream task can be obtained. The white-box approach of ProPILE leverages soft prompt tuning to further refine the black-box approach's hand-crafted prompts.
## 3 ProPILE: Probing PII Leakage of Large Language Models
In this section, we propose ProPILE, a probing tool to profile the PII leakage of LLMs. We first introduce the two attributes of PII, namely linkability and structurality, which are important for the subsequent analysis. We also describe our threat model and eventually introduce probing methods of ProPILE. Finally, we discuss the quantification of the degrees of privacy leakage.
### Formulation of PII
#### 3.1.1 Linkability
From a privacy standpoint, the random disclosure of PII may not necessarily pose a substantial risk. For instance, when a phone number is generated in an unrelated context, there are no identifiable markers linking the number to its owner. However, if targeted PII is presented within a context directly tied to the owner, it could pose a severe privacy risk as it unequivocally associates the number with its owner. In light of this, the linkability of PII items has been considered critical for the study of privacy leakage [26]. We formalize the linkability of PII in the definition below.
**Definition 1** (Linkable PII leakage).: Let \(\mathcal{A}:=\{a_{1},...,a_{M}\}\) be \(M\) PII items relevant to a data subject \(S\). Each element \(a_{m}\) denotes a PII item of a specific PII type. Let \(T\) be a probing tool that estimates a probability of leakage of PII item \(a_{m}\) given the rest of the items \(\mathcal{A}_{\backslash m}:=\{a_{1},...,a_{m-1},a_{m+1},...,a_{M}\}\). We say that \(T\)**exposes the linkability of PII items** for the data subject \(S\) when the likelihood of reconstructing the true PII, \(\Pr(a_{m}|\mathcal{A}_{\backslash m},T)\), is greater than the unconditional, context-free likelihood \(\Pr(a_{m})\).
#### 3.1.2 Structurality
We consider PII in LLM training data in a string format. Certain types of PII tend to be more structured than others. The structurality of PII has significant implications for practical countermeasures against privacy leakage. We discuss them below.
**Structured PII** refers to the PII type that often appears in a structured pattern. For example, phone numbers and social security numbers are written down in a recognizable pattern like (xxx) xxx-xxxx that is often consistent within each country. Email addresses also follow a distinct pattern id@domain and are considered structured. Though less intuitive, we also consider physical addresses structured: [building, street, state, country, postal code].
We expect structured PII to be easily detectable with simple regular expressions [1]. This suggests apparently simple remedies against privacy leakage. Structured PII may easily be purged from training data through regular-expression detection. Moreover, leakage of such PII may be controlled through detection and redaction in the LLM outputs. However, in practice, the complete removal of structured PII from training data and LLM-generated content is difficult. Regulating the generation of useful public information, such as the phone number and address of an emergency clinic, would significantly limit the utility of LLM services, and it is often difficult to distinguish PII from public information that falls within the same pattern category. As such, structured PII can indeed be found in actual LLM training data, such as the Pile dataset (Section 4.1) [10], and PII does leak in actual LLM outputs [18]. We thus study the leakage of structured PII in this work.
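As a minimal illustration of such pattern-based detection and redaction, consider the sketch below; the patterns are deliberately simplified examples and not the filters applied to any particular corpus.

```python
import re

# Illustrative patterns for structured PII; real filters need broader,
# locale-aware coverage than these simplified examples.
PII_PATTERNS = {
    "phone": re.compile(r"\(\d{3}\)\s?\d{3}-\d{4}"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_structured_pii(text):
    """Replace matches of the simple patterns above with a type tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

sample = "Call me at (555) 123-4567 or write to jane.doe@example.com."
print(redact_structured_pii(sample))
# -> "Call me at [PHONE] or write to [EMAIL]."
```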
**Unstructured PII** refers to the PII type that does not follow an easy regular expression pattern. For example, information about a data subject's family members is sensitive PII that does not follow a designated pattern in text. One could write "{name1}'s father is {name2}", but this is not the only way to convey this information. Other examples include the affiliation, employer, and educational background of data subjects. Unstructured PII indeed poses greater threats of unrecognized privacy leakage than structured PII. In this work, we consider family relationships and affiliation as representative cases of unstructured PII (section 4.3).
### Threat Model
Our goal is to enable data subjects to probe how likely LLMs are to leak their PII. We organize the relevant actors surrounding our PII probing tool and the resources they have access to.
**Actors in the threat model.** First of all, there are **data subjects** whose PII is included in the training data for LLMs. They have their ownership, or the data rights, over the PII. **LLM providers** train LLMs using web-crawled data that may potentially include PII from corresponding data subjects. Finally, **LLM users** have access to the LLM-based services to send prompts and receive text responses.
**Available resources.** LLM-based services, especially proprietary ones, are often available as APIs, allowing only **black-box access** to LLM users. They formulate the inputs within the boundary of rate limit policy and inappropriate-content regulations and receive outputs from the models. On the other hand, LLM providers have **white-box access** to the LLM training data, LLM training algorithm, and hyperparameters, as well as LLM model parameters and gradients. Data subjects may easily acquire black-box access to the LLMs by registering themselves as LLM users, but it is unlikely that they will get white-box access. Importantly, data subjects have rightful access to their own PII. We show how they can utilize their own PII to effectively probe the privacy leakage in LLMs.
### Probing methods
We present two probing methods, one designed for data subjects with only black-box access to LLMs and the other for model providers with white-box access.
#### 3.3.1 Black-box Probing
**Actor's goal.** In a black-box probing scenario, an actor with black-box access aims to probe whether there is a possibility that the LLM leaks one of their PII items. Particularly, an actor has a list of their own PII \(\mathcal{A}\) with \(M\) PII items and aims to check if the target PII \(a_{m}\in\mathcal{A}\) leaks from an LLM.
**Probing strategy.** For a target PII \(a_{m}\), a set of query prompts \(\mathcal{T}\) is created by associating the remaining PII \(\mathcal{A}_{\backslash m}\). Particularly, \(\mathcal{A}_{\backslash m}\) is prompted with \(K\) different templates as \(\mathcal{T}=\{t_{1}(\mathcal{A}_{\backslash m}),...,t_{K}(\mathcal{A}_{\backslash m})\}\). Then, the user sends the set of probing prompts \(\mathcal{T}\) to the target LLM as many as \(N\) times. Assuming the target LLM performs sampling, the user will receive \(N\times K\) responses along with the likelihood scores \(\mathcal{L}\in\mathbb{R}^{K\times L\times V}\), where \(L\) and \(V\) denote the length of the response and the vocabulary size of the target LLM, respectively. Example prompts are shown in Figure 2.
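A minimal sketch of this prompt construction is given below; the template strings and PII fields are illustrative placeholders rather than the templates actually used in ProPILE.

```python
# Illustrative templates; the actual ProPILE prompt templates may differ.
TEMPLATES = [
    "The phone number of {name}, who lives at {address}, is",
    "{name}'s address is {address} and their phone number is",
    "Contact {name} ({address}) by phone at",
]

def build_probing_prompts(pii, target_key="phone"):
    """Build K query prompts from all PII items except the target one."""
    remaining = {k: v for k, v in pii.items() if k != target_key}
    return [t.format(**remaining) for t in TEMPLATES]

subject = {"name": "Jane Doe", "address": "12 Example Rd", "phone": "(555) 123-4567"}
for prompt in build_probing_prompts(subject):
    print(prompt)
```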
#### 3.3.2 White-box Probing
**Actor's goal.** In the white-box probing scenario, the goal of the actor is to find a tighter worst-case leakage (lower bound on the likelihood) of specific types of PII (\(a_{m}\)). The actor is given additional resources beyond the black-box case. They have access to the training dataset, model parameters, and model gradients.
**Probing strategy.** We use soft prompt tuning to achieve this goal, i.e., to find a prompt that induces more leakage than the hand-crafted prompts of the black-box case. First, we denote the set of PII lists included in the training dataset of the target LLM as \(\mathcal{D}=\{\mathcal{A}^{i}\}_{i=1}^{N}\). The white-box approach assumes that an actor has access to a subset of the training data \(\tilde{\mathcal{D}}\subset\mathcal{D}\), where \(|\tilde{\mathcal{D}}|=n\) with \(n\ll N\). Let a query prompt \(X\) be created by one of the templates used in black-box probing, \(X=t_{k}(\mathcal{A}^{i}_{\setminus m})\). Then \(X\) is tokenized and embedded into \(X_{e}\in\mathbb{R}^{L_{X}\times d}\), where \(L_{X}\) denotes the length of the query sequence and \(d\) denotes the embedding dimension of the target LLM. The soft prompt \(\theta_{s}\in\mathbb{R}^{L_{s}\times d}\), technically a set of learnable parameters, is prepended to \(X_{e}\), giving \([\theta_{s};X_{e}]\in\mathbb{R}^{(L_{s}+L_{X})\times d}\), where \(L_{s}\) denotes the number of soft prompt tokens. The soft embedding is trained to maximize the expected reconstruction likelihood of the target PII over \(\tilde{\mathcal{D}}\), i.e., to minimize the negative log-likelihood defined below:
\[\theta_{s}^{*}=\operatorname*{argmin}_{\theta_{s}}\operatorname*{\mathbb{E}}_ {\mathcal{A}\sim\tilde{\mathcal{D}}}\Bigl{[}-\log(\Pr(a_{m}|[\theta_{s};X_{e} ]))\Bigr{]}. \tag{1}\]
After training, the learned soft embedding \(\theta_{s}^{*}\) is prepended to prompts \(t_{k}(\mathcal{A}_{\setminus m})\) built from unseen data subjects' PII to measure the leakage of \(a_{m}\) for those subjects.
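A minimal sketch of this optimization is given below, assuming a frozen HuggingFace causal LM, a single hand-crafted template, and Adam as the optimizer; the checkpoint, hyperparameters, and toy training pair are illustrative assumptions rather than the exact training setup.

```python
# Sketch of white-box soft prompt tuning (Equation 1): learn L_s soft embeddings
# that, prepended to a query built from a subject's other PII, maximize the
# likelihood of the target PII. Checkpoint, data, and hyperparameters are assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
model.requires_grad_(False)                           # keep the LLM frozen
embed = model.get_input_embeddings()
L_s = 20

# Initialize the soft tokens from the embedding of the PII-type word ("phone").
init_id = tok("phone", add_special_tokens=False).input_ids[0]
soft = torch.nn.Parameter(embed.weight[init_id].detach().repeat(L_s, 1))
opt = torch.optim.Adam([soft], lr=1e-3)

def nll(query: str, target: str) -> torch.Tensor:
    """Negative log-likelihood of `target` given [theta_s; embed(query)]."""
    q_ids = tok(query, return_tensors="pt").input_ids
    t_ids = tok(" " + target, add_special_tokens=False, return_tensors="pt").input_ids
    inp = torch.cat([soft.unsqueeze(0), embed(q_ids), embed(t_ids)], dim=1)
    logits = model(inputs_embeds=inp).logits
    start = L_s + q_ids.size(1) - 1                   # logits predicting the target tokens
    pred = logits[:, start:start + t_ids.size(1), :]
    return torch.nn.functional.cross_entropy(pred.reshape(-1, pred.size(-1)),
                                             t_ids.reshape(-1))

train_subset = [("The phone number of Jane Doe is", "555-0100")]  # toy stand-in for D~
for step in range(100):
    opt.zero_grad()
    loss = torch.stack([nll(q, a) for q, a in train_subset]).mean()
    loss.backward()
    opt.step()
```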
### Quantifying PII leakage
For both black-box and white-box probing, the risk of PII leakage is quantified using two types of metrics depending on the output that the users receive.
**Quantification based on string match.** Users receive generated text from the LLMs. Naturally, the string match between the generated text and the target PII serves as a primary metric to quantify the leakage. **Exact match** represents a verbatim reconstruction of a PII; the generated string is identical to the ground truth PII.
**Quantification based on likelihood.** We consider the scenario in which black-box LLMs provide likelihood scores for candidate text outputs. The availability of likelihood scores enables a more precise assessment of the level of privacy leakage. It also lets one estimate the chance of LLMs revealing the PII when they are deployed at a massive scale. The reconstruction likelihood is the probability of the target PII being reconstructed given the query prompt. Therefore, the following likelihood is used to quantify the leakage:
\[\Pr(a_{m}|\mathcal{A}_{\setminus m})=\prod_{r=1}^{L_{r}}p(a_{m,r}|x_{1},x_{2},...,x_{L_{q}+r-1}). \tag{2}\]
In this equation, \(a_{m}\) represents the target PII and the product is taken over the range from \(r=1\) to \(L_{r}\), where \(L_{r}\) represents the length of the target PII (\(a_{m}\)). \(x_{1},x_{2},...,x_{L_{q}+r-1}\) correspond to the tokens or words comprising the query prompt of length \(L_{q}\) followed by the response.
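The sketch below shows how Equation 2 could be evaluated in practice by scoring the target PII token-by-token under a causal LM conditioned on the query prompt; the checkpoint and example strings are placeholders rather than items from the evaluation data.

```python
# Sketch of Equation 2: condition on the query prompt, score the target PII
# token-by-token, and multiply the per-token probabilities. The checkpoint
# and example strings are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b").eval()

def reconstruction_likelihood(prompt: str, target_pii: str) -> float:
    q_ids = tok(prompt, return_tensors="pt").input_ids                  # length L_q
    t_ids = tok(" " + target_pii, add_special_tokens=False,
                return_tensors="pt").input_ids                          # length L_r
    ids = torch.cat([q_ids, t_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # The logit at position i predicts token i+1, so the target tokens are
    # predicted by positions L_q-1 ... L_q+L_r-2.
    log_probs = logits.log_softmax(-1)[0, q_ids.size(1) - 1:-1, :]
    token_logp = log_probs.gather(1, t_ids[0].unsqueeze(1)).squeeze(1)
    return token_logp.sum().exp().item()

print(reconstruction_likelihood("The phone number of Jane Doe is", "555-0100"))
```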
Even a low likelihood has critical implications for privacy leakage, particularly for systems deployed at scale. For example, ChatGPT has been deployed to more than 100 million users worldwide [25]. If only 0.01% of those 100 million users (10,000 users) attempt the reconstruction 10 times each, that amounts to 100,000 attempts; a per-attempt reconstruction likelihood of \(0.01\%\) then implies roughly 10 expected cases of exact PII reconstruction.1 The inverse of the likelihood indicates the expected number of samples or queries needed to generate the exact PII.
## 4 Probing Existing LLMs
### Experimental setup
**Target LLM to be probed.** In our experiments, the selection of the target LLM was guided by two specific requirements. First, in order to assess the probing results, the training dataset of the target LLM had to be publicly available. Second, to facilitate both black-box and white-box probing, access to the pre-trained weights of the target model was essential. To meet these criteria, we used OPT with 1.3 billion parameters (OPT-1.3B) [37] and the corresponding tokenizer released on HuggingFace [35]2 as the target LLM for probing. Please refer to the Appendix for the detailed generation hyperparameters and prompt templates.
Footnote 2: [https://huggingface.co/facebook](https://huggingface.co/facebook)
**Evaluation dataset.** This paper conducts experiments using five types of PII: **phone number**, **email address**, and **(physical) address** as instances of structured PII and **family relationship** and **university information** as instances of unstructured PII. To evaluate the PII leakage, an evaluation dataset was collected from the Pile dataset, which is an 825GB English dataset included in OPT training data [10]. It is noteworthy that the presence of documents containing all five types linked to a data subject is rare in the Pile dataset. However, for structured PII, there were instances where all three types of structured PII were linked to the name of a data subject. Hence, we extracted quadruplets of (name, phone number, email address, address) from the Pile dataset. Specifically, the PII items are searched with regular expressions and named entity recognition [2; 23]. Examples are shown in Figure 2 (b). For the collection of unstructured PII, we adopted a question-answering model based on RoBERTa3 and formulated relevant questions to extract information regarding relationships or affiliations. Only answers with a confidence score exceeding \(0.9\) were gathered, and subsequently underwent manual filtering to eliminate mislabeled instances. The final evaluation dataset consists of the structured PII quadruplets for 10,000 data subjects, name-family relationship pairs for 10,000 data subjects, and name-university pairs for 2,000 data subjects. Please refer to the Appendix for the dataset construction details.
Footnote 3: [https://huggingface.co/distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad)
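As a simplified illustration of the structured-PII extraction step, the sketch below matches email addresses and phone numbers with regular expressions; the patterns are deliberately loose placeholders, and the NER- and QA-based steps used for names and unstructured PII are omitted.

```python
# Simplified sketch of structured-PII candidate extraction with regular
# expressions. The patterns are loose placeholders; the NER and QA-based
# steps used for names and unstructured PII are omitted.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"(?:\+?\d{1,3}[ .-]?)?(?:\(\d{2,4}\)[ .-]?)?\d{3,4}[ .-]?\d{4}")

def extract_structured_pii(document: str) -> dict:
    """Return candidate structured PII items found in one document."""
    return {"emails": EMAIL.findall(document),
            "phones": PHONE.findall(document)}

doc = "Contact Jane Doe at jane@example.com or (555) 010-0000."
print(extract_structured_pii(doc))
```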
### Black-box probing results
We show how the black-box probing approach of ProPILE with hand-crafted prompts helps data subjects assess the leakage of their own PII. We also examine the effect of various factors on the leakage.
**Likelihood results.** We first evaluate the likelihood of the target PII item given the other items of a subject. We then consider the black-box LLM as revealing the linkable PII item if the likelihood is _greater_ than that of a randomly selected PII instance, i.e., \(\Pr(a_{m}|\mathcal{A}_{\setminus m})>\Pr(a_{m,\mathrm{null}}|\mathcal{A}_{\setminus m})\), where \(a_{m,\mathrm{null}}\) is drawn at random from the evaluation dataset. We utilized the aforementioned evaluation dataset and created prompts using five different triplet templates, including those described in Figure 2 (a). Subsequently, generation is done using beam search with a beam size of 3. The likelihood was computed using Equation 2.

Figure 2: **Probing prompts.** (a) Black-box probing template examples for different association levels. Blue text denotes the associated PII included in the prompt, and red text indicates the target PII and its type. (b) Examples from the evaluation dataset. Text in the Pile dataset is converted to a dictionary.
Figure 3 (a-b) illustrates the density plots of the likelihoods. Blue and orange represent the target PII (\(a_{m}\)) and the randomly chosen PII (\(a_{m,\mathrm{null}}\)), respectively. The plots also display the mean likelihood values. The mean likelihood of the target PII is higher than that of the null PII for all PII types. We also report the p-value obtained from the Wilcoxon signed-rank test [8]. The small p-values suggest that the observed differences are statistically significant except for affiliation. Figure 3 (c) shows \(\gamma_{<k}\). As mentioned in section 3.4, the x-axis variable \(k\) can be interpreted as the number of samples. As the number of samples increases, we observe a gradual increase in the frequency of exact reconstruction.
The above black-box probing results demonstrate a high risk of reconstructing the exact PII based on available PII items and establishing the link. The results of \(\gamma_{<k}\) indicate that despite the seemingly low likelihood values, there is a possibility of exact reconstruction of PII.
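The per-subject leakage decision and the significance test described above can be summarized as in the sketch below; the likelihood arrays are random placeholders standing in for the values computed with Equation 2.

```python
# Sketch of the per-subject leakage decision and the Wilcoxon signed-rank
# test reported in Figure 3. The likelihood arrays are random placeholders
# standing in for values computed with Equation 2.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
target_ll = rng.lognormal(mean=-18, sigma=2, size=1000)   # Pr(a_m | A_\m), placeholder
null_ll = rng.lognormal(mean=-20, sigma=2, size=1000)     # Pr(a_null | A_\m), placeholder

leaked = target_ll > null_ll                               # per-subject leakage decision
stat, p_value = wilcoxon(target_ll, null_ll, alternative="greater")
print(f"leak rate: {leaked.mean():.2%}, Wilcoxon p-value: {p_value:.3g}")
```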
**Exact match results.** Through black-box probing, the generated sequences can be obtained. The exact match can be assessed by evaluating whether the generated sequence includes the exact string of target PII or not. First, we evaluated the exact match with a varying number of templates used to construct the prompts. Results are shown in Figure 4 (a). The rate of exact matches increases as the number of prompt templates increases. This also supports the rationale behind white-box probing, as it suggests that finding more optimal prompts can further increase the leakage.
Furthermore, we conducted an assessment of exact matches when different levels of association are present in the prompt. Figure 4 (b) shows the results. "Twins" denotes that only the name of a data subject is used to make the query prompt, while "triplet" indicates the presence of an additional PII item in the prompt. We observe a fivefold increase in the exact match rate for the email address when a phone number, which offers more specific information about the data subject, is provided in addition to the name. For phone numbers, we also observed an increase of more than double. This shows that adding information to the prompt that can be associated with the target PII elevates the leakage, and it supports the effectiveness of black-box probing that utilizes the data subject's linkable PII items. Furthermore, with larger beam sizes (Figure 4 (c)) and larger model sizes (Figure 4 (d)), the frequency of the target PII appearing in generated sentences also tends to rise. The increasing leakage with larger model sizes can be attributed to improved accuracy. This implies that as the current trend of scaling up large language models continues, the potential risks of PII leakage may also increase.

Figure 3: **Black-box probing results from a likelihood perspective.** Reconstruction vs. baseline likelihood of (a) structured PII and (b) unstructured PII, shown with the average likelihood and the p-value of the Wilcoxon signed-rank test. (c) A summary of the likelihoods using \(\gamma_{<k}\) defined in Equation 3.
### White-box probing results
In this section, we demonstrate white-box probing by presenting the leakage of the **phone number** given the other PII items in the structured quadruplet. We train 20 embedding vectors for the soft prompt by prepending them to a single query prompt used to generate the target phone number; we use an additional 128 quadruplets that are not included in the evaluation dataset. Please refer to the Appendix for the training details. With the trained soft prompts, we measure the likelihood and exact match ratios on the evaluation dataset. Figure 5 summarizes the results in terms of the number of training data points, the number of soft tokens, and the initialization type.
**Efficacy of soft prompt tuning.** Figure 5 illustrates the impact of the soft prompt on the exact match rate and reconstruction likelihood, in blue and orange, respectively. The results indicate a significant increase, from \(0.0047\%\) for black-box probing using five prompt templates to \(1.3\%\) when the soft prompt, learned from only 128 data points, is prepended to a single query prompt. The likelihood also increased by a large margin in the same setting. We speculate that the observed increase can be attributed to the soft prompt finding more optimal prompts that may not have been considered by humans during the construction of prompts in black-box probing.
**Effect of dataset size.** The white-box probing scenario assumes that a user (or a service provider) has access to a small portion of the training data. To see the impact of the amount of data used for tuning on the degree of leakage, soft prompts were trained using different numbers of triplets, specifically \([16,32,64,128,256,512]\). The results are depicted in Figure 5 (a). Even with 16 data points, a significant surge in leakage was observed: the exact match rate rose to \(0.12\%\), surpassing the exact match rate achieved with five hand-crafted prompts, with a similar gain in likelihood. As the training set size increases from 16 to 128, the exact match rate increases dramatically from 0.12% to 1.50%. This finding indicates that even with a small fraction of the training dataset, it is possible to refine prompts that effectively probe PII leakage in the LLM.
**Additional analysis of soft prompt tuning.** We also examine the impact of other factors on the leakage; Figure 5 (b) and (c) display the leakage levels according to these factors. As the number of soft tokens increases, the leakage also exhibits an increasing trend. This can be attributed to the enhanced expressiveness of the soft prompts, which improves as the number of parameters increases. Furthermore, different initialization schemes produce diverse outcomes. We investigated three initialization schemes: 1) the embedding of the word representing the specific type of target PII, i.e., "phone", which was the default setting throughout our experiments, 2) an embedding sampled from a uniform distribution \(\mathcal{U}(-1,1)\), and 3) the mean of all vocabulary embeddings. As illustrated in Figure 5 (c), the uniform and mean initialization schemes were unable to raise the leakage. In contrast, initializing with the PII type resulted in the most significant leakage.
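For concreteness, the three initialization schemes can be written as in the sketch below; the checkpoint, the number of soft tokens, and the PII-type word are illustrative assumptions.

```python
# Sketch of the three soft-prompt initialization schemes compared above.
# The checkpoint, number of soft tokens, and PII-type word are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
emb = model.get_input_embeddings().weight.detach()
L_s = 20

pii_type_id = tok("phone", add_special_tokens=False).input_ids[0]
init_pii_type = emb[pii_type_id].repeat(L_s, 1)               # 1) PII-type word embedding
init_uniform = torch.empty(L_s, emb.size(1)).uniform_(-1, 1)  # 2) U(-1, 1)
init_mean = emb.mean(dim=0, keepdim=True).repeat(L_s, 1)      # 3) mean of vocabulary embeddings
```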
**Transferability test.** If the soft embedding learned for one language model can be reused to probe a different language model, it opens up the possibility of applying the knowledge acquired from white-box probing to black-box probing. To assess the feasibility of this approach, we transferred the soft prompt learned for the OPT-1.3B model to OPT models of different scales, namely OPT-350M and OPT-2.7B. However, directly plugging the soft embedding trained on one model into another model is impossible due to the mismatch of embedding dimensions (e.g., 1,024 and 512 for OPT-1.3B and OPT-350M, respectively). To address this, we follow the two-step process of a previous approach [22]: we project the soft embedding to the closest hard tokens in terms of Euclidean distance and decode them into a raw string with the source model's tokenizer. The string is then concatenated ahead of the raw query text and fed into the target model.

Figure 4: **Black-box probing results from a string-match perspective.** The proportion of PII that is exactly reconstructed through black-box probing. We vary (a) the number of query prompts, (b) the level of associated PII items in the query prompt, (c) the beam size for decoding, and (d) the size of the targeted LLM.
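A sketch of this two-step projection is shown below, assuming a randomly initialized placeholder in place of the learned soft prompt; the nearest-neighbour projection and decoding steps follow the description above.

```python
# Sketch of transferring a learned soft prompt: project each soft embedding to
# its nearest vocabulary embedding (Euclidean distance) in the source model,
# decode to a string, and prepend that string to the raw query text for the
# target model. The random `soft` tensor is a placeholder for the learned prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

src_tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
src_model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
vocab_emb = src_model.get_input_embeddings().weight.detach()        # (V, d_src)

soft = torch.randn(20, vocab_emb.size(1))                           # placeholder soft prompt
dists = torch.cdist(soft, vocab_emb)                                # (L_s, V) Euclidean distances
hard_ids = dists.argmin(dim=1)                                      # nearest hard token per soft token
hard_prefix = src_tok.decode(hard_ids)

query = "The phone number of Jane Doe is"
transferred_prompt = hard_prefix + " " + query                      # fed to the target model as text
print(transferred_prompt)
```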
Table 1 demonstrates that the soft prompt learned from the OPT-1.3B model increases the leakage of the same type of PII in both the OPT-350M and OPT-2.7B models. The increase in leakage is also denoted with the multiplication symbol (\(\times\)), showing how many times the reconstruction likelihood is amplified when the soft prompt learned for OPT-1.3B is used with the other models. While there may not be a substantial difference from the exact match perspective, the potential for transferability is confirmed from the likelihood perspective. Future work could investigate white-box probing techniques that further enhance transferability.
## 5 Conclusion
This paper introduces ProPILE, a novel tool designed for probing PII leakage in LLMs. ProPILE encompasses two probing strategies: black-box probing for data subjects and white-box probing for LLM service providers. In the black-box approach, we strategically designed prompts and metrics so that data subjects can effectively probe whether their own PII is being leaked by an LLM. The white-box approach empowers LLM service providers to conduct investigations on their own in-house models, by leveraging the training data and model parameters to tune more potent prompts and thereby enable a deeper analysis of potential PII leakage. By probing the OPT-1.3B model, we made several observations. First, we found that the target PII item is generated with a significantly higher likelihood than a random PII item. Furthermore, white-box probing revealed a tighter worst-case estimate of PII leakage. We hope that our findings give data subjects and LLM service providers greater awareness and control over their own data on the web.
**Limitations.** The construction of the evaluation dataset exclusively involved the use of private information sourced from open-source datasets provided by large corporations. This approach ensures the ethical acquisition of data. However, it's important to acknowledge that the data collection process itself was heuristic in nature. Consequently, the evaluation dataset may contain instances of incorrectly associated data or noise. This could introduce a degree of uncertainty or potential inaccuracies, which must be taken into account when interpreting the results.
| Source | Target | Avg. Likelihood (Original) | Avg. Likelihood (Transfer) | \(\times\) | # Exact match (Original) | # Exact match (Transfer) |
|---|---|---|---|---|---|---|
| OPT-1.3B | OPT-350M | \(1.05\times 10^{-11}\) | \(1.08\times 10^{-10}\) | **7.5** | 0 | 0 |
| OPT-1.3B | OPT-1.3B | \(6.06\times 10^{-8}\) | \(3.47\times 10^{-6}\) | **57.3** | 5 | 3 |
| OPT-1.3B | OPT-2.7B | \(1.39\times 10^{-7}\) | \(2.18\times 10^{-6}\) | **15.6** | 14 | 15 |

Table 1: **Transferability of soft prompt.** Original denotes the black-box probing results using one query prompt, and Transfer denotes the probing results using the transferred soft prompt learned from the source model (OPT-1.3B). The \(\times\) column shows how much the leakage likelihood increases by using the transferred soft prompt.
Figure 5: **White-box probing results.** Leakage results on 10,000 unseen triplets according to (a) varying numbers of data points used for prompt tuning, (b) the number of soft tokens, and (c) different initialization types. Blue and orange denote the exact match rate and likelihood, respectively.
**Societal Impact.** We emphasize that our proposed probing strategies are not designed to facilitate or encourage the leakage of PII. Instead, our intention is to provide a framework that empowers both data subjects and LLM service providers to thoroughly assess the privacy state of current LLMs. By conducting such evaluations, stakeholders can gain insights into the privacy vulnerabilities and potential risks associated with LLMs prior to their deployment in a wider range of real-world applications. This proactive approach aims to raise awareness among users, enabling them to understand the security and privacy implications of LLM usage and take appropriate measures to safeguard their personal information.
## Acknowledgements
This work was supported by NAVER Corporation, the National Research Foundation of Korea (NRF) grants funded by the Korea government (Ministry of Science and ICT, MSIT) (2022R1A3B1077720 and 2022R1A5A708390811), Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (2021-0-01343: AI Graduate School Program, SNU), and the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, Seoul National University in 2023.
|
2306.07101
|
Dendrites and Efficiency: Optimizing Performance and Resource
Utilization
|
The brain is a highly efficient system evolved to achieve high performance
with limited resources. We propose that dendrites make information processing
and storage in the brain more efficient through the segregation of inputs and
their conditional integration via nonlinear events, the compartmentalization of
activity and plasticity and the binding of information through synapse
clustering. In real-world scenarios with limited energy and space, dendrites
help biological networks process natural stimuli on behavioral timescales,
perform the inference process on those stimuli in a context-specific manner,
and store the information in overlapping populations of neurons. A global
picture starts to emerge, in which dendrites help the brain achieve efficiency
through a combination of optimization strategies balancing the tradeoff between
performance and resource utilization.
|
Roman Makarov, Michalis Pagkalos, Panayiota Poirazi
|
2023-06-12T13:25:18Z
|
http://arxiv.org/abs/2306.07101v1
|
**Dendrites and Efficiency: Optimizing Performance and Resource Utilization**
## Abstract
The brain is a highly efficient system evolved to achieve high performance with limited resources. We propose that dendrites make information processing and storage in the brain more efficient through the segregation of inputs and their conditional integration via nonlinear events, the compartmentalization of activity and plasticity and the binding of information through synapse clustering. In real-world scenarios with limited energy and space, dendrites help biological networks process natural stimuli on behavioral timescales, perform the inference process on those stimuli in a context-specific manner, and store the information in overlapping populations of neurons. A global picture starts to emerge, in which dendrites help the brain achieve efficiency through a combination of optimization strategies balancing the tradeoff between performance and resource utilization.
## Introduction
For animals, survival in the wild depends heavily on their ability to perform well on behavioral tasks such as recognizing predators, interpreting social cues, and remembering safe routes to food and shelter. Therefore, their sensory systems must be optimized for processing natural stimuli on behavioral timescales. To achieve this, sensory information must be accurately represented in the form of neuronal activity and then integrated and interpreted in a contextually appropriate manner. Furthermore, important information must also be efficiently stored and retrieved for future use. However, these processes come at a cost. Neuronal communication is metabolically expensive and the capacity for memory storage in the brain is limited. As a result, brains have evolved a variety of strategies, such as predictive coding [1] and sparse firing [2, 3], to process and store information more efficiently. Growing evidence suggests that these strategies heavily rely on dendrites, the thin processes that extend from the cell bodies of neurons, to balance the tradeoff between high performance and resource savings (**Figure 1**).
Figure 1: Under evolutionary pressure, neurons in the brain must find a balance between high performance and resource savings. On one plate of this conceptual pair of scales, there is a demand to process, interpret and store information important for behavioral tasks. On the other plate, there are constraints on resources, such as energy and space. Here we propose that dendrites could help balance this tradeoff through various strategies, including hierarchical predictive coding, optimizing computations for natural stimuli, increasing expressivity, mitigating noise, and optimizing learning and storage capacity.
Dendrites of most excitatory neuron types receive thousands of synaptic inputs, distributed across their bifurcating branches. This morphological organization results in electrical and biochemical compartmentalization within a neuron. Thanks to a diverse reservoir of ionic and synaptic conductances, semi-independent dendritic compartments exhibit local regenerative events called dendritic spikes, that amplify postsynaptic potentials and mediate local plasticity. Dendritic events can either be isolated, promoting the formation of parallel computing units, or interact with each other while propagating to the soma, resulting in global activity. Together, the segregation, amplification, and nonlinear integration of activity within dendrites give rise to a diverse range of input-output transformations that a neuron can perform. Due to this computational complexity, dendrites are proposed to optimize neural processing in hierarchical networks, mitigate variability (noise) inherent to biological systems and enable neurons to process natural stimuli on behavioral timescales.
In this article, we review the latest findings on the role of dendrites in the efficient processing and storage of information. First, we discuss how dendrites underlie the segregation of inputs and the compartmentalization of activity in principal neurons. Then, we discuss the role of dendrites in specific optimization strategies, such as hierarchical predictive coding, optimizing computations for natural stimuli, increasing expressivity, mitigating noise, and increasing storage capacity. For each of these strategies we highlight how dendrites help to improve computational performance and save valuable resources.
## Compartmentalization as a cornerstone of dendritic optimization strategies
Due to their anatomical and biophysical characteristics, dendritic trees exhibit electrical and biochemical compartmentalization, which gives rise to local computations and plasticity. As a result, dendritic compartments can act as semi-independent thresholded units that nonlinearly integrate synaptic inputs before sending them to the soma [4, 5, 6] and have been suggested to serve as functional units of computation in the brain [7]. Payeur et al. (2019) [8] delineate four classes of complex dendritic processing, among which information selection (when only a small portion of inputs is chosen for propagation) and routing (when the relative potency of dendritic subunits is modulated) heavily rely on compartmentalization. In this article we emphasize the importance of dendritic compartmentalization for efficient processing and storage of information in neuronal networks. We first show how compartmentalization is implemented in dendrites and then discuss particular optimization strategies based on compartmentalization in pyramidal neurons.
The elaborate, tree-like structure of dendrites has two important implications. First, it enables the distribution of synaptic inputs among different branches, where they can be processed in different ways. For example, spatial segregation in cortical pyramidal neurons allows for the functional separation of input signals that serve as driver vs. modulatory ones, carrying sensory and contextual information, respectively (**Figure 2a**). In cortical regions, feedforward (driver) inputs typically target the basal dendrites of pyramidal neurons while feedback (modulatory) inputs from higher areas are more commonly found in apical dendrites [9]. The coincidence of feedforward and feedback inputs is thought to associate different information streams [10] and determine the selectivity of neuronal responses to important stimuli [11]. Besides feedback connections, the modulatory input can be also represented by horizontal connections [12], thalamic inputs [13], inhibition [14, 15, 16] and neuromodulation [17].
Second, the tree-like structure of dendrites shapes the propagation of electrical signals through the cell, leading to subcellular compartmentalization of electrical activity. Postsynaptic signals enter a complex maze of cable-like branches with multiple bifurcation points, where they attenuate and often end up confined within a single branch [18]. However, active voltage-dependent conductances, which are abundant in dendrites, can help to overcome this phenomenon. Large depolarizations can evoke local regenerative events, such as dendritic Na+ spikes, and long-lasting NMDA and Ca2+ plateau potentials [19]. While some of these events are restricted within specific dendritic branches, others can be global, involving large parts of the dendritic tree (for a review see Stuyt et al., 2022 [20]; **Figure 2b**).

Figure 2: **Compartmentalization in pyramidal neurons**
The exact degree of dendritic electrical compartmentalization remains a subject of debate. Early in vitro and computational studies suggested a high degree of compartmentalization while recent in vivo experiments claim that isolated dendritic events are much rarer than expected (for a review see Francioni et al., 2022 [21]). These new observations suggest an important role of somato-dendritic coupling, in line with the earlier studies on the segregation of inputs, portraying neurons as being organized into a few functional domains. In this paradigm, global somato-dendritic events, such as backpropagation-activated calcium spikes (BACs) serve as the mechanism for integrating modulatory and driver signals at the cellular level [20]. Somato-dendritic coupling has been suggested to play a major role in sensory detection [11], and explain conscious processing [22] and the effects of anesthesia [23]. Notably, through computational modeling Wybo et al. (2019) [24] demonstrated that pyramidal neurons comprise substantially fewer functional subunits than dendritic branches. Moreover, their results suggest that compartmentalization can be dynamically regulated. For example, topology (subunit extent and number) can be modified by balanced inputs or shunting inhibition in a context-dependent manner.
Overall, dendritic compartmentalization provides a powerful mechanism for flexibly modulating the neuronal output in response to spatio-temporally organized synaptic input. The ways in which compartmentalization underlies efficient neuronal processing are detailed in the following paragraphs.
## Information processing
### Optimizing hierarchical predictive coding
Hierarchical coding refers to the organization of neural processing in a hierarchy, from low-level sensory features to high-level abstractions. In the visual cortex, for example, primary areas process simple edges and lines, while higher areas process more complex shapes and scenes. In this way, features common for multiple visual objects can be reused across the hierarchy, reducing the need for redundant processing. Moreover, hierarchical coding allows for the top-down control of sensory processing and nonlinear mixing of information from different sensory modalities to create a more complete representation of the
environment. It is believed that one of the primary computations performed by the cortex is an inference process, where bottom-up (sensory) information is combined with top-down expectations from prior knowledge to find a consistent explanation of sensory data [25]. Therefore, hierarchical coding is crucial for the brain to efficiently process and interpret sensory information, and to generate flexible and adaptive behaviors.
Recent studies suggest that dendrites can offer a mechanistic explanation of how hierarchical predictive coding (hPC) is implemented in the brain. Classical hierarchical predictive coding theory relies on error computations within each layer. However, Mikulasch et al. (2023) [25] have proposed a novel approach by shifting error computation from a separate neural population to the dendritic compartments of layer 2/3 pyramidal neurons (**Figure 3a**). This allows error computation to be performed in the voltage dynamics of individual dendritic compartments. According to this theory, a cortical neuron receives input from the lower level in hierarchy on its basal dendrites. Given enough "prior knowledge" the neuron can predict feedforward input and cancel it through lateral inhibition by parvalbumin-positive (PV) interneurons. On the other hand, novel unexpected inputs cannot be balanced by inhibition, so the signal reaches the soma. Thus, unexpected input results in the bottom-up prediction error. At the same time, the neuron receives predictions of its own activity from higher areas impinging on its apical dendrite. A mismatch between apical prediction signal and somatic spiking results in top-down prediction error [26]. Overall, the dendritic hPC theory highlights that by assigning the inference process to dendrites, rather than neuronal populations, the brain can save a significant amount of resources that can be allocated for other tasks.
### Processing natural stimuli on behavioral time scales
Beside feedback connections from the higher-order areas, a substantial fraction of horizontal connections also carry contextual information. For example, in the primary visual cortex less than 10% of the inputs to layer 2/3 pyramidal neurons originate from layer 4 feedforward (FF) connections, whereas more than 60% of inputs come from the horizontal connections within the layer. Recently, the role of these connections has been elucidated in terms of optimizing processing for natural stimuli [27]. The efficient coding hypothesis posits that sensory neurons achieve optimal encoding by matching their tuning properties to the statistics of the natural stimuli they encode. For example, neurons in the visual (or auditory) system should be optimized for coding images (or sounds) that are representative of those found in nature. Jin et al. (2022) [27] used the statistics of natural images to derive a function that a neuron should use to compute boundary probability (**Figure 3b**). This function describes how inputs
within the neuron's classical receptive field (FF) and those outside of it (horizontal) interact to modulate its response. The authors hypothesized that allocating FF inputs to distal basal dendrites and contextual inputs to their proximal parts could result in the desired input-output (I/O) integration function. Indeed, this allocation resulted in nonlinear summation of inputs due to NMDAR-dependent spatial interactions in computational models of neurons. The resulting asymmetric sigmoidal response function closely matched the boundary probability derived from natural images. This study predicts that, dendrites of pyramidal neurons provide a powerful computing substrate through which horizontal contextual connections can modulate neuronal integration function, to optimize it for natural stimuli statistics.
In addition to their spatial statistics, the temporal structure of natural stimuli also imposes some constraints on optimal neuronal processing. Neurons must be able to integrate, process, and retain information on behavioral timescales, while somatic spiking occurs on the scale of milliseconds. Neurons equipped with active dendritic conductances can overcome this discrepancy by generating long-lasting plateau potentials that extend on much longer timescales than individual somatic spikes. Leugering et al., (2023) [28] have proposed a theory of computations established by dendritic plateau potentials. According to this theory, the plateau potentials are confined to individual compartments that receive input from specific neuronal populations. The interaction of plateaus in different dendritic compartments could be used to represent sequential activations of different neuronal populations, allowing a neuron to detect complex patterns of stimuli. For example, the sequential activation of compartments receiving inputs from specific populations of place cells, can encode a more abstract notion of a path through the environment (**Figure 3c**). Representing such complex sequences with a single somatic spike results in an extremely sparse and efficient code.
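As a toy illustration of this idea, the sketch below treats each dendritic compartment as responding to one input population with a plateau that outlasts the triggering event, and reports a somatic spike only when the populations arrive in the expected order while the preceding plateau is still active; the plateau duration and event times are arbitrary choices, not fitted parameters.

```python
# Toy sketch of sequence detection via dendritic plateau potentials: each
# compartment responds to one input population with a plateau that outlasts
# the triggering event, and the "soma" spikes only if the populations arrive
# in the expected order while the previous plateau is still active.
# The plateau duration and event times are arbitrary illustrative choices.
PLATEAU_DURATION = 5   # timesteps a plateau outlasts its triggering input

def detects_sequence(events, expected_order):
    """events: list of (time, population) pairs; returns True if the
    populations in expected_order arrive while the chain of plateaus holds."""
    deadline = None
    stage = 0
    for t, pop in sorted(events):
        if pop == expected_order[stage] and (deadline is None or t <= deadline):
            deadline = t + PLATEAU_DURATION   # plateau starts in this compartment
            stage += 1
            if stage == len(expected_order):
                return True                   # all compartments active in order -> somatic spike
    return False

print(detects_sequence([(0, "A"), (3, "B"), (6, "C")], ["A", "B", "C"]))   # True
print(detects_sequence([(0, "A"), (9, "B"), (12, "C")], ["A", "B", "C"]))  # False: first plateau decayed
```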
### Increasing neuronal expressivity
Expressivity refers to the ability of a neural network to represent multiple functions or mappings between inputs and outputs, which determines the range of problems that a network can solve (**Figure 3d**). Poirazi and Mel (2003) [4] demonstrated that predicting the input-output function of a single biological pyramidal neuron requires an artificial neural network (ANN) comprising at least two layers of sigmoidal subunits. Specifically, a hidden layer that corresponds to a cell's nonlinear dendrites and an output layer that represents the thresholding that occurs at the soma. These findings suggested that biological neurons, due to their active dendrites, have comparable expressivity to multilayer ANNs. Interestingly, recent experimental studies [29, 30] show that certain dendritic calcium channels can enable dendrites to solve linearly non-separable problems such as the exclusive OR (XOR) problem.
Figure 3: **(a)** A cortical microcircuit schematic illustrating how dendrites facilitate hierarchical predictive coding [25]. Dendrites enable computation of top-down and bottom-up errors within pyramidal (PYR) neurons. Basal dendrites receive inputs from lower-level neurons, which the neuron tries to predict and cancel through lateral inhibition via parvalbumin-positive (PV) interneurons. Unpredicted inputs bypass inhibition, causing bottom-up prediction errors. Apical dendrites receive predictions from higher-level neurons that might be gated by somatostatin-positive (SST) interneurons. A mismatch between the apical prediction and somatic spiking results in top-down prediction errors.
**(b)** Interaction of feedforward and horizontal inputs on basal dendrites offers an adaptation to natural visual stimuli [27]. Feedforward connections from the neuron's classical receptive field (CRF) arrive on the distal ends of basal dendrites, while horizontal connections, carrying contextual information from adjacent cells within a cortical layer, arrive on the proximal ends. The interplay of activity of proximal and distal inputs allows to recreate the boundary probability function derived from natural images through changes in the firing rate of presynaptic neurons.
**(c)** Dendritic compartments sequentially generate plateau potentials (left) in response to inputs from different neuronal populations of place cells (center) allowing a neuron to detect a complex sequence of events on behavioral timescales [28]. When one of the sequential compartments (pink) fails to evoke a plateau potential the neuron remains silent (right), thus increasing selectivity for the relevant sequence.
**(d)** Dendrites equipped with active conductances, that enable generating local plateau potentials, increase expressivity of individual neurons, allowing them to solve linearly non-separable problems such as the exclusive OR (XOR) problem.
**(e)** Nonlinear dendrites, in contrast to linear ones, provide effective strategies for mitigating neuronal noise. While both linear and nonlinear dendrites can reduce Gaussian noise, nonlinear dendrites excel in decreasing (mis)-classification errors [33], a common type of noise in the brain. These errors occur when neurons spike in response to non-preferred (null) stimuli or remain silent when exposed to preferred ones.
These action potentials exhibit maximum amplitude for stimuli that are at threshold-level, but are dampened for stronger stimuli. This discovery is significant as it expands the range of computations that can be performed by individual neurons, without the need for multi-layered networks which were thought to be necessary for such tasks.
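To make the expressivity argument concrete, here is a small toy sketch (hypothetical code, not taken from the cited studies; the weights and the sigmoidal branch nonlinearity are illustrative choices) contrasting a linear point neuron with a two-branch unit whose dendrites apply a local saturating nonlinearity before the somatic sum. Only the dendritic unit reproduces the XOR truth table.

```python
import numpy as np

def point_neuron(x, w=np.array([1.0, 1.0]), theta=0.5):
    """Linear point neuron: fires if the weighted input sum crosses threshold."""
    return int(w @ x > theta)

def dendritic_neuron(x, theta_soma=0.5):
    """Two-branch unit: each branch applies a saturating, plateau-like
    nonlinearity to its own weighted input before the somatic sum.
    Branch weights are hand-picked so that each branch prefers exactly
    one input and is suppressed by the other."""
    branch_w = np.array([[2.0, -2.0],    # branch 1: driven by x1, vetoed by x2
                         [-2.0, 2.0]])   # branch 2: driven by x2, vetoed by x1
    branch_out = 1.0 / (1.0 + np.exp(-4.0 * (branch_w @ x - 0.5)))  # local nonlinearity
    return int(branch_out.sum() > theta_soma)

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
for x, target in xor_table.items():
    x = np.asarray(x, dtype=float)
    print(tuple(x), "XOR:", target,
          "linear:", point_neuron(x), "dendritic:", dendritic_neuron(x))
```

With any single linear threshold the (1, 1) pattern is misclassified, while the branch-local nonlinearities give the unit the extra degree of freedom needed to separate all four patterns.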
While earlier studies used artificial neural networks (ANNs) to replicate the time-averaged firing rate of realistic neuron models, a recent study by Beniaguev et al. (2021) [31] took a different approach. They implemented a temporally convolutional deep neural network (DNN) that could also capture the temporal aspect of neuronal activity with millisecond precision. The study demonstrated that a network's depth (number of its layers) can serve as a proxy for assessing a neuron's expressivity. Specifically, to capture the input-output function of a realistic model of layer 5 pyramidal neuron (LSPN), it required a DNN consisting of five to eight layers. Removing NMDA receptors substantially decreased the neuron's complexity, as its output could be predicted by a single hidden layer. This study highlights that a neuron's expressivity is directly dependent on the presence of active dendritic mechanisms.
Wybo et al. (2022 arXiv) [32] further supported this idea by showing that NMDA spikes could modulate the I/O relationship in an abstract LSPN model receiving perisomatic FF inputs modulated by FB dendritic inputs. The number of dendritic compartments with NMDA spikes determined the slope and threshold of a rectified linear unit (ReLU) function, representing the neuron's I/O relationship. Remarkably, in this study NMDA modulation allowed for learning multiple tasks by modifying only FB weights, while keeping the previously learned FF weights untouched. These findings suggest that a highly expressive neuron can adapt to different contexts by switching between different I/O regimes. On a network level, increased expressivity of individual neurons can promote sparse coding, which reduces the number of neurons and overall computational cost required for a given task.
### Mitigating the negative effects of neuronal noise
Variability is an inherent property of neuronal activity. It can arise from numerous sources, both external environmental factors during sensory transduction, and intrinsic properties of the nervous system. The latter include the stochasticity of biochemical processes such as channel gating, vesicle release, and neurotransmitter diffusion. As the main purpose of neuronal activity is to communicate useful information, these random disturbances can be thought of as "noise" that obscures the relevant signal. Noise can interfere with the precision and reliability of neural communication, leading to a loss of information and affecting the efficiency of neuronal coding.
Previous studies have focused on linearly integrating neurons, assuming that the primary mechanism for denoising is the averaging of a large number of synaptic inputs. However, these studies did not take into account the presence of active ionic conductances in dendrites that can generate dendritic spikes. Poleg-Polsky (2019) [33] demonstrated that active nonlinear dendrites, in contrast to linear ones, are more efficient in the presence of misclassification errors (**Figure 3e**). This type of noise -- when a neuron spikes in response to a non-preferred stimulus or remains silent when exposed to a preferred one -- is ubiquitous in the brain and resistant to averaging. The additional thresholding step introduced by dendritic spike generation, allows neurons to mitigate it more effectively. In particular, neurons with active dendrites outperform linearly integrating neurons in a directional discrimination task in the presence of directional misclassification errors. Thus, dendritic spikes challenge noise tolerance rules derived for linear neurons.
Although noise is typically assumed to degrade performance, it can in fact have a positive effect on information processing. A common example is the amplification of weak signals through stochastic resonance, which occurs in the presence of a certain level of noise. Specifically, weak signals that ride on
top of noisy background activity can more easily cross the threshold for dendritic spiking, thus propagating to the soma. This mechanism enhances information transmission between neurons, allowing them to communicate more information per action potential. As a result, it reduces the metabolic cost and increases energy efficiency in coding [34]. Noise could also play an important role in the context of hierarchical predictive coding. Specifically, noise can relax the constraints on the delay in inhibition, when balancing bottom-up inputs and allow for neural sampling [25]. Finally, noise can produce new dynamics such as a bistability in I/O transformation due to slow calcium activation -- a phenomenon absent in a noiseless condition [35].
## Information storage
### Optimizing learning and storage capacity
In addition to efficient processing, dendrites enhance learning and memory storage. Poirazi and Mel (2001) [36] demonstrated that nonlinear integration in active dendrites allows a model neuron to distinguish more input configurations distributed across its branches, providing a distinct advantage over linear summation. Moreover, through structural plasticity, neurons have access to an additional high-capacity storage reservoir, namely the arrangement of synaptic connections, instead of relying solely on changes in synaptic strength. As a result, the combination of nonlinear dendrites and structural plasticity increases the storage capacity of a model neuron by an order of magnitude. Similar benefits were observed in hippocampal network models. When fast spiking interneurons were equipped with nonlinear dendritic integration, the encoding of a single memory required significantly fewer neurons compared to the linear case, thus providing important resource savings [37] (**Figure 4a**). Moreover, nonlinear dendritic integration of separate information streams increased the capacity to store and retrieve memory engrams in two-compartment models of hippocampal pyramidal cells [38]. Notably, the interference of memories, which can lead to the loss of previously stored memories, was minimal even when a large number of highly similar memories were stored.
While memory interference can lead to confusion and hinder survival, binding of information is critical for the formation of associative and episodic memories. Thus, brains must optimize the tradeoff between memory binding and interference. Synaptic clustering is a potential solution to this problem (**Figure 4b**). Numerous studies have shown that synaptic clustering can emerge from the co-activation of nearby inputs, due to their efficiency in activating local non-linearities and facilitating cooperative plasticity (for a review see: Kastellakis et al., 2015; 2019 [39, 40]). Such clustering binds together separate
information streams, allowing the formation of associative memories [41, 42] and can link different memories separated by hours [42, 43]. Interference, in this case, can be minimized by allocating functionally different inputs to separate branches [41, 44].
Finally, the ability to rewire through structural plasticity has been associated with faster learning. CCR5 knock-out mice, which exhibit higher than normal spine turnover dynamics, learn a fear conditioning task much faster than controls. Faster learning is likely due to an increase in the formation and stability of synaptic clusters, which is associated with sparser encoding. Stable spine clusters also serve as a means for protecting memories from subsequent modifications through stochastic rewiring [45]. Synaptic turnover was also shown to increase the efficiency of learning in bio-inspired spiking neural networks (SNNs) applied to machine learning tasks [46]. Specifically, learning to discriminate between two classes of MNIST digits was faster and achieved using significantly fewer synaptic weights in the presence of synaptic turnover compared to a model without synaptic turnover. This study provided exciting new findings on the means by which synaptic turnover can improve the efficiency of learning. Overall, the combination of nonlinear dendrites and structural plasticity can optimize learning, storage, and discrimination of memories through the formation of stable synaptic clusters. Given the space limitations of the neuronal substrate, storing more memories in the same population of neurons, and achieving this storage via the use of fewer synapses, is significantly more resource-efficient.
Figure 4: **Efficient storage of information in pyramidal neurons.**
## Conclusion
Throughout nature, simple laws give rise to emergent complex behaviors, from protein folding to the organization of ecosystems. These laws often revolve around the principles of efficiency and energy saving. The brain offers a particularly compelling example of such a complex system, thus studying efficiency in the brain might be crucial for understanding it. Moreover, by emphasizing the importance of efficiency, research in neuroscience shifts the focus from asking "how" questions to inquiring "why" the brain works the way it does, ultimately fostering the development of a more comprehensive theoretical framework.
The study of efficiency in the brain has a long history, with Barlow's pioneering work from half a century ago proposing that neurons communicate as much information as possible through as few spikes as possible [47]. However, while revealing important principles of efficient coding, early works did not take into account the elaborate dendritic trees that neurons possess. Over the last few decades, computational and experimental studies have demonstrated that dendrites do not merely convey synaptic inputs to the soma, but perform complex computations due to their nonlinear properties. In this review, we presented multiple pieces of evidence that nonlinear dendritic computations increase the efficiency of information processing and storage in cortical and hippocampal pyramidal neurons. While dendrites may not be strictly necessary for abstract computations [48], their evolutionary advantage becomes clear in real-world scenarios when resources such as energy and space are limited.
These ideas not only expand our understanding of biological networks, but also have far-reaching implications for the fields of artificial intelligence and neuromorphic computing. According to recent research, incorporating dendritic computational principles into artificial neural networks [49] or neuromorphic systems [50, 51] substantially enhances their efficiency compared to classical-architecture-based systems. What lies ahead is an exciting interplay between neuroscience, machine learning and hardware experts, so as to capitalize on the wisdom of the brain and fully exploit the properties of dendrites to advance the efficiency of artificial systems.
## Conflict of interest
The authors declare no conflict of interest.
## Acknowledgements
We would like to thank Dr. Spyridon Chavlis and other members of the Poirazi lab for their valuable feedback on the manuscript. This work was supported by NIH (1R01MH124867-02), the European Commission (H2020-FETOPEN-2018-2019-2020-01, NEUREKA GA-863245 and H2020 MSCA ITN Project SmartNets GA-860949), and the Einstein Foundation Berlin, Germany (visiting fellowship to PP, EVF-2019-508).
|
2307.00547
|
Is Risk-Sensitive Reinforcement Learning Properly Resolved?
|
Due to the nature of risk management in learning applicable policies,
risk-sensitive reinforcement learning (RSRL) has been realized as an important
direction. RSRL is usually achieved by learning risk-sensitive objectives
characterized by various risk measures, under the framework of distributional
reinforcement learning. However, it remains unclear if the distributional
Bellman operator properly optimizes the RSRL objective in the sense of risk
measures. In this paper, we prove that the existing RSRL methods do not achieve
unbiased optimization and can not guarantee optimality or even improvements
regarding risk measures over accumulated return distributions. To remedy this
issue, we further propose a novel algorithm, namely Trajectory Q-Learning
(TQL), for RSRL problems with provable convergence to the optimal policy. Based
on our new learning architecture, we are free to introduce a general and
practical implementation for different risk measures to learn disparate
risk-sensitive policies. In the experiments, we verify the learnability of our
algorithm and show how our method effectively achieves better performances
toward risk-sensitive objectives.
|
Ruiwen Zhou, Minghuan Liu, Kan Ren, Xufang Luo, Weinan Zhang, Dongsheng Li
|
2023-07-02T11:47:21Z
|
http://arxiv.org/abs/2307.00547v1
|
# Is Risk-Sensitive Reinforcement Learning Properly Resolved?
###### Abstract
Due to the nature of risk management in learning applicable policies, risk-sensitive reinforcement learning (RSRL) has been realized as an important direction. RSRL is usually achieved by learning risk-sensitive objectives characterized by various risk measures, under the framework of distributional reinforcement learning. However, it remains unclear if the distributional Bellman operator properly optimizes the RSRL objective in the sense of risk measures. In this paper, we prove that the existing RSRL methods do not achieve unbiased optimization and can not guarantee optimality or even improvements regarding risk measures over accumulated return distributions. To remedy this issue, we further propose a novel algorithm, namely Trajectory Q-Learning (TQL), for RSRL problems with provable convergence to the optimal policy. Based on our new learning architecture, we are free to introduce a general and practical implementation for different risk measures to learn disparate risk-sensitive policies. In the experiments, we verify the learnability of our algorithm and show how our method effectively achieves better performances toward risk-sensitive objectives.
We consider a Markov Decision Process (MDP) defined by the tuple \((\mathcal{S},\mathcal{A},\mathcal{P},r,\gamma)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(\mathcal{P}(\cdot|s,a)\) is the transition function and when it is deterministic we use \(s^{\prime}=M(s,a)\) to represent the transition, \(r(s,a)\) is the reward function, and \(\gamma\) denotes the discount factor. The return \(Z^{\pi}(s,a)=\sum_{t=0}^{\infty}\gamma^{t}r\left(s_{t},a_{t}\right)\) is a random variable representing the sum of the discounted rewards. The history \(h_{t}=\{s_{0},a_{0},\cdots,s_{t}\}\) is a state-action sequence sampled by the agent in the environment, and its space is \(\mathcal{H}=\bigcup_{t}\left[\left(\prod_{i=0}^{t-1}(\mathcal{S}\times \mathcal{A})\right)\times\mathcal{S}\right]\). The objective of reinforcement learning (RL) is to learn a policy that performs the action \(a\sim\pi\) on a given state or history so as to maximize the expected cumulative discounted reward \(\mathbb{E}_{\pi}\left[Z^{\pi}\left(s,a\right)\right]\). The optimization typically requires computing the state-action value function \(Q(s,a)=\mathbb{E}_{\pi}\left[Z^{\pi}(s,a)\right]\), which can be characterized by the Bellman operator \(\mathcal{T}_{B}^{\pi}\): \(\mathcal{T}_{B}^{\pi}Q^{\pi}\left(s,a\right):=\mathbb{E}\left[r\left(s,a\right)\right]+\gamma\mathbb{E}_{s^{\prime}\sim\mathcal{P},a^{\prime}\sim\pi}\left[Q^{\pi}\left(s^{\prime},a^{\prime}\right)\right]\). The optimal policies can be obtained by learning the optimal value \(Q^{*}=Q^{\pi^{*}}\) through the Bellman optimality operator \(\mathcal{T}_{B}^{*}\): \(\mathcal{T}_{B}^{*}Q\left(s,a\right):=\mathbb{E}\left[r\left(s,a\right)\right]+\gamma\mathbb{E}_{\mathcal{P}}\max_{a^{\prime}}Q\left(s^{\prime},a^{\prime}\right)\).
Instead of utilizing a scalar value function \(Q^{\pi}\), which can be seen as optimizing the expectation of the distribution over returns, distributional RL considers modeling the whole distribution (Bellemare et al., 2017; Dabney et al., 2018). From a distributional perspective, we regard \(Z^{\pi}\sim\mathcal{Z}\) as a mapping from state-action pairs to distributions over returns, named the value distribution. Analogous to traditional RL, the goal is seeking a policy that maximizes the expected return over trajectories:
\[\pi^{*}\in\operatorname*{arg\,max}_{\pi}\ \mathbb{E}_{S_{0},A_{0}\sim\pi( \cdot|S_{0})}\left[Z^{\pi}\left(S_{0},A_{0}\right)\right]\, \tag{1}\]
where \(Z^{\pi}(S_{0},\cdot)\) represents the return distribution of the trajectory starting from a random initial state \(S_{0}\) and following \(\pi\). We can define a distributional Bellman operator \(\mathcal{T}^{\pi}\) that estimates the return distribution \(Z^{\pi}\)
\[\mathcal{T}^{\pi}Z\left(s,a\right)\mathop{:=}^{D}R\left(s,a\right)+\gamma Z \left(S^{\prime},A^{\prime}\right)\, \tag{2}\]
where \(A\mathop{:=}^{D}B\) means that random variables \(A\) and \(B\) follows the same distribution, \(R(s,a)\) is reward distribution, \(S^{\prime}\sim\mathcal{P}\left(\cdot|s,a\right)\) and \(A^{\prime}\sim\pi\left(\cdot|s^{\prime}\right)\). Correspondingly, the distributional Bellman optimality operator is
\[\mathcal{T}^{*}Z\left(s,a\right)\mathop{:=}^{D}R\left(s,a\right)+\gamma Z \left(S^{\prime},\operatorname*{arg\,max}_{a^{\prime}\in\mathcal{A}}\mathbb{E }\left[Z\left(S^{\prime},a^{\prime}\right)\right]\right)\, \tag{3}\]
In this paper, we use capital letters to denote random variables and emphasize their random nature.
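For intuition, a minimal sample-based sketch of the distributional Bellman optimality backup of Eq. (3) is given below (hypothetical code; the dictionary-of-samples representation of \(Z\) and the toy numbers are ours). The greedy action at the successor state is chosen by the mean of its return distribution, and the target distribution is the reward plus the discounted successor samples.

```python
import numpy as np

def distributional_optimality_backup(Z, r, s_next, gamma=0.99):
    """One application of Eq. (3) for a single (s, a) with a deterministic
    transition to s_next. Z maps (state, action) -> np.ndarray of return samples;
    r is the reward observed for (s, a)."""
    actions = [a for (s, a) in Z if s == s_next]
    a_star = max(actions, key=lambda a: Z[(s_next, a)].mean())   # greedy by the mean
    return r + gamma * Z[(s_next, a_star)]                       # target samples for Z(s, a)

# toy example: two candidate actions at the successor state
rng = np.random.default_rng(0)
Z = {
    ("s1", 0): rng.normal(loc=1.0, scale=2.0, size=1000),   # risky, higher mean
    ("s1", 1): rng.normal(loc=0.5, scale=0.1, size=1000),   # safe, lower mean
}
target = distributional_optimality_backup(Z, r=-0.2, s_next="s1")
print("mean of the backed-up distribution:", round(float(target.mean()), 3))
```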
### Distortion Risk Measure and Risk-Sensitive RL
As a type of risk measure, a _distortion risk measure_(Wang, 1996)\(\beta\) for a random variable \(X\) with the cumulative distribution function (CDF) \(F_{X}(x)\) is defined as \(\beta[X]=\int_{-\infty}^{\infty}x\ \frac{\partial}{\partial x}\left(h_{\beta}\circ F_{X} \right)(x)\ \mathrm{d}x\), where \(h_{\beta}:[0,1]\rightarrow[0,1]\), called a distortion function, is a continuous non-decreasing function that transforms the CDF of \(X\) into \(\left(h_{\beta}\circ F_{X}\right)(x)\). Intuitively, a distortion function distorts the probability density of a random variable to give more weight to either higher or lower-risk events. For example, mean and CVaR are the most commonly used distortion risk measures. For readers unfamiliar with distortion risk measures, we list some typical examples and their definitions in Appendix A.1. Thereafter, risk-sensitive reinforcement learning (RSRL) is natural to combine various distortion risk measures with distributional RL for achieving a risk-sensitive behavior. In the sequel, a risk-sensitive optimal policy with distortion risk measure \(\beta\) can be defined as a deterministic policy \(\pi_{\beta}^{*}\) by the risk-sensitive return over random variable \(S_{0}\sim\rho_{0}\) representing the initial state:
\[\pi_{\beta}^{*}\in\operatorname*{arg\,max}_{\pi}\ \mathbb{E}_{S_{0}\sim\rho_{0},A_{0} \sim\pi}\left[\beta\left[Z^{\pi}\left(S_{0},A_{0}\right)\right]\right]. \tag{4}\]
We call Eq. (4) the RSRL objective, as it seeks a policy that maximizes the risk measure of accumulated return over whole trajectory given the initial state distribution. Such a formulation was initially implemented in (Dabney et al., 2018), by directly changing the objective to risk measures computed from the value distribution. Some other works define and optimize risk in a per-step manner (Bauerle and Ott, 2011; Chow and Ghavamzadeh, 2014; Tamar et al., 2015), but this paper only focuses on the RSRL objective, as it is the natural risk-sensitive extension of RL. For readers interested in per-step risk definition, we give a brief introduction in Appendix A.4.
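As a concrete illustration (hypothetical code; the estimator and the distortion functions below are standard textbook forms, not taken from the paper), a distortion risk measure can be estimated from return samples by reweighting the empirical quantile function with the increments of \(h_{\beta}\): the identity distortion recovers the mean, and \(h(\tau)=\min(\tau/\eta,1)\) recovers the lower-tail CVaR at level \(\eta\).

```python
import numpy as np

def distortion_risk(samples, h):
    """Estimate beta[X] = integral_0^1 F_X^{-1}(tau) dh(tau) from i.i.d. samples,
    where h is a non-decreasing distortion function on [0, 1]."""
    x = np.sort(samples)                      # empirical quantile function
    taus = np.linspace(0.0, 1.0, len(x) + 1)  # CDF grid edges
    weights = h(taus[1:]) - h(taus[:-1])      # probability mass given to each quantile
    return float(np.dot(weights, x))

mean_h = lambda tau: tau                                   # identity distortion -> mean
cvar_h = lambda tau, eta=0.1: np.minimum(tau / eta, 1.0)   # lower-tail CVaR at level eta

rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0, scale=1.0, size=200_000)
print("mean estimate :", round(distortion_risk(returns, mean_h), 3))   # ~ 0.0
print("CVaR estimate :", round(distortion_risk(returns, cvar_h), 3))   # ~ -1.75 for a standard normal
```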
### Metrics for Convergence
In distributional RL, since the value function is modeled as a distribution, researchers utilize a maximal form of the Wasserstein metric to establish the convergence of the distributional Bellman operators (Bellemare et al., 2017; Dabney et al., 2018)\(\bar{d}_{p}(Z_{1},Z_{2}):=\sup_{x,a}d_{p}(Z_{1}(x,a),Z_{2}(x,a))\), where \(Z_{1},Z_{2}\in\mathcal{Z}\) are two value distributions and \(\mathcal{Z}\) denotes the space of value distributions with bounded moments. The \(p\)-Wasserstein distance \(d_{p}\) is the \(L_{p}\) metric on inverse CDF, i.e., quantile functions (Muller, 1997), which is defined as an optimal transport metric for random variables \(U\) and \(V\) with quantile functions \(F_{U}^{-1}\) and \(F_{V}^{-1}\) respectively: \(d_{p}(U,V)=\left(\int_{0}^{1}|F_{U}^{-1}(\omega)-F_{V}^{-1}(\omega)|^{p}d\omega \right)^{1/p}\). This can be realized as the minimal cost of transporting mass to make the two distributions identical.
Requiring the distributional Bellman operators to converge in the metric of \(\bar{d}_{p}\) indicates that we must match the value distribution. While in policy evaluation the distributional Bellman operator \(\mathcal{T}^{\pi}\) (Eq. (2)) is shown to be a contraction in \(p\)-Wasserstein, in the control setting proving the distributional Bellman optimality operator \(\mathcal{T}^{*}\) (Eq. (3)) is hard (see Bellemare et al. (2017) for more details) and is not always
necessary in practical cases. Instead, we may only need to achieve convergence in the sense of distributional statistics or measures. For example, we only require the learned value distribution to have the same mean of the optimal value distribution so that the policy learns to achieve the optimal return expectation, or we match a risk measure (like CVaR) of the optimal value distribution to learn a policy that achieves the optimal risk preference of the return distribution. As these measures upon value distributions are real functions w.r.t. states and actions, the convergence of distributional Bellman operators only need to lie in the infinity norm, a \(L_{\infty}\) metric: \(\|f_{1}-f_{2}\|_{\infty}=\sup_{x,a}\|f_{1}(x,a)-f_{2}(x,a)\|\).
## 3 Mismatch in RSRL Optimization
Although the RSRL objective Eq. (4) seems reasonable, _existing dynamic programming (DP) style algorithms does not optimize Eq. (4) properly_, as we will reveal in this section.
### What are Current RSRL Algorithms Optimizing?
Recalling the RL objective Eq. (1) or considering setting \(\beta\) as mean in the RSRL objective Eq. (4), we can optimize \(\pi\) by Bellman equation in a dynamic programming style following the distributional Bellman optimality operator, i.e., there is a deterministic policy that maximizes the return at every single step for a given return distribution \(Z\):
\[\pi_{\texttt{mean}}(s)\in\operatorname*{arg\,max}_{a\in\mathcal{A}}\mathbb{E }\left[Z\left(s,a\right)\right]\;. \tag{5}\]
And the distributional Bellman optimality operator is equivalent to:
\[\mathcal{T}^{*}Z\left(s,a\right)\mathop{:=}^{D}R\left(s,a\right)+\gamma Z \left(S^{\prime},\pi_{\texttt{mean}}(S^{\prime})\right),\;S^{\prime}\sim \mathcal{P}\;. \tag{6}\]
Although Bellemare et al. (2017) have shown that \(\mathcal{T}^{*}\) itself is not a contraction in \(\bar{d}_{p}\) such that it cannot be used for finding the optimal value distribution, we can realize \(\mathcal{T}^{*}\) as a "contraction" in \(L_{\infty}\) from the perspective of mean, which induces a pointwise convergence. In other words, the mean of the value distribution \(\mathbb{E}Z\) will converge to the mean of the value distribution \(\mathbb{E}Z^{*}\).
**Lemma 3.1** (Value iteration theorem).: _Recursively applying the distributional Bellman optimality operator \(Z_{k+1}=\mathcal{T}^{*}Z_{k}\) on arbitrary value distribution \(Z_{0}\) solves the objective Eq. (4) when \(\beta\) is exactly mean where the optimal policy is obtained via Eq. (5), and for \(Z_{1},Z_{2}\in\mathcal{Z}\), we have:_
\[\|\mathbb{E}\mathcal{T}^{*}Z_{1}-\mathbb{E}\mathcal{T}^{*}Z_{2}\|_{\infty}\leq \gamma\|\mathbb{E}Z_{1}-\mathbb{E}Z_{2}\|_{\infty}\;, \tag{7}\]
_and in particular \(\mathbb{E}Z_{k}\to\mathbb{E}Z^{*}\) exponentially quickly._
The proof is just the proof of value iteration and Lemma 4 in Bellemare et al. (2017). For completeness, we include it in Appendix C.1. In the context of distributional RL, we can explain it as the mean of value will converge to the mean of optimal value. Motivated by and simply resemble Eq. (5), previous implementation like Dabney et al. (2018) and Urpi et al. (2021) optimized a risk-sensitive policy:
\[\pi_{\beta}(s)\in\operatorname*{arg\,max}_{a\in\mathcal{A}}\beta\left[Z\left( s,a\right)\right]\;. \tag{8}\]
From a practical perspective, this can be easily achieved by only a few modifications to distributional RL algorithms towards any given distortion risk \(\beta\), which implies a dynamic programming style updating following a risk-sensitive Bellman optimality operator \(\mathcal{T}^{*}_{\beta}\) w.r.t. risk measure \(\beta\):
\[\mathcal{T}^{*}_{\beta}Z(s,a)\mathop{:=}^{D}R(s,a)+\gamma Z(S^{\prime},A^{ \prime})\;, \tag{9}\]
where \(S^{\prime}\sim\mathcal{P}(\cdot|s,a)\) and \(A^{\prime}\sim\pi_{\beta}(\cdot|s^{\prime})\) are random variables. Note that the optimal risk-sensitive policy defined in Eq. (8) is generally different from Eq. (4). The key difference is that Eq. (8) tends to maximize the risk measure everywhere inside the MDP, yet Eq. (4) only requires finding a policy that can maximize the risk measure of trajectories started from the initial state \(S_{0}\). Although this is equivalent when \(\beta\) is mean, when it is not, the equivalence does not ever hold when updating follows \(\mathcal{T}^{*}_{\beta}\) in a dynamic programming style, which leads to the divergence in RSRL optimization, as we will reveal below.
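In code, the appeal of Eq. (8)-(10) is that only the greedy-action criterion changes relative to the mean-greedy backup of Eq. (5)-(6); a minimal hypothetical sketch with CVaR standing in for a generic \(\beta\) is shown below (the sample-based representation and toy numbers are ours).

```python
import numpy as np

def cvar(samples, eta=0.1):
    """Lower-tail CVaR of an empirical return distribution."""
    x = np.sort(samples)
    k = max(1, int(np.ceil(eta * len(x))))
    return float(x[:k].mean())

def risk_greedy_backup(Z, r, s_next, beta=cvar, gamma=0.99):
    """One application of the operator in Eq. (10): identical to the standard
    distributional backup except that the successor action maximizes
    beta[Z(s', .)] instead of the mean."""
    actions = [a for (s, a) in Z if s == s_next]
    a_beta = max(actions, key=lambda a: beta(Z[(s_next, a)]))
    return r + gamma * Z[(s_next, a_beta)]

rng = np.random.default_rng(0)
Z = {("s1", 0): rng.normal(1.0, 3.0, 5_000),    # higher mean, heavy downside
     ("s1", 1): rng.normal(0.5, 0.1, 5_000)}    # lower mean, almost no downside
print(round(cvar(Z[("s1", 0)]), 2), round(cvar(Z[("s1", 1)]), 2))  # the safe action wins under CVaR
```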
### \(\mathcal{T}^{*}_{\beta}\) Leads to Biased Optimization
\(\mathcal{T}^{*}_{\beta}\) **is not contraction except \(\beta\) is mean.** To show why \(\mathcal{T}^{*}_{\beta}\) leads to biased optimization, we first provide an analysis that optimizing towards the Bellman optimality operator \(\mathcal{T}^{*}_{\beta}\) w.r.t. risk measure \(\beta\) does not converge at all, _i.e._, there is no contraction property for \(\mathcal{T}^{*}_{\beta}\). For simplicity and starting from the easiest case, in the rest of this paper, we mainly discuss deterministic dynamics, i.e., instead of \(s^{\prime}\sim\mathcal{P}(\cdot|s,a)\), we simply consider \(s^{\prime}=M(s,a)\) and thus \(\mathcal{T}_{\beta}\) becomes:
\[\mathcal{T}^{*}_{\beta}Z(s,a)\mathop{:=}^{D}R(s,a)+\gamma Z(s^{\prime},A^{ \prime}),A^{\prime}\sim\pi_{\beta}(s^{\prime})\;. \tag{10}\]
We already know that \(\mathcal{T}^{*}_{\beta}\) cannot be a contraction in \(\bar{d}_{p}\), but different from Lemma 3.1, even from the perspective of the risk measure \(\beta\) if \(\beta\) is not mean, it still cannot be realized as a "contraction" in \(L_{\infty}\); in other words, the risk measure of the value distribution \(\beta[Z]\) is not guaranteed to converge to \(\beta\) of the value distribution \(\beta[Z^{*}]\). Thus, \(\mathcal{T}^{*}_{\beta}\) does not help to find an optimal solution to solve the RSRL objective Eq. (4).
**Theorem 3.2**.: _Recursively applying risk-sensitive Bellman optimality operator \(\mathcal{T}^{*}_{\beta}\) w.r.t. risk measure \(\beta\) does not solve the RSRL objective Eq. (4) and \(\beta[Z_{k}]\) is not guaranteed to converge to \(\beta[Z^{*}]\) if \(\beta\) is not mean or an affine in mean._
The formal proof can be referred to Appendix C.2. The above theorem of no contraction indicates that optimizing
towards \(\mathcal{T}_{\beta}\) may lead to arbitrarily worse solutions than the optimal solution of the RSRL objective Eq. (4).
To better understand how the problem occurs, let us consider a naive contradictive example on a 3-state MDP (Fig. 1, left), where the agents have a constant reward of -5 for conducting \(a_{1}\) and a binomial reward for \(a_{0}\), as shown in the first row of Fig. 1 (right). In such context, we optimize towards \(\beta=\texttt{CVaR}(\eta=0.1)\). Now consider the initial value estimation \(Z\) to be accurate at \(s_{1}\) (Fig. 1, right). We list the value of \(Z\) and its corresponding risk measure \(\beta[Z]\); the results \(\mathcal{T}_{\beta}^{*}Z\) when updating \(Z\) on \(s_{0}\) using \(\mathcal{T}_{\beta}^{*}\) and its corresponding risk measure. In this case, when updating \(Z(s_{0})\), \(Z(s_{1})\) will always indicate to use \(a_{1}\), although this can lead to a worse risk measure evaluated along the whole trajectory starting from \(s_{0}\), and prevent the agent from finding the optimality, _i.e._, applying \(a_{0}\) at both states.
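The exact rewards of Fig. 1 are not reproduced in the text above, so the following sketch uses hypothetical numbers with the same structure (a binary risky reward versus a constant \(-5\), two sequential decisions, \(\beta=\) CVaR at \(\eta=0.1\), no discounting) to show the mechanism numerically: per-state greedy CVaR prefers the safe action at the second step, even though the trajectory-level CVaR is maximized by taking the risky action at both steps.

```python
from itertools import product

ETA = 0.1                                   # CVaR level, as in the paper's example
RISKY = [(+10.0, 0.9), (-10.0, 0.1)]        # hypothetical binomial reward for action 0
SAFE = [(-5.0, 1.0)]                        # constant reward for action 1

def cvar(outcomes, eta=ETA):
    """Exact lower-tail CVaR of a finite distribution given as (value, prob) pairs."""
    acc, mass = 0.0, 0.0
    for v, p in sorted(outcomes):           # ascending in value
        take = min(p, eta - mass)
        acc += take * v
        mass += take
        if mass >= eta:
            break
    return acc / eta

def convolve(d1, d2):
    """Distribution of the sum of two independent finite distributions."""
    out = {}
    for (v1, p1), (v2, p2) in product(d1, d2):
        out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return list(out.items())

# per-state greedy comparison at the second state: looks only at the future return
print("greedy choice at s1:", "risky" if cvar(RISKY) > cvar(SAFE) else "safe")

# whole-trajectory CVaR for every deterministic action pair (a0, a1); 0 = risky, 1 = safe
for a0, a1 in product((0, 1), repeat=2):
    dist = convolve(RISKY if a0 == 0 else SAFE, RISKY if a1 == 0 else SAFE)
    print(f"a0={a0}, a1={a1}: trajectory CVaR = {cvar(dist):.2f}")
```

With these numbers the greedy step-wise choice is the safe action (CVaR \(-5\) beats \(-10\)), yet the best trajectory-level CVaR (\(-2\)) requires the risky action at both steps, mirroring the bias described above.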
History return distribution matters.The reason why the risk-sensitive Bellman optimality operator \(\mathcal{T}_{\beta}^{*}\) diverges and optimizing following \(\mathcal{T}_{\beta}^{*}\) does not lead to an optimal policy _w.r.t._ risk measure \(\beta\) comes from the fact that the risk measure over future return distributions cannot be maximized everywhere inside an MDP, hence it is not reasonable to use such a Bellman optimality operator to update the return distribution following the risk-sensitive optimal policy. In other words, updating \(\pi_{\beta}\) under \(\mathcal{T}_{\beta}^{*}\) only ensures to improve the risk measure of the trajectory starting from \(s^{\prime}\), _i.e._, \(\beta\left[Z^{\pi}(s^{\prime})\right]\), which does not guarantee to move towards a better risk measure along the whole trajectory \(\beta\left[Z^{\pi}(s_{0})\right]\), and can be totally different from optimizing the RSRL objective Eq. (4). Therefore, to achieve unbiased optimization, at every state the policy should take into account the return distribution along the past trajectory starting from \(s_{0}\).
## 4 Solving RSRL
To remedy the biased optimization issue of bellman-style update, we propose a novel algorithm that lies in a non-Markovian formulation without dynamic programming style optimization.
As we pointed out before, the key problem that leads the risk-sensitive optimal Bellman operator \(\mathcal{T}_{\beta}^{*}\) into biased optimization is that _the risk measure over future return distributions cannot be maximized everywhere inside an MDP_. Thereafter, _the dynamic programming style optimization that only utilizes the information forward, i.e., in the future, does not help to find the policy that maximizes the risk measures along the whole trajectories as defined in Eq. (4)_. Thus, when we compute the value distribution at certain states, we must include information backward, _i.e._, in the past, to help with modeling the risk measure along the whole trajectory. This motivates us to model the history-action value distribution \(Z^{\pi}(h_{t},a_{t})\sim\mathcal{Z}\), called _historical return distribution_, instead of the state-action value distribution \(Z^{\pi}(s_{t},a_{t})\), along with a history-based (non-Markovian) policy \(A\sim\pi(\cdot|h)\):
\[\begin{split}& Z^{\pi}(h_{t},a)\triangleq\sum_{i=0}^{t-1}\gamma^{ i}R(s_{i},a_{i})+\gamma^{t}Z^{\pi}(\{s_{t}\},a)\\ =&\sum_{i=0}^{t}\gamma^{i}R(s_{i},a_{i})+\gamma^{t+ 1}Z^{\pi}\left(\{s_{t+1}\},A_{t+1}\right)\end{split}\,, \tag{11}\]
where \(A_{t+1}\sim\pi(\cdot|h_{t+1}),s_{t+1}=M(s_{t},a_{t}),h_{t}=\{s_{0},a_{0},\cdots,s_{t}\}\in\mathcal{H}\) denotes the history sequence that happened before reaching (including) state \(s_{t}\). Therefore, the history-action value \(Z^{\pi}(h_{t},a)\) just records the discounted return of the whole trajectory given history \(h_{t}\) backward and moves forward following policy \(\pi\). Note that the policy is now Markovian under the history-based MDP, _i.e._, the policy gives action only based on the current history.
### Policy Evaluation
Similar to Bellman operators, we now define a new type of operator, named the history-relied (HR) operator, that defines the principle of updating the history-action value.
\[\mathcal{T}_{h}^{\pi}Z(h_{t},a)\mathop{:=}^{D}\sum_{i=0}^{t}\gamma^{i}R(s_{i},a_{i})+\gamma^{t+1}Z\left(\{s_{t+1}\},A_{t+1}\right)\;, \tag{12}\]
where \(s_{t+1}=M(s_{t},a_{t})\) and \(A_{t+1}\sim\pi(\cdot|h_{t+1})\).
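A minimal tabular sketch of how the historical return distribution of Eqs. (11)-(12) might be maintained (hypothetical code; the dictionary layout and sample-based representation are ours): the table is keyed by the full history rather than the current state, and each backup folds the discounted reward already accumulated along that history into the target.

```python
import numpy as np

def hr_backup(Z_hist, Z_state, history, action, rewards, s_next, a_next, gamma=0.99):
    """One sample-based application of the HR operator.
    Z_hist : dict keyed by (history, action) -> np.ndarray of return samples
    Z_state: dict keyed by (state, action)   -> np.ndarray of return samples
    rewards: rewards r(s_i, a_i) along `history`, including the queried `action`
             taken at the last state of the history."""
    t = len(rewards) - 1
    R_0_t = sum(gamma ** i * r for i, r in enumerate(rewards))      # accumulated return
    Z_hist[(history, action)] = R_0_t + gamma ** (t + 1) * Z_state[(s_next, a_next)]
    return Z_hist

# toy usage: history h_0 = {s0}, action 0 taken at s0, stochastic continuation at s1
rng = np.random.default_rng(0)
Z_state = {("s1", 0): rng.normal(1.0, 2.0, 1000), ("s1", 1): np.full(1000, -0.5)}
Z_hist = hr_backup({}, Z_state, history=("s0",), action=0,
                   rewards=[-0.2], s_next="s1", a_next=0)
print(round(float(Z_hist[(("s0",), 0)].mean()), 3))
```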
### Policy Improvement and (No) Value Iteration
So far, we have considered the value distribution of a fixed policy \(\pi\) and the convergence of policy evaluation. Now let's turn to the control setting and find out the optimal value distribution and its corresponding policy under the risk-sensitive context.
In the form above, we want to find the optimal risk-sensitive policy that maximizes the risk measure over the whole trajectory given the initial state distribution as defined in Eq. (4), which is equivalent,
\[\pi^{*}(h)\in\operatorname*{arg\,max}_{a\in\mathcal{A}}\;\mathbb{E}_{h\sim \mathcal{H},a\sim\pi}\left[\beta\left[Z^{\pi}(h,a)\right]\right]\;. \tag{13}\]
Suppose \(\mathcal{H}\) and \(\mathcal{A}\) are both finite, the solution of Eq. (13) will always exist (but may not be unique!). Denote the optimal risk-sensitive policy set is \(\Pi^{*}\), where \(\forall\pi_{1}^{*},\pi_{2}^{*}\in\Pi^{*}\), we have their return distribution \(\beta[Z_{1}^{*}]=\beta[Z_{2}^{*}]\) and they must satisfy the risk-sensitive HR optimality equation:
\[\beta[Z_{1}^{*}(h_{t},a)] =\beta\left[R_{0:t}+\gamma^{t+1}Z_{2}^{*}(\{s_{t+1}\},a_{t+1}^{*})\right] \tag{14}\] \[a_{t+1}^{*} \in\operatorname*{arg\,max}_{a\in\mathcal{A}}\;\beta\left[Z_{2}^ {*}(h_{t+1},a)\right]\;. \tag{15}\]
We can prove Eq. (14) is also sufficient for Eq. (13), see Appendix C.5. Hereby, we define the risk-sensitive HR optimality operator \(\mathcal{T}_{h,\beta}^{*}\):
\[\mathcal{T}_{h,\beta}^{*}Z(h_{t},a)\gets R_{0:t}+\gamma^{t+1} Z(\{s_{t+1}\},a_{t+1}) \tag{16}\] \[a_{t+1}=\pi^{\prime}(h_{t+1})=\operatorname*{arg\,max}_{a\in \mathcal{A}}\;\beta\left[Z(h_{t+1},a)\right]\;,\]
where the policy is obtained by deterministically maximizing the history-action value under risk measure \(\beta\). And Eq. (14) implies some "fixed" points for Eq. (4) or Eq. (13) from the perspective of risk measure \(\beta\) for \(\mathcal{T}_{h,\beta}^{*}\).
Correspondingly, we can present our second theoretical result, that the policy improvement under HR optimality operator is also guaranteed to converge into the risk-sensitive optimal policy.
**Theorem 4.2** (Policy Improvement for \(\mathcal{T}_{h,\beta}^{*}\)).: _For two deterministic policies \(\pi\) and \(\pi^{\prime}\), if \(\pi^{\prime}\) is obtained by \(\mathcal{T}_{h,\beta}^{*}\):_
\[\pi^{\prime}(h_{t})\in\operatorname*{arg\,max}_{a\in\mathcal{A}}\;\beta\left[ Z^{\pi}(h_{t},a)\right]\;,\]
_then the following inequality holds_
\[\beta\left[Z^{\pi}(h_{t},\pi(h_{t}))\right]\leq\beta\left[Z^{\pi^{\prime}}(h_ {t},\pi^{\prime}(h_{t}))\right]\;.\]
The formal proof can be referred to Appendix C.4. When the new greedy policy \(\pi^{\prime}\), is as good as, but not better than, the old policy \(\pi\) in the sense of risk measures, we have that:
\[\beta[Z^{\pi}(h_{t},a_{t})]=\beta\left[\mathcal{T}_{h,\beta}^{*}Z^{\pi}(h_{t},a_{t})\right]\;,\]
Unfolding the right side, we get:
\[\beta[Z^{\pi}(h_{t},a_{t})] =\beta\left[R_{0:t}+\gamma^{t+1}Z^{\pi}(\{s_{t+1}\},a_{t+1})\right] \tag{17}\] \[a_{t+1}=\pi^{\prime}(h_{t+1}) \in\operatorname*{arg\,max}_{a\in\mathcal{A}}\;\beta\left[Z(h_{t+ 1},a)\right]\;,\]
which is exactly the risk-sensitive HR optimality equation Eq. (14). Therefore, we conclude that utilizing \(\mathcal{T}_{h,\beta}^{*}\) for policy improvement will give us a strictly better policy except when the original policy is already optimal.
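A sketch of the improvement step that Theorem 4.2 licenses (hypothetical code): given a table of historical return distributions, the new deterministic policy simply takes, at every history, the action whose distribution scores best under the chosen risk measure. The only difference from Eq. (8) is that the argmax ranges over histories, not states.

```python
import numpy as np

def cvar(samples, eta=0.1):
    x = np.sort(samples)
    k = max(1, int(np.ceil(eta * len(x))))
    return float(x[:k].mean())

def improve_policy(Z_hist, beta=cvar):
    """Greedy improvement w.r.t. the risk measure, per history (Theorem 4.2)."""
    policy = {}
    for (history, action), dist in Z_hist.items():
        best = policy.get(history)
        if best is None or beta(dist) > beta(Z_hist[(history, best)]):
            policy[history] = action
    return policy

rng = np.random.default_rng(0)
Z_hist = {(("s0",), 0): rng.normal(1.0, 3.0, 5000),   # risky continuation
          (("s0",), 1): np.full(5000, 0.2)}           # safe continuation
print(improve_policy(Z_hist))   # the sure 0.2 has the better CVaR, so action 1 is chosen
```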
In the sequel, we understand that if the optimal solution of Eq. (13) exists, there exists at least one sequence of distributional value functions \(\{Z_{0},Z_{1},\cdots,Z_{n},Z_{1}^{*},\cdots,Z_{k}^{*}\}\), induced by the sequence of policies \(\{\pi_{0},\pi_{1},\cdots,\pi_{n},\pi_{1}^{*},\cdots,\pi_{k}^{*}\}\), such that \(\beta[Z_{1}]\leq\beta[Z_{2}]\leq\cdots\leq\beta[Z_{n}]\leq\beta[Z_{1}^{*}]=\cdots=\beta[Z_{k}^{*}]\). However, starting from an arbitrary \(Z\) (which may not correspond to any policy), it is non-trivial to prove that \(\mathcal{T}_{h,\beta}^{*}\) converges to \(\beta[Z_{i}^{*}]\).
**Theorem 4.3**.: _For \(Z_{1},Z_{2}\in\mathcal{Z}\),_ HR _optimality operator \(\mathcal{T}_{h,\beta}^{*}\) has the following property:_
\[\|\beta[\mathcal{T}_{h,\beta}^{*}Z_{1}]-\beta[\mathcal{T}_{h,\beta}^{*}Z_{2}] \|_{\infty}\leq\|\beta[Z_{1}]-\beta[Z_{2}]\|_{\infty}\;, \tag{18}\]
The proof is in Appendix C.6. Theorem 4.3 tells us that value iteration for \(\mathcal{T}_{h,\beta}^{*}\) may not converge. Specifically, our proposed HR operator can only be realized as a "nonexpansive mapping" from the perspective of the risk measure \(\beta\) in \(L_{\infty}\). For our case of finite spaces, we might expect that some "fixed" point \(Z^{*}\) exists, and the best we can hope for is pointwise convergence such that \(\beta[Z]\) converges to \(\beta[Z^{*}]\) after recursively applying the HR optimality operator \(\mathcal{T}_{h,\beta}^{*}\) w.r.t. risk measure \(\beta\). However, from Theorem 4.3 we know that \(\beta[Z_{n}]\) is not guaranteed to converge to \(\beta[Z^{*}]\) at any speed; hence, starting from an arbitrary value distribution \(Z_{0}\), \(\mathcal{T}_{h,\beta}^{*}\) does not necessarily solve the RSRL objective Eq. (4). As a result, \(\beta[Z_{n}]\) may end up on a sphere around \(\beta[Z^{*}]\) rather than converging to it.
### Trajectory Q-Learning
As discussed above, by estimating the historical return distribution and improving the policy accordingly, we can now derive our practical RSRL algorithm, namely Trajectory Q-Learning (TQL). Representing the policy \(\pi\), the historical value function \(Q\) as neural networks parameterized by \(\phi\) and \(\theta\) respectively, and denoting the historical return distribution approximated by critics as
\[Z_{\theta}\left(h,a\right)=\frac{1}{N}\sum_{j=0}^{N-1}\;\operatorname*{ Dirac}\left[Q_{\theta}\left(h,a;\tau_{j}\right)\right]\;, \tag{19}\]
we optimize the following loss functions:
\[J_{\pi}\left(\phi\right)= \ \beta\left[Z_{\theta}\left(h,a\right)\right]\, \tag{20}\] \[J_{Q}\left(\theta\right)= \ \mathbb{E}_{a^{\prime}\sim\pi;\ \tau_{i},\tau_{j}^{\prime}\sim U \left(\left[0,1\right]\right)}\left[\rho_{\pi}^{\kappa}\left(\sum_{t=0}^{s_{t}= s}\gamma^{t}r\left(s,a\right)\right.\right.\] \[\left.\left.+\gamma\bar{Q}_{\theta^{\prime}}\left(\left\{s^{ \prime}\right\},a^{\prime};\tau_{j}^{\prime}\right)-Q_{\theta}\left(h,a;\tau_ {i}\right)\right)\right]\,, \tag{21}\]
where \(\rho_{\pi}^{\kappa}\) represents the quantile Huber loss (see Appendix A.2 for details). In practice, to accurately estimate \(Z(\{s^{\prime}\},\cdot)\) which is just a normal state-based value function (Dabney et al., 2018), we model \(\bar{Q}_{\theta^{\prime}}\left(\{s^{\prime}\},a^{\prime};\tau_{j}^{\prime}\right)\) with an extra Markovian value function \(Q_{\psi}(s,a;\tau)\), updated by
\[J_{Q}\left(\psi\right)= \ \mathbb{E}_{a^{\prime}\sim\pi;\ \tau_{i},\tau_{j}^{\prime}\sim U \left(\left[0,1\right]\right)}\left[\rho_{\pi}^{\kappa}\left(r\left(s,a \right)\right.\right.\] \[\left.\left.+\gamma\bar{Q}_{\psi^{\prime}}\left(s^{\prime},a^{ \prime};\tau_{j}^{\prime}\right)-Q_{\psi}\left(s,a;\tau_{i}\right)\right) \right]\,, \tag{22}\]
In total, the algorithm learns a policy \(\pi_{\phi}\), a history-based value function \(Z_{\theta}\), and a Markovian value function \(Z_{\psi}\). At each timestep, \(Z_{\psi}\) and \(Z_{\theta}\) are updated according to Eq. (22) and Eq. (21), and the policy \(\pi_{\phi}\) is optimized with Eq. (20). For discrete control, we can omit \(\phi\) and implement \(\pi\) by taking argmax from \(\beta[Z(h,a)]\). We list the step-by-step algorithm in Algo. 1 (discrete) and Algo. 2 (continuous).
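A condensed PyTorch-style sketch of the critic update in Eq. (21) is given below (hypothetical code; names, shapes, and the exact discount bookkeeping follow Eq. (11) rather than the authors' released implementation): the TD target mixes the discounted reward accumulated along the sampled history with the bootstrapped Markovian quantile critic, and the historical critic is regressed onto it with the quantile Huber loss.

```python
import torch

def quantile_huber_loss(pred, target, taus, kappa=1.0):
    """rho^kappa of Eq. (21): `pred` holds N predicted quantiles at fractions `taus`,
    `target` holds M samples of the TD target; pairwise quantile-Huber regression."""
    td = target.unsqueeze(0) - pred.unsqueeze(1)                   # (N, M) pairwise errors
    huber = torch.where(td.abs() <= kappa,
                        0.5 * td.pow(2),
                        kappa * (td.abs() - 0.5 * kappa))
    weight = (taus.unsqueeze(1) - (td < 0).float()).abs()          # |tau - 1{td < 0}|
    return (weight * huber / kappa).mean()

def tql_critic_target(hist_rewards, q_markov_next, gamma=0.99):
    """TD target: discounted return accumulated along the history plus the
    bootstrapped Markovian critic at the next state (cf. Eqs. (11) and (21))."""
    R_hist = sum(gamma ** t * r for t, r in enumerate(hist_rewards))
    return R_hist + gamma ** len(hist_rewards) * q_markov_next

# toy shapes: 8 predicted quantiles regressed onto 8 target quantile samples
taus = (torch.arange(8, dtype=torch.float32) + 0.5) / 8
pred = torch.zeros(8, requires_grad=True)                          # stands in for Q_theta(h, a; tau)
target = tql_critic_target([0.1, -0.3, 1.0], q_markov_next=torch.randn(8))
loss = quantile_huber_loss(pred, target, taus)
loss.backward()
print(round(float(loss), 4), tuple(pred.grad.shape))
```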
## 5 Related Work
### Distributional Reinforcement Learning
Distributional RL considers the uncertainty by modeling the return distribution, enabling risk-sensitive policy learning. Bellemare et al. (2017) first studied the distributional perspective on RL and proposed C51, which approximates the return distribution with a categorical over fixed intervals. Dabney et al. (2018) proposed QR-DQN, turning to learning the critic as quantile functions and using quantile regression to minimize the Wasserstein distance between the predicted and the target distribution. Dabney et al. (2018) further proposed IQN, improving QR-DQN by quantile sampling and other techniques, which further investigate risk-sensitive learning upon various distortion risk measures.
### Risk in Reinforcement Learning
Risk management in RL towards real-world applications can be roughly divided into two categories, i.e., safe and constrained RL and distributional risk-sensitive RL. Safe and constrained RL formulates the risk as some kind of constraint to the policy optimization problem. For instance, Achiam et al. (2017) proposed a Lagrangian method which provides a theoretical bound on cost function while optimizing the policy; Dalal et al. (2018) built a safe layer to revise the action given by an unconstrained policy; Chow et al. (2018) used the Lyapunov approach to systematically transform dynamic programming and RL algorithms into their safe counterparts.
When the form of risks is either too complex or the constraints are hard to be explicitly defined, safe RL algorithms can be challenging to learn. In that case, distributional RL provides a way to utilize risk measures upon the return distributions for risk-sensitive learning. Among them, Tang et al. (2019) modeled the return distribution via its mean and variance and then learned an actor optimizing the CVaR of the return distribution; Keramati et al. (2020) proposed a novel optimistic version of the distributional Bellman operator that moves probability mass from the lower to the upper tail of the return distribution for sample-efficient learning of optimal policies in terms of CVaR; (Ma et al., 2020) modified SAC (Haarnoja et al., 2018) with distributional critics and discussed its application to risk-sensitive learning; Urpi et al. (2021) proposed their offline risk-averse learning scheme based on IQN (Dabney et al., 2018) and BCQ (Fujimoto et al., 2019); Ma et al. (2021) proposed CODAC, which adapts distributional RL to the offline setting by penalizing the predicted quantiles of the return for out-of-distribution actions; recently, (Hong Lim and Malik, 2022) proposed a solution to resolve a similar issue specifically in optimizing policies towards CVaR, yet our TQL is general for any risk measure with theoretical guarantees. A more detailed comparison can be referred to Appendix A.3. In comparison, our proposed TQL is not only designed for a specific risk measure but is general for all kinds of risk measures.
There are also several works (Bauerle and Ott, 2011; Chow and Ghavamzadeh, 2014; Tamar et al., 2015) utilizing dynamic risk measures as their objective, which considers per-step risk instead of the static (trajectory-wise) risk in this paper. Dynamic risk has the advantage of time-consistency, but can be hard to estimate practically and short-sighted due to per-step optimization. We present a more detailed discussion on dynamic and static risk in Appendix A.4.
## 6 Experiments
In this section, we design a series of experiments aimed to seek out: **RQ1**: Can our proposed TQL fit the ground-truth risk measures? **RQ2**: Can TQL find the optimal risk-sensitive policy and achieve better overall performance?
**Environments.** In order to examine the ability to optimize risk-sensitive policy, we design two specified environments for discrete and continuous control, respectively. For discrete actions, we design a risky mini-grid task shown in Fig. 1(a); for continuous control, we augment extra risky penalties upon the continuous Mountain-Car environment (Moore, 1990), see Section 6.1 for more details.
**Implementation, baselines, and metric.** For discrete action space, we implement TQL based on IQN (Dabney et al.,
2018a) to obtain the value distribution, and compare TQL with vanilla IQN and CVaR-DRL, a specific solution for learning a risk-sensitive policy towards better CVaR, proposed by (Hong Lim and Malik, 2022). For continuous control problems, we combine TD3 (Fujimoto et al., 2018) with IQN (Dabney et al., 2018), named IQTD3, by replacing the critics in TD3 with distributional critics. We further build TQL upon IQTD3 and take IQTD3 as the baseline algorithm. For comparison, each algorithm is optimized towards various risk-sensitive objectives that are represented by different risk measures, including mean, CVaR, POW, and Wang, whose detailed description is in Appendix Section A.1; and the evaluation metrics are also those risk measures.
### Results and Analysis
**Value distribution analysis on 3-state MDP.** In Section 3.2, we have illustrated in Fig. 1 that vanilla distributional RL is not able to reveal the global optimal risk-sensitive policy and its value. To validate, we learn the return distribution with a tabular version of vanilla IQN and TQL respectively, and visualize the learned return distribution in Fig. 3. The results show that vanilla distributional RL tends to learn \(Z(s_{1},\cdot)\) first as it is irrelevant to \(a_{0}\), and thus \(a_{1}=1\) will be chosen under \(s_{1}\). However, when learning \(Z(s_{0},\cdot)\), Bellman update will use \(Z(s_{1},a_{1})\) in target, ignoring all trajectories where \(a_{1}=0\). On the contrary, TQL learns historical value distribution \(\tilde{Z}\), which enforces the agent to consider all possible trajectories and thus reveal the optimal solution where \(a_{0}=a_{1}=0\).
**Discrete control evaluations.** We first show the result of the discrete mini-grid task, which is designed for learning CVaR objective. We present the learning curves of TQL and vanilla IQN in Fig. 1(b), which indicates that IQN consistently converges to the sub-optimal solution of visiting blue grids, similar to its behavior in the above-mentioned 3-state MDP. CVaR-DRL does improve the CVaR of historical return distribution to some extent, while it still produces sub-optimal policies (see Appendix A.3 for a more detailed analysis). In contrast, TQL is able to discover a better policy that achieves significantly higher CVaR than the vanilla IQN baseline. Furthermore, in Fig. 1(c), we visualize the final policy's return distribution. The blue and green bars indicate the frequency of episode returns for vanilla IQN and TQL respectively, and the corresponding dashed lines show the CVaR of return distribution for two policies. TQL is very likely to obtain high positive returns with little risk of negative returns, while vanilla IQN's return is always negative due to its Markovian policy.
To better understand the difference in the optimization process, we further illustrate how the policy evolves during the training process in Tab. 1. In particular, we observe that vanilla IQN converges from the end of the episode to the beginning due to its updating mechanism of dynamic programming, and its property of Markovian prevents it from finding the global optimum; moreover, CVaR-DRL fails due to its approximation in CVaR estimation but leads to a slightly-better policy. However, TQL is always doing a global search and thus finally reveals the optimal policy.
**Continuous control evaluations.** To further learn on continuous risk-sensitive control problems with TQL, we design a risky penalty for the Mountain-Car environment:
\[R_{\text{risky}}(s,a)=\left\{\begin{array}{cc}-c\cdot(2-|a|),&p=\frac{1}{ 4-3|a|}\\ 0,&p=1-\frac{1}{4-3|a|}\end{array}\right.. \tag{23}\]
where \(c\in[0,1]\) is a scaling factor that controls the degree of risk related to the scale of actions. At each timestep, we augment the original reward with the risky penalty \(R_{\text{risky}}\). Generally, actions close to \(0\) will result in higher expected accumulated rewards. However, to complete the task as fast as possible, the agent should choose larger actions that are close to 1, leading to more risky penalties.
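One way the penalty of Eq. (23) could be attached to the environment is as a reward wrapper; the sketch below is hypothetical (it assumes the Gymnasium API and a one-dimensional action, and is not the authors' implementation).

```python
import numpy as np
import gymnasium as gym

class RiskyPenaltyWrapper(gym.Wrapper):
    """Adds the stochastic penalty of Eq. (23) to each step's reward."""
    def __init__(self, env, c=0.5, seed=0):
        super().__init__(env)
        self.c = c
        self.rng = np.random.default_rng(seed)

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        a = float(np.clip(np.abs(np.asarray(action)).max(), 0.0, 1.0))
        p = 1.0 / (4.0 - 3.0 * a)                 # penalty probability from Eq. (23)
        if self.rng.random() < p:
            reward += -self.c * (2.0 - a)
        return obs, reward, terminated, truncated, info

env = RiskyPenaltyWrapper(gym.make("MountainCarContinuous-v0"), c=0.5)
obs, info = env.reset(seed=0)
obs, r, term, trunc, info = env.step(np.array([0.9], dtype=np.float32))
print(r)
```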
\begin{table}
\begin{tabular}{c|c c c} \hline \# Training steps & IQN & CVaR-DRL & TQL \\ \hline \(2\times 10^{4}\) & \([\downarrow\rightarrow,\downarrow,\rightarrow,\downarrow,\downarrow]\) & \([\rightarrow,\downarrow,\downarrow,\rightarrow,\downarrow]\) & \([\rightarrow,\downarrow,\downarrow,\rightarrow,\downarrow]\) \\ \(1\times 10^{5}\) & \([\downarrow\rightarrow,\rightarrow,\downarrow,\downarrow,\downarrow]\) & \([\rightarrow,\downarrow,\downarrow,\rightarrow,\rightarrow,\downarrow]\) & \([\downarrow\rightarrow,\downarrow,\rightarrow,\downarrow,\downarrow]\) \\ \(2\times 10^{6}\) & \([\rightarrow,\downarrow,\rightarrow,\downarrow,\rightarrow,\downarrow, \downarrow]\) & \([\downarrow,\rightarrow,\rightarrow,\downarrow,\downarrow,\downarrow]\) & \([\downarrow,\rightarrow,\downarrow,\rightarrow,\downarrow,\downarrow, \rightarrow]\) \\ \hline \end{tabular}
\end{table}
Table 1: The action sequences of IQN and TQL policies at different training steps. IQN converges from back to front; CVaR-DRL leads to a slightly-better policy; TQL finds out the global optimum.
Figure 2: Mini-grid experiments designed for learning CVaR objective. (a) Illustration of risky mini-grid environment. The agent starts at the upper left corner of the grid (red triangle), and reaches the bottom right green grid to end the episode. At each timestep, the agent receives a constant penalty of \(-2\). The yellow grids give a \(+100\) bonus with the probability of \(p=0.75\) and \(0\) with the probability of \(p=0.25\), while the blue grids always give a reward of \(+20\). Each yellow or blue grid can give its reward only once. The orange grids have a heavy penalty of \(-100\) to avoid the agent from going there. (b-c) Experiment results on the task: (b) Vanilla IQN quickly converges to a sub-optimal solution; CVaR-DRL discovers a slightly better policy; TQL finds the optimal policy. (c) The return distributions of vanilla IQN and CVaR-DRL are more conservative, while that of TQL results in a higher CVaR.
Figure 4: Learning curves on modified Mountain-Car environment with different risk measures as objective, measured by risk measures.
Figure 3: Predicted return distribution on different \(s\) or \(h\) and \(a\) input. The left 4 figures correspond to IQN: IQN first learns \(Z(s_{1},\cdot)\), see (c-d). It finds \(a_{1}=1\) better and keeps this strategy when learning \(Z(s_{0},\cdot)\), leading to (a) and (b); the right 6 figures correspond to our proposed method TQL: (e) matches (f) as taking \(a_{1}=0\) has better CVaR after taking \(a_{0}=0\); (h) matches (j) as taking \(a_{1}=1\) has better CVaR after taking \(a_{0}=1\). Overall, the policy corresponds to (e) and (f), which achieve global optimum.
We compare IQTD3 with the proposed TQL and show the results for various risk measures in Fig. 4.
Overall, when the potential risk is larger (i.e., larger risky penalty \(c\in\{0.5,0.75,1.0\}\)), TQL significantly outperforms IQTD3. The Markovian policy learned by IQTD3 can hardly find out how to complete the control task due to its short-sighted decision-making, while TQL consistently learns a better policy. When the risk is smaller, namely \(c\in\{0.25,0.1,0.0\}\), the difference between TQL and IQTD3 becomes smaller, and both algorithms can learn an optimal risk-sensitive policy.
## 7 Conclusion and Future Work
In this paper, we present an in-depth analysis of the biased objective issue of the existing RSRL methods, and correspondingly propose Trajectory Q-Learning (TQL), a distributional RL algorithm for learning the optimal policy in RSRL. We justify the theoretical property of TQL and prove it converges to the optimal solution. Our experiments and the detailed analysis on both discrete and continuous control tasks validate the advantage of TQL in risk-sensitive settings. In future work, we plan to extend TQL to more complex tasks and real-world applications.
## Acknowledgements
We thank Zhengyu Yang, Ming Zhou and Zheyuan Hu for their helpful discussions. The SJTU team is supported by "New Generation of AI 2030" Major Project (2018AAA0100900), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102) and National Natural Science Foundation of China (62076161). The author Minghuan Liu is also supported by Wu Wen Jun Honorary Doctoral Scholarship, AI Institute, Shanghai Jiao Tong University. We sincerely thank all anonymous reviewers for their helpful feedback to revise our first manuscript.
|
2302.03848
|
Controlling Personality Style in Dialogue with Zero-Shot Prompt-Based
Learning
|
Prompt-based or in-context learning has achieved high zero-shot performance
on many natural language generation (NLG) tasks. Here we explore the
performance of prompt-based learning for simultaneously controlling the
personality and the semantic accuracy of an NLG for task-oriented dialogue. We
experiment with prompt-based learning on the PERSONAGE restaurant
recommendation corpus to generate semantically and stylistically-controlled
text for 5 different Big-5 personality types: agreeable, disagreeable,
conscientious, unconscientious, and extravert. We test two different classes of
discrete prompts to generate utterances for a particular personality style: (1)
prompts that demonstrate generating directly from a meaning representation that
includes a personality specification; and (2) prompts that rely on first
converting the meaning representation to a textual pseudo-reference, and then
using the pseudo-reference in a textual style transfer (TST) prompt. In each
case, we show that we can vastly improve performance by over-generating outputs
and ranking them, testing several ranking functions based on automatic metrics
for semantic accuracy, personality-match, and fluency. We also test whether NLG
personality demonstrations from the restaurant domain can be used with meaning
representations for the video game domain to generate personality stylized
utterances about video games. Our findings show that the TST prompts produces
the highest semantic accuracy (78.46% for restaurants and 87.6% for video
games) and personality accuracy (100% for restaurants and 97% for video games).
Our results on transferring personality style to video game utterances are
surprisingly good. To our knowledge, there is no previous work testing the
application of prompt-based learning to simultaneously controlling both style
and semantic accuracy in NLG.
|
Angela Ramirez, Mamon Alsalihy, Kartik Aggarwal, Cecilia Li, Liren Wu, Marilyn Walker
|
2023-02-08T02:45:21Z
|
http://arxiv.org/abs/2302.03848v1
|
# Controlling Personality Style in Dialogue with Zero-Shot Prompt-Based Learning
###### Abstract
Prompt-based or in-context learning has been shown to achieve high zero-shot performance on many natural language generation (NLG) tasks. Here we explore the performance of prompt-based learning for simultaneously controlling the personality and the semantic accuracy of an NLG for task-oriented dialogue. We experiment with prompt-based learning on the personage restaurant recommendation corpus to generate semantically and stylistically-controlled text for 5 different Big-5 personality types: agreeable, disagreeable, conscientious, unconscientious, and extravert. We test two different classes of discrete prompts to generate utterances for a particular personality style: (1) prompts that demonstrate generating directly from a meaning representation that includes a personality specification; and (2) prompts that rely on first converting the meaning representation to a textual pseudo-reference, and then using the pseudo-reference in a textual style transfer (TST) prompt. In each case, we show that we can vastly improve performance by over-generating outputs and ranking them, testing several ranking functions based on automatic metrics for semantic accuracy, personality-match, and fluency. We also test the effect of providing examples of multiple personalities, and of different sampling strategies and numbers of examples, as well as testing whether NLG personality demonstrations from the restaurant domain can be used with meaning representations for the video game domain to generate personality stylized utterances about video games. Our findings show that the TST prompts produce the highest semantic accuracy (78.46% for restaurants and 87.6% for video games) and personality accuracy (100% for restaurants and 97% for video games). Our results on transferring personality style to video game utterances are surprisingly good. To our knowledge, there is no previous work testing the application of prompt-based learning to simultaneously controlling both style and semantic accuracy in NLG.
**Key words:** personality, stylistic generation, task-oriented dialogue, natural language generation, style transfer, prompt-based learning, evaluation
## 1 Introduction
Over the last few years, prompt-based or in-context learning has been shown to achieve high performance on many natural language generation (NLG) tasks [1; 18; 22]. Here we explore the performance of prompt-based learning for controlling both the personality and the semantic accuracy in natural language generation for dialogue. We experiment with prompt-based learning on the personage corpus, a stylistic benchmark dataset for semantically-controlled NLG in the restaurant domain, with reference utterances that vary stylistically according to linguistic profiles of Big-5 personality types [34; 4; 26; 5; 29; 30]. The personality styles consist of 5 different Big-5 personality types: agreeable, disagreeable, conscientious, unconscientious, and extravert.
We compare two different types of discrete prompts: (1) Data-to-text (D2T) prompts that directly **demonstrate** generating from a meaning representation that includes a personality specification; (2) prompts that are based on textual style transfer (TST) **instructions**[40], that require first converting the meaning representation to a textual pseudo-reference. The two types of prompts are illustrated in Table 1 for the _agreeable_ Big-5 personality style. We also vary the number of demonstrations we provide, as well as whether the examples illustrate one personality or multiple personalities, and systematically examine the effect.
The two methods are illustrated in Figure 1. Using both methods, we show that we can vastly improve performance by over-generating multiple outputs for each setting [16], and then ranking the outputs using a combination of personality accuracy, semantic accuracy and fluency. For semantic accuracy, we compare off-the-shelf semantic faithfulness metrics such as beyond-bleu and bleurt to personage specific scripts for calculating slot error rate [49; 41; 36]. To measure personality accuracy, we train a personality classifier.
Based on our results for personage, we use our best performing experimental setting on an out-of-domain dataset for Data-to-Text NLG for Video Games, ViGGO. By doing so, we are testing whether personality examples from the restaurant domain can be used on meaning representations for the video game domain to generate personality stylized utterances about video games. The ViGGO corpus comes with a script for calculating semantic accuracy that is specific to this domain [12], so we are able to apply the same ranking functions as we use for personage.
Our results show that prompting with a single personality performs better both for achieving the target style and faithfully rendering the meaning. Our best performing setting achieves personality accuracies of 100% and a best slot error rate of 22%. To our knowledge, there is no previous work testing the performance of prompt-based learning for simultaneously controlling both style and semantic accuracy in NLG.
Figure 1: Two models for Semantically Controlled Generation with Style
## 2 Related Work
Prompt-based learning has recently been applied to many different NLG tasks. Previous work on semantically controlled NLG using prompt-based learning has focused on semantic accuracy rather than attempting to simultaneously control both semantics and style [37; 20; 50]. Previous work on controlling style using prompt-based learning has been framed as a textual style transfer (TST) task, where the goal is to enhance the text with stylistic features while preserving the overall semantics and fluency of the text [40; 44; 9; 19]. These measures strongly parallel the evaluation measures that we use for stylistically and semantically controlled NLG. In TST, stylistic correctness is typically measured with pre-trained style classifiers, as we do here. However it is notoriously difficult to measure semantic preservation in text-to-text tasks, where the definition of meaning tends to be quite slippery. Much work still uses bleu even though for many tasks it has been shown not to correlate with human judgements [40; 32; 9]. Newer neural measures such as beyond-bleu, bleurt and bertScore have also been used, with some recent work showing that beyond-bleu produces good results when used directly during fine-tuning [23].
Earlier work on controlling both semantics and style was based on seq-to-seq LSTM + attention models trained with thousands of examples [29; 28; 45; 39; 5]. We compare our results to previous seq-to-seq results on the personage corpus and on the viggo corpus in Section 4.
\begin{table}
\begin{tabular}{l} \hline \hline \multicolumn{2}{c}{**Data To Text Prompt (D2T)**} \\ name = nameVariable \(|\) eattype = restaurant \(|\) food = chinese \(|\) pricerange = moderate \(|\) area = riverside \(|\) familyfriendly = yes \(|\) near = nearVariable \(|\) personality = agreeable \\ Let’s see what we can find on nameVariable. oh right, it is an chinese restaurant in riverside with a quite moderate rating and it is kid friendly, also it is near nearVariable, you know, okay? \\ name = namevariable \(|\) eattype = pub \(|\) food = italian \(|\) area = city centre \(|\) familyfriendly = no \(|\) near = nearvariable \(|\) personality = agreeable \\ \hline \multicolumn{2}{c}{**Textual Style Transfer Prompt (TST)**} \\ Here is some text: \{namevariable restaurant chinese moderate riverside family friendly \\ nearvariable\}. Here is a rewrite of the text which is agreeable : \{Let’s see what we can find on nameVariable. I see it is a Chinese restaurant in riverside, also it is moderately priced and family friendly and near nearVariable.\}. \\ Here is some text: \{nameVariable pub Italian city centre not family friendly nearVariable \}. Here is a rewrite of the text which is agreeable : \{ \\ \hline \end{tabular}
\end{table}
Table 1: Example D2T and TST prompts for the Big-5 agreeable personality
One of the key elements of our novel approach is converting our data-to-text problem to a text-to-text problem by generating pseudo-references directly from our meaning representations. Work by Heidari et al. also experimented with different ways to convert meaning representations to textual forms for the purpose of fine-tuning [8]. They then show that they can use as few as 300 examples to fine-tune their NLG engine [7]. Other work on data-to-text generation that has used prompt-based learning has relied on models like GPT-3 to convert single KG triples into texts, and then fused those texts into a paragraph [50], but has not directly measured semantic accuracy or aimed to enforce a specific style to be generated. However this work, as well as other research, shows that meaning representations can be used directly in a prompt format to generate sentences [42; 22; 37]. Models are clearly sensitive to the type of prompt provided [47; 1], so we carefully compare classic data-to-text prompts with prompts that convert data-to-text to a text-to-text problem.
## 3 Experimental Method
Here, we test two different prompt-based learning approaches for semantically controlled stylistic generation, as illustrated in Figure 1. We aim to understand which method best conditions the NLG outputs.
### Personage and ViGGO Datasets
Our primary corpus is personage, 1 as illustrated in Figure 2[29; 6]. personage contains \(\sim\) 88,000 restaurant recommendations that vary along the Big Five Personality traits: agreeable, disagreeable, conscientious, unconscientious, and extravert [4; 24; 27]. Figure 2 shows an example MR from Personage along with five surface realizations: a pseudo-reference generated directly from the meaning representation, as we describe in Section 3.2.1 [11]; a vanilla utterance generated for the E2E generation challenge; examples of the extravert, conscientious and agreeable personality types from the personage corpus [34; 4; 26].
Footnote 1: nlds.soe.ucsc.edu/stylistic-nlg
Recent work on stylistic variation in NLG for task-oriented dialogue has categorized stylistic variation into lexical, syntactic and semantic styles [45]. Mairesse et al. [25] provides a detailed summary of the psycholinguistic literature on how the Big 5 personality types are manifested in language, showing that personality affects style at all three levels, e.g. an extraverted personality will use more frequent words, will produce longer sentences with more aggregation operations, and select more positive content, while introverts tend to use rare words. The types of variation that are present in the Personage corpus are both lexical and syntactic, and categorized into Aggregation operations and Pragmatic operations.
Aggregation operations modify syntactic dependency trees to combine propositions into a single sentence: these syntactic operations are also typically indicated by lexical cues such as "with", "and", and "also", as illustrated by the Extraversion personality in Figure 2. Pragmatic operations also often involve lexical cues, but their application typically requires knowledge of syntactic or semantic constraints. For example, hedges such as "rather" can only be placed before scalar adjectives such as "expensive", as illustrated by the Agreeable example in Figure 2, while insertion of hedges such as "you know", as shown in the Extraversion example in Figure 2, is less constrained. Adding a tag question such as "okay?" or "isn't it?" to the end of a sentence, as seen in the Agreeable example, may require identifying the subject of the sentence in order to match the pronoun.
To our knowledge, personage is the only corpus that provides reference utterances for Big-5 personalities for a data-to-text generation task. However, we hypothesized that we could achieve some style transfer to another domain by prompting with personality demonstrations from personage, and requesting an output using a meaning representation in another domain. Thus, after experimenting with personage, we also test the ability to transfer personality styles from personage to the ViGGO video games corpus. Several examples of meaning representations and reference utterances from the original ViGGO corpus are shown in Table 2.
Figure 2: Sample meaning representation with a vanilla realization labelled E2E and three personality-based stylistic realizations from the personage Dataset.
### Ranking using Automatic Metrics
The overgenerate-and-rank method for NLG for dialogue systems assumes that overgeneration will produce multiple viable candidates, and that the best candidate(s) can be identified through ranking, in real time. In this paper, we leave aside the real-time requirement, and test whether ranking can improve the fluency, semantic accuracy, and the manifestation of personality in the selected output. More formally, a high-quality response generated from a model based on the personality and the MR provided in the prompt should: (1) strongly manifest the personality; (2) have no missing or incorrect mentions of the attribute values; (3) produce no irrelevant attribute mentions i.e. hallucinations; and (4) be fluent. The generated utterance \(y\), conditioned on a Personality \(P\), and an MR \(x\) with slot values \(s\), can be formulated as \(y=f(P,s)\). The conditional likelihood of an utterance given P,MR can be decomposed into the product of three probabilities:
\[p(y|P,s)=p(P|y,s)*p(s|y)*p(y) \tag{1}\]
The term \(p(P|y,s)\) is the probability of a particular personality given the generated utterance \(y\) and the semantic attributes \(s\). The term \(p(s|y)\) represents the semantic accuracy. The term \(p(y)\) is the unconditional probability of the generated text. We calculate the Personality probability with a personality classifier (Sec. 3.2.2), and test multiple ways of computing semantic accuracy (Sec. 3.2.1). We automatically measure the fluency of the generated text as sentence probability, as in previous work [10; 44]. We discuss how we can use these together to define several different ranking functions in Sec. 3.2.3.
\begin{table}
\begin{tabular}{l} \hline _give_opinion_(name [**SpellForce 3**], rating [**poor**], genres [**real-time strategy, role-playing**], player_perspective [**bird view**]) \\ \hline \hline I think that **SpellForce 3** is **one of the worst games** I’ve ever played. Trying to combine the **real-time strategy** and **role-playing** genres just doesn’t work, and the **bird’s eye view** makes it near impossible to play. \\ \hline _verify_attribute_(name [**Little Big Adventure**], rating [**average**], has_multiplayer [**no**], platforms [**PlayStation**]) \\ \hline \hline I recall that you were **not that fond** of **Little Big Adventure**. Does **single-player** gaming on the **PlayStation** quickly get boring for you? \\ \hline \end{tabular}
\end{table}
Table 2: Examples of MRs and corresponding reference utterances in the ViGGO dataset. The DA of the MRs is indicated in italics, and the slots in small caps. The slot mentions in the utterances are bolded.
#### Measuring Semantic Accuracy
One advantage of semantically controlled natural language generation is that the meaning representation, here represented as a dialogue act with its attributes and values, provides an objective way to measure semantic correctness. In contrast, work on machine translation, paraphrasing and textual style transfer (TST) use more approximate measures such as semantic similarity or bleu, despite acknowledging bleu's limitations [3; 49].
Previous work on data-to-text NLG defined a metric called the Slot Error Rate (SER), i.e. the percentage of slots in the MR \(x\) that the NLG failed to correctly realize in the output \(y\) [48; 14]. Work on the personage corpus defines semantic error by adding the number of substitutions S, deletions D, repeats R, and hallucinations H [5; 13; 38; 36].2 The SER formula is then:
Footnote 2: Hallucinations can only be recognized for known attributes, but previous work has shown high correlations between human judgements and the ViGGO and personage SER scripts.
\[SER=\frac{S+D+R+H}{N} \tag{2}\]
where N is the number of slots in the MR. Previous work on the ViGGO dataset also provides scripts for calculating the SER [14]. We use these off-the-shelf SER scripts for both Personage and ViGGO here. Ranking needs an accuracy measure rather than an error measure, so for both Personage and ViGGO, we derive the SACC measure as:
\[SACC=1-SER \tag{3}\]
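For concreteness, a minimal sketch of these two formulas is shown below; the actual slot matching (deciding what counts as a substitution, deletion, repeat, or hallucination) is performed by the corpus-specific SER scripts and is only assumed here in the form of pre-computed error counts.

```
# Minimal sketch (not the official personage/ViGGO scripts): compute SER and SACC
# from per-utterance error counts, following Equations 2 and 3.

def ser(substitutions, deletions, repeats, hallucinations, num_slots):
    """Slot Error Rate: (S + D + R + H) / N."""
    return (substitutions + deletions + repeats + hallucinations) / num_slots

def sacc(substitutions, deletions, repeats, hallucinations, num_slots):
    """Semantic accuracy: 1 - SER."""
    return 1.0 - ser(substitutions, deletions, repeats, hallucinations, num_slots)

# Example: an MR with 6 slots where the generated utterance drops one slot
# and hallucinates one extra attribute.
print(sacc(substitutions=0, deletions=1, repeats=0, hallucinations=1, num_slots=6))  # ~0.667
```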
However, because the sacc metric is specific to personage and ViGGO, we also explore the use of common reference-based metrics for measuring semantic preservation, namely bleu, beyond-bleu, bleurt and bertScore[33; 23; 49; 41; 51]. These metrics are designed to be reference-based, i.e. they compare a generated text with a reference text (or a set of them), typically written by humans. However we apply a novel method to use reference-based metrics with pseudo-references, where we compare the generated utterances directly against the input MR that has been converted into a textual representation [11; 8]. The MR is linearized and the slot values are concatenated, except that boolean-valued slots such as "family friendly" are represented by their slot names rather than their values. A personage example is shown in the second row of Figure 2. While these metrics may produce rather low scores on (pseudo-reference, reference) text pairs, here we are interested in relative scores rather than absolute values.
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Measure** & **Personage** & **ViGGO** \\ \hline pBLEU & 0.11 & 0.29 \\ \hline pBeyond-BLEU & 0.29 & 0.48 \\ \hline pBLEURT & 0.39 & 0.50 \\ \hline pBERT precision & 0.31 & 0.47 \\ \hline pBERT recall & 0.11 & 0.34 \\ \hline pBERT F1 & 0.24 & 0.42 \\ \hline \end{tabular}
\end{table}
Table 3: Pearson Correlation between sacc and common Semantic Preservation Measures when applied to Pseudo References. All correlations are statistically significant at \(p=0\).
We assume that our domain specific sacc measures are the best possible measure of semantic accuracy and compute the correlations between sacc and the reference-based metrics for both personage and ViGGO as shown in Table 3. For both personage and ViGGO, pbleu is the least correlated measure while beyond-bleu, bleurt and bertScore Precision are the most highly correlated. We use these correlations in defining the ranking functions in Section 3.2.3.
#### Personality Style Classifier
We also need an automatic method for measuring stylistic strength or the manifestation of personality. Style classifiers are usually used to measure style strength in TST [23; 10], so we train a multi-class personality style classifier on a balanced set of 4,000 samples of reference utterances from the personage dataset, i.e. 800 per personality [39]. The classifier is a 110 million parameter BERT model that was fine-tuned with the following hyperparameters: learning rate: 3e-4, train batch size: 128, evaluation batch size: 32, epochs: 3. Table 4 reports the F1, Precision, and Recall for each personality type on the personage test set references of 1390 examples. At inference time, each sample is a stylistic personality realization generated by Jurassic. When the personality matches the intended personality, the probabilities of the classifier are used as a term in the ranking functions detailed in Section 3.2.3 below.
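The exact training code for this classifier is not given here; the following is a minimal sketch of how a five-way BERT classifier with the stated hyperparameters could be fine-tuned using the HuggingFace transformers API, where the dataset files, column names, and label mapping are illustrative assumptions.

```
# Sketch of fine-tuning a BERT personality classifier with the stated
# hyperparameters (lr 3e-4, train batch 128, eval batch 32, 3 epochs).
# Dataset file names and the assumption of integer "label" columns are illustrative.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

labels = ["agreeable", "disagreeable", "conscientious", "unconscientious", "extravert"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))

data = load_dataset("csv", data_files={"train": "personage_train.csv",
                                       "test": "personage_test.csv"})
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length"),
                batched=True)

args = TrainingArguments(output_dir="personality-clf", learning_rate=3e-4,
                         per_device_train_batch_size=128, per_device_eval_batch_size=32,
                         num_train_epochs=3)
Trainer(model=model, args=args,
        train_dataset=data["train"], eval_dataset=data["test"]).train()
```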
#### Ranking Functions
Over-generate and rank is an NLG paradigm that has been around since the beginning of statistical NLG [16; 2], where multiple candidates S\({}_{i}\) are first generated, and then ranked to select the best candidate. Using a single ranking measure at a time can be misleading when the aim is to optimize multiple aspects of the output [15]. On the other hand, defining appropriate ranking functions is challenging, since different aspects of the output can be weighted differently.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Personality** & **F1** & **P** & **R** \\ \hline extravert & 0.99 & 1.00 & 0.99 \\ \hline agreeable & 0.99 & 0.99 & 0.99 \\ \hline disagreeable & 0.99 & 0.99 & 0.99 \\ \hline conscientious & 0.99 & 0.99 & 0.99 \\ \hline unconscientious & 0.99 & 1.00 & 1.00 \\ \hline \end{tabular}
\end{table}
Table 4: Precision, Recall and F1 for the Personality Classifier
\begin{table}
\begin{tabular}{l} \hline \hline RF1: SACC * PAC * P(S) \\ \hline RF2: SACC * PAC *P(S) * pBLEU \\ \hline RF3: pBBLEU * PAC *P(S) \\ \hline RF4: pBLEURT * PAC * P(S) \\ \hline RF5: pBERT * PAC * P(S) \\ \hline \end{tabular}
\end{table}
Table 5: Ranking Functions
Here we treat all aspects as equally important by multiplying our measures, and examine the effect on the output quality.
Automatic metrics that can be combined for ranking at run time include measures of semantic accuracy, the personality classifier probabilities (PAC), and fluency, calculated as P(S), the probability of candidate S according to an LLM. We experiment with the five ranking functions in Table 5. RF1 captures all three components of the desired output, personality match (PAC), semantic accuracy (SAC) and fluency P(S). Because pbleu is the least correlated metric with sacc, we add it as an extra term to the ranking function in RF2 to see whether enforcing lexical similarity improves results. The other three measures all explore the effect of replacing sacc with a general measure for semantic similarity. As shown in Table 3, beyond-bleu, bleurt and bertScore Precision are the most highly correlated with sacc, so we replace sacc with these three measures in RF3, RF4 and RF5. We calculate P(S) using GPT2-Large [35].
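As an illustration of the ranking step, the sketch below assumes that the component scores (sacc or a semantic-similarity substitute, the classifier probability PAC, the fluency probability P(S), and optionally pbleu) have already been computed for each over-generated candidate; the candidate dictionaries and score values are invented for illustration.

```
# Illustrative over-generate-and-rank step: multiply pre-computed component
# scores per candidate (RF1/RF2 style) and keep the highest-scoring output.
def rf1(c):
    return c["sacc"] * c["pac"] * c["lm_prob"]

def rf2(c):
    return rf1(c) * c["pbleu"]

def rank(candidates, ranking_fn=rf2):
    return max(candidates, key=ranking_fn)

candidates = [
    {"text": "nameVariable is a rather nice, kid friendly Italian place, okay?",
     "sacc": 0.83, "pac": 0.99, "lm_prob": 0.012, "pbleu": 0.31},
    {"text": "nameVariable is an Italian restaurant.",
     "sacc": 0.50, "pac": 0.40, "lm_prob": 0.020, "pbleu": 0.22},
]
best = rank(candidates)
print(best["text"])
```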
### Prompt Formats, Prompt Sampling, and Prompt Selection
Our experiments test different prompt formats, number of examples, and sampling methods. We test two discrete prompt formats, using the traditional Data-to-Text representations (D2T) as well as representations similar to those used for Textual Style Transfer (TST). The two formats were shown in Table 1: the D2T format demonstrates generating an utterance directly from the meaning representation with a personality token, while the TST format provides instructions to generate a particular personality from a textual pseudo-reference of the meaning representation.
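A rough sketch of how the two prompt formats could be assembled from an MR is given below, following the templates in Table 1 and the pseudo-reference linearization described in Section 3.2.1; the helper names, the boolean-slot handling, and the example MR are illustrative assumptions rather than the exact implementation.

```
# Sketch of building the D2T and TST prompt formats from an MR, following Table 1.
# The pseudo-reference concatenates linearized slot values, with boolean-valued
# slots (e.g. familyfriendly) represented by their slot names rather than values.
mr = {"name": "nameVariable", "eattype": "pub", "food": "Italian",
      "area": "city centre", "familyfriendly": "no", "near": "nearVariable"}

def pseudo_reference(mr):
    parts = []
    for slot, value in mr.items():
        if value in ("yes", "no"):
            name = slot.replace("familyfriendly", "family friendly")
            parts.append(("not " if value == "no" else "") + name)
        else:
            parts.append(value)
    return " ".join(parts)

def d2t_prompt(mr, personality, demonstrations):
    mr_string = " | ".join(f"{k} = {v}" for k, v in mr.items()) + f" | personality = {personality}"
    return "\n".join(demonstrations) + "\n" + mr_string + "\n"

def tst_prompt(mr, personality, demonstrations):
    return ("\n".join(demonstrations) + "\n"
            f"Here is some text: {{{pseudo_reference(mr)}}}. "
            f"Here is a rewrite of the text which is {personality} : {{")

print(tst_prompt(mr, "agreeable", demonstrations=[]))
```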
Prompt-based learning is restricted by the number of input examples provided: for Jurassic the maximum is about 36 examples. In addition to varying the prompt format, we also experiment with the number of examples, as previous work has shown that this also matters [43]. We test between 1 and 36 examples.
We also experiment with whether we can create a "control knob" for personality by presenting all 5 types of personalities in some prompts and only a single personality in other prompts. Table 1 provides an example of one personality being used for few shot learning. Experiments using only one personality at a time used either 10 examples or 36 examples. Experiments using all personalities used either 1 example per personality (5 total examples) or 6 examples per personality (30 total examples). In these experiments, examples were randomly selected from the original personage train set where we select examples given the criteria of the number of examples and the number of personalities.
In addition, once we determine a good setting for type of prompt and number and type of examples, we build on previous work using a diversity criterion for selecting prompts for instruction tuning [46]. We hypothesized that creating prompt examples using a diversity criterion might lead to better performance. While the instruction tuning work used ROUGE score to select diverse automatically generated prompts, we use bleurt, and select a set of prompts that are the least similar
according to bleurt. We start with a large random sample from the training set, and randomly select our first example. We then calculate the bleurt score between our first example and all the other examples in our random sample. We then greedily select a second example with the lowest bleurt score, i.e. with the lowest similarity to the example already in the pool. We repeat this process, comparing each new candidate to the examples already in the pool, and selecting the one with the lowest average bleurt until we reach n number of examples, \(T=e_{1},e_{2}...e_{n}\). Experiments using this selection process will be called _diverse_.
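A minimal sketch of this greedy selection procedure is given below; `bleurt_score` is an assumed helper wrapping a BLEURT checkpoint (higher scores meaning more similar texts), and the pool handling is illustrative.

```
# Greedy diversity-based selection of prompt examples, as described above.
import random

def select_diverse(pool, n, bleurt_score):
    pool = list(pool)
    selected = [pool.pop(random.randrange(len(pool)))]   # random first example
    while len(selected) < n and pool:
        # pick the candidate least similar (lowest mean BLEURT) to the pool so far
        def mean_similarity(candidate):
            return sum(bleurt_score(candidate, s) for s in selected) / len(selected)
        best = min(pool, key=mean_similarity)
        pool.remove(best)
        selected.append(best)
    return selected
```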
Finally, based on our findings on personage, we utilize the best experimental settings to attempt personality style transfer in an out-of-domain dataset for Video Games called ViGGO, by providing demonstrations of personalities in the restaurant domain, with test items from the ViGGO test set [12].
## 4 Results
All experiments use the Jurassic-1 Jumbo PLM, a publicly available 178B parameter autoregressive language model [21; 17]. Based on tuning experiments with different settings, we set temperature at 0.7 and top P at 1.
We first report results comparing the two prompt formats and types of samples for personage, and then we report results for the five ranking functions. Finally we report results for personality transfer on the ViGGO corpus, and present a qualitative analysis of personality expression and diversity.
**Prompt Style and Prompt Sampling.** Here we compare the two prompt formats, Data-to-Text (D2T) and Textual Style Transfer (TST) for personage, when providing examples of either a single personality or examples of all 5 personalities, and varying the number of examples per prompt. Table 1 provided examples of both the D2T and TST prompt styles, and Table 6 provides the experimental results.
The top part of Table 6 shows that the D2T prompts consistently perform worse in every experimental setting, independently of whether examples are provided of multiple personalities (all) or single personalities (specific) or whether fewer or more examples are provided. For example, comparing D2T-10-specific to TST-10-specific, we can see that after ranking, the TST-10-specific setting (10 examples of a specific personality) provides the best performance with a semantic accuracy of 78.23% and a stylistic accuracy (PAC) of 99.00%. This supports our hypothesis that textualizing the data-to-text representations to make them look more like the natural text that LLMs are trained on would result in better performance. We also achieve a significantly higher stylistic accuracy over the whole candidate pool (99.00% PAC BR for TST-10-specific as compared to 96.71% PAC BR for D2T-10-specific), by instructing the model to realize the content as a particular personality type, rather than just demonstrating. Also interestingly, the D2T performance for sacc is the same before (BR) and after (AR) ranking, while the improvements in sacc after
ranking are large for the TST prompt style. This shows that the overall quality of the pool of D2T candidates is lower.
The lower part of Table 6 focuses on the TST experiments. These results show that it is more challenging to get the LLM to learn from diverse prompts how to do more than one task at a time, i.e. performance is lower when we provide examples of all personalities, such as TST-1-all (5 examples with one for each personality) and TST-6-all (30 examples with 6 for each personality). Moreover, interestingly, performance is better with only 10 examples of a specific personality, rather than 36 examples. The LLM may find long contexts such as would be provided with 36 examples more challenging than a shorter context.
Finally, we compare our random sampling method for our best setting to our diversity promoting sampling method described in Section 3.3[46]. The bottom row of Table 6 shows that we get a slight improvement in SACC to 78.46% by sampling more diverse prompts as well as an improvement from 97.55% to 100% for personality accuracy after ranking (PAC AR). While the difference in sacc is not significant (p\(=.28\)), the difference in PAC AR is significant (p\(=0\)). We therefore conclude that the diversity selection method is beneficial.
However, compared to the SOTA of 99.00% sacc for fine-tuning, the overall performance for sacc is low [5]. Semantically and stylistically perfect outputs are also rare. Previous work comments that it is difficult for a model to simultaneously achieve high stylistic accuracy and high semantic accuracy. Here, personality accuracy is high but there are few semantically perfect outputs to choose from.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**ID** & **SACC BR** & **SACC AR** & **PAC BR** & **PAC AR** & **Perfect BR** & **Perfect AR** \\ \hline \hline
**D2T-10-specific** & 66.08\% & 65.77\% & 97.61\% & 100.00\% & 6.80\% & 13.45\% \\ \hline
**D2T-36-specific** & 68.26\% & 68.50\% & 19.68\% & 88.27\% & 1.83\% & 9.86\% \\ \hline
**D2T-1-all** & 63.24\% & 62.12\% & 19.53\% & 86.26\% & 0.98\% & 5.90\% \\ \hline
**D2T-6-all** & 69.03\% & 68.48\% & 19.68\% & 88.06\% & 1.91\% & 11.29\% \\ \hline \hline
**TST-10-specific** & **72.02\%** & **78.23\%** & **99.00\%** & 97.55\% & 7.19\% & **36.69\%** \\ \hline
**TST-36-specific** & 68.07\% & 73.72\% & 97.00\% & 97.63\% & 5.19\% & 29.21\% \\ \hline
**TST-1-all** & 66.36\% & 70.30\% & 38.00\% & 97.63\% & 4.32\% & 25.76\% \\ \hline
**TST-6-all** & 70.96\% & 75.81\% & 56.00\% & 97.63\% & 6.21\% & 35.47\% \\ \hline \hline
**TST-10-diverse** & 67.96\% & **78.46\%** & 98.00\% & **100.00\%** & 9.47\% & **38.99\%** \\ \hline \end{tabular}
\end{table}
Table 6: Results comparing Data-to-Text prompts vs. Textual Style Transfer prompts using RF2 for ranking. Rows indicate both Prompt type and number of examples in prompt. BR (Before Ranking) metrics reports the average score over the entire candidate pool of 13900 outputs. AR (After Ranking) reports performance after selecting the best candidate according to RF2. sacc\(=\) Semantic Accuracy. PAC \(=\) Personality Accuracy. **Perfect** reports the percentage of candidates that are correct for both semantic accuracy and personality realization.
**Ranking.** We now explore how different ranking functions affect the results. Table 7 provides the results for the best experimental setting for each ranking function from Section 3.2.3. Here, we only experiment with the diverse prompts, given the results in the last row of Table 6.
Row RF2 of Table 7 shows that the addition of the pbleu term to RF1 achieves higher semantic accuracy (p\(=0\)) and higher bleu, while maintaining the same stylistic accuracy (PAC) of 100%. We speculate that the addition of the pbleu term favors outputs whose lexical realizations more closely match the original MR, enabling the SER script to more easily identify semantically correct realizations.
Comparing RF3, RF4 and RF5 in the last three rows of Table 7 shows that the best performing off-the-shelf semantic accuracy function is bleurt, with RF4 performing significantly better than both beyond-bleu and bertScore (p\(=0\)), although with somewhat lower personality accuracy.
**Out of Domain Results for ViGGO.** We now turn to our experiments using example prompts from the personage corpus with meaning representations for the ViGGO corpus. Our goal is to see whether we can transfer personality style across domains. Table 2 provided examples of ViGGO MRs and vanilla outputs: there are no reference utterances for ViGGO outputs with personality. We apply the best combination of prompt format and number of prompts (TST-10) from the personage experiments on ViGGO, for both the randomly sampled and diverse prompt sets, to test how diverse examples affect generalization across domains.
Table 8 shows that the results for ViGGO are surprisingly good, and that there is good personality transfer across domains, with personality accuracies of 97.00%. Interestingly, the upper part of Table 8 shows that the diverse prompts yield higher overall sacc, suggesting better generalization via diversity. Row 2 of Table 8 shows that RF3 with diverse prompts has the best combined performance for sacc and personality accuracy. The RF4 rows in both parts of Table 8 show that RF4, using bleurt, provides the highest sacc and the highest pbleu score, but unacceptably low PACs of 57.00% and 56.00%. Perhaps the bleurt metric ranks candidates that manifest personality lower. A paired t-test shows that RF3 performs significantly better than RF1 (\(\mathrm{p}=0\)) for sacc, even though bleurt was most highly correlated with sacc. Again, sacc is low compared to the fine-tuning SOTA of \(99.2\%\) [14].
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
**ID** & **Formula** & **ID** & **SACC** & **PAC** & **BLEU** \\ \hline
**RF1** & SACC * PAC * P(S) & TST-10-diverse & 76.56\% & 100.00\% & 0.235 \\ \hline
**RF2** & SACC * PAC *P(S) * pBLEU & TST-10-diverse & **78.46\%** & **100.00\%** & 0.240 \\ \hline
**RF3** & pBBLEU * PAC *P(S) & TST-10-diverse & 65.87\% & 100.00\% & 0.224 \\ \hline
**RF4** & pBLEURT * PAC * P(S) & TST-10-diverse & 71.61\% & 98.20\% & 0.213 \\ \hline
**RF5** & pBERT * PAC * P(S) & TST-10-diverse & 63.10\% & 100.00\% & 0.219 \\ \hline \end{tabular}
\end{table}
Table 7: Results on personage using all Ranking functions for prompt examples selected using a diversity criteria (TST-diverse), for the TST-10 best prompt setting. sacc \(=\) Semantic Accuracy. PAC \(=\) Personality Accuracy. bleu is Corpus pbleu.
**Qualitative Analysis.** Table 9 provides generation outputs for each personality along with their reference texts for the restaurant domain, while Table 10 provides ViGGO generation outputs when conditioned on personality utterances from personage. In both tables we mark in bold the linguistic markers of personality for each personality type [25]. Table 9 illustrates how the LLM generalizes from the examples given for each personality to produce similar markers that would not have been seen in the demonstrations; for example, the "Hey" formulation for agreeableness in the first row is completely novel, as is the "There you are,..." formulation for extraversion in the last row. Table 10 illustrates the differences between the vanilla reference sentences for ViGGO and the outputs that have been personality conditioned, with many of the basic linguistic markers appearing in the ViGGO outputs.
## 5 Discussion and Conclusion
We tested two types of discrete prompts for stylistically and semantically controlled NLG, and show that treating data-to-text generation as a text-to-text task performs better for both semantic and stylistic accuracy. To our knowledge, these are the first results testing prompt-based learning for simultaneously controlling both semantics and style.
\begin{table}
\begin{tabular}{|l l|l|l|l|l|} \hline
**ID** & **Formula** & **ID** & **SACC** & **PAC** & **BLEU** \\ \hline \hline
**RF1** & SACC* PAC * LMPROB & TST-10-diverse & 86.02\% & 96.44\% & 0.139 \\ \hline
**RF2** & SACC*PAC*LMPROB*PBLEU & TST-10-diverse & 85.59\% & 96.39\% & 0.138 \\ \hline
**RF3** & BBLEU*PAC*LMPROB & TST-10-diverse & **86.15\%** & **96.61\%** & 0.139 \\ \hline
**RF4** & BLEURT*PAC*LMPROB & TST-10-diverse & 87.58\% & 57.06\% & 0.168 \\ \hline
**RF5** & BERT*PAC*LMPROB & TST-10-diverse & 85.59\% & 96.61\% & 0.138 \\ \hline \hline
**RF1** & SACC*PAC*LMPROB & TST-10 & 80.75\% & 96.44\% & 0.095 \\ \hline
**RF2** & SACC*PAC*LMPROB*PBLEU & TST-10 & 85.44\% & 96.00\% & 0.113 \\ \hline
**RF3** & BBLEU*PAC*LMPROB & TST-10 & 84.40\% & 94.44\% & 0.107 \\ \hline
**RF4** & BLEURT*PAC*LMPROB & TST-10 & 95.75\% & 55.94\% & 0.398 \\ \hline
**RF5** & BERT*PAC*LMPROB & TST-10 & 80.00\% & 96.61\% & 0.063 \\ \hline \end{tabular}
\end{table}
Table 8: Results on ViGGO for all Ranking functions for prompt examples selected using a diversity criteria (TST-diverse) vs. randomly selected prompts for the TST-10 best prompt setting. sacc \(=\) Semantic Accuracy. PAC \(=\) Personality Accuracy. bleu here is Corpus pseudo bleu.
We varied the number of prompt examples and the sampling of examples, either sampling multiple personalities or a single personality, and comparing random sampling to a sampling method that encourages diversity. We found that examples illustrating multiple personalities in the same prompt produce worse performance rather than encouraging generalization. In addition, we find that selecting sample demonstrations using a diversity criterion improves both semantic and stylistic accuracy as well as stylistic transfer to the video games domain.
We also surprisingly get lower overall semantic accuracies when prompting and testing with restaurant examples than we do when prompting with restaurant examples and testing with video game MRs. We speculate that this may be due to the delexicalization of the restaurant name in the personage corpus. personage is based on synthetic MRs created for the E2E generation challenge. This means that the MRs do not describe real restaurants, while the ViGGO MRs do correspond to real video games. Thus ViGGO realizations can benefit from the knowledge that the LLM has about video games. In future work, we hope to test whether using MRs that correspond to real restaurants improves semantic accuracy.
\begin{table}
\begin{tabular}{l|l|l} \hline
**Personality** & **Generated Text** & **Reference Text** \\ \hline Agree & “**Hey,” I say. “You want to know more** & **Let’s see what we can find on** **Marinello’s? Right, right,”** & **Let’s see what we can find on** **Marinello’s. Yeah** it is an Italian **I say************************. “It has **like**, an average rating and it is an Italian place in city centre, **also it is a pub, **you know, alright?”** & **Let’s see what we can find on** **Marinello’s. Yeah** it is an Italian restaurant, family friendly and expensive in the city centre, **also** it has an average rating, and it is a pub **pal, okay? you know**. \\ \hline Disagree & Oh God it’s an English pub with a **relatively** moderate price range and it isn’t kid friendly. it is near nearVariable. **obviously**. & **Damn** moderately priced nameVariable is in city centre **and** it isn’t family friendly, **also** it’s an English place. It is near nearVariable. It is a pub. \\ \multirow{2}{*}{Conse} & Right, let’s see what we can find on **nameVariable. I see**, it is a restaurant, **you know** and it isn’t kid friendly, **also** it is a **Italian place, **also** it is in **irverside**, **and** it has a so-so rating. \\ \hline Unconsc & Yeah, oh God I am not sure. Oh, i mean, oh God. Oh God. Anyway, nameVariable is a fast food restaurant, near nearVariable, **also** it is in city centre, and it isn’t family friendly. & **Oh gosh mmhm... I don’t know. I mean,** nameVariable is a restaurant, **also** it is in city centre, and it is **as**o** it is **a** restaurant, **also** it is **a** restaurant **!** \\ \multirow{2}{*}{Extra} & There you are, now, let’s see what **we can find on** nameVariable. Well, nameVariable is a fast food restaurant in city centre, **also** it is a restaurant, it is moderately priced **and** it is **family friendly.** \\ \hline Extra & There you are, now, let’s see what **we can find on** nameVariable. Well, nameVariable is a fast food restaurant in city centre, **also** it is a restaurant, it is moderately priced **and** it is **family friendly.** \\ \hline \end{tabular}
\end{table}
Table 9: Examples of Generated Text and References for Each Personality. Cues indicative of each personality type are shown in bold.
This work has several limitations. One limitation is the overall semantic accuracy performance for both restaurants and video games: these accuracies are lower than those of fine-tuned models, so in a real setting fine-tuned models would still need to be used; in addition, the Jurassic model cannot be run in real time. Both of these limitations might be addressable by instruction tuning a smaller model for data-to-text and stylistic control tasks such as we report here [31, 46]. Another limitation is that we only tested our approach on two domains, and only on five personality styles.
|
2305.01020
|
Evaluating statistical language models as pragmatic reasoners
|
The relationship between communicated language and intended meaning is often
probabilistic and sensitive to context. Numerous strategies attempt to estimate
such a mapping, often leveraging recursive Bayesian models of communication. In
parallel, large language models (LLMs) have been increasingly applied to
semantic parsing applications, tasked with inferring logical representations
from natural language. While existing LLM explorations have been largely
restricted to literal language use, in this work, we evaluate the capacity of
LLMs to infer the meanings of pragmatic utterances. Specifically, we explore
the case of threshold estimation on the gradable adjective ``strong'',
contextually conditioned on a strength prior, then extended to composition with
qualification, negation, polarity inversion, and class comparison. We find that
LLMs can derive context-grounded, human-like distributions over the
interpretations of several complex pragmatic utterances, yet struggle composing
with negation. These results inform the inferential capacity of statistical
language models, and their use in pragmatic and semantic parsing applications.
All corresponding code is made publicly available
(https://github.com/benlipkin/probsem/tree/CogSci2023).
|
Benjamin Lipkin, Lionel Wong, Gabriel Grand, Joshua B Tenenbaum
|
2023-05-01T18:22:10Z
|
http://arxiv.org/abs/2305.01020v1
|
# Evaluating statistical language models as pragmatic reasoners
###### Abstract
The relationship between communicated language and intended meaning is often probabilistic and sensitive to context. Numerous strategies attempt to estimate such a mapping, often leveraging recursive Bayesian models of communication. In parallel, large language models (LLMs) have been increasingly applied to semantic parsing applications, tasked with inferring logical representations from natural language. While existing LLM explorations have been largely restricted to literal language use, in this work, we evaluate the capacity of LLMs to infer the meanings of pragmatic utterances. Specifically, we explore the case of threshold estimation on the gradable adjective "_strong_", contextually conditioned on a strength prior, then extended to composition with qualification, negation, polarity inversion, and class comparison. We find that LLMs can derive context-grounded, human-like distributions over the interpretations of several complex pragmatic utterances, yet struggle composing with negation. These results inform the inferential capacity of statistical language models, and their use in pragmatic and semantic parsing applications. All corresponding code is made publicly available1.
Footnote 1: [https://github.com/benlipkin/probsem/tree/CogSci2023](https://github.com/benlipkin/probsem/tree/CogSci2023)
language models; semantic parsing; pragmatics
## Introduction
Natural language understanding unfolds in context and reflects more than literal interpretation. Such a process is posited to be mediated by a series of inferences, which jointly scrutinize mappings between linguistic structure and mental representations in tandem with the plausibility of resulting interpretations. A sentence as simple as _"Mia is tall"_ may be broadly meaningful in and of itself, but the range of plausible heights a listener will consider shifts with context that _"Mia plays in the WNBA"_ or that _"Mia is a three-year-old child."_ These contextual inferences are broadly studied as linguistic _pragmatics_(Wittgenstein, 1953; Searle, 1969; Austin, 1975; Levinson, 1983; Grice, 1989; Clark, 1996).
Recently, work on large-scale training of transformer language models has produced engineering artifacts that perform exceedingly well across a range of natural language processing (NLP) benchmarks. While trained explicitly to optimize an objective of next-token prediction, such systems implicitly recapitulate large swaths of the traditional NLP pipeline, from POS tagging and parsing to semantic role labeling and coreference resolution (Tenney, Das, & Pavlick, 2019; Bommasani et al., 2021). Indeed, a growing body of contemporary work utilizes LLMs to synthesize program-like representations from natural language (NL) inputs for use in downstream applications from action planning to theorem solving (Acquaviva et al., 2021; Gao et al., 2022; Collins, Wong, Feng, Wei, & Tenenbaum, 2022; Mishra et al., 2022; Zelikman, Huang, Poesia, Goodman, & Haber, 2022; Wong et al., prep.). In leveraging such systems as semantic parsers, this work casts LLMs as formal accounts of the mapping between linguistic forms and representations of meaning. However, such evaluations have been largely restricted to _literal_ language use and translation. In contrast, _pragmatic_ meaning estimation often requires considering a distribution over multiple interpretations in context, presenting additional complexity (Fried, Tomlin, Hu, Patel, & Nematzadeh, 2022; Hu, Floyd, Jouravlev, Fedorenko, & Gibson, 2022; Ruis et al., 2022; Hu, Levy, Degen, & Schuster, 2023).
Existing models of pragmatic reasoning typically rely on explicit probabilistic computation, often within the _Rational Speech Acts_ (RSA) communication framework, whereby a pragmatic listener reasons about an informative speaker to infer intended meanings (Frank & Goodman, 2012; Goodman & Stuhlmuller, 2013; Goodman & Frank, 2016). We ask: can statistical language models _amortize_ common pragmatic inferences, recovering approximately equivalent distributions between language and contextually-modulated meanings?
To address this question, in this work, we explore the case of interpretation over the gradable adjective "_strong_" in describing a player in a fictional game. Conditioned on context describing a generative model over possible worlds, expressing a numerical prior on _"strength"_, among other variables, our paradigm invokes estimation over numerical interpretations of textual descriptions of a novel player's strength. We collect both LLM-estimated and human-measured distributions over the interpretations of such utterances, and explore composition with additional dimensions of complexity. We find that LLMs impressively infer context-aware, human-like distributions over complex pragmatic utterances such as _"very strong for a beginner"_. Simultaneously, we observe a failure to compose such inferred meanings with negation, e.g., _"not strong"_ or polarity inversion, e.g., _"weak"_, offering insights into potential shortcomings.
### Meaning as probabilistic programs
In expressing formal representations of linguistic meaning, one approach has been to build from the framework of model-theoretic semantics (Kripke, 1963; Montague, 1973; Partee, Meulen, & Wall, 1990; Kratzer & Irene, 1998), in combination with uncertainty quantification (Van Eijck & Lappin, 2012; Cooper, Dobnik, Lappin, & Larsson, 2015), converging upon probabilistic programming languages (PPLs), like Church (Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2012), as a useful substrate. Goodman and Lassiter (2015), in particular, present a framework, which we build from here, for NL as belief updating over probabilistic programs. Starting from a generative model over possible worlds describing a domain, sentences are incrementally expressed as conditioning statements and executed to update posterior beliefs over world states.
Goodman and Lassiter motivate this framework by providing examples through a discussion of a fictional game of tug-of-war (ToW). In this simplified version of the classical children's game, two teams, each with one or more players, compete against each other, with the winner decided by the team whose players exert the most strength (Goodman & Tenenbaum, 2010; Gerstenberg & Goodman, 2012; Goodman, Tenenbaum, & Gerstenberg, 2014). Starting from this base, Goodman and Lassiter built examples of PPL-mediated contextual semantic analysis. For example, _"Team A has more than 3 players"_ could be expressed as (condition (> (length team-a) 3)), and when queried if _"Team A"_ might beat _"Team B"_ (which perhaps has only 2 players), this information would be considered in evaluating the distribution over outcomes of such a match. In elevating this approach beyond literal language use, to scenarios where NL presents with nondeterministic interpretation, Goodman and Lassiter proposed leveraging explicit probabilistic computation via RSA. One difficulty with this framework is the need to manually synthesize programs expressing the semantics of evaluated NL. Drawing from successful approaches in semantic parsing and program synthesis, such a process lends itself increasingly to automation using LLMs.
### Present study
Goodman and Lassiter (2015) have highlighted the elegant capacity of PPLs in expressing the logical representation of sentence meaning, but have left open how such programs might be derived in the first place. In parallel, modern semantic parsing work has painted a picture of LLMs as systems capable of mediating such a translation. However, when it comes to scenarios where this task moves beyond literal language use, it is unclear: a) if LLMs are appropriately suited to mediate such sophisticated inferences and b) whether such model estimates would be in line with human expectations. In addressing these questions, we build from the ToW domain model and pursue gradable adjectives as an expressive test bed.
Gradable adjectives, such as _"strong"_, present with vagueness as they lack precise class boundaries. Several approaches have been developed to express the semantics of gradable adjectives, and in one common approach, a free threshold variable is introduced such that _"strong"_ be defined as having _"strength"_\(>\)\(\theta\)(Cresswell, 1976; Klein, 1980; Kennedy, 2007; Lassiter & Goodman, 2017; Tessler, Tsvilodub, Snedeker, & Levy, 2020). While the distribution over \(\theta\) or other formulations can be derived to various degrees using the recursive probabilistic inference of RSA (Qing & Franke, 2014; Tessler & Goodman, 2022), here we ask whether an LLM can stand in, directly estimating the distribution over \(\theta\) in a single forward pass. See Figure 1 for an overview.
Within the context of ToW players, with a prior over _"strength"_ defined in the domain description, we begin with the basic evaluation of _"strong"_ and its inverse polarity counterpart _"weak"_, then extending to the inclusion of negation, e.g., _"not strong"_, qualifiers, e.g., _"pretty strong"_, and comparison classes, e.g., _"strong for a novice player"_. In considering the plausibility of LLM inferences, we collect human behavioral norms for the same stimuli to quantify where and how the model captures or fails to reflect human intuitions. We find, on the positive end, that LLMs can perform rather sophisticated contextual amortization of a stack of inferences that include both literal and pragmatic ones, elegantly parsing over complex pragmatic utterances, conditioned on text expressing a generative world model as a probabilistic program. On the negative end, however, LLMs can struggle with the otherwise logically simpler properties of negation or polarity inversion, deviating from human interpretations in such cases. These results inform our understanding of the inferential capacity of LLMs, and as such simultaneously inform debates surrounding the capacities of statistical language learners (see e.g., Piantadosi (2023)).
Figure 1: Schematic overview. LLMs stand in for the traditional pragmatics pipeline, often recovering human-like estimates over multiple interpretations of complex constructions.
## Methods
To explore the questions outlined thus far, we begin by more formally defining the ToW domain model in Church, outlining the priors and constraints placed on the semantics explored for the remainder of this work. We define a scoring function, by which the LLM can estimate the probability of particular text interpretations, conditioned on the domain model context and an NL query. To evaluate the efficacy of this scoring function, we developed a set of test materials to evaluate human and LLM-based interpretations of gradable adjectives, and tested our modeling framework and 60 human participants on two variations of this task, one focused primarily on qualification and one on class comparison. Negation and polarity inversion were also explored as part of the qualification experiment. Finally, we consider the distributions over interpretations estimated by the model with respect to those empirically measured in human participants.
### Domain Model and LLM Context
```
;; This Church program models a tug-of-war game between teams of players.

;; Each player has a strength, with strength value 50 being about average.
(define strength (mem (lambda (player) (gaussian 50 20))))

;; Each player has an intrinsic laziness frequency.
(define laziness (mem (lambda (player) (uniform 0 1))))

;; The strength of the team is the sum of the player strengths.
;; When a player is lazy in a match, they pull with half their strength.
(define (team-strength team)
  (sum (map (lambda (player)
              (if (flip (laziness player))
                  (/ (strength player) 2)
                  (strength player)))
            team)))

;; The winner of the match is the stronger team.
;; Returns true if team-1 won against team-2, else false.
(define (won-against team-1 team-2)
  (> (team-strength team-1) (team-strength team-2)))

;; Now, let us translate some user-defined statements.
;; Each statement begins with either 'Condition' or 'Query'.
;; 'Condition' statements provide facts about the scenario.
;; 'Query' statements are questions that evaluate quantities of interest.

;; Condition: Jack is strong.
(condition (> (strength 'jack) 50))
```
Critically, we see a prior over strength \(\sim\mathcal{N}(50,20)\). While not all other elements of this domain model are required for our downstream tasks, we include full context so as to evaluate efficacy and robustness within a complete world model.
### Task Description
To condition onto our world model that _"Jack is strong"_, expressed as (condition (> (strength 'jack) \(\theta\))), what value for \(\theta\) is reasonable? While leveraging RSA is one strategy, it quickly grows intractable to accurately estimate such a range for all gradable adjectives, with the combinatorial space further plagued by the possible composition with additional constraints, e.g., _"somewhat strong"_. So we ask: can an LLM amortize inference of this distribution over \(\theta\) in a way that is pragmatically-sensitive and consistent with human inferences? To evaluate this question, we developed a set of stimuli, each referencing gradable adjectives to describe the strength of a fictional athlete named _"Jack"_. These materials were divided among two experiments.
**E1: Qualifiers.** In E1, we first evaluated the probability of sentences about Jack's strength being interpreted as programs of the form (condition (> (strength 'jack) \(\theta\))), for \(\theta\) from \(0-100\), in intervals of \(10\). These included both cases where Jack is _"strong"_ and where he is _"not weak"_, to various degrees. For each sentence, \(P_{model}(\theta)\) was estimated by the LLM, and \(P_{human}(\theta)\) was measured from a collection of human participant point estimates. In addition to these test sentences, a control sentence was included of the form _"Jack has at least average strength"_, which lacks vagueness and is intended to recover the majority of probability mass at \(\theta=\mu_{strength}=50\). Then, to test robustness to polarity inversion, we developed a parallel set of materials to evaluate Jack's weakness, considering instead programs of the form (condition (< (strength 'jack) \(\theta\))). These materials were directly matched to those in the first part of E1, with only the modification of swapping _"strong"_ and _"not weak"_ to _"not strong"_ and _"weak"_, respectively. The full set of \(18\) materials can be found in Figure 3.
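As an illustration, the candidate program space scored for each E1 stimulus can be enumerated as below; the string formatting mirrors the condition statements above and is otherwise an illustrative assumption.

```
# Enumerate the candidate Church programs scored for each E1 stimulus:
# thresholds 0-100 in steps of 10, with ">" for strength readings and
# "<" for weakness readings.
def candidate_programs(direction=">"):
    return [f"(condition ({direction} (strength 'jack) {theta}))"
            for theta in range(0, 101, 10)]

print(candidate_programs(">")[:3])
# ["(condition (> (strength 'jack) 0))", "(condition (> (strength 'jack) 10))", ...]
```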
**E2: Comparison Classes.** In E2, we extended this evaluation by introducing comparison classes to conditionally refine interpretation. We modified the definition of strength in the LLM prompt to consider a new variable, the league of a player, by injecting the following conditional statement (the full updated prompt can be found in the paper repository):
```
(cond ((equal? league 'beginner)     (gaussian 30 20))
      ((equal? league 'intermediate) (gaussian 50 20))
      ((equal? league 'professional) (gaussian 70 20)))
```
Here we ask: can an LLM use a verbal descriptor of a player to jointly infer their league membership as well as relative
Figure 2: Example of full text passed to LLM for a single query. The tug-of-war domain model (blue) and task instructions (green) are consistent across all trials. For each evaluated sentence (yellow), the probability of each program (red) is evaluated to return a score for a given interpretation.
strength within that league? Drawing from a subset of E1, we developed a new set of materials that incorporate these comparison classes. In particular, we preserved the control form: _"Jack has at least average strength"_ and the form which deviated most from the mean in Figure 3: _"Jack is very strong"_. We modified each sentence for each league, along three degrees of abstraction: exact match, synonym, and allusion. For example, for the first league, we assessed Jack's strength for a _"beginner"_, _"novice"_, and _"someone new to the game."_ The full set of 18 materials can be found in Figure 4.
### Human Participant Evaluation
In order to evaluate \(P_{human}(\theta)\) for each stimulus, two behavioral studies were conducted. 60 participants were recruited from Prolific, 30 for E1 and 30 for E2. Participants provided informed consent and were paid approximately $15 per hour. The experiment requested that participants move a slider to indicate the threshold (\(\theta\)) on the strength of a fictional athlete named _"Jack"_, based on independent readings of the stimulus sentences. One participant was removed from E1 for self-reported comprehension difficulties. Analyses include only the remaining participants. The experimental source files, including instructions and stimulus materials, are released with the paper repository.
### LLM Scoring Function
In order to evaluate \(P_{model}(\theta)\) for each stimulus, a scoring function was defined over programs varying \(\theta\). The OpenAI code-davinci-002 LLM [3] is used to parameterize a language model, with the capacity to assign conditional probabilities over any string \(x_{i}\in\mathcal{X}\). To interpret the score of each program \(y_{i}\in\mathcal{Y}\) as a normalized probability with respect to the restricted hypothesis space under consideration, the log-probabilities of the considered programs under the LLM are passed through a softmax function with temperature parameter \(\alpha\), selected independently for each stimulus sentence using leave-one-out cross-validation (LOOCV) as expanded in the following section.
\[P(y_{i})=\frac{\exp{(\alpha\log{P(x_{i})})}}{\sum_{j=1}^{n}\exp{(\alpha\log{ P(x_{j})})}} \tag{1}\]
In this case where programs differ only in \(\theta\), \(P_{model}(\theta_{i})\) is approximated as \(P(y_{i})\). These discrete program probabilities form the basis for subsequent analyses.
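As a minimal Python sketch of this scoring step (not the exact evaluation harness used in this work), the snippet below assumes a generic `score_fn` that returns the LLM log-probability of a candidate program given the prompt; the program template and helper names are illustrative assumptions.

```python
import numpy as np

def softmax_over_programs(logprobs, alpha=1.0):
    """Normalize LLM log-probabilities of the candidate programs (Eq. 1)."""
    scaled = alpha * np.asarray(logprobs, dtype=float)
    scaled -= scaled.max()                      # numerical stability
    weights = np.exp(scaled)
    return weights / weights.sum()

def p_model_theta(score_fn, context, sentence, thetas, alpha=1.0):
    """Approximate P_model(theta) by scoring one program per threshold theta.

    score_fn(text) -> summed token log-probability of the trailing program
    under the LLM, conditioned on the domain-model prompt and the sentence.
    """
    programs = [f"(condition (> (strength 'jack) {t}))" for t in thetas]
    logprobs = [score_fn(context + sentence + "\n" + prog) for prog in programs]
    return softmax_over_programs(logprobs, alpha)
```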
### Comparing \(P_{human}(\theta)\) and \(P_{model}(\theta)\)
For each of the 36 stimulus sentences, 29 (E1) or 30 (E2) point estimates on \(\theta\) were measured in human participants. From these point estimates, a discrete empirical distribution over the domain \(0-100\), in intervals of 10, was calculated via normalized counts for each stimulus.
\[P_{human}(\theta_{i})=\frac{C(\theta_{i})}{\sum_{j=1}^{n}C(\theta_{j})} \tag{2}\]
For the same stimulus sentences, a weight was calculated for each program over the same domain. Such weights were normalized as in Equation 1 with \(\alpha\) selected for each stimulus by minimizing the sum of the Jensen-Shannon distances (JSD; Equation 4) between \(P_{human}(\theta)\) and \(P_{model}(\theta)\) for the remaining \(N-1\) stimuli per experiment, using the Nelder-Mead downhill simplex method [15].
\[\operatorname*{arg\,min}_{\alpha}JSD\left(P_{human}(\theta),P_{model}(\theta)\right) \tag{3}\]
With \(P_{human}(\theta)\) and \(P_{model}(\theta)\) defined, their similarity was calculated using the Jensen-Shannon distance, a metric distance between two probability distributions \(P\) and \(Q\), where \(M\) is the point-wise mean between \(P\) and \(Q\), and \(KL\) is the Kullback-Leibler divergence [11].
\[JSD\left(P\parallel Q\right)=\sqrt{\frac{KL(P\parallel M)+KL(Q\parallel M)}{2}} \tag{4}\]
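A Python sketch of the leave-one-out temperature selection in (3) is given below, assuming the per-stimulus LLM log-probability vectors have already been collected; SciPy's Nelder-Mead optimizer and Jensen-Shannon distance stand in for the exact implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import jensenshannon  # JS *distance*, i.e., sqrt of the divergence

def fit_alpha_loocv(held_out_idx, human_dists, model_logprobs):
    """Select the softmax temperature alpha for one stimulus by minimizing the
    summed JSD over the remaining N-1 stimuli (Eq. 3) with Nelder-Mead."""
    def loss(alpha_vec):
        alpha, total = alpha_vec[0], 0.0
        for i, (p_h, lp) in enumerate(zip(human_dists, model_logprobs)):
            if i == held_out_idx:
                continue
            lp = alpha * np.asarray(lp, dtype=float)
            p_m = np.exp(lp - lp.max())
            p_m /= p_m.sum()
            total += jensenshannon(p_h, p_m, base=np.e)
        return total
    return minimize(loss, x0=[1.0], method="Nelder-Mead").x[0]
```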
In order to evaluate the statistical significance of this similarity metric, a nonparametric permutation test was employed. To generate the null distribution, the values of \(P_{human}(\theta)\) and \(P_{model}(\theta)\) were shuffled over \(\theta\) for \(N=10,000\) iterations and the JSD measured for each variant. \(p\)-values were calculated as the fraction of null samples with JSD smaller than the true JSD. Raw \(p\)-values were controlled for multiple comparisons using false discovery rate (FDR) correction for the number of tests within each experiment [1].
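The significance test can be sketched in Python as follows (the FDR correction across stimuli is omitted here):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def jsd_permutation_test(p_human, p_model, n_perm=10_000, seed=0):
    """Permutation test on the Jensen-Shannon distance: the null distribution
    shuffles probability mass over theta, and the p-value is the fraction of
    null JSDs smaller than the observed JSD."""
    rng = np.random.default_rng(seed)
    true_jsd = jensenshannon(p_human, p_model, base=np.e)
    null = np.array([jensenshannon(rng.permutation(p_human),
                                   rng.permutation(p_model), base=np.e)
                     for _ in range(n_perm)])
    return true_jsd, float(np.mean(null < true_jsd))
```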
## Results
In order to evaluate whether LLMs can effectively leverage context to accurately infer distributions over linguistic meaning, several experiments were conducted.
### E1: Qualifiers
For descriptions of Jack's strength, programs of the form (condition (> (strength 'jack) \(\theta\))) were evaluated over \(\theta\). \(P_{model}(\theta)\) is presented in green in Figure 3A.
**LLMs make contextually-aware, pragmatically-sensitive inferences over graded adjectives and qualifiers.** For each variation, a qualitatively smooth and interpretable distribution is reflected over \(\theta\). For _"Jack is strong"_ the majority of probability mass falls \(>\mu_{strength}\), and when _"Jack is very strong"_ it shifts further. For the control _"Jack has at least average strength"_, the mass is correctly placed on \(\theta=\mu_{strength}\).
**LLMs make mostly human-like inferences, but struggle with negation.** On the same Figure 3A, we see \(P_{human}(\theta)\) presented in blue. Remarkably, \(P_{human}(\theta)\) and \(P_{model}(\theta)\) are generally highly overlapping, even often with complex qualifier composition. In fact, such distributions present with significant similarity for all sentences lacking negation (Figure 3A). However, of the sentences including negation, only half of the interpretations are well-aligned.
**LLMs struggle further with polarity inversion.** To further evaluate the robustness of this framework, a follow-up experiment was conducted, exploring inversion in concept polarity. For a collection of sentences describing Jack's
weakness, programs of the form, (condition (< (strength 'jack) \(\theta\))) were evaluated over the domain of \(\theta\). \(P_{model}(\theta)\) is presented in green in Figure 3B. Once again, distributions appear qualitatively smooth and present with some intuitive characteristics. For example, _"Jack is very weak"_ is less than _"Jack is weak"_, and the mean is correctly parsed in the control _"Jack has at most average strength."_ However, a different trend is observed with respect to the alignment with human participants. In this case, where the evaluated concept is of negative polarity with respect to the variable presented in the prompt, \(\theta\) tends to be consistently overestimated by the model. For all sentences other than the control, there is an inability to detect significant similarity between \(P_{model}(\theta)\) and \(P_{human}(\theta)\).
### E2: Comparison Classes
Selecting the control, _"at least average"_, and the condition deviated most from \(\mu_{strength}\) in Figure 3A, _"very strong"_, a new set of sentences were compiled to describe the strength of _"Jack"_ contingent on his membership in different _"leagues"_ with individual strength priors. The prompt explicitly presents _"beginner"_, _"intermediate"_, and _"professional"_ leagues, with respective means of 30, 50, and 70.
**LLMs accurately parse conditional mixtures, even inferring group membership from indirect descriptors.** Sentences of the form _"Jack...for a..."_ were presented for each strength description and each league, including both the exact leagues described in the prompt (Figure 4A), as well as previously unseen league descriptors as synonyms (Figure 4B), and even indirect allusions (Figure 4C). Such sentences were parsed and interpreted with outstanding success, significantly aligning with human expectations for 17 of the 18 sentences evaluated, including all control sentences and all sentences at the complexity of direct matches or synonyms.
## Discussion
We began this work with a framework of pragmatic language understanding as an inferential procedure, and next motivated a view of linguistic meaning representation as probabilistic programs. Selecting gradable adjectives as our test bed, we designed a task to evaluate the pragmatic reasoning capacity of LLMs in a complex semantic parsing exercise. Contextualized on code expressing a generative world model defining the semantics of a tug-of-war game, we evaluated a number of sentences about the strength of a fictional player, often composing such sentences with pragmatically complex phenomena. Using an LLM, we estimated \(P_{model}(\theta)\) for each target sentence and conducted human behavioral experiments to empirically measure each corresponding \(P_{human}(\theta)\).
From our initial evaluation (E1; Figure 3), we learned that LLMs can effectively amortize inference of a smooth distribution over \(\theta\) in a way that is contextually-grounded to the semantics of the prompt and pragmatically-sensitive with respect to gradable adjectives and qualifiers. Such model estimates aligned with human measurements for all descriptions of how _"strong"_ a player was, but failed to recapitulate the intricacies of human distributions in the majority of cases where the player was _"weak"_, _"not weak"_, or _"not strong"_. These results suggest that while the model can estimate _some_ approximate distribution for each of these cases, the ability to infer an exactly human-like distribution suffers when composing negation in the lexical space, e.g., _"strong"_ vs. _"not strong"_, or polarity inversion in the conceptual space, e.g., _"strong"_ vs. _"weak"_. This is consistent with prior work noting LLM difficulty in resolving negation more generally [23, 24, 25]. It also draws intriguing parallels to child developmental work on concept acquisition, noting observed lags in the mastery of negative polarity concepts, e.g., _"short"_, relative to their positive polarity counter
Figure 3: Model-estimated and human-measured distributions over \(P(\theta)\). Panel A explores programs of the form: (condition (> (strength ’jack) \(\theta\))), and Panel B: (condition (< (strength ’jack) \(\theta\))). Each subplot considers a unique sentence, with \(P_{model}(\theta)\) presented in green and \(P_{human}(\theta)\) in blue. An asterisk indicates significant similarity (\(p<0.05\); FDR-corrected) between \(P_{model}(\theta)\) and \(P_{human}(\theta)\), instantiated as a reduced Jensen-Shannon Distance (JSD; Equation 4) relative to a null permutation analysis.
parts, e.g., _"tall"_, perhaps highlighting more general asymmetries in concept complexity [12, 13, 14]. From our evaluation of class comparisons (E2; Figure 4), we further highlighted the context-sensitivity of such models in appropriately resolving conditional mixtures, presenting with impressive robustness in the presence of incorrect references nearby in context. These results are even more powerful when the match between the query and context variable is not exact, but instead needs to be estimated from a synonym or indirect allusion. These results support an argument for the lexical semantic robustness of LLMs under this approach, a convenient case relative to some traditional semantic parsers based on combinatory categorical grammars (CCGs), for which more complex workarounds are often required [12, 13, 15].
Overall, these results paint a picture of LLMs as effectively recovering some reasonable distribution in each of these complex test cases, yet highlight some discrepancies with human inferences. If we had perfectly recovered human distributions, this would have led to a series of possible interpretations. One interpretation of such a finding might be that LLMs, just as they appear to implicitly represent other forms of linguistic structure, here implicitly perform inference, as alluded to via other works on amortization [16, 17]. Another interpretation could be that, in practice, the statistical regularities of text during training are sufficient to recover these distributions at test time without explicit computation over a world model. Such an account might inform resource-rational frameworks of human language processing, possibly suggesting that partial pragmatic computations could in principle be heuristically approximated, or even retrieved, instead of explicitly recomputed at each instance [1, 13, 14]. While our data do not present LLMs as perfect estimates of human populations across all cases, we believe that these data still at least partially support this second hypothesis. It is indeed possible that some, but not all, of the computations required to solve our task, are amortizable, lending to human-like distributions in some cases, but incorrect approximation in other out-of-domain cases. For example, perhaps composition with negation requires more explicit computation at test time by human participants, which leads to this distributional shift relative to the heuristic estimate of the LLMs. Future work should consider more directly testing this, starting from a framework of computational utility.
**Limitations.** While the results presented in this work have proposed a primarily positive image of LLMs as elegantly handling pragmatic inference within a complex semantic parsing task, only a small number of examples within a single scope have been explored thus far. In order to confirm that the conclusions of these results generalize, evaluation of a broader class of pragmatic phenomena in additional task contexts would be required.
**Future Directions.** One particularly exciting future direction is connecting LLM-mediated inferences over PPL programs with actual execution of such programs and evaluation of their resulting distributions. If we ask _"Can Jill, a very strong beginner, beat Jane, a somewhat strong intermediate?"_, such a question can be reduced to neuro-symbolic programming. Leveraging LLM inference, a distribution over the thresholds on each player's strength can be derived. Next, such programs can be explicitly executed in a PPL interpreter, inducing a distribution over each player's strength. From this state, it is straightforward to query the winner of such a match: (query (won-against '(jill) '(jane))). When then considering more difficult cases, e.g., those involving negation, a hybrid between RSA-like and LLM-mediated approaches might be considered, for example, using LLM estimates to initialize Sequential Monte Carlo (SMC) hypotheses that get updated based on probabilistic program inferences.
Figure 4: Model-estimated and human-measured distributions over \(P(\theta)\), incorporating comparison class. Panel A uses exact class from prompt, Panel B: synonyms, and Panel C: allusions. As in Figure 3, \(P_{model}(\theta)\) is presented in green, \(P_{human}(\theta)\) in blue, and an asterisk indicates significant (\(p<0.05\); FDR) similarity between \(P_{model}(\theta)\) and \(P_{human}(\theta)\).
## Acknowledgments
We thank our anonymous reviewers for their insightful feedback and recommendations. BL is supported by an MIT Presidential Fellowship and GG by the National Science Foundation Graduate Research Fellowship under Grant No. 2141064. LW and JBT are supported by the MIT Quest for Intelligence, AFOSR Grant #FA9550-19-1-0269, the MIT-IBM Watson AI Lab, ONR Science of AI and DARPA Machine Common Sense.
|
2309.02824
|
Geometry and Wideband Performance of a Maximal Ratio Combining Beam
|
This paper discusses the geometrical features and wideband performance of the
beam with maximal ratio combining coefficients for a generic multi-antenna
receiver. In particular, in case the channel is a linear combination of plane
waves, we show that such a beam can be decomposed in a linear combination of
beams pointed in the direction of each plane wave, and we compute how many
directions can be effectively utilized. This highlights that such a beam better
exploits the spatial diversity provided by the channel, and therefore
it is expected to be more robust to disruptions. Moreover, we compute the
achieved Signal-to-Noise-Ratio for a wideband receiver, showing that it is not
significantly worse than for other methods. Finally, we provide some insights
on the robustness of the method by simulating the impact of the blockage of one
multipath component.
|
Andrea Bedin, Andrea Zanella
|
2023-09-06T08:12:06Z
|
http://arxiv.org/abs/2309.02824v1
|
# Geometry and Wideband Performance of a Maximal Ratio Combining Beam
###### Abstract
This paper discusses the geometrical features and wideband performance of the beam with maximal ratio combining coefficients for a generic multi-antenna receiver. In particular, in case the channel is a linear combination of plane waves, we show that such a beam can be decomposed into a linear combination of beams pointed in the direction of each plane wave, and we compute how many directions can be effectively utilized. This highlights that such a beam better exploits the spatial diversity provided by the channel, and therefore it is expected to be more robust to disruptions. Moreover, we compute the achieved Signal-to-Noise-Ratio for a wideband receiver, showing that it is not significantly worse than for other methods. Finally, we provide some insights on the robustness of the method by simulating the impact of the blockage of one of the multipath components.
MRC, beamforming, diversity +
Footnote †: This work has received funding from the European Union’s EU Framework Programme for Research and Innovation Horizon 2020 under Grant Agreement No 861222.
## I Introduction
Modern communication systems often use a codebook-based beamforming approach, where the beams in the codebook are concentrating the gain on a single direction. This approach, while providing good average data rate and implementation simplicity, suffers from the lack of spatial diversity, as it commits on using almost exclusively the multipath component in the high-gain direction. This results in large Signal-to-Noise-Ratio (SNR) drops when the selected component is disrupted by, e.g., a blocker. In contrast, Maximal Ratio Combining (MRC) is a well-known technique to combine signals received by a multi-antenna system, dating back to 1954 [1]. Despite being so dated, it is still widely used and it has proven to be robust and to provide good performance in all sorts of communication conditions. The classical derivation of MRC comes from the SNR maximization problem in narrowband scenarios [2]. Modern communication systems, however, are typically wideband and use analog beamforming, and are thus not capable of fully exploiting the linear gain of MRC. Nevertheless, MRC turns out to be robust and effective also in such systems, and understanding the reason for this unexpectedly good performance is not straightforward.
In this paper, we investigate this aspect by analyzing the geometric features of the Array Factor (AF) of the beam with MRC coefficients. Moreover, we analyze the performance of MRC outside its design coherence bandwidth, evaluating the SNR that can be obtained by an analog beamforming wideband system with such a beam. Although over the years many analyses [3, 4, 5] and variants [6, 7] of MRC have been proposed, to the best of our knowledge, these characteristics of the method have never been investigated. The main contributions of this work can hence be summarized as follows:
* We show that, when the channel is a linear combination of plane waves, the beam with the MRC coefficients can be decomposed in a linear combination of beams, each pointed towards one of the plane waves;
* We provide a statistical characterization of the number of beam components that are actually active (i.e., are weighted with a relevant coefficient in the linear combination);
* We compute the average SNR achieved by the beam in a wideband setting, and compare it with that achieved by a single-direction beam pointed in the best direction;
* We provide a numerical evaluation of the distribution of the achieved SNR when one component of the channel is blocked, and compare it to the single-direction beam solution, to demonstrate the robustness of MRC.
These results highlight how such a beam is better suited for Ultra Reliable Low Latency Communications (URLLC) than the single-direction beam approach, as it inherently provides more diversity. Hence, considering the importance of URLLC in the modern communication scene [8, 9, 10], this beamforming technique should be considered as an alternative to classical beamforming methods. Furthermore, we observe that implementing such a beam is feasible in practice. For example, the Channel State Information (CSI) for the beam design can be acquired with a low-cost low-bandwidth digital beamforming chain working alongside the wideband analog beamforming [11], or with other methods such as using reference tones [12]. Moreover, the design of the MRC beam has negligible computational complexity, as it only involves the computation
of the complex conjugate of the channel coefficients.
## II System model
In this paper, we consider a system with an arbitrary antenna array with antennas in positions \(A_{1}\) to \(A_{N}\). When a plane wave is impinging on the array in direction \(\vec{k}\), the projection of \(A_{n}\) on \(\textit{span}\left\{\vec{k}\right\}\) is denoted by \(P_{n,\vec{k}}\), and is given by
\[P_{n,\vec{k}}=\frac{A_{n}^{T}\vec{k}}{\vec{k}^{T}\vec{k}}\vec{k}. \tag{1}\]
In Fig. 1 we can see that the distance between the projection point and the origin equals the distance traveled by the wave before reaching antenna \(n\). Assuming phase \(0\) at the origin, and calling \(\lambda\) the wavelength of the signal, this means that the phase observed by antenna \(n\) is
\[\phi_{n,\vec{k}}=\frac{|P_{n,\vec{k}}|}{\lambda}=\frac{A_{n}^{T}\vec{k}}{(\vec {k}^{T}\vec{k})\lambda}|\vec{k}|=\frac{A_{n}^{T}\vec{k}}{|\vec{k}|\lambda}. \tag{2}\]
Therefore, if we assume \(M\) multipath components with amplitude \(\{\alpha_{m}\}\), direction \(\{\vec{k}_{m}\}\) and delay \(\{\tau_{m}\}\), the channel frequency response (neglecting the frequency dependence of \(\phi_{n,\vec{k}_{m}}\)) at antenna \(n\) is
\[H_{n}(f)=\sum_{m=1}^{M}\alpha_{m}e^{j\phi_{n,\vec{k}_{m}}}e^{-j2\pi f\tau_{m}}. \tag{3}\]
We assume analog beamforming is performed according to MRC for the center of the frequency band, i.e., the beamforming coefficient for antenna \(n\) is1
Footnote 1: This could be normalized to have a unitary beamforming vector, however this normalization does not impact the conclusion of this work and is therefore unnecessarily cumbersome.
\[\beta_{n}=\frac{1}{N}H_{n}^{*}(0)=\frac{1}{N}\sum_{m=1}^{M}\alpha_{m}^{*}e^{-j \phi_{n,\vec{k}_{m}}}. \tag{4}\]
Finally, we assume that the system is affected by Gaussian noise with standard deviation \(\sigma_{n_{0}}\) at each antenna.
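For concreteness, a small Python sketch of (2)-(4) is given below; the antenna coordinates, amplitudes, directions, and delays are placeholders, and the phase convention follows the paper's \(e^{j\phi_{n,\vec{k}}}\) notation.

```python
import numpy as np

def ula_positions(n_elem, spacing):
    """Coordinates of a uniform linear array placed along the y-axis."""
    return np.array([[0.0, i * spacing, 0.0] for i in range(n_elem)])

def phases(antenna_pos, k_dir, wavelength):
    """Per-antenna phase phi_{n,k} of a plane wave from direction k (Eq. 2)."""
    k_dir = np.asarray(k_dir, dtype=float)
    return antenna_pos @ k_dir / (np.linalg.norm(k_dir) * wavelength)

def channel_response(antenna_pos, alphas, k_dirs, taus, f, wavelength):
    """Per-antenna channel frequency response H_n(f) (Eq. 3)."""
    H = np.zeros(len(antenna_pos), dtype=complex)
    for a, k, tau in zip(alphas, k_dirs, taus):
        H += a * np.exp(1j * phases(antenna_pos, k, wavelength)) \
               * np.exp(-2j * np.pi * f * tau)
    return H

def mrc_coefficients(antenna_pos, alphas, k_dirs, wavelength):
    """MRC weights designed at band center, beta_n = H_n*(0) / N (Eq. 4)."""
    N = len(antenna_pos)
    H0 = channel_response(antenna_pos, alphas, k_dirs,
                          np.zeros(len(alphas)), 0.0, wavelength)
    return np.conj(H0) / N
```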
## III Beam Geometry
The array factor in direction \(\vec{r}\), with \(|\vec{r}|=1\), is
\[F(\vec{r}) =\sum_{n=1}^{N}\beta_{n}e^{j\phi_{n,\vec{r}}} \tag{5}\] \[=\frac{1}{N}\sum_{n=1}^{N}\left(\sum_{m=1}^{M}\alpha_{m}^{*}e^{-j \phi_{n,\vec{k}_{m}}}\right)e^{j\phi_{n,\vec{r}}}. \tag{6}\]
By rearranging the sums, we can highlight the contribution of each multipath component to the array factor, obtaining the expression:
\[F(\vec{r})=\sum_{m=1}^{M}\alpha_{m}^{*}\frac{1}{N}\left(\sum_{n=1}^{N}e^{-j \phi_{n,\vec{k}_{m}}}e^{j\phi_{n,\vec{r}}}\right)=\sum_{m=1}^{M}\alpha_{m}^{* }F_{m}(\vec{r}) \tag{7}\]
where
\[F_{m}(\vec{r})=\frac{1}{N}\sum_{n=1}^{N}e^{-j\phi_{n,\vec{k}_{m}}}e^{j\phi_{n,\vec{r}}}, \tag{8}\]
which denotes the array factor component associated to the multipath component \(m\). Clearly, it holds \(F_{m}(\vec{r})\leq 1\) and
\[F_{m}(\vec{k}_{m})=\frac{1}{N}\sum_{n=1}^{N}e^{-j\phi_{n,\vec{k}_{m}}}e^{j\phi _{n,\vec{k}_{m}}}=\frac{1}{N}\sum_{n=1}^{N}1=1. \tag{9}\]
Therefore, we can conclude that each array factor component has a global maximum in the direction of the plane it is associated with. Let us now determine the gain observed by a generic component. For the generic direction \(\vec{k}_{h}\), we obtain a total gain:
\[F(\vec{k}_{h})=\alpha_{h}^{*}+\frac{1}{N}\sum_{\begin{subarray}{c}m=1\\ m\neq h\end{subarray}}^{M}\alpha_{m}^{*}\left(\sum_{n=1}^{N}e^{-j\phi_{n, \vec{k}_{m}}}e^{j\phi_{n,\vec{k}_{h}}}\right). \tag{10}\]
If the multipath components are few and spread apart, and the array factor components are narrow beams, i.e., the gain rapidly decreases moving away from the maximum, we have that the second term of the sum is small, therefore \(F(\vec{k}_{h})\approx\alpha_{h}^{*}\). In other words, the MRC between the antennas is equivalent to the MRC between the components. On the other hand, if the amplitude of the \(h\)-th component is small, and a lot of other components are present, the second term in (10) becomes relevant. In this case, the \(h\)-th component might bring a negligible contribution to the total received power. For this reason, we define a condition of effectiveness for the component according to which component \(h\) is effective if
\[|\alpha_{h}^{*}|\geq|X_{h}|, \tag{11}\]
where
\[X_{h}=\sum_{\begin{subarray}{c}m=1\\ m\neq h\end{subarray}}^{M}\frac{1}{N}\alpha_{m}^{*}\left(\sum_{n=1}^{N}e^{-j\phi_{n,\vec{k}_{m}}}e^{j\phi_{n,\vec{k}_{h}}}\right). \tag{12}\]
To understand this definition, let us consider the impact of amplitude variations of the \(h\)-th multipath component on the
Fig. 1: Array model.
overall channel. In case the component is effective, we have that
\[\frac{\partial H(f)}{\partial\alpha_{h}}=\frac{\partial F(\vec{k}_{h})\alpha_{h}} {\partial\alpha_{h}}\approx\frac{\partial|\alpha_{h}|^{2}}{\partial\alpha_{h}}. \tag{13}\]
In contrast, in the ineffective case we have
\[\frac{\partial H(f)}{\partial\alpha_{h}}=\frac{\partial F(\vec{k}_{h})\alpha_{ h}}{\partial\alpha_{h}}\approx\frac{\partial X_{h}\alpha_{h}}{\partial \alpha_{h}}. \tag{14}\]
Clearly, this shows how changing the amplitude of an effective component has a quadratic effect on the channel, whereas an ineffective component will have only a linear impact. Moreover, the impact of an ineffective component is scaled by \(X_{h}\), which, when the component is ineffective, is by definition smaller than the amplitude of the effective components. Therefore, a disruption of an ineffective component will affect the channel negligibly. To visually exemplify the definition, in Fig. 2 we show the array pattern for a channel with the parameters listed in Tab. I. In this case, \(X_{4}=0.32\) and we plot the pattern for \(\alpha_{4}=0.15\) (Fig. 2(a)) and \(\alpha_{4}=0.6\) (Fig. 2(b)). It can be clearly seen that, in the effective case, the beam pattern has an additional lobe in the direction of \(\vec{k}_{4}\).
To characterize the probability of effectiveness of the components, we consider the following assumptions:
1. \(\alpha_{m}\) are distributed according to a complex normal Random Variable (r.v.), \(\mathcal{CN}(0,1)\);
2. \(\vec{k}_{m}\) can have an arbitrary distribution; however, they are typically taken as uniformly distributed within the elements' aperture;
3. all \(\alpha_{m}\) and \(\vec{k}_{m}\) are mutually uncorrelated.
With these assumptions, we study the random variable
\[X=X_{1}=\sum_{m=2}^{M}\frac{1}{N}\alpha_{m}^{*}\left(\sum_{n=1}^{N}e^{-j\phi_ {n,\vec{k}_{m}}}e^{j\phi_{n,\vec{k}_{1}}}\right). \tag{15}\]
where, without loss of generality, we consider \(h=1\). Clearly, as each component of the sum is zero mean, we have
\[\mathbb{E}\left[X\right]=0. \tag{16}\]
Since \(\{\alpha_{m}\}\) has unitary variance by assumption, and amplitudes and angles are independent, by defining \(\bar{M}=M-1\) we have:
\[Var\left[X\right] =\mathbb{E}\left[\left|\sum_{m=2}^{M}\alpha_{m}^{*}\left(\frac{1 }{N}\sum_{n=1}^{N}e^{-j\phi_{n,\vec{k}_{m}}}e^{j\phi_{n,\vec{k}_{1}}}\right) \right|^{2}\right] \tag{17}\] \[=\mathbb{E}\left[\left.\sum_{m=2}^{M}\left|\alpha_{m}\left(\frac {1}{N}\sum_{n=1}^{N}e^{-j\phi_{n,\vec{k}_{m}}}e^{j\phi_{n,\vec{k}_{1}}}\right) \right|^{2}\right]\] (18) \[=\bar{M}\mathbb{E}\left[\left|\left(\frac{1}{N}\sum_{n=1}^{N}e^{- j\phi_{n,\vec{k}_{m}}}e^{j\phi_{n,\vec{k}_{1}}}\right)\right|^{2}\right]\] (19) \[=\bar{M}a^{2}; \tag{20}\]
where
\[a^{2}=\mathbb{E}\left[\left|\left(\frac{1}{N}\sum_{n=1}^{N}e^{-j\phi_{n,\vec{k}_{m}}}e^{j\phi_{n,\vec{k}_{1}}}\right)\right|^{2}\right] \tag{21}\]
defines the _array parameter_ \(a\), the only quantity that depends on the array geometry. Its value can be computed numerically for the array of interest through, e.g., Monte Carlo simulation. We can then approximate \(X\) by a zero mean complex Gaussian r.v. with variance \(\bar{M}a^{2}\). Such an approximation is suggested by the central limit theorem, but it is not necessarily verified in practice. Nonetheless, this approximation is mathematically convenient, and it determines a relatively small gap with physically-accurate simulations, as we will show in the results section. With this approximation, the conditional probability of ineffectiveness given \(\alpha_{1}\) is the probability that \(|\alpha_{1}|\) is smaller than \(|X|\), which is a Rayleigh r.v. of parameter \(\sqrt{\frac{\bar{M}}{2}}a\). Therefore we have
\[P_{ineff}(z)\triangleq P\left[|X|\geq|\alpha_{1}|\,\bigg{|}\,|\alpha_{1}|=z \right]=e^{-\frac{z^{2}}{Ma^{2}}}. \tag{22}\]
The overall \(P_{ineff}\) can be obtained by integrating \(P_{ineff}(z)\) over the distribution of \(|\alpha_{1}|\), which is Rayleigh with parameter \(\frac{1}{\sqrt{2}}\). This can be expressed as
\[P_{ineff} =\int_{0}^{\infty}P\left[|\alpha_{1}|=z\right]P_{ineff}(z)dz \tag{23}\] \[=\int_{0}^{\infty}2ze^{-z^{2}}e^{-\frac{z^{2}}{Ma^{2}}}dz\] (24) \[=\frac{\bar{M}a^{2}}{1+\bar{M}a^{2}}. \tag{25}\]
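A Monte Carlo Python sketch of the _array parameter_ and the resulting ineffectiveness probability is shown below; it assumes a planar geometry with arrival directions drawn uniformly in azimuth over the FoV, and takes \(a\) as the square root of the sample mean, consistently with (19)-(21).

```python
import numpy as np

def array_parameter(antenna_pos, wavelength, fov_deg=180.0,
                    n_trials=100_000, seed=0):
    """Monte Carlo estimate of the array parameter a, with a^2 = E[|.|^2] (Eq. 21)."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(n_trials):
        az = rng.uniform(-fov_deg / 2, fov_deg / 2, size=2) * np.pi / 180
        k1 = np.array([np.cos(az[0]), np.sin(az[0]), 0.0])
        k2 = np.array([np.cos(az[1]), np.sin(az[1]), 0.0])
        ph1 = antenna_pos @ k1 / wavelength
        ph2 = antenna_pos @ k2 / wavelength
        acc += np.abs(np.mean(np.exp(1j * (ph1 - ph2)))) ** 2
    return np.sqrt(acc / n_trials)

def p_ineffective(M, a):
    """Closed-form ineffectiveness probability (Eq. 25)."""
    Mbar = M - 1
    return Mbar * a**2 / (1 + Mbar * a**2)

# e.g., an 8-element half-wavelength ULA along the y-axis:
# pos = np.array([[0.0, 0.5 * i, 0.0] for i in range(8)]); a = array_parameter(pos, 1.0)
```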
## IV Wideband behavior
### _MRC performance_
When the MRC beam is used in an analog beamforming wideband system, the phases of the channel coefficients are
\begin{table}
\begin{tabular}{c|c|c} \(m\) & \(\vec{k}_{m}\) & \(\alpha_{m}\) \\ \hline
1 & \((-1,0,0)\) & \(0.5\) \\
2 & \(\left(\frac{1}{\sqrt{3}},-\frac{1}{\sqrt{3}},-\frac{1}{\sqrt{3}}\right)\) & \(1\) \\
3 & \(\left(-\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}}\right)\) & \(1.5\) \\
4 & \(\left(-\frac{1}{\sqrt{3}},-\sqrt{\frac{2}{3}},0\right)\) & \(\alpha_{4}\) \\ \end{tabular}
\end{table} TABLE I: Example channel parameters
Fig. 2: Change in radiation pattern induced by an effective and an ineffective component.
frequency dependent, therefore the classical SNR formulation does not apply outside one coherence bandwidth from the carrier frequency. Instead, the received power outside of the coherence bandwidth can be expressed as
\[H(f)=\sum_{m=1}^{M}|\alpha_{m}|^{2}e^{-j2\pi f\tau_{m}}+\sum_{m=1}^{M}\sum_{m^{\prime}\neq m}\alpha_{m}\alpha_{m^{\prime}}^{*}\,e^{-j2\pi f(\tau_{m}-\tau_{m^{\prime}})}\left(\frac{1}{N}\sum_{n=1}^{N}e^{-j\phi_{n,\vec{k}_{m^{\prime}}}}e^{j\phi_{n,\vec{k}_{m}}}\right), \tag{26}\]
where the first summation accounts for the contribution of the \(M\) multipath components received by the corresponding beam components, while the other term is the aggregate contribution of the multipath components not aligned with the beam components. Considering a frequency well outside the coherence bandwidth of the channel, we can assume that the phases \(2\pi f(\tau_{m}-\tau_{m^{\prime}})\) are uniformly distributed and independent. With this assumption, the expected channel power gain is
\[\mathbb{E}\left[|H(f)|^{2}\right]=\sum_{m=1}^{M}\mathbb{E}\left[| \alpha_{m}|^{4}\right] \tag{27}\] \[+\sum_{m=1}^{M}\mathbb{E}\Bigg{[}|\alpha_{m}|^{2}\left|\sum_{m^{ \prime}\neq m}\frac{1}{N}\alpha_{m^{\prime}}^{*}\left(\sum_{n=1}^{N}e^{-j\phi _{n,\vec{k}_{m^{\prime}}}}e^{j\phi_{n,\vec{k}_{m}}}\right)\right|^{2}\Bigg{]}.\]
Recalling the independence between paths and the definition of the _array parameter_\(a\) in (21), we can rewrite (27) as
\[\mathbb{E}\left[|H(f)|^{2}\right]=M\left(\mathbb{E}\left[|\alpha_{m}|^{4} \right]+\mathbb{E}\left[|\alpha_{m}|^{2}\right]\bar{M}a^{2}\right). \tag{28}\]
Using the assumption that \(\alpha_{m}\sim\mathcal{CN}(0,1)\), the expectations \(\mathbb{E}\left[|\alpha_{m}|^{4}\right]\) and \(\mathbb{E}\left[|\alpha_{m}|^{2}\right]\) are the \(4^{\text{th}}\) and \(2^{\text{nd}}\) moment of a Rayleigh r.v. with parameter \(\frac{1}{\sqrt{2}}\), which are \(2\) and \(1\), respectively. Thus, the final expression for the gain is
\[\mathbb{E}\left[|H(f)|^{2}\right]=M\left(2+\bar{M}a^{2}\right). \tag{29}\]
The noise is a linear combination of Gaussian r.v.s with coefficients \(\beta_{n}\), therefore the variance is
\[\sigma_{n}^{2}=\sigma_{n_{0}}^{2}\sum_{n=1}^{N}|\beta_{n}|^{2}, \tag{30}\]
with expected value
\[\mathbb{E}\left[\sigma_{n}^{2}\right]=N\sigma_{n_{0}}^{2}\mathbb{E}\left[| \beta_{n}^{2}|\right]. \tag{31}\]
The right-most expectation in (31) can be computed as
\[\mathbb{E}\left[|\beta_{n}|^{2}\right] =\mathbb{E}\left[\left|\frac{1}{N}\sum_{m=1}^{M}\alpha_{m}^{*}e^{- j\phi_{n,\vec{k}_{m}}}\right|^{2}\right] \tag{32}\] \[=\frac{1}{N^{2}}\sum_{m=1}^{M}\mathbb{E}\left[|\alpha_{m}|^{2} \right]=\frac{M}{N^{2}}, \tag{33}\]
so that the expected noise variance is
\[\mathbb{E}\left[\sigma_{n}^{2}\right]=N\sigma_{n_{0}}^{2}\frac{M}{N^{2}}= \frac{M}{N}\sigma_{n_{0}}^{2}. \tag{34}\]
Based on these results, we can compute the average-signal-to-average-noise-ratio as
\[\Gamma=\frac{\mathbb{E}\left[|H(f)|^{2}\right]}{\mathbb{E}\left[ \sigma_{n}^{2}\right]} =M\left(2+\bar{M}a^{2}\right)\frac{N}{M\sigma_{n_{0}}^{2}} \tag{35}\] \[=\frac{N}{\sigma_{n_{0}}^{2}}\left(2+\bar{M}a^{2}\right). \tag{36}\]
Note that, although the expression looks linear in \(N\), linear scaling is not guaranteed. In fact, the _array parameter_ \(a\) also depends in a complicated manner on the array size and geometry. Moreover, for \(M=1\), (36) gives a \(2N\) gain compared to the SNR observed by a single element. This is actually an artifact of approximating the SNR using the average-signal-to-average-noise ratio. In this case, in fact, the MRC corresponds to classical beamforming, which is known to have an SNR gain of \(N\). We therefore note that the proposed approximation has a \(3\) dB error for \(M=1\).
### _Single-direction beam performance_
As a comparison, we compute the gain obtained with a single beam pointed towards the largest component that, without loss of generality, we assume to be the first. Therefore, we set the beam coefficients to
\[\beta_{n}^{(Sing)}=\frac{1}{N}e^{-j\phi_{n,\vec{k}_{1}}}. \tag{37}\]
With this assumption, and again using the definition of _array parameter_, the channel power gain can be written as
\[\mathbb{E}\left[|H^{(Sing)}(f)|^{2}\right]=\mathbb{E}\left[|\alpha_{1}|^{2} \right]+\sum_{m=2}^{M}a^{2}\mathbb{E}\left[|\alpha_{m}|^{2}\right]. \tag{38}\]
Note that, under the assumption that \(\alpha_{1}\) is the largest component, its statistical distribution changes. In fact, if we assume that a generic \(\alpha_{m}\) has exponentially distributed power with parameter 1 (which is a direct consequence of the Gaussian distribution of \(\alpha_{m}\)), the Cumulative Distribution Function (CDF) of \(|\alpha_{1}|^{2}\) is given by
\[P\left[|\alpha_{1}|^{2}<x\right]=\left(1-e^{-x}\right)^{M}, \tag{39}\]
and its Probability Density Function (PDF) is hence
\[\frac{\partial}{\partial x}\left(1-e^{-x}\right)^{M}=Me^{-x}\left(1-e^{-x} \right)^{M-1}. \tag{40}\]
Finally, its expected value is
\[\int_{0}^{\infty}Mxe^{-x}\left(1-e^{-x}\right)^{M-1}dx=H_{M}\approx\log(M)+\gamma, \tag{41}\]
\begin{table}
\begin{tabular}{c|c|c|c} Field of View (FoV) & \(180^{\circ}\) & \(120^{\circ}\) & \(60^{\circ}\) \\ \hline
**ULA Elements** & \multicolumn{3}{c}{_array parameter_ \(a\)} \\ \hline
2 & 0.55 & 0.50 & 0.69 \\
4 & 0.30 & 0.26 & 0.40 \\
8 & 0.17 & 0.14 & 0.22 \\
16 & 0.09 & 0.07 & 0.12 \\
32 & 0.05 & 0.04 & 0.06 \\
64 & 0.03 & 0.02 & 0.03 \\ \hline \end{tabular}
\end{table} TABLE II: _Array parameter_ \(a\) for some Uniform Linear antenna Arrays (ULAs)
where \(H_{M}\) is the \(M\)-th harmonic number and \(\gamma\) is Euler's constant. The statistics of \(\alpha_{m}\) for \(m\neq 1\) would also change, but we neglect this aspect, thus obtaining
\[\mathbb{E}\left[|H^{(Sing)}(f)|^{2}\right]\approx\log(M)+\gamma+\bar{M}a^{2}. \tag{42}\]
The expected noise power will be simply
\[\mathbb{E}\left[\sigma_{n}^{2}\right]=\frac{\sigma_{n_{0}}^{2}}{N}, \tag{43}\]
and the SNR is
\[\Gamma^{(Sing)}=\frac{\mathbb{E}\left[|H^{(Sing)}(f)|^{2}\right]}{\mathbb{E}\left[\sigma_{n}^{2}\right]}=\frac{N(\log(M)+\gamma+\bar{M}a^{2})}{\sigma_{n_{0}}^{2}}. \tag{44}\]
The ratio between the SNR with a single-direction and with MRC is hence
\[\frac{\Gamma^{(Sing)}}{\Gamma} =\frac{N(\log(M)+\gamma+\bar{M}a^{2})}{\sigma_{n_{0}}^{2}}\frac{ \sigma_{n_{0}}^{2}}{N\left(2+\bar{M}a^{2}\right)} \tag{45}\] \[=\frac{(\log(M)+\gamma+\bar{M}a^{2})}{\left(2+\bar{M}a^{2}\right)}. \tag{46}\]
We note that
\[\lim_{M\rightarrow\infty}\frac{(\log(M)+\gamma+\bar{M}a^{2})}{\left(2+\bar{M} a^{2}\right)}=1, \tag{47}\]
therefore, for rich multipath channels the two methods show the same gain.
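The ratio in (46) is straightforward to evaluate numerically; a short Python sketch (taking, e.g., the 180° FoV values of \(a\) from Tab. II) is:

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def snr_ratio_single_vs_mrc(M, a):
    """SNR ratio between the single-direction beam and MRC (Eq. 46);
    it approaches 1 as the number of multipath components M grows (Eq. 47)."""
    Mbar = M - 1
    return (np.log(M) + EULER_GAMMA + Mbar * a**2) / (2 + Mbar * a**2)

# e.g., with the 8-element ULA array parameter a = 0.17 (Tab. II, 180° FoV):
# [round(snr_ratio_single_vs_mrc(M, 0.17), 2) for M in (2, 4, 8, 16, 32)]
```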
## V Results
Tab. II lists the values of the _array parameter_ \(a\) for various Uniform Linear antenna Arrays (ULAs) and uniformly distributed angles of arrival within a fixed Field of View (FoV). As expected, the value decreases with the number of antennas, since the probability of the array having a large gain in a random direction decreases. It also decreases with the FoV, as with a smaller FoV there is less space covered by the sidelobes.
To verify the theoretical results, we performed some numerical simulations randomly generating some channels according to (3) and assuming a ULA. The directions of arrival are uniformly distributed within the FoV of the array and the delays are uniformly distributed between \(0\) and \(100\)ns. We averaged the channel gain over a bandwidth of \(1\)GHz. Fig. 3a shows the probability of ineffectiveness of a component for different ULAs as a function of the total number of multipath components. The lines represent the theoretical value according to (25), whereas the marks represent the value estimated numerically over 1000 realizations. Similarly, Fig. 3b shows the median number of effective components \(C_{eff}=M\left(1-P_{ineff}\right)=M\left(1-\frac{\bar{M}a^{2}}{1+\bar{M}a^{2}}\right)\). Again, the solid lines represent the theoretical value and the marks represent the numerical estimations. As we can see, in both cases the theory follows the numerical estimation quite closely, with a small gap caused by the Gaussian approximation of \(X\). We also notice that even with a modest array of only \(8\) antennas we can exploit as many as \(4\) components in a channel that has a total of only \(6\).
Fig. 4 shows how the SNR changes as a function of \(M\) for both the single-direction beam and the MRC methods. In particular, the lines are calculated with equations (44) and (36), respectively, whereas the marks are the simulated SNR for the corresponding parameters. Here, it can be clearly seen that the accuracy of using the average-signal-to-average-noise ratio in place of the SNR improves as \(M\) increases. As noted in Sec. IV, this approximation leads to an error of 3 dB for \(M=1\), therefore we expect the error to always be lower than this value. For the proposed configurations, we can observe that the approximation error drops below \(1\) dB for \(M>4\). We also note that the gap between the two methods is relatively small, in the order of a few dB. Moreover, although \(M\) is not large enough to show the convergence expected from (47), this result suggests that the gap will not increase in more complex channels.
To evaluate the increase of robustness enabled by the diversity generated by the MRC, we generated the beam for a given channel \(H(f)\), then simulated a blockage by removing component \(m^{\prime}\), where \(m^{\prime}\) is randomly selected between \(1\) and \(M\), generating the new channel
\[H^{\prime}(f)=H(f)-\alpha_{m^{\prime}}e^{j\phi_{n,\bar{k}_{m^{\prime}}}}e^{-j2 \pi f\tau_{m^{\prime}}}. \tag{48}\]
We then applied the beam designed for \(H(f)\) to the new channel \(H^{\prime}(f)\) and evaluated the SNR obtained by the two
Fig. 3: (a) ineffectiveness probability and (b) number of components utilized obtained by MRC for different ULAs (FoV \(180^{\circ}\)). The lines represent the theoretical values, whereas the marks are given by numerical evaluation.
methods. In Fig. 5 we can observe the resulting SNR distribution. As expected, the tail of the SNR obtained with a single beam extends much further than that of MRC, because the single-beam approach is much more sensitive to the loss of the component used by the beam, since the remaining energy only comes from sidelobes.
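A Python sketch of this blockage experiment, reusing the notation of Sec. II, is given below; the average-signal-to-average-noise ratio is used in place of the exact SNR, and all channel parameters are placeholders to be drawn at random as in Sec. V.

```python
import numpy as np

def snr_after_blockage(antenna_pos, alphas, k_dirs, taus, wavelength,
                       freqs, noise_var, rng):
    """Design the MRC beam on the full channel, remove one randomly chosen
    multipath component (Eq. 48), and return the wideband SNR of H'(f)."""
    antenna_pos = np.asarray(antenna_pos, dtype=float)
    alphas = np.asarray(alphas, dtype=complex)
    freqs = np.asarray(freqs, dtype=float)
    N = antenna_pos.shape[0]
    # per-antenna phases phi_{n,k_m} for every multipath component (Eq. 2)
    ph = np.array([antenna_pos @ np.asarray(k, dtype=float)
                   / (np.linalg.norm(k) * wavelength) for k in k_dirs])   # (M, N)
    beta = np.conj((alphas[:, None] * np.exp(1j * ph)).sum(axis=0)) / N    # MRC weights
    blocked = rng.integers(len(alphas))                                    # removed component
    H = np.zeros(len(freqs), dtype=complex)
    for m in range(len(alphas)):
        if m == blocked:
            continue
        gain_m = np.sum(beta * np.exp(1j * ph[m]))        # beam gain towards k_m
        H += alphas[m] * gain_m * np.exp(-2j * np.pi * freqs * taus[m])
    return np.mean(np.abs(H) ** 2) / (noise_var * np.sum(np.abs(beta) ** 2))
```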
## VI Conclusions and future work
In this paper, we studied the properties of the beam generated with MRC coefficients when it is used outside one coherence bandwidth from its design frequency. This provides an evaluation of the performance of such a method in a wideband analog beamforming system where the coherence bandwidth is much smaller than the system bandwidth. We have shown that the method can be implemented with minimal degradation of the SNR compared to the classical beam pointed in a single direction. Moreover, we proved that MRC generates a beam with multiple lobes, and therefore it can better exploit the spatial diversity offered by the environment. Thanks to this additional diversity, it can better handle blockage events, significantly shortening the tail of the distribution of the SNR when some multipath components are suddenly removed. This allows a trade-off between average rate and robustness, which makes the method a viable choice for ultra reliable communications. As we expect the method to be more susceptible to interference due to the lower selectivity of the beam pattern, in future works we plan to study the interference properties of such a beam, and possibly propose interference mitigation techniques to overcome this limitation.
|
2302.11294
|
Distributional Learning of Variational AutoEncoder: Application to
Synthetic Data Generation
|
The Gaussianity assumption has been consistently criticized as a main
limitation of the Variational Autoencoder (VAE) despite its efficiency in
computational modeling. In this paper, we propose a new approach that expands
the model capacity (i.e., expressive power of distributional family) without
sacrificing the computational advantages of the VAE framework. Our VAE model's
decoder is composed of an infinite mixture of asymmetric Laplace distribution,
which possesses general distribution fitting capabilities for continuous
variables. Our model is represented by a special form of a nonparametric
M-estimator for estimating general quantile functions, and we theoretically
establish the relevance between the proposed model and quantile estimation. We
apply the proposed model to synthetic data generation, and particularly, our
model demonstrates superiority in easily adjusting the level of data privacy.
|
Seunghwan An, Jong-June Jeon
|
2023-02-22T11:26:50Z
|
http://arxiv.org/abs/2302.11294v3
|
# Distributional Learning of Variational AutoEncoder: Application to Synthetic Data Generation
###### Abstract
The Gaussianity assumption has been pointed out as the main limitation of the Variational AutoEncoder (VAE) in spite of its usefulness in computation. To improve the distributional capacity (i.e., expressive power of distributional family) of the VAE, we propose a new VAE learning method with a nonparametric distributional assumption on its generative model. By estimating an infinite number of conditional quantiles, our proposed VAE model directly estimates the conditional cumulative distribution function, and we call this approach distributional learning of the VAE. Furthermore, by adopting the continuous ranked probability score (CRPS) loss, our proposed learning method becomes computationally tractable. To evaluate how well the underlying distribution of the dataset is captured, we apply our model for synthetic data generation based on inverse transform sampling. Numerical results with real tabular datasets corroborate our arguments.
## 1 Introduction
Variational Autoencoder (VAE) Kingma & Welling (2013) and Generative Adversarial Networks (GAN) Goodfellow et al. (2014) are generative models that are used to estimate the underlying distribution of a given dataset. To avoid the curse of dimensionality, VAE and GAN commonly introduce a low-dimensional latent space on which a conditional generative model is defined. By minimizing an information divergence between the original data and its generated data, the generative models are learned to produce synthetic data similar to the original one. Accordingly, VAE and GAN have been applied in various applications, such as generating realistic images, texts, and synthetic tabular data for privacy preservation purposes Karras et al. (2018); Wang et al. (2019); Xu et al. (2019); Zhao et al. (2021).
However, the difference in the strength of the assumption about the generative distribution brings significant contrasts in the VAE and GAN generation performances. In the GAN framework, the adversarial loss enables direct minimization of the Jensen-Shannon divergence between the ground-truth density function and the generative distribution under no distributional assumption. Roughly speaking, the GAN employs a nonparametric model as its conditional generative model defined on the latent space.
On the contrary, in the VAE framework, the Gaussianity assumption has been favored Lucas et al. (2019). It is because Gaussianity gives us three advantages: 1) the reconstruction loss can be interpreted as \(L_{2}\) loss which is one of the most popular losses in optimization theory, 2) generating a new sample is computationally straightforward, and 3) KL-divergence is computed in a simple closed form. However, these benefits have led us to pay the price for the distributional capacity of the generative model, in that the generative model of the VAE is constrained in the form of marginalization of the product of the two Gaussian distributions. Here, the distributional capacity means the expressive power of the distributional family. This restricted distributional capacity has been the critical limitation Burda et al. (2015); Kingma et al. (2016) and leads to a heavy parameterization of the decoder mean-vector to approximate complex underlying distributions.
To increase the distributional capacity in the VAE framework, Xu et al. (2019); Zhao et al. (2021) introduce the multi-modality in the distributional assumption of the decoder, which is known as the mode-specific normalization technique. Although the mixture-Gaussian decoder modeling of Xu et al. (2019)
allows handling more complex distributions of the observed dataset while preserving all of the advantages of Gaussianity, we numerically find that the mixture Gaussian is not enough to capture the underlying distribution.
Our main contribution is that, beyond Gaussianity, we propose a novel VAE learning method that directly estimates the conditional cumulative distribution function (CDF). It implies that we have a nonparametric distribution assumption on the VAE model. We call this approach distributional learning of the VAE, which is enabled by estimating an infinite number of conditional quantiles. By adopting the loss function of continuous ranked probability score (CRPS) used to estimate the CDF, the objective of the distribution learning is computationally tractable Gneiting and Raftery (2007); Matheson and Winkler (1976).
Therefore, in our proposed distributional learning framework, 1) the reconstruction loss can be interpreted as CRPS loss which is a well-known _proper scoring rule_Gneiting and Raftery (2007), 2) generating a new sample is still computationally straightforward due to inverse transform sampling, and 3) KL-divergence is still computed in a simple closed form. To show the effectiveness of our proposed model in capturing the underlying distribution of the dataset, we evaluate our model for synthetic data generation with real tabular datasets.
## 2 Related Work
**Various decoder modeling.** Decoder modeling has been primarily focused on increasing distributional capacity while maintaining the simple calculation of the KL-divergence. Takahashi et al. (2018); Akrami et al. (2020) assume their decoder distributions as Student-\(t\) and asymmetric Laplace distributions, respectively. These assumptions mitigate the _zero-variance problem_ that the model training becomes unstable if the estimated variance of the decoder shrinks to zero in Gaussian VAE. In the image domain, Gaussian assumption hinders the reconstruction of images and fails to capture the properties of human perception Larsen et al. (2015). Therefore, Larsen et al. (2015); Rosca et al. (2017); Munjal et al. (2019) replace the reconstruction loss with an adversarial loss, and the decoder is trained without a parametric distributional assumption. However, adopting adversarial loss induces unstable model training in general.
**Synthetic data generation.** The GAN framework is widely adopted in the synthetic data generation task since it enables handling columns of tabular datasets that are usually non-Gaussian Choi et al. (2017); Park et al. (2018); Xu et al. (2019); Zhao et al. (2021). Especially, Xu et al. (2019); Zhao et al. (2021) assume their decoder as Gaussian mixture distribution and preprocess the continuous variables using the Variational Gaussian mixture model Blei et al. (2016), which is known as the mode-specific normalization technique. However, this preprocessing requires additional computational resources and hyperparameter tuning of the number of modes.
## 3 Proposal
Let \(\mathbf{x}\in\mathcal{X}\subset\mathbb{R}^{p}\) be an observation, where \(\mathbf{x}_{1},\cdots,\mathbf{x}_{p_{1}}\) are continuous random variables, and \(\mathbf{x}_{p_{1}+1},\cdots,\mathbf{x}_{p}\) are discrete random variables. Denote \(q(\mathbf{x})\) as the true underlying distribution defined over \(\mathbf{x}\in\mathcal{X}\). Let \(\mathbf{z}\) be a latent variable, where \(\mathbf{z}\in\mathbb{R}^{d}\) and \(d<p\). The prior and posterior distributions of \(\mathbf{z}\) are assumed to be \(p(\mathbf{z})=\mathcal{N}(\mathbf{z}|0,I)\) and \(q(\mathbf{z}|\mathbf{x};\phi)=\mathcal{N}\big{(}\mathbf{z}|\mu(\mathbf{x};\phi),diag(\sigma^{2}(\mathbf{x};\phi))\big{)}\), respectively, where \(I\) is the \(d\times d\) identity matrix, \(\phi\) is the neural network parameter, and \(diag(a),a\in\mathbb{R}^{d}\) denotes a diagonal matrix with diagonal elements \(a\). Assume that there exists \(p(\mathbf{x}|\mathbf{z})\) such that \(q(\mathbf{x})=\int p(\mathbf{z})p(\mathbf{x}|\mathbf{z})d\mathbf{z}\).
In addition, let \(\boldsymbol{\alpha}\) be a discrete random variable whose set of possible values is \(\{1/K,2/K,\cdots,(K-1)/K,1\}\). \(\boldsymbol{\alpha}\) follows a discrete uniform distribution, and we denote \(\alpha_{k}\coloneqq k/K\) for \(k=1,\cdots,K\).
### Model Assumptions
First, we assume that \(\mathbf{x}_{1},\cdots,\mathbf{x}_{p}\) are conditionally mutually independent given \(\mathbf{z}\). The distribution of \(\mathbf{x}_{j}\) given \(\mathbf{z}\) and \(\alpha_{k}\) is assumed to be the asymmetric Laplace distribution, for \(j=1,2,\cdots,p_{1}\). For \(l=p_{1}+1,p_{1}+2,\cdots,p\), the distribution of \(\mathbf{x}_{l}\) given \(\mathbf{z}\) is a categorical distribution, and the number of categories of \(\mathbf{x}_{l}\) is denoted by \(T_{l}\). Then, for \(k=1,\cdots,K\), the decoder is written as
\[p(\mathbf{x}|\mathbf{z},\alpha_{k};\theta,\beta) \tag{1}\] \[= \prod_{j=1}^{p_{1}}p(\mathbf{x}_{j}|\mathbf{z},\alpha_{k};\theta _{j},\beta)\cdot\prod_{l=p_{1}+1}^{p}p(\mathbf{x}_{l}|\mathbf{z};\theta_{l},\beta)\] \[= \prod_{j=1}^{p_{1}}\frac{\alpha_{k}(1-\alpha_{k})}{\beta}\exp \left(-\rho_{\alpha_{k}}\left(\frac{\mathbf{x}_{j}-D_{j}(\alpha_{k}|\mathbf{z},\theta_{j})}{\beta}\right)\right)\] \[\cdot\prod_{l=p_{1}+1}^{p}\prod_{t=1}^{T_{l}}\pi(\mathbf{z}; \theta_{l})_{t}^{I(\mathbf{x}_{l}=t)},\]
where \(\theta=(\theta_{1},\cdots,\theta_{p})\), \(\beta\) is the non-trainable hyperparameter, \(\rho_{v}(\mathbf{u})=\mathbf{u}(v-I(\mathbf{u}<0))\) is the check
function, and \(I(\cdot)\) is indicator function. \(D_{j}(\cdot|\cdot,\theta_{j}):[0,1]\times\mathbb{R}^{d}\mapsto\mathbb{R}\) is location parameter of conditional distribution of continuous \(\mathbf{x}_{j}\). Note that all continuous variables share the same scale parameter \(\beta\). For discrete variables, \(\pi(\cdot;\theta_{l}):\mathbb{R}^{d}\mapsto[0,1]^{T_{l}}\) and \(\sum_{t=1}^{T_{l}}\pi(\mathbf{z};\theta_{l})_{t}=1\), for all \(\mathbf{z}\in\mathbb{R}^{d}\).
With proposal distributions, the negative ELBO is written as
\[\sum_{j=1}^{p_{1}}\mathbb{E}_{q(\mathbf{z}|\mathbf{x};\phi)}\left[ \frac{1}{2\cdot K}\sum_{k=1}^{K}2\cdot\rho_{\alpha_{k}}\Big{(}\mathbf{x}_{j}-D _{j}(\alpha_{k}|\mathbf{z},\theta_{j})\Big{)}\right] \tag{2}\] \[- \beta p_{1}\frac{1}{K}\sum_{k=1}^{K}\log\alpha_{k}(1-\alpha_{k}) +\beta p_{1}\log\beta\] \[- \beta\cdot\sum_{l=p_{1}+1}^{p}\mathbb{E}_{q(\mathbf{z}|\mathbf{x };\phi)}\left[\sum_{t=1}^{T_{l}}I(\mathbf{x}_{l}=t)\cdot\log\pi(\mathbf{z}; \theta_{l})_{t}\right]\] \[+ \beta\cdot\mathcal{KL}(q(\mathbf{z}|\mathbf{x};\phi)\|p(\mathbf{ z}))\]
(see Appendix A.1 for the detailed derivation). Note that the reconstruction loss of (2) can be seen as estimating \(K\) conditional quantiles in a Bayesian framework Yu and Moyeed (2001); Moon et al. (2021), where \(\alpha_{k}\) plays the role of the quantile level.
### Distributional Learning
For distributional learning of the VAE, we need to estimate conditional quantiles for an infinite number of quantile levels, i.e., \(K\rightarrow\infty\). The following Proposition 1 shows that the negative ELBO (2) converges to the continuous ranked probability score (CRPS) loss, which measures how accurately the proposed CDF approximates the true CDF of the dataset.
**Proposition 1** (Convergence to CRPS).: _Suppose that \(\int_{0}^{1}\mathbb{E}_{q(\mathbf{z}|\mathbf{x};\phi)}\Big{|}2\cdot\rho_{ \boldsymbol{\alpha}}\Big{(}\mathbf{x}_{j}-D_{j}(\boldsymbol{\alpha}|\mathbf{z },\theta_{j})\Big{)}\Big{|}d\boldsymbol{\alpha}<\infty\). Then, for \(j=1,\cdots,p_{1}\),_
\[\lim_{K\rightarrow\infty}\mathbb{E}_{q(\mathbf{z}|\mathbf{x};\phi)}\left[\frac{1}{K}\sum_{k=1}^{K}2\cdot\rho_{\alpha_{k}}\Big{(}\mathbf{x}_{j}-D_{j}(\alpha_{k}|\mathbf{z},\theta_{j})\Big{)}\right]\] \[= \mathbb{E}_{q(\mathbf{z}|\mathbf{x};\phi)}\left[\int_{0}^{1}2\cdot\rho_{\boldsymbol{\alpha}}\Big{(}\mathbf{x}_{j}-D_{j}(\boldsymbol{\alpha}|\mathbf{z},\theta_{j})\Big{)}d\boldsymbol{\alpha}\right]\] \[= \mathbb{E}_{q(\mathbf{z}|\mathbf{x};\phi)}\left[\text{CRPS}(D_{j}(\cdot|\mathbf{z};\theta_{j}),\mathbf{x})\right],\]
_where CRPS\((\cdot,\cdot)\) is the continuous ranked probability score (CRPS), and_
\[\lim_{K\rightarrow\infty}\frac{1}{K}\sum_{k=1}^{K}\log\alpha_{k}(1-\alpha_{k}) =\int_{0}^{1}\log\boldsymbol{\alpha}(1-\boldsymbol{\alpha})d\boldsymbol{\alpha }=-2.\]
Therefore, when \(K\rightarrow\infty\), our final objective is minimizing
\[\sum_{j=1}^{p_{1}}\mathbb{E}_{q(\mathbf{x})}\mathbb{E}_{q(\mathbf{ z}|\mathbf{x};\phi)}\left[\frac{1}{2}\cdot\text{CRPS}(D_{j}(\cdot| \mathbf{z};\theta_{j}),\mathbf{x})\right] \tag{3}\] \[- \sum_{l=p_{1}+1}^{p}\mathbb{E}_{q(\mathbf{x})}\mathbb{E}_{q( \mathbf{z}|\mathbf{x};\phi)}\left[\sum_{t=1}^{T_{l}}I(\mathbf{x}_{l}=t)\cdot \log\pi(\mathbf{z};\theta_{l})_{t}\right]\] \[+ \beta\cdot\mathbb{E}_{q(\mathbf{x})}[\mathcal{KL}(q(\mathbf{z}| \mathbf{x};\phi)\|p(\mathbf{z}))]\]
with respect to \((\theta,\phi)\), where constant terms are omitted. To balance the learning of the two reconstruction losses in (3), we remove the coefficient \(\beta\) from the second term, which is the reconstruction loss of the discrete variables. We call our model DistVAE. In addition, see Appendix A.2 for the interpretation of distributional learning in terms of the model misspecification error in MLE.
Higgins et al. (2016) shows that the KL-divergence coefficient \(\beta\) (the scale parameter of asymmetric Laplace distribution) controls the reconstruction precision. Since our reconstruction loss consists of CRPS loss, a larger \(\beta\) induces an inaccurate estimation of the true CDF, leading to a lower quality of synthetic data. Consequently, the privacy level will be lower if \(\beta\) is small. Therefore, \(\beta\) creates a trade-off between the synthetic data quality and the risk of privacy leakage, which means that the privacy level is controllable via \(\beta\)Park et al. (2018) (see Section 4).
#### 3.2.1 Proper Scoring Rule
In this section, we will show that the reconstruction loss of (3) is a _proper scoring rule_ Gneiting and Raftery (2007) relative to the true conditional quantile function. Let \(F(\mathbf{x}|\mathbf{z})\) be the CDF of \(p(\mathbf{x}|\mathbf{z})\) and denote \(F_{j}(\mathbf{x}_{j}|\mathbf{z})\) as the marginal conditional CDF of \(\mathbf{x}_{j}\), for \(j=1,\cdots,p_{1}\). Denote \(D_{j}^{*}(\alpha|\mathbf{z})\) as the true conditional \(\alpha\)-quantile with respect to \(F_{j}(\mathbf{x}_{j}|\mathbf{z})\) for \(\alpha\in(0,1)\) and \(\mathbf{z}\in\mathbb{R}^{d}\). This implies that \(F_{j}\big{(}D_{j}^{*}(\alpha|\mathbf{z})|\mathbf{z}\big{)}=\alpha\). We define a risk functional of \(D_{j}\in\mathcal{D}_{j}\) by
\[\mathcal{S}_{\alpha}(D_{j},D_{j}^{*}) = \mathbb{E}_{q(\mathbf{x})}\mathbb{E}_{q(\mathbf{z}|\mathbf{x};\phi) }\Big{[}\rho_{\alpha}(\mathbf{x}_{j}-D_{j}(\alpha|\mathbf{z}))\Big{]}\] \[\mathcal{S}(D_{j},D_{j}^{*}) = \mathbb{E}_{q(\mathbf{x})}\mathbb{E}_{q(\mathbf{z}|\mathbf{x}; \phi)}\left[\int_{0}^{1}\rho_{\alpha}(\mathbf{x}_{j}-D_{j}(\alpha|\mathbf{z}))d \alpha\right],\]
where \(\alpha\in(0,1)\), and \(\mathcal{D}_{j}\) is a set of isotonic functions \(D_{j}\) such that \(D_{j}(\cdot|\cdot):[0,1]\times\mathbb{R}^{d}\mapsto\mathbb{R}\). Note that \(\mathcal{S}(D_{j},D_{j}^{*})\) is equivalent to the reconstruction loss of (3).
**Assumption 1**.: _The distributional family of \(q(\mathbf{z}|\mathbf{x};\phi)\) is sufficiently large and we have \(q(\mathbf{z}|\mathbf{x};\phi)\) such that_
\[q(\mathbf{x})q(\mathbf{z}|\mathbf{x};\phi)=p(\mathbf{z})p(\mathbf{x}|\mathbf{z}).\]
**Proposition 2** (Proper scoring rule).: _Suppose that \(\mathbb{E}_{q(\mathbf{x})}\mathbb{E}_{q(\mathbf{z}|\mathbf{x};\phi)}\left[\int_{0} ^{1}\big{|}\rho_{\alpha}(\mathbf{x}_{j}-D_{j}(\alpha|\mathbf{z}))\big{|}d \alpha\right]<\infty\) for all \(D_{j}\in\mathcal{D}_{j}\), \(j=1,\cdots,p_{1}\). Under Assumption 1, for all \(\alpha\in(0,1)\),_
\[\mathbb{E}_{q(\mathbf{x})}\mathbb{E}_{q(\mathbf{z}|\mathbf{x};\phi)}\Big{[} \rho_{\alpha}(\mathbf{x}_{j}-D_{j}^{*}(\alpha|\mathbf{z}))\Big{]}=\min_{D_{j} \in\mathcal{D}_{j}}\mathcal{S}_{\alpha}(D_{j},D_{j}^{*}),\]
_and \(\mathcal{S}(D_{j},D_{j}^{*})\geq\mathcal{S}(D_{j}^{*},D_{j}^{*})\)._
Furthermore, based on the conditional CDF \(D_{j}^{-1}\), the following Proposition 3 shows that the marginal CDF is equivalent to the marginalization of the conditional CDF with respect to the prior distribution.
**Proposition 3** (Marginal CDF).: _For \(j=1,2,\cdots,p_{1}\), the marginal CDF \(F_{j}(\mathbf{x}_{j})\) of \(\mathbf{x}_{j}\) is written as_
\[F_{j}(x_{j})=\int_{\mathbb{R}^{d}}D_{j}^{-1}(x_{j}|\mathbf{z})p(\mathbf{z})d \mathbf{z},\]
_where \(x_{j}\in\mathbb{R}\)._
In practice, the marginal CDF \(F_{j}(x_{j})\) is approximated by \(\frac{1}{B}\sum_{b=1}^{B}D_{j}^{-1}(x_{j}|z_{b})\), where \(z_{b}\sim p(\mathbf{z}),b=1,\cdots,B\).
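A minimal sketch of this Monte Carlo approximation, assuming the conditional CDF \(D_{j}^{-1}(x|\mathbf{z})\) is available as a callable; the toy conditional CDF below is purely illustrative and not the paper's trained decoder.

```python
import numpy as np
from scipy.stats import norm

def marginal_cdf(x, cond_cdf, d=4, B=500, rng=None):
    """Monte Carlo approximation F_j(x) ~ (1/B) * sum_b D_j^{-1}(x | z_b), with z_b ~ N(0, I_d)."""
    rng = np.random.default_rng(0) if rng is None else rng
    z = rng.standard_normal((B, d))
    return float(np.mean([cond_cdf(x, z_b) for z_b in z]))

# Toy stand-in for the conditional CDF D_j^{-1}(x|z): a Gaussian CDF shifted by the latent variable.
toy_cond_cdf = lambda x, z: norm.cdf(x - 0.5 * z.sum())

print(marginal_cdf(0.0, toy_cond_cdf))
```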
#### 3.2.2 Closed Form Loss
To compute the CRPS loss in the closed form for computational efficiency Gasthaus et al. (2019), we parameterize the function \(D_{j}\) by a linear isotonic regression spline as follows
\[D_{j}(\alpha|\mathbf{z};\theta_{j}) = \gamma^{(j)}(\mathbf{z})+\sum_{m=0}^{M}b_{m}^{(j)}(\mathbf{z})( \alpha-d_{m})_{+}\] (4) subject to \[\sum_{m=0}^{k}b_{m}^{(j)}(\mathbf{z})\geq 0,k=1,\cdots,M\]
where \(\alpha\in[0,1]\), \(\gamma^{(j)}(\mathbf{z})\in\mathbb{R}\), \(b^{(j)}(\mathbf{z})=(b_{0}^{(j)}(\mathbf{z}),\cdots,b_{M}^{(j)}(\mathbf{z})) \in\mathbb{R}^{M+1}\), \(d=(d_{0},\cdots,d_{M})\in[0,1]^{M+1}\), \(0=d_{0}<\cdots<d_{M}=1\), and \((\mathbf{u})_{+}:=\max(0,\mathbf{u})\). \(\theta_{j}\) is a neural network parameterized mapping such that \(\theta_{j}:\mathbb{R}^{d}\mapsto\mathbb{R}\times\mathbb{R}^{M+1}\), which outputs \(\gamma^{(j)}(\mathbf{z})\) and \(b^{(j)}(\mathbf{z})\).
Consequently, CRPS loss is computed in the closed form as
\[\mathrm{CRPS}(D_{j}(\cdot|\mathbf{z};\theta_{j}),\mathbf{x}_{j})\] \[= (2\tilde{\alpha}_{j}-1)\mathbf{x}_{j}+(1-2\tilde{\alpha}_{j}) \gamma^{(j)}(\mathbf{z})\] \[+ \sum_{m=1}^{M}b_{m}^{(j)}(\mathbf{z})\Bigg{(}\frac{1-d_{m}^{3}}{3 }-d_{m}\] \[\quad-\max(\tilde{\alpha}_{j},d_{m})+2\max(\tilde{\alpha}_{j},d_ {m})d_{m}\Bigg{)},\]
where
\[D_{j}(\tilde{\alpha}_{j}|\mathbf{z};\theta_{j}) = \mathbf{x}_{j}\] \[\tilde{\alpha}_{j} = \frac{\mathbf{x}_{j}-\gamma^{(j)}(\mathbf{z})+\sum_{m=1}^{m_{0} }b_{m}^{(j)}(\mathbf{z})d_{m}}{\sum_{m=1}^{m_{0}}b_{m}^{(j)}(\mathbf{z})}\] \[D_{j}(d_{m_{0}}|\mathbf{z};\theta_{j}) \leq \mathbf{x}_{j}\leq D_{j}(d_{m_{0}+1}|\mathbf{z};\theta_{j})\] \[j = 1,2,\cdots,p_{1}.\]
It indicates that our objective function (3) is still computationally tractable even if we consider an infinite number of quantile levels.
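To make the pieces above concrete, the sketch below evaluates the linear isotonic spline (4) for fixed, illustrative parameters \(\gamma\), \(b\), \(d\) and computes the CRPS by numerically integrating \(2\rho_{\alpha}\) over \(\alpha\) rather than with the closed form; it is a sanity-check sketch, not the paper's implementation.

```python
import numpy as np

def spline_quantile(alpha, gamma, b, d):
    """Linear isotonic spline D(alpha|z) = gamma + sum_m b_m * (alpha - d_m)_+, cf. (4)."""
    alpha = np.atleast_1d(alpha)
    return gamma + np.sum(b[None, :] * np.clip(alpha[:, None] - d[None, :], 0.0, None), axis=1)

def crps_numeric(x, gamma, b, d, n_grid=2000):
    """CRPS(D(.|z), x) = int_0^1 2 * rho_alpha(x - D(alpha|z)) d(alpha), evaluated on a grid."""
    alphas = (np.arange(n_grid) + 0.5) / n_grid
    u = x - spline_quantile(alphas, gamma, b, d)
    pinball = u * (alphas - (u < 0).astype(float))
    return float(np.mean(2.0 * pinball))

# Illustrative parameters (monotonicity holds: all partial sums of b are nonnegative).
d = np.linspace(0.0, 1.0, 6)                     # knots d_0 = 0 < ... < d_M = 1
b = np.array([1.0, 0.5, 0.2, 0.2, 0.5, 1.0])
gamma = -1.0
print(crps_numeric(0.3, gamma, b, d))
```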
### Sampling Mechanism
To generate a synthetic sample, we first sample a latent variable from the prior distribution (the \(d\)-dimensional standard Gaussian distribution), and all continuous and discrete variables share the same sampled latent variable \(z\). Denote \(\hat{x}_{j}\) as a synthetic sample of \(\mathbf{x}_{j}\), for \(j=1,\cdots,p\).
For the continuous variables \(j=1,\cdots,p_{1}\), we generate a synthetic sample by inverse transform sampling. Specifically, \(\hat{x}_{j}=D_{j}(u_{j}|z;\theta_{j})\), where \(u_{j}\sim U(0,1)\) and \(U\) is the uniform distribution. For the discrete variables \(l=p_{1}+1,\cdots,p\), we use the Gumbel-Max trick Gumbel (1954) to generate a synthetic sample: \(\hat{x}_{l}=\arg\max_{t=1,\cdots,T_{l}}\{\log\pi(z;\theta_{l})_{t}+G_{t}\}\), where \(G_{t}\sim^{i.i.d.}Gumbel(0,1),t=1,\cdots,T_{l}\) and \(Gumbel\) is the standard Gumbel distribution. We find that even when the labels of a discrete variable are highly imbalanced, sampling based on the Gumbel-Max trick preserves the imbalance ratio of the labels.
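A minimal sketch of this sampling mechanism, with toy stand-ins for the decoder heads (the quantile function and the class log-probabilities below are illustrative assumptions, not trained networks).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_row(quantile_fns, logit_fns, d=4):
    """Generate one synthetic record; all variables share the same latent draw z ~ N(0, I_d)."""
    z = rng.standard_normal(d)
    row = []
    # Continuous variables: inverse transform sampling through the quantile function D_j.
    for D_j in quantile_fns:
        u = rng.uniform()
        row.append(D_j(u, z))
    # Discrete variables: Gumbel-Max trick on the class log-probabilities log pi(z)_t.
    for log_pi in logit_fns:
        scores = log_pi(z)
        row.append(int(np.argmax(scores + rng.gumbel(size=scores.shape))))
    return row

# Toy stand-ins for the decoder heads (illustrative only).
toy_quantile = lambda u, z: z.sum() + np.log(u / (1.0 - u))   # a logistic-shaped quantile function
toy_logits = lambda z: np.array([0.1 * z.sum(), 0.0, -0.5])   # three classes
print(sample_row([toy_quantile], [toy_logits]))
```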
### Calibration of Estimated CDF
To ensure that the estimated CDF is properly discretized according to the support of the variable, especially for count variables, a post-hoc calibration including discretization Salimans et al. (2017) can be applied. Here, we denote the Monte Carlo approximated estimated CDF \(\hat{F}(x;\theta)\coloneqq\frac{1}{B}\sum_{b=1}^{B}D^{-1}(x|z_{b};\theta)\) for \(x\in\mathbb{R}\), where the subscript \(j\) is omitted for brevity. Let the set of observed possible values of the variable to be discretized be \(\{x^{(1)},x^{(2)},\cdots,x^{(m)}\}\). The calibration algorithm of the estimated CDF is shown in Algorithm 1, and an example of its result is shown in Figure 1.
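Since Algorithm 1 itself is not reproduced here, the sketch below shows one plausible post-hoc discretization of an estimated CDF onto an observed support set; it is only an assumption about how such a calibration might look, not necessarily the paper's Algorithm 1.

```python
import numpy as np
from scipy.stats import norm

def calibrate_cdf(F_hat, support):
    """One plausible post-hoc calibration: discretize an estimated CDF onto an observed support.

    Evaluate F_hat on the sorted support, enforce monotonicity, and rescale so the
    last value equals one, yielding a valid step CDF (not necessarily Algorithm 1).
    """
    support = np.sort(np.asarray(support, dtype=float))
    vals = np.maximum.accumulate(np.array([F_hat(x) for x in support]))
    vals = vals / vals[-1]

    def F_star(x):
        idx = np.searchsorted(support, x, side="right") - 1
        return 0.0 if idx < 0 else float(vals[idx])

    return F_star

# Toy example: a smooth estimated CDF for a count variable supported on {0, ..., 10}.
F_hat = lambda x: norm.cdf(x, loc=5.0, scale=2.0)
F_star = calibrate_cdf(F_hat, np.arange(11))
print([round(F_star(k), 3) for k in (0, 5, 10)])
```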
## 4 Experiments
In this section, to illustrate that our proposed method can capture the underlying distribution of the
given dataset, we numerically show that DistVAE can generate synthetic data, which can be used as a good proxy of the original data.
### Overview
**Dataset.** For evaluation, we consider following real tabular datasets: covertype, credit, loan, adult, cabs, and kings (see Appendix A.6 for detailed data descriptions).
**Compared models.** We compare against state-of-the-art synthesizers: CTGAN Xu et al. (2019), TVAE Xu et al. (2019), and CTAB-GAN Zhao et al. (2021). Notably, all models have the same latent dimension.
### Evaluation Metrics
To evaluate the synthetic data quality, we investigate three types of metrics: machine learning utility, statistical similarity, and privacy preservability. Note that we computed all these metrics after standardization because continuous variables have different units.
**Machine learning utility.** To evaluate the machine learning utility (MLu), we use the synthetic data as training data for three widely used machine learning algorithms: linear (logistic) regression, Random Forest, and Gradient Boosting. We average the following metrics: Mean Absolute Relative Error (MARE) for the regression and \(F_{1}\) for the classification problem. We choose \(F_{1}\) since some discrete target variables have imbalanced labels. Note that the synthetic and real training data have the same size.
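A hedged sketch of the train-on-synthetic, test-on-real protocol for a classification target, using one of the three learners mentioned above; the macro-averaged \(F_{1}\) and the toy data are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score

def mlu_f1(synth_X, synth_y, real_test_X, real_test_y):
    """Train on synthetic data, evaluate F1 on the held-out real test split."""
    clf = GradientBoostingClassifier().fit(synth_X, synth_y)
    return f1_score(real_test_y, clf.predict(real_test_X), average="macro")

# Toy stand-in data; in the paper the training split comes from the fitted synthesizer.
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)
Xt, yt = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)
print(mlu_f1(Xs, ys, Xt, yt))
```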
To measure the MLu, the coefficient of determination \(R^{2}\) has been widely used Xu et al. (2019); Zhao et al. (2021); Wen et al. (2021); Kamthe et al. (2021). However, Li (2017) shows that the \(R^{2}\) should not be used to assess predictive performance because \(R^{2}\) is biased, insufficient, and misleading. Because we need to aggregate predictive performance in several different datasets, we use MARE, which is scale independent and bounded from zero to one Botchkarev (2019).
**Statistical similarity.** Next, to measure the statistical similarity between real and synthetic data, we use two statistical distances; the Kolmogorov statistic and the 1-Wasserstein distance, which measure the distance between empirical CDFs of real training data and synthetic data. The Kolmogorov statistic tests whether samples are drawn from a specific reference distribution (testing goodness of fit) Lehmann (1998). Note that we average the statistical distances across all variables.
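Both distances are available in SciPy; the sketch below averages them over (standardized) continuous columns, mirroring the protocol described above, with toy arrays standing in for real and synthetic data.

```python
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

def statistical_similarity(real, synth):
    """Average Kolmogorov-Smirnov statistic and 1-Wasserstein distance over columns.

    `real` and `synth` are arrays of shape (n_samples, n_continuous_vars),
    assumed to be standardized column-wise.
    """
    ks, wd = [], []
    for j in range(real.shape[1]):
        ks.append(ks_2samp(real[:, j], synth[:, j]).statistic)
        wd.append(wasserstein_distance(real[:, j], synth[:, j]))
    return float(np.mean(ks)), float(np.mean(wd))

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 3))
synth = rng.normal(loc=0.1, size=(1000, 3))
print(statistical_similarity(real, synth))
```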
**Privacy preservability.** Lastly, to check whether privacy is preserved in synthetic data generation, we use three metrics; _Distance to Closest Record_ (DCR) Park et al. (2018); Zhao et al. (2021), _membership inference attack_Shokri et al. (2016); Choi et al. (2017); Park et al. (2018), and _attribute disclosure_Choi et al. (2017); Matwin et al. (2015).
As in Zhao et al. (2021), we define the DCR as the \(5^{th}\) percentile of the \(L_{2}\) distances between all real and synthetic samples (or between synthetic samples). Since DCR is an \(L_{2}\) distance-based metric, we compute DCR only for continuous variables. A higher DCR between the real and synthetic datasets indicates that privacy is preserved well, since it implies that no records overlap between the real and synthetic datasets. However, if the DCR score is too large, it indicates that the quality of the generated synthetic dataset is very poor.
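One common reading of this metric, taken here only as a sketch: match each synthetic record to its closest counterpart and report the \(5^{th}\) percentile of those nearest-neighbour \(L_{2}\) distances; the paper's exact convention may differ slightly.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dcr(real, synth, percentile=5):
    """Distance to Closest Record: 5th percentile of nearest-neighbour L2 distances.

    For the real-vs-synthetic score, each synthetic record is matched to its closest
    real record; for the synthetic-vs-synthetic score, pass the same array twice and
    the zero self-distances are excluded.
    """
    dist = cdist(synth, real)                    # pairwise L2 distances
    if synth is real:
        np.fill_diagonal(dist, np.inf)           # exclude self-matches
    return float(np.percentile(dist.min(axis=1), percentile))

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 4))
synth = rng.normal(size=(500, 4))
print(dcr(real, synth), dcr(synth, synth))
```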
Figure 1: Calibrated estimated CDF for educational-num covariate of adult dataset. ‘estimate’ indicates \(\hat{F}(\cdot;\theta)\), ‘calibration’ indicates \(\hat{F}^{*}(\cdot;\theta)\), and ‘empirical’ indicates the empirical CDF of the observed dataset.

The membership inference attack is evaluated according to the steps outlined in Appendix A.7. Since we customize the membership inference attack procedure to attack a VAE-based synthesizer, only DistVAE and TVAE are assessed. Since we convert the problem of identifying the complex relationship between real and synthetic dataset members into a binary classification problem, better binary classification scores indicate that the target synthesizer is vulnerable to the membership inference attack.
Attribute disclosure occurs when attackers can reveal additional covariates of a record based on a subset of covariates that the attackers already have and similar records from the synthetic dataset. Classification metrics are utilized to check the degree to which attackers accurately identify the additional variables. Therefore, higher attribute disclosure metrics indicate that attackers can reveal unknown variables precisely, and the target synthesizer has an increased risk of privacy leakage. Since attackers are assumed to have only a subset of covariates of a record, attribute disclosure can be considered a more serious privacy leakage issue.
### Results Analysis
**Machine learning utility.** Table 1 shows the averaged MLu for all tabular datasets, and a better synthesizer is expected to generate synthetic data which shows comparable predictive performance to that of the real training dataset (which is denoted as 'Baseline'). DistVAE shows a competitive MARE score and outperforms other methods in \(F_{1}\).
For the detailed comparison, we plot the paired (MARE, \(F_{1}\)) scores for all tabular datasets and compared models in Figure 2. In Figure 2, a better score (i.e., the synthesizer having the better MLu) is indicated by a dot in the upper left corner. Figure 2 demonstrates that DistVAE consistently shows the best or at least competitive MLu across all tabular datasets. Note that TVAE shows quite a low \(F_{1}\) score in credit dataset because it fails to handle the highly imbalanced categorical target variable. See Appendix A.9 for detailed MLu scores for all tabular datasets.
**Statistical similarity.** The averaged statistical similarity is reported in Table 2. DistVAE achieves the best statistical similarity for continuous variables between real and synthetic datasets in the Kolmogorov-Smirnov statistic and 1-Wasserstein distance. It implies that the proposed distributional learning method with the CRPS loss can precisely capture the observed dataset's underlying distribution.
For discrete variables, Table 2 indicates that DistVAE also outperforms the compared models in statistical similarity.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & MARE \(\downarrow\) & \(F_{1}\uparrow\) \\ \hline CTGAN & \(0.321_{\pm 0.271}\) & \(0.672_{\pm 0.234}\) \\ TVAE & \(\mathbf{0.225}_{\pm 0.215}\) & \(0.594_{\pm 0.295}\) \\ CTAB-GAN & \(0.403_{\pm 0.392}\) & \(0.702_{\pm 0.162}\) \\ \hline DistVAE(\(\beta=0.5\)) & \(0.349_{\pm 0.328}\) & \(\mathbf{0.769}_{\pm 0.128}\) \\ Baseline & \(0.150_{\pm 0.200}\) & \(0.814_{\pm 0.101}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Averaged machine learning utilities for synthetic datasets. Mean and standard deviation values are obtained from 10 repeated experiments. \(\uparrow\) denotes higher is better and \(\downarrow\) denotes lower is better.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & K-S & 1-WD \\ \hline CTGAN & \(0.168_{\pm 0.195}\) & \(0.521_{\pm 0.532}\) \\ TVAE & \(0.385_{\pm 0.144}\) & \(1.681_{\pm 1.668}\) \\ CTAB-GAN & \(0.106_{\pm 0.083}\) & \(0.412_{\pm 0.378}\) \\ \hline DistVAE(\(\beta=0.5\)) & \(\mathbf{0.030}_{\pm 0.017}\) & \(\mathbf{0.118}_{\pm 0.100}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Averaged statistical similarity of synthetic datasets. K-S represents the Kolmogorov–Smirnov statistic, and 1-WD represents the 1-Wasserstein distance. Mean and standard deviation values are obtained from 10 repeated experiments. Lower is better.
Figure 2: Machine learning utilities for compared models and real tabular datasets.
Note that we do not rely on additional training techniques such as _training-by-sampling_ Xu et al. (2019), which cause a computational burden. See Appendix A.9 for detailed statistical similarity scores for all tabular datasets.
**Privacy preservability.** The privacy preservability of each model based on DCR scores is shown in Table 3, and we evaluate the DCR scores of DistVAE with various \(\beta\) values. As \(\beta\) increases, the DCR between the real and synthetic datasets of DistVAE (R&S) increases. This implies that \(\beta\) can control the risk of privacy leakage. Also, DistVAE consistently shows a larger DCR for the synthetic dataset (S) for all \(\beta\) values, which means that DistVAE can generate more diverse synthetic samples than the other methods. We find duplicated records in the synthetic dataset generated by CTAB-GAN, which results in a relatively low DCR score for the synthetic dataset (S). See Appendix A.9 for detailed DCR scores for all tabular datasets.
We prepared the attack models to evaluate the membership inference attack (one per class) following the steps outlined in Appendix A.7. The attack testing records consist of the same number of real training and test records; real training and test records have the labels \(in\) and \(out\), respectively. Note that the real test records are not used to build the attack models. We use gradient-boosting classifiers as attack models. Due to computational constraints, the number of attack models is one (i.e., \(C=1\)).
Since the \(in/out\) labels are balanced and the membership inference attack is a binary classification problem, we consider accuracy and AUC (Area Under Curve) as binary classification metrics. Table 4 shows that DistVAE and TVAE attain an AUC score of 0.5, meaning that the attack models cannot distinguish between members of the real training and test datasets, and the membership inference attack is unsuccessful. Therefore, DistVAE can generate synthetic datasets while preserving privacy with respect to the membership inference attack. See Appendix A.9 for detailed membership inference attack performances for all tabular datasets.
We evaluate attribute disclosure based on the experiment setup of Choi et al. (2017) while varying the number of nearest neighbors in the synthetic dataset. In detail, we assume that only continuous variables are known to attackers and set the number of covariates known to the attacker as 5. Unknown discrete variables are estimated based on the majority vote of \(k\)-nearest neighbors. Note that we construct the nearest neighbors based on \(L_{2}\) distance. The attribute disclosure performance is measured by \(F_{1}\) because imbalanced discrete variables exist across all tabular datasets.
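A sketch of this attribute disclosure evaluation, assuming a single hidden discrete attribute; the toy data and the macro-averaged \(F_{1}\) are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from collections import Counter
from sklearn.metrics import f1_score

def attribute_disclosure_f1(real_known, real_secret, synth_known, synth_secret, k=10):
    """Predict a hidden discrete attribute of each real record by majority vote
    over its k nearest synthetic neighbours (L2 distance on the known covariates)."""
    preds = []
    for x in real_known:
        dist = np.linalg.norm(synth_known - x, axis=1)
        nn = np.argsort(dist)[:k]
        preds.append(Counter(synth_secret[nn]).most_common(1)[0][0])
    return f1_score(real_secret, preds, average="macro")

rng = np.random.default_rng(0)
synth_known = rng.normal(size=(1000, 5))
synth_secret = (synth_known[:, 0] > 0).astype(int)
real_known = rng.normal(size=(200, 5))
real_secret = (real_known[:, 0] > 0).astype(int)
print(attribute_disclosure_f1(real_known, real_secret, synth_known, synth_secret))
```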
The results of attribute disclosure performance are presented in Table 5. For all numbers of neighbors (\(k\)), the \(F_{1}\) score of DistVAE decreases as \(\beta\) increases, and DistVAE achieves the smallest \(F_{1}\) scores where \(k\) is 10 and 100. These results indicate that DistVAE can generate synthetic datasets with a low risk of attribute disclosure, and the privacy level is controlled by \(\beta\). See Appendix A.9 for detailed attribute disclosure performances for all tabular datasets.
\begin{table}
\begin{tabular}{l r r} \hline \hline Model & Accuracy & AUC \\ \hline TVAE & \(0.495_{\pm 0.019}\) & \(0.495_{\pm 0.019}\) \\ DistVAE(\(\beta=0.5\)) & \(0.500_{\pm 0.003}\) & \(0.500_{\pm 0.003}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Privacy preservability: Averaged membership inference attack performance. Mean and standard deviation values are obtained from 10 repeated experiments. Lower is better.
\begin{table}
\begin{tabular}{l r r} \hline \hline Model & R\&S & S \\ \hline CTGAN & \(0.426_{\pm 0.229}\) & \(0.356_{\pm 0.202}\) \\ TVAE & \(0.470_{\pm 0.181}\) & \(0.278_{\pm 0.195}\) \\ CTAB-GAN & \(0.508_{\pm 0.259}\) & \(0.039_{\pm 0.073}\) \\ \hline DistVAE(\(\beta=0.5\)) & \(0.444_{\pm 0.250}\) & \(0.463_{\pm 0.288}\) \\ DistVAE(\(\beta=1\)) & \(0.463_{\pm 0.282}\) & \(0.479_{\pm 0.310}\) \\ DistVAE(\(\beta=5\)) & \(\mathbf{0.517}_{\pm 0.272}\) & \(\mathbf{0.511}_{\pm 0.335}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Privacy preservability: Averaged distance to closest record (DCR) between real and synthetic datasets (R&S) and between the same synthetic datasets (S). Mean and standard deviation values are obtained from 10 repeated experiments. Higher is better.
\begin{table}
\begin{tabular}{l r r r} \hline \hline \multicolumn{4}{c}{Number of neighbors (\(k\))} \\ \cline{2-4} Model & 1 & 10 & 100 \\ \hline CTGAN & \(0.262_{\pm 0.091}\) & \(0.282_{\pm 0.087}\) & \(0.275_{\pm 0.087}\) \\ TVAE & \(0.437_{\pm 0.162}\) & \(0.438_{\pm 0.160}\) & \(0.432_{\pm 0.162}\) \\ CTAB-GAN & \(\mathbf{0.257}_{\pm 0.123}\) & \(0.258_{\pm 0.114}\) & \(0.261_{\pm 0.111}\) \\ \hline DistVAE(0.5) & \(0.328_{\pm 0.088}\) & \(0.328_{\pm 0.076}\) & \(0.310_{\pm 0.072}\) \\ DistVAE(1) & \(0.307_{\pm 0.073}\) & \(0.313_{\pm 0.068}\) & \(0.297_{\pm 0.066}\) \\ DistVAE(5) & \(0.265_{\pm 0.105}\) & \(\mathbf{0.253}_{\pm 0.103}\) & \(\mathbf{0.232}_{\pm 0.101}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Privacy preservability: Averaged attribute disclosure performance with \(F_{1}\), where the number of known variables is set to 5 for all tabular datasets. Mean and standard deviation values are obtained from 10 repeated experiments. Lower is better. The number in the parentheses represents the value of \(\beta\).
#### 4.3.1 Additional Study
To check the accuracy of the estimated quantile function from DistVAE, we evaluate DistVAE with the \(\alpha\)-Rate Chen et al. (2011) criterion. \(\alpha\)-Rate is defined as:
\[\alpha\text{-Rate}=\frac{1}{|I_{test}|}\sum_{i\in I_{test}}I(x_{i}<\hat{F}^{-1} (\alpha)), \tag{5}\]
where \(\hat{F}^{-1}(\cdot)\) is the estimated quantile function, \(\alpha\in[0,1]\), and \(I_{test}\) is the set of indices of the test dataset. The \(\alpha\)-Rate is simply the proportion of test samples falling below the estimated \(\alpha\)-quantile, so ideally it should be close to \(\alpha\).
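The \(\alpha\)-Rate of (5) is a one-liner; the sketch below checks it against the true quantile function of a standard normal, where the \(\alpha\)-Rate should indeed be close to \(\alpha\). The toy data are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def alpha_rate(x_test, quantile_fn, alpha):
    """alpha-Rate of (5): fraction of test samples lying below the estimated alpha-quantile."""
    return float(np.mean(x_test < quantile_fn(alpha)))

# Toy check with the true quantile function of N(0,1).
rng = np.random.default_rng(0)
x_test = rng.standard_normal(10000)
for a in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(a, alpha_rate(x_test, norm.ppf, a))
```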
We estimate quantiles for five levels (0.1, 0.3, 0.5, 0.7, 0.9) based on the estimated marginal CDFs of Proposition 3, and Table 6 shows the averaged \(\alpha\)-Rate results of DistVAE across all tabular datasets. As \(\alpha\) increases from 0.1 to 0.9, the ratio of violating test samples decreases, which implies that the quantiles estimated by DistVAE can be inaccurate at lower quantile levels. We conjecture that extremely skewed continuous variables, such as capital-gain and capital-loss from the adult dataset, make the quantile estimation unstable.
## 5 Conclusion and Limitations
This paper proposes a novel distributional learning method for the VAE to capture the underlying distribution of the observed dataset. In this paper, distributional learning is defined as estimating the conditional CDF; hence, there is no assumption on the generative model of the VAE. Distributional learning is enabled by estimating an infinite number of conditional quantiles, which becomes computationally tractable by adopting the CRPS loss. We also show that our objective function is a proper scoring rule relative to the true conditional quantile function.
Since each conditional CDF depends on a common latent variable (a confounded structure), the latent variable simultaneously affects the generation process of all covariates, which makes the covariates correlated in the synthetic data generation process. However, the correlation structure of the covariates can only be partially explained by this confounded design of the latent variable because it cannot account for the direct correlation structure. Hence, our future work is to extend the decoder model to include the direct correlation structure between covariates.
|
2304.02600
|
A note on the classification of positive solutions to the critical
p-Laplace equation in $\mathbb{R}^n$
|
In this note, we obtain a classification result for positive solutions to the
critical p-Laplace equation in $\mathbb{R}^n$ with $n\ge4$ and $p>p_n$ for some
number $p_n\in\left(\frac{n}{3},\frac{n+1}{3}\right)$ such that
$p_n\sim\frac{n}{3}+\frac{1}{n}$, which slightly improves upon a similar result
recently obtained by Ou under the condition $p\ge\frac{n+1}{3}$.
|
Jérôme Vétois
|
2023-04-05T17:11:11Z
|
http://arxiv.org/abs/2304.02600v2
|
A note on the classification of positive solutions to the critical p-Laplace equation in \(\mathbb{R}^{n}\)
###### Abstract.
In this note, we obtain a classification result for positive solutions to the critical p-Laplace equation in \(\mathbb{R}^{n}\) with \(n\geq 4\) and \(p>p_{n}\) for some number \(p_{n}\in\left(\frac{n}{3},\frac{n+1}{3}\right)\) such that \(p_{n}\sim\frac{n}{3}+\frac{1}{n}\), which slightly improves upon a similar result recently obtained by Ou [13] under the condition \(p\geq\frac{n+1}{3}\).
The author was supported by the NSERC Discovery Grant RGPIN-2022-04213
## 1. Introduction and main result
We consider positive, weak solutions \(u\in W^{1,p}_{\mathrm{loc}}\left(\mathbb{R}^{n}\right)\cap L^{\infty}_{ \mathrm{loc}}\left(\mathbb{R}^{n}\right)\) to the critical \(p\)-Laplace equation
\[-\Delta_{p}u=u^{p^{*}-1}\quad\text{in }\mathbb{R}^{n}, \tag{1.1}\]
where \(n\geq 2\), \(1<p<n\), \(\Delta_{p}:=\mathrm{div}\left(\left|\nabla u\right|^{p-2}\nabla u\right)\) is the \(p\)-Laplace operator and \(p^{*}:=np/\left(n-p\right)\) is the critical Sobolev exponent.
Well-known solutions to (1.1) are the functions
\[u_{\mu,x_{0}}\left(x\right):=\left(\frac{n^{\frac{1}{p}}\left(\frac{n-p}{p-1} \right)^{\frac{p-1}{p}}\mu^{\frac{1}{p-1}}}{\mu^{\frac{p}{p-1}}+\left|x-x_{0} \right|^{\frac{p}{p-1}}}\right)^{\frac{n-p}{p}}\quad\quad\forall x\in\mathbb{ R}^{n}, \tag{1.2}\]
where \(\mu>0\) and \(x_{0}\in\mathbb{R}^{n}\). As was shown by Rodemich [14], Aubin [2] and Talenti [18], these functions realize the equality in the optimal Sobolev inequality in \(\mathbb{R}^{n}\). Guedda and Veron [11] obtained that the functions defined in (1.2) are the only positive, radially symmetric solutions to (1.1). In the case where \(p=2\), Caffarelli, Gidas and Spruck [3] (see also Chen and Li [5]) used the moving plane method to obtain that these functions are in fact the only positive solutions of (1.1). This classification result was later extended by Damascelli and Ramaswamy [8] to the case of solutions with sufficiently fast decay at infinity with \(1<p<2\), and in a series of papers by Damascelli, Merchan, Montoro and Sciunzi [7], Vetois [20] and Sciunzi [16] to the case of solutions
in \(D^{1,p}\left(\mathbb{R}^{n}\right)\) for all \(p\in\left(1,n\right)\). We mention in passing that a similar classification result was also obtained by Esposito [10] for solutions with finite mass of the critical \(n\)-Laplace equation, in which case the nonlinearity is of exponential type.
More recently, Ciraolo, Figalli and Roncoroni [6] used a strategy based on integral estimates to extend the classification of positive \(D^{1,p}\)-solutions to a class of anisotropic \(p\)-Laplace-type equations in convex cones (see also the survey article by Roncoroni [15] on this topic). In the case where \(p=2\), this type of approach can be traced back to the work of Obata [12] on the conformal transformations of the sphere. An approach of this type was then used by Catino, Monticelli and Roncoroni [4] to obtain new classification results for positive, weak solutions to (1.1) which are not a priori in \(D^{1,p}\left(\mathbb{R}^{n}\right)\). In particular, Catino, Monticelli and Roncoroni [4] managed to obtain the complete classification of positive, weak solutions to (1.1) in the case where \(n=2\) or \(\left[n=3\text{ and }3/2<p<2\right]\). The method was recently improved by Ou [13] who managed to extend this result to the case where \(n\geq 3\) and \(p\geq\left(n+1\right)/3\).
In this note, we obtain the following extension of Catino, Monticelli and Roncoroni [4] and Ou's [13] results:
**Theorem 1.1**.: _Assume that \(n\geq 4\) and \(p_{n}<p<n\), where_
\[p_{n}:=\begin{cases}\dfrac{8}{5}&\text{if }n=4\\ \dfrac{4n+3-\sqrt{4n^{2}+12n-15}}{6}&\text{if }n\geq 5.\end{cases}\]
_Then every positive, weak solution \(u\in W^{1,p}_{\operatorname{loc}}\left(\mathbb{R}^{n}\right)\cap L^{\infty}_{ \operatorname{loc}}\left(\mathbb{R}^{n}\right)\) to (1.1) is of the form (1.2), i.e. \(u\equiv u_{\mu,x_{0}}\) for some \(\mu>0\) and \(x_{0}\in\mathbb{R}^{n}\)._
It is easy to see that
\[\frac{n}{3}<p_{n}<\frac{n+1}{3}\quad\forall n\geq 4\]
and
\[p_{n}\sim\frac{n}{3}+\frac{1}{n}\quad\text{as }n\to\infty.\]
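These two claims are easy to check numerically; the short sketch below evaluates \(p_{n}\) from Theorem 1.1 for a few values of \(n\) and compares it with \(n/3\), \((n+1)/3\) and \(n/3+1/n\). This is only a numerical illustration, not part of the proof.

```python
import math

def p_n(n):
    """Threshold exponent of Theorem 1.1."""
    if n == 4:
        return 8 / 5
    return (4 * n + 3 - math.sqrt(4 * n**2 + 12 * n - 15)) / 6

for n in (4, 5, 10, 50, 1000):
    lower, upper = n / 3, (n + 1) / 3
    print(n, lower < p_n(n) < upper, round(p_n(n), 4), round(n / 3 + 1 / n, 4))
```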
The main difficulty in our proof in the case where \(p<\left(n+1\right)/3\) is to obtain a priori integral estimates with an exponent on the gradient which is larger than \(p\). This can be seen for example by looking at the formula (2.22) in our proof, where the exponent on the function \(g\) (defined in (2.2)) is less than \(1\) for small \(\varepsilon>0\) if and only if \(p>\left(n+1\right)/3\). While the former case can be achieved by using some rather
straightforward estimates (see Lemma 2.1), the case where \(p_{n}<p<\left(n+1\right)/3\) requires a little more work. In this case, by using the integral identity in Lemma 2.3, we manage to obtain the key estimate (2.35), which compares two integrals with different exponents on the gradient and from which we manage to derive our classification result. The case where \(p\leq p_{n}\) remains open. In this case, the exponent on the gradient in the right-hand side of (2.35) becomes too large for us to conclude. The situation appears to be even more problematic when \(p<n/3\) since the exponent on the gradient in the right-hand side of (2.35) then becomes greater than the exponent in the left-hand side.
## 2. Proof of Theorem 1.1
Let \(u\in W^{1,p}_{\mathrm{loc}}\left(\mathbb{R}^{n}\right)\cap L^{\infty}_{ \mathrm{loc}}\left(\mathbb{R}^{n}\right)\) be a positive, weak solution of (1.1). Results by DiBenedetto [9] and Tolksdorf [19] give that \(u\in C^{1,\alpha}_{\mathrm{loc}}\left(\mathbb{R}^{n}\right)\) for some \(\alpha\in\left(0,1\right)\). Furthermore, as was shown by Antonini, Ciraolo and Farina [1] (see also the references therein for previous results), the critical set \(Z:=\left\{x\in\mathbb{R}^{n}:\left|\nabla u\left(x\right)\right|=0\right\}\) has measure zero, \(u\in W^{2,2}_{\mathrm{loc}}\left(\mathbb{R}^{n}\backslash Z\right)\), \(\left|\nabla u\right|^{p-2}\nabla u\in W^{1,2}_{\mathrm{loc}}\left(\mathbb{R} ^{n}\right)\) and \(\left|\nabla u\right|^{p-2}\nabla^{2}u\in L^{2}_{\mathrm{loc}}\left(\mathbb{R} ^{n}\right)\).
Following the approach developed by Catino, Monticelli and Roncoroni [4] and Ou [13] (see also the previous work by Ciraolo, Figalli and Roncoroni [6]), we define the function
\[v:=u^{-\frac{p}{n-p}}. \tag{2.1}\]
The equation (1.1) can then be rewritten as
\[\Delta_{p}v=g:=\frac{n\left(p-1\right)}{p}v^{-1}\left|\nabla v\right|^{p}+ \left(\frac{p}{n-p}\right)^{p-1}v^{-1}\quad\text{in }\mathbb{R}^{n}. \tag{2.2}\]
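For the reader's convenience, here is a sketch of the computation behind (2.2), writing \(k:=p/\left(n-p\right)\); this outline is not part of the original argument. One computes
\[\nabla v=-k\,u^{-k-1}\nabla u,\qquad\left|\nabla v\right|^{p-2}\nabla v=-k^{p-1}u^{-\left(k+1\right)\left(p-1\right)}\left|\nabla u\right|^{p-2}\nabla u,\]
\[\Delta_{p}v=k^{p-1}\left(k+1\right)\left(p-1\right)u^{-\left(k+1\right)\left(p-1\right)-1}\left|\nabla u\right|^{p}-k^{p-1}u^{-\left(k+1\right)\left(p-1\right)}\Delta_{p}u.\]
Using \(\Delta_{p}u=-u^{p^{*}-1}\) together with \(\left(k+1\right)\left(p-1\right)=n\left(p-1\right)/\left(n-p\right)\) and \(p^{*}-1=\left(np-n+p\right)/\left(n-p\right)\), the last term equals \(k^{p-1}u^{k}=\left(p/\left(n-p\right)\right)^{p-1}v^{-1}\), while substituting \(\left|\nabla u\right|^{p}=k^{-p}u^{\left(k+1\right)p}\left|\nabla v\right|^{p}\) turns the first term into \(\frac{n\left(p-1\right)}{p}v^{-1}\left|\nabla v\right|^{p}\), which is exactly \(g\).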
Furthermore, it follows from the above-mentioned regularity properties of \(u\) that \(v\in C^{1,\alpha}_{\mathrm{loc}}\left(\mathbb{R}^{n}\right)\cap W^{2,2}_{ \mathrm{loc}}\left(\mathbb{R}^{n}\backslash Z\right)\), \(\left|\nabla v\right|^{p-2}\nabla v\in W^{1,2}_{\mathrm{loc}}\left(\mathbb{R} ^{n}\right)\) and \(\left|\nabla v\right|^{p-2}\nabla^{2}v\in L^{2}_{\mathrm{loc}}\left(\mathbb{R} ^{n}\right)\).
We now state some preliminary results, starting with the following lemma, of which more or less general versions can be found in either of the work by Serrin and Zou [17, Lemma 2.4], Catino, Monticelli and Roncoroni [4, Lemma 5.1] and Ou [13, Lemma 3.1]:
**Lemma 2.1**.: _Let \(n\geq 2\), \(p\in\left(1,n\right)\), \(r\in\left[0,p\right]\), \(q<\left(np-n+p\right)/p\), \(R>0\), \(u\in W^{1,p}_{\mathrm{loc}}\left(\mathbb{R}^{n}\right)\cap L^{\infty}_{ \mathrm{loc}}\left(\mathbb{R}^{n}\right)\) be a positive, weak solution of (1.1) and \(v\) be the function defined in (2.1). Then_
\[\int_{B_{R}\left(0\right)}v^{-q}\left|\nabla v\right|^{r}\leq CR^{n-\min\left( \frac{pq-r}{p-1},q\right)} \tag{2.3}\]
_for some constant \(C=C\left(n,p,q,r\right)>0\)._
Proof of Lemma 2.1.: We refer to Ou [13, Lemma 3.1] for the proof of (2.3) when \(\left[q\geq 0\text{ and }r=0\right]\) or \(\left[q\geq p\text{ and }r=p\right]\). In the case where \(q\geq r\) and \(0<r<p\), Holder's inequality gives
\[\int_{\mathbb{R}^{n}}v^{-q}\left|\nabla v\right|^{r}\leq\left(\int_{\mathbb{R} ^{n}}v^{-q-\sigma\left(p-r\right)}\left|\nabla v\right|^{p}\right)^{\frac{r}{p }}\left(\int_{\mathbb{R}^{n}}v^{-q+\sigma r}\right)^{\frac{p-r}{p}}, \tag{2.4}\]
where
\[\sigma:=\max\left(\frac{p-q}{p-r},0\right),\]
so that
\[q+\sigma\left(p-r\right)=\max\left(p,q\right)\in\left[p,\frac{np-n+p}{p}\right) \tag{2.5}\]
and
\[q-\sigma r=\min\left(\frac{p\left(q-r\right)}{p-r},q\right)\in\left[0,\frac{ np-n+p}{p}\right). \tag{2.6}\]
It follows from (2.4), (2.5) and (2.6) that
\[\int_{\mathbb{R}^{n}}v^{-q}\left|\nabla v\right|^{r}\leq C\left(R^{n-q-\sigma \left(p-r\right)}\right)^{\frac{r}{p}}\left(R^{n-q+\sigma r}\right)^{\frac{p- r}{p}}=CR^{n-q}\]
for some constant \(C=C\left(n,p,q,r\right)>0\). We now consider the case where \(q<r\). In this case, by observing that \(\Delta_{p}u\leq 0\) in \(\mathbb{R}^{n}\), we obtain (see Serrin and Zou [17, Lemma 2.3])
\[u\left(x\right)\geq C\left|x\right|^{-\frac{n-p}{p-1}},\quad\text{i.e. }v\left(x\right)\leq C^{-\frac{p}{n-p}}\left|x\right|^{\frac{p}{p-1}}\quad \forall x\in\mathbb{R}^{n} \tag{2.7}\]
for some constant \(C=C\left(n,p\right)>0\). It follows from (2.7) that
\[\int_{B_{R}\left(0\right)}v^{-q}\left|\nabla v\right|^{r} \leq C^{-\frac{p\left(r-q\right)}{n-p}}R^{\frac{p\left(r-q\right) }{p-1}}\int_{B_{R}\left(0\right)}v^{-r}\left|\nabla v\right|^{r}\] \[\leq C^{\prime}R^{\frac{p\left(r-q\right)}{p-1}+n-r}\] \[=C^{\prime}R^{n-\frac{pq-r}{p-1}},\]
for some constant \(C^{\prime}=C^{\prime}\left(n,p,q,r\right)>0\). This ends the proof of Lemma 2.1.
Next, we state the following lemma obtained by Ou [13, Proposition 2.3], which extends a previous result by Catino, Monticelli and Roncoroni [4, Proposition 2.2] (see also Serrin and Zou [17, Proposition 6.2]):
**Lemma 2.2**.: _Let \(n\geq 2\), \(p\in\left(1,n\right)\), \(m\in\mathbb{R}\), \(u\in W^{1,p}_{\mathrm{loc}}\left(\mathbb{R}^{n}\right)\cap L^{\infty}_{ \mathrm{loc}}\left(\mathbb{R}^{n}\right)\) be a positive, weak solution of (1.1), \(v\) and \(g\) be the functions defined in
(2.1) and (2.2), and \(\varphi\) be a smooth, nonnegative function with compact support in \(\mathbb{R}^{n}\). Then_
\[\int_{\mathbb{R}^{n}}\varphi v^{1-n}g^{m}\operatorname{Tr}\left(E^{2 }\right)+nm\int_{\mathbb{R}^{n}}\varphi v^{-n}g^{m-1}\left|\nabla v\right|^{p-2} \left\langle E^{2}\nabla v,\nabla v\right\rangle\\ \leq-\int_{\mathbb{R}^{n}}v^{1-n}g^{m}\left|\nabla v\right|^{p-2} \left\langle E\nabla v,\nabla\varphi\right\rangle, \tag{2.8}\]
_where \(E=\left(E_{ij}\right)_{1\leq i,j\leq n}\) is the matrix-valued function with coefficients defined by_
\[E_{ij}:=\partial_{x_{j}}\left(\left|\nabla v\right|^{p-2}\partial_{x_{i}}v \right)-\frac{1}{n}g\delta_{ij}, \tag{2.9}\]
_where \(\delta_{ij}\) stands for the Kronecker symbol._
Now, we prove the following additional result:
**Lemma 2.3**.: _Let \(n\geq 2\), \(p\in(1,n)\), \(m,q\in\mathbb{R}\), \(u\in W^{1,p}_{\mathrm{loc}}\left(\mathbb{R}^{n}\right)\cap L^{\infty}_{ \mathrm{loc}}\left(\mathbb{R}^{n}\right)\) be a positive, weak solution of (1.1), \(v\) and \(g\) be the functions defined in (2.1) and (2.2), \(E\) be the matrix-valued function defined in (2.9), and \(\varphi\) be a smooth function with compact support in \(\mathbb{R}^{n}\). Then_
\[\int_{\mathbb{R}^{n}}\varphi v^{-q}g^{m}\left(\left(\frac{np-n+p }{p}-q\right)\left|\nabla v\right|^{p}+\left(\frac{p}{n-p}\right)^{p-1}\right) \\ +nm\int_{\mathbb{R}^{n}}\varphi v^{-q}g^{m-1}\left|\nabla v \right|^{p-2}\left\langle E\nabla v,\nabla v\right\rangle\\ =-\int_{\mathbb{R}^{n}}v^{1-q}g^{m}\left|\nabla v\right|^{p-2} \left\langle\nabla v,\nabla\varphi\right\rangle. \tag{2.10}\]
Proof of Lemma 2.3.: By testing (2.2) against the function \(\varphi v^{1-q}g^{m}\) (\(\in C^{0,\alpha}_{\mathrm{loc}}\left(\mathbb{R}^{n}\right)\cap W^{1,2}_{loc} \left(\mathbb{R}^{n}\right)\) according to the above-mentioned regularity properties of the function \(v\)), we obtain
\[\int_{\mathbb{R}^{n}}\varphi v^{1-q}g^{m+1}-(q-1)\int_{\mathbb{R }^{n}}\varphi v^{-q}g^{m}\left|\nabla v\right|^{p}\\ +m\int_{\mathbb{R}^{n}}\varphi v^{1-q}g^{m-1}\left|\nabla v \right|^{p-2}\left\langle\nabla g,\nabla v\right\rangle\\ +\int_{\mathbb{R}^{n}}v^{1-q}g^{m}\left|\nabla v\right|^{p-2} \left\langle\nabla v,\nabla\varphi\right\rangle=0. \tag{2.11}\]
The formula (2.10) then follows from (2.11) together with the definition of \(g\) and the fact that \(\partial_{x_{j}}g=nv^{-1}E_{ij}\partial_{x_{i}}v\) for all \(j\in\{1,\ldots,n\}\) (see Ou [13, Lemma 2.1 (ii)]).
Finally, we state the following results obtained by Ou [13, Corollary 2.6 and Lemma 2.7]:
**Lemma 2.4**.: _Let \(n\geq 2\), \(p\in(1,n)\), \(u\in W^{1,p}_{\rm loc}\left(\mathbb{R}^{n}\right)\cap L^{\infty}_{\rm loc} \left(\mathbb{R}^{n}\right)\) be a positive, weak solution of (1.1), \(v\) and \(g\) be the functions defined in (2.1) and (2.2) and \(E\) be the matrix-valued function defined in (2.9). Then_
1. \(\left\langle E^{2}\nabla v,\nabla v\right\rangle\leq\operatorname{Tr}\left(E ^{2}\right)\left|\nabla v\right|^{2}\)__
2. _For each_ \(n\times n\) _matrix-valued function_ \(B\)_,_ \[\operatorname{Tr}\left(BE\right)\leq\operatorname{Tr}\left(E^{2}\right)+C \operatorname{Tr}\left(BB^{t}\right)\] _for some constant_ \(C=C\left(p\right)>0\)_._
We are now in position to prove Theorem 1.1.
Proof of Theorem 1.1.: The beginning of the proof follows ideas from Catino, Monticelli and Roncoroni [4] and Ou [13]. We include it for the sake of completeness. Let \(u\in W^{1,p}_{\rm loc}\left(\mathbb{R}^{n}\right)\cap L^{\infty}_{\rm loc} \left(\mathbb{R}^{n}\right)\) be a positive, weak solution of (1.1), \(v\) and \(g\) be the functions defined in (2.1) and (2.2), and \(E\) be the matrix-valued function defined in (2.9). Let \(\eta\) be a smooth, nonnegative cutoff function in \(\mathbb{R}^{n}\) such that \(\eta\equiv 1\) in \(B_{1}\left(0\right)\), \(\eta\equiv 0\) in \(\mathbb{R}^{n}\backslash B_{2}\left(0\right)\) and \(\left|\nabla\eta\right|\leq 2\) in \(B_{2}\left(0\right)\backslash B_{1}\left(0\right)\). For each \(R>0\), let \(\eta_{R}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be the function defined as \(\eta_{R}\left(x\right):=\eta\left(x/R\right)\) for all \(x\in\mathbb{R}^{n}\), so that \(\eta_{R}\equiv 1\) in \(B_{R}\left(0\right)\), \(\eta_{R}\equiv 0\) in \(\mathbb{R}^{n}\backslash B_{2R}\left(0\right)\) and \(\left|\nabla\eta\right|\leq 2/R\) in \(B_{2R}\left(0\right)\backslash B_{R}\left(0\right)\). Let \(\theta>1\) to be chosen large later on. By using (2.8) with \(\varphi=\eta_{R}^{\theta}\) together with Lemma 2.4 (i) and the definition of \(g\), we obtain that for small \(\varepsilon>0\),
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta}v^{-n}g^{-\frac{2p-1}{p}+ \varepsilon}\left(n\varepsilon\left|\nabla v\right|^{p}+\left(\frac{p}{n-p} \right)^{p-1}\right)\operatorname{Tr}\left(E^{2}\right)\\ \leq-\theta\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-1}v^{1-n}g^{- \frac{p-1}{p}+\varepsilon}\left|\nabla v\right|^{p-2}\left\langle E\nabla v, \nabla\eta_{R}\right\rangle. \tag{2.12}\]
Observe that
\[n\varepsilon\left|\nabla v\right|^{p}+\left(\frac{p}{n-p}\right)^{p-1}\leq \frac{p\varepsilon}{p-1}vg \tag{2.13}\]
provided \(\varepsilon\) is chosen small enough. For each \(\delta>0\), Lemma 2.4 (ii) with \(B=-\delta^{-1/2}\eta_{R}^{-1}\left|\nabla v\right|^{p-2}\nabla\eta_{R}\otimes\nabla v\) gives
\[-\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-1}v^{1-n}g^{-\frac{p-1}{p}+\varepsilon}\left|\nabla v\right|^{p-2}\left\langle E\nabla v,\nabla\eta_{R}\right\rangle\] \[\leq C\delta^{-1}\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{1-n}g ^{-\frac{p-1}{p}+\varepsilon}\left|\nabla v\right|^{2p-2}\left|\nabla\eta_{R} \right|^{2}\] \[\quad+\delta\int_{\mathbb{R}^{n}}\eta_{R}^{\theta}v^{1-n}g^{- \frac{p-1}{p}+\varepsilon}\operatorname{Tr}\left(E^{2}\right) \tag{2.14}\]
for some constant \(C=C\left(p\right)>0\). If \(\delta\) is chosen small enough, then it follows from (2.12), (2.13) and (2.14) that
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta}v^{1-n}g^{-\frac{p-1}{p}+ \varepsilon}\operatorname{Tr}\left(E^{2}\right)\] \[\qquad\leq C\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{1-n}g^{- \frac{p-1}{p}+\varepsilon}\left|\nabla v\right|^{2p-2}\left|\nabla\eta_{R} \right|^{2}. \tag{2.15}\]
for some constant \(C=C\left(n,p,\varepsilon,\theta\right)>0\). By observing that
\[\left|\nabla v\right|\leq\left(\frac{n\left(p-1\right)}{p}vg\right)^{1/p} \tag{2.16}\]
and since \(\left|\nabla\eta_{R}\right|\leq 2/R\), we obtain
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{1-n}g^{-\frac{p-1}{p}+\varepsilon} \left|\nabla v\right|^{2p-2}\left|\nabla\eta_{R}\right|^{2}\leq CR^{-2}\int_{ \mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{np-3p+2}{p}}g^{\frac{p-1}{p}+\varepsilon} \tag{2.17}\]
for some constant \(C=C\left(n,p\right)>0\). It follows from (2.15) and (2.17) that if
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{np-3p+2}{p}}g^{\frac{p-1}{p} +\varepsilon}=\mathrm{o}\left(R^{2}\right)\quad\text{as }R\to\infty \tag{2.18}\]
and we choose \(\theta>2\), then
\[\int_{\mathbb{R}^{n}}v^{1-n}g^{-\frac{p-1}{p}+\varepsilon}\operatorname{Tr} \left(E^{2}\right)\leq 0. \tag{2.19}\]
It then follows from (2.19) that \(E\equiv 0\) almost everywhere in \(\mathbb{R}^{n}\), which in turn gives
\[v\left(x\right):=c_{1}+c_{2}\left|x-x_{0}\right|^{\frac{p}{p-1}}\quad\forall x \in\mathbb{R}^{n} \tag{2.20}\]
for some \(c_{1},c_{2}\in\mathbb{R}\) (see Catino, Monticelli and Roncoroni [4, Section 4.1] or Ciraolo, Figalli and Roncoroni [6, Section 3.2]). By putting together (2.1) and (2.20) and using (1.1), we then obtain that the function \(u\) is of the form (1.2). Therefore, we are left with showing that (2.18) holds true. We separate two cases:
**Case \(p>\left(n+1\right)/3\).** We simplify the arguments used by Ou [13] in this case. By observing that
\[\frac{np-3p+2}{p}+\frac{p-1}{p}+\varepsilon =\frac{np-2p+1}{p}+\varepsilon<\frac{np-n+p}{p},\] \[0 <\frac{p-1}{p}+\varepsilon <1\]
and
\[n-\min\left(\frac{np-3p+2}{p-1},\frac{np-2p+1}{p}+\varepsilon\right) \\ =\max\left(\frac{3p-n-2}{p-1},\frac{2p-1}{p}+\varepsilon\right)<2\]
for small \(\varepsilon\), we can apply (2.3), which gives (2.18).
**Case \(p_{n}<p\leq\left(n+1\right)/3\).** In this case, by observing that for small \(\varepsilon\),
\[\left(\frac{p}{n-p}\right)^{p-1}\leq vg\leq\frac{n\left(p-1\right)}{p \varepsilon}\left(\varepsilon\left|\nabla v\right|^{p}+\left(\frac{p}{n-p} \right)^{p-1}\right), \tag{2.21}\]
we obtain
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{np-3p+2}{p}}g^{ \frac{p-1}{p}+\varepsilon}\\ \leq\left(\frac{n-p}{p}\right)^{\frac{\left(p-1\right)\left(n-3p +2+p\varepsilon\right)}{p}}\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{ n\left(p-1\right)}{p}+\varepsilon}g^{\frac{n-2p+1}{p}+2\varepsilon} \tag{2.22}\]
and
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{n\left(p-1 \right)}{p}+\varepsilon}g^{\frac{n-2p+1}{p}+2\varepsilon}\leq\frac{n\left(p-1 \right)}{p\varepsilon}\\ \times\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{np-n+p} {p}+\varepsilon}g^{\frac{n-3p+1}{p}+2\varepsilon}\left(\varepsilon\left| \nabla v\right|^{p}+\left(\frac{p}{n-p}\right)^{p-1}\right). \tag{2.23}\]
On the other hand, by using (2.10), we obtain
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{np-n+p}{p}+ \varepsilon}g^{\frac{n-3p+1}{p}+2\varepsilon}\left(\varepsilon\left|\nabla v \right|^{p}+\left(\frac{p}{n-p}\right)^{p-1}\right)\\ =-\left(\theta-2\right)\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-3}v^ {-\frac{n\left(p-1\right)}{p}+\varepsilon}g^{\frac{n-3p+1}{p}+2\varepsilon} \left|\nabla v\right|^{p-2}\left\langle\nabla v,\nabla\eta_{R}\right\rangle\\ -n\left(\frac{n-3p+1}{p}+2\varepsilon\right)\int_{\mathbb{R}^{n }}\eta_{R}^{\theta-2}v^{-\frac{np-n+p}{p}+\varepsilon}g^{\frac{n-4p+1}{p}+2 \varepsilon}\left|\nabla v\right|^{p-2}\\ \times\left\langle E\nabla v,\nabla v\right\rangle. \tag{2.24}\]
We begin with estimating the first term in the right-hand side of (2.24). For each \(\delta>0\) and \(q>1\), Young's inequality gives
\[-\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-3}v^{-\frac{n(p-1)}{p}+ \varepsilon}g^{\frac{n-3p+1}{p}+2\varepsilon}\left|\nabla v\right|^{p-2}\left\langle \nabla v,\nabla\eta_{R}\right\rangle\] \[\quad\leq\frac{1}{q}\delta^{1-q}\int_{\mathbb{R}^{n}}\eta_{R}^{ \theta-2-q}v^{-\frac{n(p-1)}{p}+\varepsilon}g^{\frac{n-2p+1}{p}-q+2\varepsilon }\left|\nabla v\right|^{q(p-1)}\left|\nabla\eta_{R}\right|^{q}\] \[\quad\quad+\frac{q-1}{q}\delta\int_{\mathbb{R}^{n}}\eta_{R}^{ \theta-2}v^{-\frac{n(p-1)}{p}+\varepsilon}g^{\frac{n-2p+1}{p}+2\varepsilon}. \tag{2.25}\]
By using (2.16) and since \(\left|\nabla\eta_{R}\right|\leq 2/R\), we obtain
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2-q}v^{-\frac{n(p-1)}{p}+ \varepsilon}g^{\frac{n-2p+1}{p}-q+2\varepsilon}\left|\nabla v\right|^{q(p-1)} \left|\nabla\eta_{R}\right|^{q}\\ \leq CR^{-q}\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2-q}v^{-\frac{ (p-1)(n-q)}{p}+\varepsilon}g^{\frac{n-2p+1-q}{p}+2\varepsilon} \tag{2.26}\]
for some constant \(C=C\left(n,p,q\right)>0\). By observing that \(n-2p>0\) when \(p\leq\left(n+1\right)/3\), we let
\[q:=n-2p+1+2p\varepsilon,\]
so that
\[q>1,\] \[\frac{n-2p+1-q}{p}+2\varepsilon =0 \tag{2.27}\] \[\frac{\left(p-1\right)\left(n-q\right)}{p}-\varepsilon =\frac{\left(p-1\right)\left(2p-1\right)}{p}-\varepsilon\left(2p-1\right)\] \[\in\left(0,\frac{np-n+p}{p}\right) \tag{2.28}\]
and
\[n-q-\frac{\left(p-1\right)\left(n-q\right)}{p}+\varepsilon=\frac{2p-1}{p}+ \varepsilon<2 \tag{2.29}\]
provided \(\varepsilon\) is chosen small enough. It follows from (2.3), (2.27) and (2.28) that if \(\theta\) is chosen large enough and \(\varepsilon\) is chosen small enough, then
\[R^{-q}\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2-q}v^{-\frac{(p-1)(n-q)}{p}+ \varepsilon}g^{\frac{n-2p+1-q}{p}+2\varepsilon}=\mathrm{o}\left(R^{2}\right) \text{ as }R\rightarrow\infty. \tag{2.30}\]
By choosing \(\delta\) small enough (depending on \(n\), \(p\), \(\varepsilon\) and \(\theta\)) and putting together (2.23), (2.24), (2.25) and (2.26), we obtain
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{n(p-1)}{p}+ \varepsilon}g^{\frac{n-2p+1}{p}+2\varepsilon}=\mathrm{o}\left(R^{2}\right)+ \mathrm{O}\left(\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{np-n+p}{p}+ \varepsilon}\right.\\ \left.\times g^{\frac{n-4p+1}{p}+2\varepsilon}\left|\nabla v \right|^{p-2}\left|\left\langle E\nabla v,\nabla v\right\rangle\right|\right) \quad\text{as }R\rightarrow\infty. \tag{2.31}\]
For each \(\delta>0\), Lemma 2.4 (ii) with
\[B=\delta^{-1/2}R^{-1}\eta_{R}^{-2}v^{\frac{n-2p}{p}}g^{\frac{n-3p}{p}+ \varepsilon}\left|\nabla v\right|^{p-2}\nabla v\otimes\nabla v\]
gives
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{np-n+p}{p}+ \varepsilon}g^{\frac{n-4p+1}{p}+2\varepsilon}\left|\nabla v\right|^{p-2} \left|\left\langle E\nabla v,\nabla v\right\rangle\right|\\ \leq C\delta^{-1}R^{-2}\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-4}v ^{-\frac{np-2n+3p}{p}+2\varepsilon}g^{\frac{2n-7p+1}{p}+3\varepsilon}\left| \nabla v\right|^{2p}\\ +\delta R^{2}\int_{\mathbb{R}^{n}}\eta_{R}^{\theta}v^{1-n}g^{- \frac{p-1}{p}+\varepsilon}\operatorname{Tr}\left(E^{2}\right) \tag{2.32}\]
for some constant \(C=C\left(p\right)>0\). By using (2.15), (2.17) and (2.22), we obtain
\[R^{2}\int_{\mathbb{R}^{n}}\eta_{R}^{\theta}v^{1-n}g^{-\frac{p-1}{p}+\varepsilon}\operatorname{Tr}\left(E^{2}\right)\leq C\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{n\left(p-1\right)}{p}+\varepsilon}g^{\frac{n-2p+1}{p}+2\varepsilon} \tag{2.33}\]
for some constant \(C=C\left(n,p,\varepsilon,\theta\right)>0\). On the other hand, by using (2.16), we obtain
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-4}v^{-\frac{np-2n+3p}{p}+2 \varepsilon}g^{\frac{2n-7p+1}{p}+3\varepsilon}\left|\nabla v\right|^{2p}\\ \leq\left(\frac{n\left(p-1\right)}{p}\right)^{2}\int_{\mathbb{R} ^{n}}\eta_{R}^{\theta-4}v^{-\frac{np-2n+p}{p}+2\varepsilon}g^{\frac{2n-5p+1}{ p}+3\varepsilon}. \tag{2.34}\]
By choosing \(\delta\) small enough (depending on \(n\), \(p\), \(\varepsilon\) and \(\theta\)) and putting together (2.31), (2.32), (2.33) and (2.34), we obtain
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{n(p-1)}{p}+ \varepsilon}g^{\frac{n-2p+1}{p}+2\varepsilon}=\mathrm{o}\left(R^{2}\right)\\ +\mathrm{O}\left(R^{-2}\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-4}v ^{-\frac{np-2n+p}{p}+2\varepsilon}g^{\frac{2n-5p+1}{p}+3\varepsilon}\right) \quad\text{as }R\rightarrow\infty. \tag{2.35}\]
For each \(\delta>0\) and \(q>1\), Young's inequality gives
\[R^{-2}\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-4}v^{-\frac{np-2n+p}{p}+ 2\varepsilon}g^{\frac{2n-5p+1}{p}+3\varepsilon}\] \[\quad\leq\frac{1}{q}\delta^{1-q}R^{-2q}\int_{\mathbb{R}^{n}}\eta_ {R}^{\theta-2-2q}v^{-a\left(n,p,q,\varepsilon\right)}g^{b\left(n,p,q,\varepsilon \right)}\] \[\quad\quad+\frac{q-1}{q}\delta\int_{\mathbb{R}^{n}}\eta_{R}^{ \theta-2}v^{-\frac{n\left(p-1\right)}{p}+\varepsilon}g^{\frac{n-2p+1}{p}+2 \varepsilon}, \tag{2.36}\]
where
\[a\left(n,p,q,\varepsilon\right):=\frac{np-n-q\left(n-p\right)}{p}-\varepsilon \left(q+1\right)\]
and
\[b\left(n,p,q,\varepsilon\right):=\frac{n-2p+1-q\left(3p-n\right)}{p}+ \varepsilon\left(q+2\right).\]
If we assume that
\[0<b\left(n,p,q,\varepsilon\right)<1 \tag{2.37}\]
and
\[a\left(n,p,q,\varepsilon\right)+b\left(n,p,q,\varepsilon\right)<\frac{np-n+p} {p}, \tag{2.38}\]
and we choose \(\theta\) large enough, then it follows from (2.3) that
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2-2q}v^{-a\left(n,p,q,\varepsilon\right) }g^{b\left(n,p,q,\varepsilon\right)}=CR^{n-\min\left(\frac{pa\left(n,p,q, \varepsilon\right)}{p-1},a\left(n,p,q,\varepsilon\right)+b\left(n,p,q, \varepsilon\right)\right)} \tag{2.39}\]
for some constant \(C=C\left(n,p,q,\varepsilon\right)>0\). If we assume moreover that
\[\min\left(\frac{pa\left(n,p,q,\varepsilon\right)}{p-1},a\left(n,p,q, \varepsilon\right)+b\left(n,p,q,\varepsilon\right)\right)>n-2q-2 \tag{2.40}\]
and we choose \(\delta\) small enough (depending on \(n\), \(p\), \(\varepsilon\) and \(\theta\)), then it follows from (2.35), (2.36) and (2.39) that
\[\int_{\mathbb{R}^{n}}\eta_{R}^{\theta-2}v^{-\frac{n\left(p-1\right)}{p}+ \varepsilon}g^{\frac{n-2p+1}{p}+2\varepsilon}=\mathrm{o}\left(R^{2}\right) \quad\text{as }R\rightarrow\infty. \tag{2.41}\]
Then (2.18) follows from (2.22) and (2.41). Therefore, it remains to show that for small \(\varepsilon\), there exists \(q>1\) such that (2.37), (2.38) and (2.40) simultaneously hold true. When \(\varepsilon=0\), we can rewrite (2.37) as
\[\frac{n-3p+1}{3p-n}<q<\frac{n-2p+1}{3p-n} \tag{2.42}\]
(observe that \(3p-n>0\) since \(p>p_{n}>n/3\)). By observing that
\[a\left(n,p,q,0\right)+b\left(n,p,q,0\right)=n-2q-2+\frac{1}{p}>n-2q-2,\]
we can rewrite (2.38) and (2.40) with \(\varepsilon=0\) as
\[q>\frac{n-3p+1}{p} \tag{2.43}\]
and
\[a\left(n,p,q,0\right)>\frac{\left(p-1\right)\left(n-2q-2\right)}{p},\quad\text{ i.e. }q<\frac{2\left(p-1\right)}{n-3p+2}, \tag{2.44}\]
respectively (observe that \(n-3p+2\geq 1\) since \(p\leq\left(n+1\right)/3\)). By observing that
\[\frac{n-3p+1}{p}\leq\frac{n-3p+1}{3p-n}\]
when \(p\leq\left(n+1\right)/3\), we obtain that (2.42) implies (2.43). On the other hand, it is easy to see that (2.42) and (2.44) simultaneously hold true for some \(q>1\) if and only if
\[\max\left(\frac{n-3p+1}{3p-n},1\right)<\frac{2\left(p-1\right)}{n-3p+2}. \tag{2.45}\]
A straightforward computation gives that (2.45) is equivalent to
\[p>\max\left(\frac{n+4}{5},\frac{4n+3-\sqrt{4n^{2}+12n-15}}{6}\right)=p_{n}.\]
By passing to the limit as \(\varepsilon\to 0\), we then obtain that if \(p>p_{n}\) and \(\varepsilon\) is small enough, then there exists \(q>1\) such that (2.37), (2.38) and (2.40) simultaneously hold true. This ends the proof of Theorem 1.1.
|
2302.01087
|
A Cluster Expansion Proof That The Stochastic Exponential Of A Brownian
Motion Is A Martingale
|
Let ${\psi}:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ be a smooth and
continuous real function and $\psi\in\mathrm{L}^{2}(\mathbb{R}^{+})$. Let
${B}(t)$ be a standard Brownian motion defined with respect to a probability
space $(\Omega,\mathscr{F},{\mathsf{P}})$ and where $d{B}(t)={\xi}(t)dt$ and
$t\in\mathbb{R}^{+}$. The process $\xi(t)$ is a Gaussian white noise with
expectation $\mathbf{\mathsf{E}}~\xi(t)=0$ and with covariance
${\mathsf{E}}~\xi(t)\xi(s)=\delta(t-s)$. The Dolean-Dades stochastic
exponential ${Z}(t)$ is the solution to the linear stochastic differential
equation describing a geometric Brownian motion such that
$d{Z}(t)=\psi(t){Z}(t)d{B}(t)=\psi(t){Z}(t)\xi(t)dt$. Using a cluster expansion
method, and the moment and cumulant generating functions for $\xi(t)$, it is
shown that ${Z}(t)$ is a martingale. The original Novikov criteria for ${Z}(t)$
being a true martingale are reproduced and exactly satisfied, namely that
\begin{align}
{\mathsf{E}}\mathrm{Z}(t)={\mathsf{E}}\exp\left(\int_{o}^{t}{\psi}(u)d{B}(u)
-\frac{1}{2}\int_{0}^{t}|{\psi}(u)|^{2}du\right)=1\nonumber \end{align}
provided that $\exp\big(\int_{0}^{t}|{\psi}(u)|^{2}du\big)<\infty$ for all
$t>0$. However, ${\mathsf{E}}\big[|{Z}(t)|^{p}\big]
=\exp(\tfrac{1}{2}p(p-1)\phi(t))$, if $\phi(t)=\int_{0}^{t}|\psi(u)|^{2}du$ is
monotone increasing and is a submartingale for all $p>1$.
|
Steven D Miller
|
2023-02-02T13:20:50Z
|
http://arxiv.org/abs/2302.01087v2
|
# A cluster expansion proof that the stochastic exponential of a Brownian motion is a martingale
###### Abstract.
Let \(\psi:\mathbb{R}^{+}\to\mathbb{R}^{+}\) be a smooth and continuous real function and \(\psi\in\mathbb{L}^{2}(\mathbb{R}^{+})\). Let \(B(t)\) be a standard Brownian motion defined with respect to a probability space \((\Omega,\mathscr{F},\textbf{P})\) and where \(dB(t)=\xi(t)dt\) and \(t\in\mathbb{R}^{+}\). The process \(\xi(t)\) is a Gaussian white noise with expectation \(\textbf{E}\ \xi(t)=0\) and with covariance \(\textbf{E}\ \xi(t)\xi(s)=\delta(t-s)\). The Dolean-Dades stochastic exponential \(Z(t)\) is the solution to the linear stochastic differential equation describing a geometric Brownian motion such that \(dZ(t)=\psi(t)Z(t)dB(t)=\psi(t)Z(t)\xi(t)dt\). Using a cluster expansion method, and the moment and cumulant generating functions for \(\xi(t)\), it is shown that \(Z(t)\) is a martingale. The original Novikov criteria for \(Z(t)\) being a true martingale are reproduced and exactly satisfied, namely that
\[\textbf{E}Z(t)=\textbf{E}\exp\left(\int_{o}^{t}\psi(u)dB(u)-\frac{1}{2}\int_{0 }^{t}|\psi(u)|^{2}du\right)=1\]
provided that \(\exp\big{(}\int_{0}^{t}|\psi(u)|^{2}du\big{)}<\infty\) for all \(t>0\). However, \(\textbf{E}\big{[}|Z(t)|^{p}\big{]}=\exp(\frac{1}{2}p(p-1)\phi(t))\), if \(\phi(t)=\int_{0}^{t}|\psi(u)|^{2}du\) is monotone increasing and is a submartingale for all \(p>1\).
## 1. **Introduction**
Let \(B(t)\) be a standard Brownian motion with respect to a probability space \((\Omega,\mathscr{F},\textbf{P})\), and let \(\psi:\mathbb{R}^{+}\to\mathbb{R}^{+}\) be smooth continuous real function [1]. A classical problem in stochastic analysis is to prove that the Dolean-Dades stochastic exponential
\[Z(t)=\exp\left(\int_{0}^{t}\psi(u)dB(u)-\frac{1}{2}\int_{0}^{t}| \psi(u)|^{2}du\right) \tag{1.1}\] \[Z(0)=1 \tag{1.2}\]
is a true martingale [2]. The DDSE is the exact solution of the geometric Brownian motion [1]
\[dZ(t)=\psi(t)Z(t)\ \xi(t)dt\equiv\psi(t)Z(t)\ dB(t) \tag{1.3}\]
where \(dB(t)=\xi(t)dt\) and \(\xi(t)\) is a Gaussian white noise. The solution has the Ito stochastic integral representation
\[Z(t)=1+\int_{0}^{t}\psi(u)Z(u)dB(u) \tag{1.4}\]
The explicit solution (1.1) is obtained [1] via the Ito expansion of \(\log Z(t)\). Proving that \(Z(t)\) is a true martingale is actually a somewhat difficult and subtle problem and is relevant to other theorems and results [2-7]. Ito integrals of the form (1.4) are not always martingales. The well-known necessary and sufficient conditions for \(Z(t)\) to be a true martingale are due to Novikov [2] and also Kazamaki [5]. The Novikov criteria are
\[\textbf{E}Z(t)=\textbf{E}\exp\left(\int_{o}^{t}\psi(u)dB(u)-\frac{1}{2}\int_{ 0}^{t}|\psi(u)|^{2}du\right)=1 \tag{1.5}\]
and the bound
\[\exp\left(\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)<\infty,\ \forall t>0 \tag{1.6}\]
or equivalently \(\frac{1}{2}\|\psi\|_{L_{2}(\mathbb{R})}^{2}=\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2} du<\infty\). The function \(\psi\) can also be a random process \(\psi(t,\omega)\) with respect to \(\omega\in\Omega\) with Novikov criteria [6]
\[\mathsf{E}Z(t)=\mathsf{E}\exp\left(\int_{o}^{t}\psi(u,\omega)dB(u,\omega)- \frac{1}{2}\int_{0}^{t}|\psi(u,\omega)|^{2}du\right)=1 \tag{1.7}\]
and
\[\mathsf{E}\left\{\exp\left(\int_{0}^{t}|\psi(u,\omega)|^{2}du\right)\right\}< \infty,\ \forall t>0 \tag{1.8}\]
In this note we only consider (1.5), and a short new proof is given to establish that \(Z(t)\) is a martingale. The exact criteria (1.5) and (1.6) are established, via a cluster-expansion method and utilising moment and cumulant-generating functions.
Geometric Brownian motion arises naturally in problems of stochastic exponential growth, which occurs when a quantity of interest grows exponentially but with a random multiplier and/or random waiting time between spurts of growth. Such processes are ubiquitous in both Nature and in human-created systems: bacterial growth in a sustaining medium; in nuclear and cellular fission; tissue growth in embryonic and cancer biology; in viral epidemics; in financial markets and bubbles, and the Black-Scholes option pricing model; internet growth and propagation of viral social media posts; population and extinction dynamics; Moore's law for computer processing power; and inflationary expansion of the very early Universe with random fluctuations [8-17]. Noise can either boost stochastic exponential growth or drive a population to extinction [8,11,12].
Most such models of random growth have focussed on the SDE for geometric Brownian motion since it can be solved exactly with a strong solution. Suppose a system evolves exponentially via the simple linear ODE \(dX(t)=\alpha X(t)dt\), so that \(X(t)=X(0)\exp(\alpha t)\). If \(\alpha>0\) then the system undergoes exponential growth and \(X(t)\to\infty\) as \(t\to\infty\), and if \(\alpha<0\) then the system exponentially decays or collapses so that \(X(t)\to 0\) as \(t\to\infty\). The system is stable or static for \(\alpha=0\). If the system is randomly perturbed by white noise (multiplicatively) then
\[dX(t)=\alpha X(t)dt+\psi(t)X(t)\xi(t)dt\equiv\alpha X(t)dt+\psi(t)X(t)dB(t) \tag{1.9}\]
A strong solution exists for this SDE [1]. Given any \(C^{2}\)-functional \(f(X(t))\), the Ito Lemma is
\[df(X(t))=\nabla_{X}f(X(t))dX(t)+\frac{1}{2}\nabla_{X}^{2}f(X(t))d[X,X](t)\] \[=\nabla_{X}f(X(t))dX(t)+\frac{1}{2}\nabla_{X}^{2}f(X(t))|\psi(t)|^{2}|X(t)|^{2}dt\] \[=\nabla_{X}f(X(t))\big{\{}\alpha X(t)dt+\psi(t)X(t)dB(t)\big{\}}+\frac{1}{2}\nabla_{X}^{2}f(X(t))|\psi(t)|^{2}|X(t)|^{2}dt \tag{1.10}\]
where \(\nabla_{X}=d/dX(t)\) and \(\nabla_{X}^{2}=d^{2}/dX(t)^{2}\). Using \(f(X(t))=\log X(t)\)
\[dU(t)\equiv d\log X(t)=\frac{1}{X(t)}dX(t)+\frac{1}{2}\left(-\frac{1}{|X(t)|^{2}}\right)|\psi(t)|^{2}|X(t)|^{2}dt\] \[=\frac{1}{X(t)}\big{\{}\alpha X(t)dt+\psi(t)X(t)dB(t)\big{\}}-\frac{1}{2}|\psi(t)|^{2}dt\] \[=\left(\alpha-\frac{1}{2}|\psi(t)|^{2}\right)dt+\psi(t)dB(t) \tag{1.11}\]
Then the solution is
\[X(t)=X(0)\exp(\alpha t)\exp\left(\int_{0}^{t}\psi(u)dB(u)-\frac{1}{2}\int_{0 }^{t}|\psi(u)|^{2}du\right)=\exp(\alpha t)Z(t) \tag{1.12}\]
The expectation is
\[\mathsf{E}X(t)=X(0)\exp(\alpha t)\mathsf{E}\exp\left(\int_{0}^{t}\psi(u)dB(u)- \frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)=\exp(\alpha t)\mathsf{E}Z(t) \tag{1.13}\]
which is \(\mathsf{E}X(t)=X(0)\exp(\alpha t)\) if \(\mathsf{E}Z(t)=1\). The martingale property of \(Z(t)\) is therefore desirable and ensures that the expectation does not blow up at any finite time \(t>0\).
## 2. Main theorem and proof utilising a cluster expansion method
### Preliminary definitions and lemmas
Prior to the proof, we first establish the following preliminary definitions and lemmas. The proof utilises a cluster expansion method [18-21].
**Definition 2.1**.: _White noise is (informally) defined as a zero-centred Gaussian process \(\xi\) with covariance \(\mathsf{E}\ \xi(t)\xi(s)=\delta(t-s)\) and expectation \(\mathsf{E}\xi(t)=0\). A scalar or inner product \(\langle\bullet,\bullet\rangle\) in \(L^{2}(\mathbb{R})\) can be defined as_
\[\langle\psi,\phi\rangle=\mathsf{E}\,\big\langle\psi,\xi\big\rangle\big\langle\phi,\xi\big\rangle=\iint\psi(s)\phi(t)\mathsf{E}\ \xi(s)\xi(t)dsdt=\iint\psi(s)\phi(t)\delta(t-s)dsdt \tag{2.1}\]
_Then_
\[\langle\psi,\psi\rangle=\mathsf{E}\,\big\langle\psi,\xi\big\rangle\big\langle\psi,\xi\big\rangle=\iint\psi(s)\psi(t)\mathsf{E}\ \xi(s)\xi(t)dsdt=\iint\psi(s)\psi(t)\delta(t-s)dsdt \tag{2.2}\]
_which is_
\[\langle\psi,\psi\rangle=\int|\psi(s)|^{2}ds=\|\psi\|_{L^{2}(\mathbb{R})}^{2} \tag{2.3}\]
_The noise \(\xi\) can be formulated as a Gaussian random process on any space of distributions containing \(\mathrm{L}^{2}(\mathbb{R})\). Stochastic integrals of the form \(\int\psi(s)\xi(s)ds\) are well defined if \(\psi\in\mathrm{L}^{2}(\mathbb{R})\). If \(\psi\) is an indicator function on \([0,t]\) then the process \(B(t)=\int_{0}^{t}\xi(s)ds\) is a Brownian motion and \(dB(t)=\xi(t)dt\)._
**Definition 2.2**.: _Given a time-ordered set \((t_{1},...t_{m})\in(0,T)\), with \(t_{1}<t_{2}<t_{3}<...<t_{m-1}<t_{m}\), the mth-order moments and cumulants for the white noise \(\xi(t)\) are given by_
\[\overrightarrow{\mathbb{T}}\mathsf{E}\ \xi(t_{1})\otimes... \otimes\xi(t_{m})=\overrightarrow{\mathbb{T}}\mathsf{E}\prod_{q=1}^{m}\xi(t_{ q})\] \[\overrightarrow{\mathbb{T}}\mathsf{C}\ \xi(t_{1})\otimes... \otimes\xi(t_{m})=\overrightarrow{\mathbb{T}}\mathsf{C}\prod_{q=1}^{m}\xi(t_{ q}) \tag{2.4}\]
_where \(\mathbb{T}\) is a 'time-ordering operator'. For example, if \(t_{1}<t_{2}<t_{3}\) and \(f(t)\) is a function of t then \(\overrightarrow{\mathbb{T}}f(t_{2})f(t_{1})f(t_{3})=f(t_{1})f(t_{2})f(t_{3})\). Since \(\xi(t)\) is Gaussian, it is defined entirely by its first two moments, and all cumulants of order \(m\geq 3\) vanish so that_
\[\left\{\overrightarrow{\mathbb{T}}\mathsf{C}\prod_{q=1}^{m}\xi(t_{q})\right\} _{m\geq 3}=0 \tag{2.5}\]
For example, at second order for a Gaussian process the binary cumulant is equivalent to the binary moment
\[\mathsf{C}\xi(t_{1})\xi(t_{2})=\mathsf{E}\ \xi(t_{1})\xi(t_{2})-\mathsf{E}\ \xi(t_{1})\mathsf{E}\ \xi(t_{2})=\mathsf{E}\xi(t_{1})\xi(t_{2})=\delta(t_{2}-t_{1}) \tag{2.6}\]
**Definition 2.3**.: _The moment generating function (MGF) \(\mathscr{M}[\xi(t)]\) and the cumulant-generating function (CGF) \(\mathscr{C}[\xi(t)]\) of the white noise \(\xi(t)\) are given by_
\[\mathscr{M}[\xi(t)]=\sum_{m=0}^{\infty}\frac{\beta^{m}}{m!}\int_{0} ^{t}...\int_{0}^{t_{m-1}}dt_{1}...dt_{m}\overrightarrow{\mathbb{T}}\,\mathsf{ E}\prod_{q=1}^{m}\ \xi(t_{q}) \tag{2.7}\] \[\mathscr{C}[\xi(t)]=\sum_{m=1}^{\infty}\frac{\beta^{m}}{m!}\int_{ 0}^{t}...\int_{0}^{t_{m-1}}dt_{1}...dt_{m}\overrightarrow{\mathbb{T}}\, \mathsf{C}\prod_{q=1}^{m}\xi(t_{q}) \tag{2.8}\]
_where \(\beta\) is an arbitrary real constant. It is important to note that the summation in (2.8) begins from \(m=1\) and not \(m=0\) and that_
\[\overrightarrow{\mathbb{T}}\,\mathsf{E}\prod_{q=1}^{m}\xi(t_{q})\neq \overrightarrow{\mathbb{T}}\prod_{q=1}^{m}\mathsf{E}\xi(t_{q}) \tag{2.9}\] \[\overrightarrow{\mathbb{T}}\,\mathsf{C}\prod_{q=1}^{m}\xi(t_{q}) \neq\overrightarrow{\mathbb{T}}\prod_{q=1}^{m}\mathsf{C}\xi(t_{q}) \tag{2.10}\]
_Equations (2.7) and (2.8) can also be written as_
\[\mathscr{M}[\xi(t)]=\sum_{m=0}^{\infty}\frac{\beta^{m}}{m!}\int \mathbf{D}_{m}[t_{1}...t_{m}]\overrightarrow{\mathbb{T}}\,\mathsf{E}\prod_{q =1}^{m}\xi(t_{q}) \tag{2.11}\] \[\mathscr{C}[\xi(t)]=\sum_{m=1}^{\infty}\frac{\beta^{m}}{m!}\int \mathbf{D}_{m}[t_{1}...t_{m}]\overrightarrow{\mathbb{T}}\,\mathsf{C}\prod_{q =1}^{m}\xi(t_{q}) \tag{2.12}\]
_where \(\int\mathbf{D}_{m}[t]=\int...\int dt_{1}...dt_{m}\) is a 'path integral'. Choosing \(\beta=+1\)_
\[\mathscr{M}[\xi(t)]=\sum_{m=0}^{\infty}\frac{1}{m!}\int\mathbf{D }_{m}[t_{1}...t_{m}]\overrightarrow{\mathbb{T}}\,\mathsf{E}\prod_{q=1}^{m}\xi (t_{q}) \tag{2.13}\] \[\mathscr{C}[\xi(t)]=\sum_{m=1}^{\infty}\frac{1}{m!}\int\mathbf{D }_{m}[t_{1}...t_{m}]\overrightarrow{\mathbb{T}}\,\mathsf{C}\prod_{q=1}^{m}\xi (t_{q}) \tag{2.14}\]
**Lemma 2.4**.: _The relation between the MGF and the CGF is_
\[\log\mathscr{M}[\xi(t)]=\mathscr{C}[\xi(t)] \tag{2.15}\]
_so that_
\[\mathscr{M}[\xi(t)]=\exp\left(\mathscr{C}[\xi(t)]\right) \tag{2.16}\]
_Hence_
\[\sum_{m=0}^{\infty}\frac{1}{m!}\int\mathbf{D}_{m}[t_{1}...t_{m}]\overrightarrow{\mathbb{T}}\,\mathsf{E}\prod_{q=1}^{m}\xi(t_{q}) \tag{2.17}\] \[=\exp\left(\sum_{m=1}^{\infty}\frac{1}{m!}\int\mathbf{D}_{m}[t_{1}...t_{m}]\overrightarrow{\mathbb{T}}\,\mathsf{C}\prod_{q=1}^{m}\xi(t_{q})\right) \tag{2.18}\]
**Proposition 2.5**.: _Given \(\psi\in\mathrm{L}^{2}(\mathbb{R})\) and the white noise \(\xi(t)\), then for all \(t>0\) define the Gaussian noise or random function_
\[\Xi(t)=\psi(t)\xi(t) \tag{2.19}\]
_Then \(\mathsf{E}\Xi(t)=0\) and_
\[\mathsf{E}\ \Xi(t)\Xi(s)=\psi(t)\psi(s)\mathsf{E}\ \xi(t)\xi(s)=\psi(t)\psi(s) \delta(t-s) \tag{2.20}\]
**Lemma 2.6**.: _Given \(\Xi(t)=\psi(t)\xi(t)\) then for any \(t_{2}>t_{1}\) and \((t_{1},t_{2})\in(0,t)\)_
\[\langle\psi,\psi\rangle\equiv\int_{0}^{t}\int_{0}^{t_{1}}\mathsf{E}\ \Xi(t_{1})\Xi(t_{2})dt_{1}dt_{2}=\int_{0}^{t}|\psi(t_{1})|^{2}dt_{1}=\|\psi\|_{L_{2}(\mathbb{R}^{+})}^{2} \tag{2.21}\]
Proof.: Using the sifting property of the delta function
\[\int_{0}^{t}\int_{0}^{t_{1}}\mathsf{E}\ \Xi(t_{1})\Xi(t_{2})dt_{1}dt_{2}\] \[=\int_{0}^{t}\int_{0}^{t_{1}}\psi(t_{1})\psi(t_{2})\mathsf{E}\ \xi(t_{1})\xi(t_{2})dt_{1}dt_{2}\] \[=\int_{0}^{t}\psi(t_{1})\bigg{[}\int_{0}^{t_{1}}\psi(t_{2})\delta(t_{2}-t_{1})dt_{2}\bigg{]}dt_{1}\] \[=\int_{0}^{t}\psi(t_{1})\psi(t_{1})dt_{1}=\int_{0}^{t}|\psi(t_{1})|^{2}dt_{1}\equiv\|\psi\|_{L_{2}(\mathbb{R}^{+})}^{2} \tag{2.22}\]
**Proposition 2.7**.: _Given \(\Xi(t)=\psi(t)\xi(t)\) then the Doléans-Dade stochastic exponential \(Z(t)\) can be expressed as_
\[Z(t)=\exp\left(\int_{0}^{t}\psi(u)dB(u)-\frac{1}{2}\int_{0}^{t}| \psi(u)|^{2}du\right)\] \[=\exp\left(\int_{0}^{t}\psi(u)\xi(u)du-\frac{1}{2}\int_{0}^{t}| \psi(u)|^{2}du\right)\] \[=\exp\left(\int_{0}^{t}\Xi(u)du-\frac{1}{2}\int_{0}^{t}|\psi(u)| ^{2}du\right) \tag{2.23}\]
### Main theorem
The main theorem and proof are now as follows:
**Theorem 2.8**.: _The stochastic exponential \(Z(t)\) is a true martingale iff_
\[\mathsf{E}Z(t)=\mathsf{E}\exp\left(\int_{0}^{t}\psi(u)dB(u)-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)=1 \tag{2.24}\]
_and requiring_
\[\exp\left(\frac{1}{2}\|\psi\|_{L^{2}(\mathbb{R}^{+})}^{2}\right)=\exp\left(\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)<\infty,\ \forall t>0 \tag{2.25}\]
Proof.: \[\mathsf{E}Z(t)=\mathsf{E}\exp\left(\int_{0}^{t}\psi(u)dB(u)-\frac{ 1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\] \[=\mathsf{E}\exp\left(\int_{0}^{t}\psi(u)\xi(u)du-\frac{1}{2}\int _{0}^{t}|\psi(u)|^{2}du\right)\] \[=\mathsf{E}\exp\left(\int_{0}^{t}\Xi(u)du-\frac{1}{2}\int_{0}^{t} |\psi(u)|^{2}du\right)\] \[=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\mathsf{E }\exp\left(\int_{0}^{t}\Xi(u)du\right)\]
\[=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\mbox{ \sf E}\sum_{m=0}^{\infty}\frac{1}{m!}\int_{0}^{t}dt_{1}...\int_{0}^{t_{m-1}}dt_ {m}\prod_{q=1}^{m}\Xi(t_{q})\] \[\equiv\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right) \mbox{\sf E}\sum_{m=0}^{\infty}\frac{1}{m!}\int\mbox{\bf D}_{m}[t_{1}...t_{m} ]\prod_{q=1}^{m}\Xi(t_{q})\] \[=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\sum_{m=0 }^{\infty}\frac{1}{m!}\int\mbox{\bf D}_{m}[t_{1}...t_{m}]\mbox{\sf E}\prod_{q=1 }^{m}\Xi(t_{q}) \tag{2.26}\]
However, from (2.11), the MGF is
\[\mathscr{M}[\Xi(t)]=\sum_{m=0}^{\infty}\frac{1}{m!}\int\mbox{\bf D}_{m}[t_{1}...t_{m}]\mbox{\sf E}\prod_{q=1}^{m}\Xi(t_{q}) \tag{2.27}\]
so (2.26) becomes
\[\mbox{\sf E}Z(t)=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right) \mathscr{M}[\Xi(t)] \tag{2.28}\]
Now using Lemma (2.4)
\[\mathscr{M}[\Xi(t)]=\exp(\mathscr{C}[\Xi(t)]) \tag{2.29}\]
giving
\[\mbox{\sf E}Z(t)=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du \right)\mathscr{M}[\Xi(t)]\] \[=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\exp \left(\mathscr{C}[\Xi(t)]\right)\] \[=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\exp \left(\sum_{m=1}^{\infty}\frac{1}{m!}\int\mbox{\bf D}_{m}[t_{1}...t_{m}]\mbox{ \sf C}\overrightarrow{\mathbb{T}}\prod_{q=1}^{m}\Xi(t_{q})\right)\] \[=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\exp \left(\sum_{m=1}^{\infty}\frac{1}{m!}\int\mbox{\bf D}_{m}[t_{1}...t_{m}]\mbox{ \sf C}\overrightarrow{\mathbb{T}}\prod_{q=1}^{m}\psi(t_{q})\xi(t_{q})\right) \tag{2.30}\]
Now since \(\xi(t)\) is a Gaussian process, all cumulants of order \(m\geq 3\) vanish so that
\[\mbox{\sf C}\overrightarrow{\mathbb{T}}\left\{\prod_{q=1}^{m}\Xi(t_{q}) \right\}_{m\geq 3}=\mbox{\sf C}\overrightarrow{\mathbb{T}}\left\{\prod_{q=1}^{m} \psi(t_{q})\xi(t_{q})\right\}_{m\geq 3}=0 \tag{2.31}\]
This leaves
\[\mathsf{E}Z(t)=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\times\exp\left(\int\mathbf{D}_{1}[t_{1}]\,\psi(t_{1})\,\mathsf{C}\big\{\xi(t_{1})\big\}+\frac{1}{2}\int\mathbf{D}_{2}[t_{1},t_{2}]\,\psi(t_{1})\psi(t_{2})\,\mathsf{C}\big\{\xi(t_{1})\xi(t_{2})\big\}\right)\] \[\equiv\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\times\exp\left(\int_{0}^{t}dt_{1}\,\psi(t_{1})\,\mathsf{C}\,\xi(t_{1})+\frac{1}{2}\int_{0}^{t}\!\int_{0}^{t_{1}}dt_{1}dt_{2}\,\psi(t_{1})\psi(t_{2})\,\mathsf{C}\,\xi(t_{1})\xi(t_{2})\right)\]
\[\equiv\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\times\exp\left(\int_{0}^{t}dt_{1}\,\psi(t_{1})\underbrace{\mathsf{E}\,\{\xi(t_{1})\}}_{=0}+\frac{1}{2}\int_{0}^{t}\!\int_{0}^{t_{1}}dt_{1}dt_{2}\,\psi(t_{1})\psi(t_{2})\,\mathsf{E}\,\xi(t_{1})\xi(t_{2})\right)\] \[\equiv\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\exp\left(\frac{1}{2}\int_{0}^{t}\!\int_{0}^{t_{1}}dt_{1}dt_{2}\,\psi(t_{1})\psi(t_{2})\,\mathsf{E}\,\xi(t_{1})\xi(t_{2})\right)\] \[\equiv\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\exp\left(\frac{1}{2}\int_{0}^{t}\!\int_{0}^{t_{1}}dt_{1}dt_{2}\,\psi(t_{1})\psi(t_{2})\,\delta(t_{2}-t_{1})\right)\] \[=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\exp\left(\frac{1}{2}\int_{0}^{t}dt_{1}\,\psi(t_{1})\left[\int_{0}^{t_{1}}dt_{2}\,\psi(t_{2})\delta(t_{2}-t_{1})\right]\right)\] \[=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\exp\left(\frac{1}{2}\int_{0}^{t}dt_{1}\,\psi(t_{1})\psi(t_{1})\right)\] \[=\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\exp\left(\frac{1}{2}\int_{0}^{t}|\psi(t_{1})|^{2}dt_{1}\right)\] \[\equiv\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\exp\left(\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\] \[\equiv\exp\left(-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du+\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)=1 \tag{2.32}\]
iff
\[\exp\left(\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)<\infty \tag{2.33}\]
which are the Novikov criteria. If for some function \(\psi(t)\) there exists \(T>0\) such that \(\frac{1}{2}\int_{0}^{T}|\psi(u)|^{2}du=\infty\), then \(\exp\left(\frac{1}{2}\int_{0}^{T}|\psi(u)|^{2}du\right)=\infty\) while \(\exp\left(-\frac{1}{2}\int_{0}^{T}|\psi(u)|^{2}du\right)=0\), the product above is of the indeterminate form \(0\cdot\infty\), and \(\mathsf{E}Z(T)=1\) can no longer be concluded. Hence \(Z(t)\) is a martingale under these criteria and the proof is complete.
As a corollary, it follows easily that \(\mbox{\sf M}(t,p)=\mbox{\sf E}[|Z(t)|^{p}]\) is a submartingale for all \(p>1\) if \(\phi(t)=\int_{0}^{t}|\psi(u)|^{2}du\) is monotone increasing with t.
**Corollary 2.9**.: _Given \(\mbox{\sf E}Z(t)=1\) it follows that_
\[\mbox{\sf M}(t,p)=\mbox{\sf E}[|Z(t)|^{p}]=\exp\left(\frac{1}{2}p(p-1)\phi(t)\right) \tag{2.34}\]
_is a submartingale for all \(p>1\), if \(\phi(t)=\int_{0}^{t}|\psi(u)|^{2}du\) is bounded but monotone increasing in \(t\)._
Proof.: \[|Z(t)|^{p}=\left|\exp\left(\int_{0}^{t}\psi(u)dB(u)-\frac{1}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\right|^{p}\] \[=\exp\left(p\int_{0}^{t}\psi(u)dB(u)-\frac{p}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\] \[=\exp\left(-\frac{p}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\exp\left(p\int_{0}^{t}\psi(u)dB(u)\right)\] (2.35)
Then
\[\operatorname{\boldsymbol{\mathsf{M}}}(t,p)= \operatorname{\boldsymbol{\mathsf{E}}}[|Z(t)|^{p}]=\exp\left(-\frac{ p}{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\operatorname{\boldsymbol{\mathsf{E}}}\exp \left(p\int_{0}^{t}\psi(u)dB(u)\right)\] \[=\exp\left(-\frac{1}{2}p\int_{0}^{t}|\psi(u)|^{2}du\right)\exp \left(\frac{1}{2}p^{2}\int_{0}^{t}|\psi(u)|^{2}du\right)\] \[=\exp\left(\frac{1}{2}p(p-1)\int_{0}^{t}|\psi(u)|^{2}du\right)= \exp\left(\tfrac{1}{2}p(p-1)\phi(t)\right) \tag{2.36}\]
If \(\phi(t)>\phi(t^{\prime})\) for all \(t>t^{\prime}\) then \(\operatorname{\boldsymbol{\mathsf{M}}}(t,p)>\operatorname{\boldsymbol{\mathsf{ M}}}(t^{\prime},p)\) so that \(\operatorname{\boldsymbol{\mathsf{M}}}(t,p)\) is monotone increasing. Hence, \(\operatorname{\boldsymbol{\mathsf{M}}}(t,p)\) is a submartingale on \(\mathbb{R}^{+}\). If \(p=1\) then \(\operatorname{\boldsymbol{\mathsf{M}}}(t,1)=\operatorname{\boldsymbol{\mathsf{ E}}}Z(t)=1\).
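As an informal numerical check of (2.36) (outside the formal argument), note that for deterministic \(\psi\) the Ito integral \(\int_{0}^{t}\psi(u)dB(u)\) is Gaussian with mean zero and variance \(\phi(t)\), so \(|Z(t)|^{p}\) can be sampled directly and compared against the closed form. A minimal sketch, with an arbitrary illustrative choice of \(\psi\):

```python
# Check of (2.36): E|Z(t)|^p = exp(p(p-1) phi(t)/2), with phi(t) = int_0^t |psi(u)|^2 du.
# For deterministic psi, int_0^t psi dB ~ N(0, phi(t)), so Z(t) can be sampled directly.
# psi(u) = exp(-u) is an arbitrary illustrative choice.
import numpy as np

rng = np.random.default_rng(1)
t, p = 2.0, 3.0
phi = 0.5 * (1.0 - np.exp(-2.0 * t))            # int_0^t exp(-2u) du
I = rng.normal(0.0, np.sqrt(phi), 1_000_000)    # samples of int_0^t psi dB
Z = np.exp(I - 0.5 * phi)
print(np.mean(np.abs(Z) ** p))                  # Monte Carlo estimate
print(np.exp(0.5 * p * (p - 1) * phi))          # closed form (2.36)
```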
|
2307.04721
|
Large Language Models as General Pattern Machines
|
We observe that pre-trained large language models (LLMs) are capable of
autoregressively completing complex token sequences -- from arbitrary ones
procedurally generated by probabilistic context-free grammars (PCFG), to more
rich spatial patterns found in the Abstraction and Reasoning Corpus (ARC), a
general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern
completion proficiency can be partially retained even when the sequences are
expressed using tokens randomly sampled from the vocabulary. These results
suggest that without any additional training, LLMs can serve as general
sequence modelers, driven by in-context learning. In this work, we investigate
how these zero-shot capabilities may be applied to problems in robotics -- from
extrapolating sequences of numbers that represent states over time to complete
simple motions, to least-to-most prompting of reward-conditioned trajectories
that can discover and represent closed-loop policies (e.g., a stabilizing
controller for CartPole). While difficult to deploy today for real systems due
to latency, context size limitations, and compute costs, the approach of using
LLMs to drive low-level control may provide an exciting glimpse into how the
patterns among words could be transferred to actions.
|
Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess, Montserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, Andy Zeng
|
2023-07-10T17:32:13Z
|
http://arxiv.org/abs/2307.04721v2
|
# Large Language Models as General Pattern Machines
###### Abstract
We observe that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences - from arbitrary ones procedurally generated by probabilistic context-free grammars (PCFG), to more rich spatial patterns found in the Abstraction and Reasoning Corpus (ARC), a general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning. In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics - from extrapolating sequences of numbers that represent states over time to complete simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.
large language models, in-context learning, language for robotics
## 1 Introduction
Large language models (LLMs) are trained to absorb the myriad of patterns that are woven into the structure of language. They not only exhibit various out-of-the-box capabilities such as generating chains of reasoning [1; 2], solving logic problems [3; 4], and completing math puzzles [5], but also have been applied in robotics where they can serve as high-level planners for instruction following tasks [6; 7; 8; 9; 10; 11; 12], synthesize programs representing robot policies [13; 14], design reward functions [15; 16], and generalize user preferences [17]. These settings rely on the few-shot in-context examples in text prompts that specify the domain and input-output format for their tasks [18; 19], and remain highly semantic in their inputs and outputs.
A key observation of our work - and perhaps contrary to the predominant intuition - is that an LLM's ability to represent, manipulate, and extrapolate _more abstract, nonlinguistic_ patterns may allow them to serve as basic versions of _general pattern machines_. To illustrate this idea, consider the Abstract Reasoning Corpus [20], a general AI benchmark that contains collections of 2D grids with patterns that evoke abstract concepts (e.g., infilling, counting, and rotating shapes). Each problem provides a small number of input-output examples, followed by test input(s) for which the objective is to predict the corresponding output. Most methods (based on program synthesis) are manually engineered with domain-specific languages [21; 22; 23; 24] or evaluated on simplified extensions or subsets of the benchmark [25; 26; 27]. End-to-end machine learning methods only solve a handful of test problems [28]; however, our experiments indicate that LLMs in-context prompted in the style of ASCII art (see Fig. 1) can correctly predict solutions for up to 85 (out of 800) problems - exceeding some of the best performing methods to date [21; 22; 24], without additional model training or fine-tuning. Surprisingly, we find this extends beyond ASCII numbers, and
Figure 1: LLMs out-of-the-box can complete (highlighted) complex ARC patterns [20] expressed in arbitrary tokens.
that when they are replaced with a mapping to _randomly sampled tokens_ in the vocabulary, LLMs can still generate valid solutions. These results suggest an intriguing insight: that LLMs may exhibit more general capabilities of representing and extrapolating symbolic patterns, invariant to the specific tokens involved. This is in-line with - and complementary to - recent observations that using random or abstract label mappings for in-context classification retains some performance compared to ground-truth labels [29, 30]. We hypothesize that the capabilities that drive pattern reasoning on the ARC may allow general pattern manipulation at various levels of abstraction useful for robotics and sequential decision making [31, 32], wherein a diverse array of problems involve patterns that may be difficult to reason about precisely in words. For example, a procedure for spatially rearranging tabletop objects could be represented using arbitrary tokens (see Fig. 2). As another example, optimizing a trajectory with respect to a reward function can be framed as extrapolating a sequence consisting of state and action tokens with increasing returns.
Orthogonal and complementary to efforts that develop multi-task policies by pre-training on large amounts of robot data [33], or robotics foundation models [34] that can be fine-tuned for downstream tasks [35, 36, 37], our goal is instead to (i) assess the zero-shot capabilities that LLMs may already contain to perform some degree of general pattern manipulation, and (ii) investigate how these abilities can be used in robotics. These capabilities are certainly _not_ sufficient to replace specialized algorithms; nonetheless, they are useful to characterize, and doing so may help inform priorities for training generalist models in robotics.
We assess LLMs as pattern machines categorized into three areas: sequence transformation, sequence completion, and sequence improvement (see Fig. 2). First, we show that LLMs are capable of generalizing certain sequence transformations of increasing complexity with a degree of token invariance, and posit that this can carry over to spatial reasoning capabilities in robotic tasks. Next, we assess LLMs' ability to complete patterns from simple functions (e.g., sinusoids) and show this can be applied to robotic tasks like extending a wiping motion from kinesthetic demonstrations, or drawing patterns on a whiteboard. The combination of in-context sequence transformation and extrapolation further enables LLMs to do basic forms of sequence improvement. We show that providing reward-labeled trajectories as context, coupled with online interaction, can enable an LLM-based agent to learn to navigate through a small grid, discover a stabilizing CartPole controller, and optimize simple trajectories via human-in-the-loop "clicker" reward training. Code, benchmarks, and videos will be made available at [https://general-pattern-machines.github](https://general-pattern-machines.github).
Fig. 2: Pre-trained LLMs out-of-the-box may serve as basic versions of _general pattern machines_ that can recognize and complete sequences of numeric or arbitrary (symbolic) tokens expressing abstract problems in robotics and sequential decision-making. Experiments show that to an extent, LLMs can in-context learn (i) sequence transformations (e.g., to reason over spatial rearrangements of symbols, for dynamics modeling and next state prediction on downsampled images), (ii) completion of simple functions (e.g., to extrapolate kinesthetic demonstrations), or (iii) meta-patterns to improve return-conditioned policies (e.g., to discover oscillatory behaviors to stabilize a CartPole).
## 2 Related Work
Pattern reasoning by prompting pre-trained LLMs with few-shot input-output examples is driven by in-context learning [38; 39]. The examples serve as a form of task specification, where the model is expected to complete further instances of the task by simply predicting what comes next. In-context learning extends the concept of "task prefixes" (predefined task-specific token sequences e.g., [40]), but swapped in with actual task examples instead. Brown et al. [39] observes that it improves (in particular, out-of-distribution generalization) from scaling model size. This is in contrast to scaling models for pre-training + fine-tuning, which has been shown to not necessarily improve OOD generalization on language tasks [41]. Nonetheless, despite compelling OOD generalization abilities, in-context learning still comes at a cost, as it continues to lag behind in terms of absolute performance on benchmarks compared to task-specific fine-tuning [38].
In-context learning is explicitly trained for by packing examples from the same task and dataset into the same context buffer that is fed as input to an LLM with an unsupervised autoregressive objective [39], sometimes referred to as meta-training. However, it can also emerge implicitly from training on unsupervised datasets where tokens exhibit a Zipfian distribution [42] on Transformer architectures, but not necessarily with recurrent architectures (e.g., vanilla RNNs or LSTMs) [42]. Other works have shown that in-context learning with Transformers can learn simple function classes on par with least squares [43; 44], and can generalize to a seemingly unbounded number of tasks (when trained on tasks from the same task family) better than multitask MLPs [45], with Bayesian interpretations of this phenomenon [46][47].
In-context learning occurs during inference without gradient updates to the weights of the model, and can be differentiated from in-weights learning, which relies on information stored in the weights of the model during LLM training [48] (and can be useful for completion tasks such as "Abraham Lincoln was born in _"). Chan et al. [48] observes that generalization of in-context learning can be characterized as more "exemplar-based" (on the basis of similarity to in-context examples [49]), as opposed to generalization of in-weights learning which tends to be more "rule-based" (on the basis of minimal features that support category boundaries in the training data [50]). The vast capabilities of LLMs [39; 51; 52; 53; 54] have been driven by a combination of both forms of learning. In this work, we are particularly interested in in-context learning, and (depending on the task) using the semantic priors of numeric tokens (e.g., "0" to "100") to drive new capabilities such as in-context sequence completion (Section 5) and improvement (Section 6).
LLMs have been applied across a number of areas in robotics - most recently in decomposing high-level task domain descriptions in natural language to mid-level step-by-step plans [6; 7; 55; 56; 57; 58], robot code [13; 17; 14; 59], and planning domain definition languages [10]. These methods leverage the semantic priors stored in LLMs to compose new plans or parameterize primitive APIs, but whether LLMs can directly influence control (e.g., at the level of trajectories) in a zero-shot manner remains an open problem. As a reaction to this, we investigate how the pattern reasoning capabilities of LLMs may drive various control tasks, to extend or optimize low-level action sequences. While it is possible to explicitly train models for these capabilities [60; 61; 62; 63], this work instead focuses on the inherent abilities of LLMs out-of-the-box, which may have downstream implications for the role of language pre-training for building generalist embodied AI systems. Our findings may also benefit domains where data collection is expensive or difficult to scale. Closely related to our work is Brooks et al. [64], which uses an LLM to represent a rollout-policy and world-model in-context, and then uses model-based Q-learning to drive policy improvement across a collection of toy environments with linguistic representations. Our use of LLMs for sequence improvement can be seen as a simplification of in-context policy iteration that supports both learning from demonstrations and in-context RL, driven by the generality of LLMs as pattern machines.
## 3 Language Models as General Pattern Machines
The capacity of LLMs to act as general pattern machines is driven by their ability to perform in-context learning on sequences of numeric or arbitrary tokens. An LLM typically represents sequence modeling autoregressively, with a decoder-only Transformer [65], by factorizing the probability of a sequence \(x\), which is a sequence of symbols \((s_{1},\...,\ s_{n})\), into the product of conditional probabilities
\(\prod_{i=1}^{n}\!p(s_{i}|s_{1},...,s_{i-1})\). To perform in-context learning, the model can be conditioned with a prompt that provides the initial tokens in the sequence \(s_{1:k}\!=\!(s_{1},...,s_{k})\) and uses the model to complete \(s_{k+1:n}\).
The adaptability of in-context learning lies in the amount of flexibility that can be packed into \(s_{1:k}\) - this prompt sequence can itself contain many sequences, each an input-output pair, and perhaps additional task conditioning [38, 29]. Specifically, a model can in-context learn to complete a prompt which is a set of \(N\) examples \(s_{1:k}\!=\!(x^{1},x^{2},...,x^{N})\) where each \(x^{i}\) is a variable-length sequence \((s^{i}_{1},s^{i}_{2},...,s^{i}_{m^{i}})\).
Rather than investigating in-context learning with natural language tasks [39], in this work we are interested in investigating more abstract notions of non-linguistic patterns. The following sections evaluate these capabilities across LLMs, and show how they can be used in robotics. By varying the notion of what each \(x^{i}\) should be, we can characterize in-context pattern learning capabilities into the following 3 categories.
* **Sequence Transformation** (Section 4): each \(x^{1},...,x^{N-1}\) is a sequence-to-sequence input-output pair; i.e., \(x^{i}\!=\!(x^{i}_{\text{input}}\!,\!x^{i}_{\text{output}})\), each subsequence of variable length, and \(x^{N}\) is the query input \((x^{N}_{\text{input}})\).
* **Sequence Completion** (Section 5): rather than containing input-output pairs, and rather than containing many examples of different sequences, the prompt \(x\!=\!(s_{1},...,\!s_{k})\) corresponds to discrete samples from a single function, e.g., of the form \(s_{i}\!=\!a\!\cdot\!\sin(bi)\), which can be extrapolated.
* **Sequence Improvement** (Section 6): each \(x^{1},...,x^{N-1}\) is a prior version of a sequence (e.g., a trajectory, optionally prefixed by its total reward), and the model is queried for an improved version \(x^{N}\) - this process can be iterative and applied to a variety of formulations, e.g., offline trajectory optimization or online in-context reinforcement learning.
## 4 Sequence Transformation
LLMs are capable of in-context learning the distribution of functions that represent sequence transformations by completing abstract patterns observed among examples of input-output sequences \(x^{i}\!=\!(x^{i}_{\text{input}}\!,\!x^{i}_{\text{output}})\) of arbitrary tokens, each drawn from a fixed alphabet \(\mathcal{A}\). For example, suppose that we are given a string of input-output examples such as " 5 3 0, 3 5; 7 6 1, 6 7; 9 2 3, 2 9; 4 8 5,". Here \(\mathcal{A}\) consists of tokens that represent space-prefixed digits 0-9, a comma token to separate inputs from outputs, and a semi-colon token to delineate examples from each other. A general pattern machine should infer the completion " 8 4" by recognizing that the pattern is to swap the first 2 tokens, then remove the 3rd.
We use the ARC benchmark [20] to evaluate LLMs on such sequence transformations, whereby token patterns are substantially more complex, covering a wide range of abstract spatial tasks: infilling, counting, translating and rotating shapes, etc. Each task comes with several input-output examples (3.3 on average), and 1-3 test inputs which can be represented as 2D grids. Sizes between inputs and outputs may differ and are not provided beforehand, thereby adding to the difficulty of applying standard machine learning algorithms, which typically assume fixed size. Autoregressive LLMs can be used for the ARC by flattening the grids and predicting each new output grid item in row-major order, which naturally supports variable length outputs. While LLMs are not originally trained for rasterizing spatial outputs in this way, we hypothesize that a general pattern machine would be capable of implicitly recognizing the long-range dependencies between rows (using positional encoding as a bias [67]) to pick up patterns that extend across the 2nd dimension.
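The following is a minimal sketch (not the released code) of how such grids can be serialized into a row-major prompt; the space-separated single-digit cells and the comma/semicolon delimiters are illustrative assumptions rather than the exact format used in the experiments.

```python
# Sketch: serialize ARC-style input/output grids into a flat, row-major prompt string.
# Space-separated single digits keep each cell as its own token for most tokenizers.
def grid_to_tokens(grid):
    return " ".join(str(cell) for row in grid for cell in row)

def build_prompt(examples, query_input):
    # examples: list of (input_grid, output_grid) pairs; query_input: grid to complete
    parts = [f"{grid_to_tokens(i)}, {grid_to_tokens(o)}" for i, o in examples]
    parts.append(f"{grid_to_tokens(query_input)},")
    return "; ".join(parts)

examples = [([[0, 1], [1, 0]], [[1, 0], [0, 1]])]    # toy "invert" task, not an ARC problem
print(build_prompt(examples, [[1, 1], [0, 0]]))
# -> "0 1 1 0, 1 0 0 1; 1 1 0 0,"
```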
**Result: ARC benchmark.** Our experiments in Table 1 show that LLMs (PaLM, InstructGPT series in acronyms d1 - d3) prompted with input grids represented as tokens drawn from an alphabet of digits, can correctly infer solutions for up to 85 problems. Surprisingly, this outperforms a number of recent systems [21, 24, 22] based on program synthesis that use manually engineered domain-specific languages (DSLs).
\begin{table}
\begin{tabular}{l c} \hline \hline Method & Total (of 800) \\ \hline (d3) text-davinci-003 & **85** \\ (d3) w/ random \(\mathcal{A}\) & \({}^{\dagger}\)44\(\pm\)6 \\ (d2) text-davinci-002 [51] & 64 \\ (p) PaLM [53, 54] & 42 \\ (d1) text-davinci-001 [39] & 11 \\ (d1) finetuned & 9 \\ \hline Ainoson et al., 2023 [23] & \({}^{**}\)130 \\ Kaggle 1st Place, 2022 & \({}^{*}\)64 \\ Xu et al., 2022 [22] & \({}^{*}\)57 \\ Alford et al., 2021 [24] & 35 \\ Ferre et al., 2021 [21] & 32 \\ \hline \hline \end{tabular}
\end{table}
Table 1: LLMs out-of-the-box can solve a non-trivial number of problems on the ARC, competitive with the best existing methods using hand-crafted domain-specific languages [21, 24, 22].
While LLMs have yet to surpass brute-force search [23] to compose functions from a handcrafted API of grid operators, LLMs are perhaps the best performing generalist method that exists today. (We address the important caveat that parts of the ARC may be present in the training data of LLMs later in this section.)
**Observation: consistent tokenization matters.** The ARC can be found among the suite of tasks in BIG-Bench [68], but has often been overlooked since many language models appear to perform poorly (near or at zero performance). We observe this occurs due to the formatting of the benchmark, where grid elements are represented as neighboring characters in a string i.e., "8686" (instead of " 8 6 8 6"). While subtle, this difference is enough for certain Byte-Pair Encoding (or SentencePiece) tokenizers [69, 70] (that do not tokenize per digit) to group together multiple grid elements ("8" and "6") into a single token ("86") which maps to a different token embedding altogether in the vocabulary. This causes inconsistencies with how the patterns are expressed at the token level. For example, given a task expressed in a string "8686, 6868; 7979," if the LLM tokenizer groups together pairs of digits 86, 68, 79, respectively, then the sequential inductive patterns of the task (to swap and repeat individual digits) is lost. A simple work-around is to directly pass token indices or embeddings to the language model, or use token alphabets unlikely to be grouped by the tokenizer. This work-around generalizes to other pattern manipulation tasks beyond the ARC; in general, it is important to tokenize in a manner that is consistent with the pattern being represented.
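A minimal sketch of this check is below, using tiktoken's GPT-2 encoding purely as an example of a BPE tokenizer; the exact groupings depend on the vocabulary of the model actually being prompted.

```python
# Sketch: inspect how a BPE tokenizer splits grid strings.
import tiktoken

enc = tiktoken.get_encoding("gpt2")
for s in ["8686,6868;7979,", " 8 6 8 6, 6 8 6 8; 7 9 7 9,"]:
    ids = enc.encode(s)
    print(repr(s), "->", [enc.decode([i]) for i in ids])
# Unspaced digits may be merged into multi-digit tokens (e.g. "86"), which breaks the
# per-cell structure of the pattern; space-prefixed digits are typically kept separate.
```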
**Observation: token mapping invariance.** The hypothesis that LLMs can serve as general pattern machines stems from the observation that they can surprisingly still solve a non-trivial number of ARC problems using alphabets \(\mathcal{A}\) sampled randomly from the LLM's token vocabulary. For instance, given a particular alphabet: { 8 \(\mapsto\) falls, 6 \(\mapsto\) +#, 7 \(\mapsto\) UI, 9 \(\mapsto\) Chev, 3 \(\mapsto\) \(\mathbb{R}\), 2 \(\mapsto\) 2010}, a pattern machine of sufficient proficiency can be expected to complete the prompt "falls +# falls +#, +# falls +# falls; UI Chev UI Chev, Chev UI Chev UI; \(\mathbb{R}\) 2010 \(\mathbb{R}\) 2010," by predicting " 2010 \(\mathbb{R}\) 2010 \(\mathbb{R}\)". For example, text-davinci-003 [51, 39] with the following mapping \(\mathcal{A}\!=\!\{\ 0\ \mapsto\ \texttt{offence},\ 1\ \mapsto\ \texttt{Subject},\ 2\ \mapsto\ \texttt{Lub},\ 3\ \mapsto\ \texttt{Fail},\ 4\ \mapsto\ \texttt{Chev},\ 5\ \mapsto\ \texttt{symb},\ 6\ \mapsto\ \texttt{swung},\ 7\ \mapsto\ \texttt{U1},\ 8\ \mapsto\texttt{escalate},\ 9\ \mapsto\ \texttt{Chromebook}\}\) solves 52 ARC problems, and across 5 different random alphabets solves an average of 43.6 problems. Interestingly, we find that token mapping invariance holds to an extent on simple pattern transformations for randomly sampled _embeddings_ as well (i.e., such that embeddings are not associated with any token in the vocabulary, see Appendix).
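A minimal sketch of constructing such a remapped prompt, reusing the example alphabet quoted above on a toy reverse-the-sequence task (not an actual ARC problem):

```python
# Sketch: relabel digit tokens with arbitrary vocabulary words to probe token-mapping invariance.
# The alphabet is the example mapping quoted in the text; the prompt is a toy task.
alphabet = {str(d): w for d, w in enumerate(["offence", "Subject", "Lub", "Fail", "Chev",
                                             "symb", "swung", "UI", "escalate", "Chromebook"])}

def remap(prompt, alphabet):
    # replace each digit character with its word; spaces and delimiters pass through unchanged
    return "".join(alphabet.get(ch, ch) for ch in prompt)

print(remap("8 6 8 6, 6 8 6 8; 7 9 7 9,", alphabet))
# -> "escalate swung escalate swung, swung escalate swung escalate; UI Chromebook UI Chromebook,"
```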
The implications of token mapping invariance are two-fold. First, note that it is possible that parts of the ARC (and other static examples of pattern transformations) are present in the training data of an LLM (i.e., due to contamination). Therefore, measuring the performance of LLMs under random alphabets may provide a closer estimate of their true underlying in-context sequence transformation capabilities. (As additional evidence that LLMs' sequence transformation ability is not simply due to memorization, we also provide a new procedurally-generated pattern transformation benchmark which we describe below.)
Second, we hypothesize that the pattern manipulation capabilities which token invariance implies could help to drive positive transfer from patterns learned across Internet-scale language data to new modalities or symbolic representations for robot reasoning. As an example of this idea, (i) Fig. 3 (top) shows a grasp (Skittles) detector which outputs target coordinates within a downsampled image (with 6 in-context examples), and (ii) Fig. 3 (bottom) shows spatial rearrangement via predicting simple forward dynamics where the red bowl moves to the green plate (with 9 in-context examples of downsampled images as inputs and outputs). The generality of what the arbitrary tokens could represent may allow pattern transformation capabilities - especially as LLMs improve - to be leveraged at various levels of abstraction in robotics (including at the level of pixels or robot joint positions). Incorporating more semantic priors into representations may also boost performance and enable further LLM-driven reasoning (e.g., reducing visual
Fig. 3: Example LLM prediction as an in-context grasp detector (top) and a simple forward dynamics model (bottom).
data into more semantic spatial representations). It may also be possible to search for the "optimal" token alphabet for a particular setting with gradient-free optimization, but we leave this to future work.
**Result: PCFG benchmark.** The ARC is a difficult benchmark, and the performance falloff can be steep (and relatively uninformative) across LLMs with decreasing model size and data scale, making it difficult to measure incremental progress towards better pattern machines that could be used for sequence transformation in robotics. Therefore, we introduce a new adjustable-difficulty benchmark, where the transformations are procedurally generated using the probabilistic context-free grammar (PCFG) in Hupkes et al. [71]. These transformations include a collection of lexical rules that may be composed (e.g., reverse, shift, swap, repeat, etc.) over the tokens in the input sequence \(x^{i}_{\text{input}}\) to generate \(x^{i}_{\text{output}}\) (see Appendix). Example composed transformations are given in Table 2. The complexity of these transformations can be controlled by varying the number of tokens \(k\) used to express sequences \(x^{i}\!=\!(s_{1}\),...,\(s_{k})\), and increasing the number of lexical rules \(w\) used to define the transformation. This is simply the identity function when \(w\!=\!0\), and progressively appears more complex (and more random) as \(w\!\rightarrow\!\infty\). Table 3 aggregates PCFG pattern completion accuracy across different LLMs over sequence length \(k\!=\![1\),\(2\),\(4\),\(8\),\(16\),\(32]\) and complexity \(w\!=\![0\),\(1\),\(3\),\(7\),\(15\),\(31]\), each with 100 runs. In the Appendix, we show results for different \(k\),\(w\) combinations to illustrate the way in which accuracy decreases as either \(k\) or \(w\) increases. This benchmark provides a more unbiased evaluation of pattern reasoning capabilities in LLMs; PCFG completion accuracy improves with model scale, and correlates with ARC performance. We use PCFG for evaluation only (rather than for training data [71; 72]) such that one can measure how pre-training regimes or modalities may improve general pattern recognition and completion capabilities across sequence transformations. We will release the PCFG benchmark.
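A minimal sketch of this kind of procedural generation (composing \(w\) lexical rules over a \(k\)-token input) is shown below; the rule set and uniform sampling are simplified assumptions and do not reproduce the released benchmark's grammar.

```python
# Sketch: sample a composed sequence transformation from simple lexical rules.
import random

RULES = {
    "reverse": lambda s: s[::-1],
    "shift":   lambda s: s[1:] + s[:1],             # rotate left by one
    "swap":    lambda s: s[1:2] + s[:1] + s[2:],    # swap the first two tokens
    "repeat":  lambda s: s + s,
    "echo":    lambda s: s + s[-1:],                # repeat the last token
}

def sample_transformation(w, rng):
    names = [rng.choice(list(RULES)) for _ in range(w)]
    def apply(seq):
        for name in names:
            seq = RULES[name](seq)
        return seq
    return names, apply

rng = random.Random(0)
names, f = sample_transformation(w=3, rng=rng)
tokens = [1, 2, 3, 4, 5, 6, 7, 8]                   # k = 8 input tokens
print(names, f(tokens))                             # composed rule names and transformed output
```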
## 5 Sequence Completion
In this section - complementary to transformations (Section 4) - we assess if LLM pattern reasoning can extend to settings where an LLM predicts a continuation of time series data points generated by a simple function class. We then demonstrate that such sequence completion can be operationalized on real robots to extend partial demonstrations of simple periodic motions. In this setting, the input context consists of a sequence \(x\!=\!(s_{1}\),...,\(s_{k})\) which packs a series of \(l\!\leq\!k\) discrete samples from a function \(f\). For example, the sequence of tokens " \(1\) \(2\), \(1\) \(2\), \(1\) \(2\)" may represent \(l\!=\!3\) samples from a constant vector-valued function \(f\) that outputs \((1\!,\!2)\). We use the LLM to extrapolate \(f\) by predicting \(s_{k+1}\),...,\(s_{n}\) autoregressively.
**Completion of sinusoids.** We start with a simple example where LLMs extrapolate a function of the form \(f(x)\!=\!a\!\cdot\!\sin(bx)\). As in Section 4, tokenization matters; we found it effective to discretize outputs among integers 0-100, as these integers are represented by single tokens in the tokenizers of the LLMs we tested.
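A minimal sketch of this discretization, mapping samples of \(a\cdot\sin(bx)\) to integers in [0, 100] and formatting them as a completion prompt (the constants, sampling rate, and delimiter are illustrative assumptions):

```python
# Sketch: discretize a sinusoid to integer tokens in [0, 100] and build a completion prompt.
import numpy as np

a, b = 1.0, 1.0
x = np.arange(0.0, 3 * 2 * np.pi / b, 0.25)               # three periods of context
y = a * np.sin(b * x)
tokens = np.round((y + a) / (2 * a) * 100).astype(int)     # map [-a, a] -> [0, 100]
prompt = ", ".join(str(v) for v in tokens) + ","
print(prompt[:80], "...")
# The LLM continues the sequence; predictions map back via y = 2*a*token/100 - a.
```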
\begin{table}
\begin{tabular}{l c c} \hline \hline Function & Example Inputs & Example Outputs \\ \hline remove\_second(swap(s_{1}, s_{2}), s_{3}) & & \\ echo(copy(swap(prepend(remove\_second(swap(echo(s_{1}s_{2})), s_{3}s_{4}), s_{5}s_{6}s_{7}s_{8}s_{9}s_{10}))))) & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Illustrations of transformations in our PCFG benchmark. Row 1 shows a transformation composed of \(w\!=\!2\) operations over \(k\!=\!3\) tokens, and row 2 shows a transformation composed of \(w\!=\!8\) operations over \(k\!=\!10\) tokens, respectively. For each transformation function, we show two example inputs and the corresponding outputs.
\begin{table}
\begin{tabular}{l c} \hline \hline Method & Accuracy (\%) \\ \hline (d3) text-davinci-003 & **75** \\ (d3) w/ random \(\mathcal{A}\) & \({}^{\dagger}\)58 \(\pm\) 1 \\ (p) PaLM [53; 54] & 74 \\ (d2) text-davinci-002 [51] & 69 \\ (d1) text-davinci-001 [39] & 60 \\ (c1) text-curie-001 & 54 \\ (b1) text-babbage-001 & 50 \\ (a1) text-ada-001 & 39 \\ \hline \hline \end{tabular}
\end{table}
Table 3: LLMs of varying sizes are capable of completing patterns procedurally generated with PCFG, averaged over a range of \(k\) and \(w\).
Fig. 4 shows completions of the sine wave by text-davinci-003 over 11 trials given 3 and 5 periods as context, as well as average distance (computed by Dynamic Time Warping) of the generated predictions to the ground truth function values across several LLMs. Multiple LLMs produce near-perfect continuations of the sine wave, especially with more context (i.e., more periods of the sine wave). We additionally test the function family \(ax\cdot\sin(bx)\) - in which the amplitude of the oscillations increases with \(x\)-values. Here, the LLM must extrapolate to new values unseen in the context, which highlights the utility of using a metric space for the outputs (0-100) where the LLM has priors over the scale of the different tokens. These functions also contain a "meta-pattern": the \(y\)-values increase, decrease, and then increase in a single period - and the amplitude of the function also increases over time. We also test the function \(\frac{a}{2^{x}}\cdot\sin(bx)\), reminiscent of a stabilizing controller. Across these three functions, we observe that greater context and larger scale LLMs yield higher quality predictions.
**Completion of periodic motions.** We emphasize that the Sequence Completion capability above is domain-agnostic - i.e., we do not use any specialized prompts explaining what function should be completed, nor do we provide any linguistic grounding for the metric tokens. We can therefore operationalize this zero-shot capability of LLMs to simple open-loop motion extrapolation problems in robotics, e.g., by encoding a series of positions sampled from a demonstration, and predicting future positions. We test two simple tasks on a mobile robot manipulator: _Table Sweeping_ and _Whiteboard Drawing_ (both shown in Fig. 2).
In _Table Sweeping_, the goal is to continue a human-provided kinesthetic demonstration of sweeping a portion of a table (see middle Fig. 2). We encode the demonstration as a series of end-effector poses at approximately 3 Hz. Each demonstration lasts roughly 20-30 seconds. We represent the 7 DoF end-effector pose as a concatenation of Cartesian position and the quaternion, where each value is binned to an integer between 0 and 100, and the dimensions are delimited by spaces. We collect 30 demonstrations that demonstrate the sweeping motion. Note that demonstrations in this task are noisier and higher dimensional than the stylized sinusoid functions above. For each demonstration, we construct a context to consist of the first two-thirds of the provided demonstration, and treat the last one-third as the ground truth for the LLM to predict. Larger models quantitatively perform better with generally lower variance (see Fig. 5).
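A minimal sketch of this pose encoding is below; the workspace bounds and the exact binning are assumptions for illustration, not the values used on the robot.

```python
# Sketch: encode a 7-DoF end-effector pose (xyz position + wxyz quaternion) as
# space-delimited integers binned to [0, 100], one line per demonstration sample.
import numpy as np

POS_LO = np.array([-0.5, -0.5, 0.0])    # assumed workspace bounds (metres)
POS_HI = np.array([0.5, 0.5, 1.0])

def encode_pose(position, quaternion):
    p = np.clip((np.asarray(position) - POS_LO) / (POS_HI - POS_LO), 0.0, 1.0)
    q = (np.asarray(quaternion) + 1.0) / 2.0        # quaternion components lie in [-1, 1]
    bins = np.round(np.concatenate([p, q]) * 100).astype(int)
    return " ".join(str(v) for v in bins)

print(encode_pose([0.1, -0.2, 0.4], [0.0, 0.0, 0.0, 1.0]))   # -> "60 30 40 50 50 50 100"
```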
In _Whiteboard Drawing_, the goal is to continue a scripted demonstration of drawing loops on a whiteboard (see Fig. 2). Loops are defined by parametric equations of the form \(x=a_{x}\cos(bt)+d_{x}\) and \(y=a_{y}\sin(bt)+c_{y}t+d_{y}\). We execute the motions using position control and record the end-effector positions at 5 Hz, then discretize states in between 0 and 300, as finer motion is needed for this task. We provide part of the loop pattern in-context, and assess the ability to extrapolate from 2 loops to do a third loop. LLMs, e.g., text-davinci-003 perform well - we show completions with different loop styles in the Appendix.
## 6 Sequence Improvement
In the previous two sections, we have investigated the ability of LLMs to in-context learn sequence transformations, and the ability to extrapolate simple periodic function classes from partial sequences, which enables them to complete some partial demonstrations that exhibit a pattern. In this section, we explore the synergies between sequence transformation and completion - and investigate _improving_ a
Figure 4: LLMs (text-davinci-003) can extrapolate various functions \(y\!=\!a\cdot\sin(bx)\) (top row), \(y\!=\!ax\cdot\sin(bx)\) (middle row), and \(y\!=\!\frac{a}{2^{x}}\sin(bx)\) (bottom row) given varying amounts of context. Overall, larger models make better predictions with lower error rates (right column). More context also helps prediction accuracy (light vs. dark).
Figure 5: LLM trajectory predictions on _Table Sweeping_ improve with larger models.
sequence, such as trajectories in a sequential decision process, along some metric, such as a reward function. Here, we use an LLM to generate new sequences \(x^{N}\) conditioned on previous sequences \((x^{1},...,x^{N-1})\), which can represent previous iterations of the same sequence (or policy it represents).
The improvement can also be return-conditioned, given a reward (or cost) function \(r(\cdot)\). By inserting as the first token(s) of each sequence its corresponding total reward \(x\!=\!(r(x),s_{1},...,s_{k})\), we can prompt the model to conditionally "improve" by "just asking" [73] for a higher reward than those seen in-context (i.e., prompting LLMs out-of-the-box to act as Decision Transformers [74]). New "rollouts" of the sequences can return new reward labels that then replace the original desired rewards with actual rewards. Iteratively performing this inference and accumulating more trajectories may jointly use the model's general notion of pattern transformation and extrapolation to perform improvement of sequences, which can be represented by numeric or symbolic tokens. Note that there are practical considerations, e.g., depending on the task or model, not all sequences can fit in context, so options could be to keep only the most recent, or the ones with the highest rewards if available (we refer to the Appendix for more discussion of the nuances here). In this section, we perform a series of targeted experiments on simple tasks, aiming to explore the possibility of using pre-trained LLMs for sequence improvement in trajectory and policy optimization.
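A minimal sketch of assembling such a return-conditioned context is given below; the formatting and the target-reward increment are illustrative assumptions, not the exact prompt used in the experiments.

```python
# Sketch: build a return-conditioned context - each trajectory is prefixed by its total reward,
# trajectories are sorted by reward, and the query asks for a reward above any seen so far.
def build_improvement_prompt(trajectories, rewards, target_bump=5):
    order = sorted(range(len(trajectories)), key=lambda i: rewards[i])
    lines = []
    for i in order:
        steps = " ".join(str(s) for s in trajectories[i])
        lines.append(f"{rewards[i]}: {steps}")
    target = max(rewards) + target_bump
    lines.append(f"{target}:")          # the LLM is asked to complete a higher-reward trajectory
    return "\n".join(lines)

trajs = [[50, 52, 55], [50, 60, 70], [50, 55, 61]]   # toy 1-D state trajectories
print(build_improvement_prompt(trajs, rewards=[70, 90, 80]))
```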
**Extrapolating simple meta-patterns among trajectories.**
Sequence improvement with LLMs enables a simple form of trajectory optimization for a _Marker in Cup_ task on a Franka Panda robot, where we define the prefixed reward of a trajectory to be the negative distance between the final end-effector position and the cup (normalized between 0-100), and initialize the context with a collection of pre-recorded trajectories (stopping at 20%, 40%, 60%, and 80% of the way to the cup), delimited by newlines and prefixed by rewards (ranging roughly from 70-90; see prompts in the Appendix). For this task, we represent trajectories as sequences of Cartesian positions, each dimension normalized between 0-100. We find that text-davinci-003, to an extent, is able to generalize the pattern and generate a trajectory that achieves a reward \(>\!90\). For this extrapolation to occur, we observe that the meta-patterns in the context are crucial: in Fig. 6 (left), we compare the average reward achieved by text-davinci-003 over 11 trials (each with a different goal position) given contexts with different orderings of the trajectories (sorted by least-to-most reward, randomly permuted, or with/without reward annotations).
**Sampling higher-reward trajectories online.** While LLMs can extrapolate from trajectories that exhibit clear meta-patterns among them, we find that this ability is more limited for less trivial setups. Consider a simple \(9\!\times\!9\)_Grid_ navigation environment with a fixed goal position that is randomly placed and a fixed starting position at the center of the grid. Episodes terminate after 20 timesteps, and the return is based on the distance from the agent to the goal at the final time step. This environment is inspired by the Dark Room environment from [60] but with a continuous reward function, reducing the exploration challenge. The agent may take actions (1-5) corresponding to moving right, up, left, down, and no-op. We initialize the context buffer with 20 trajectories of agent grid positions generated by a random policy, sorted by total cumulative rewards. These trajectories exhibit a more complicated meta-pattern than in the _Marker in Cup_ task; we do not find that LLMs can generate trajectories of higher reward immediately. With that said, we can consider an iterative, _online_ setting, in which the LLM acts as an agent that interacts with the environment in a closed-loop fashion. The context consists of the highest reward trajectories in sorted order, appended with a higher reward than was seen in the context, plus states and actions from the current partial trajectory (see Appendix for details). Once an episode terminates, its trajectory is relabeled with the reward achieved, and inserted into the context at the appropriate position. In Fig. 7, we plot the maximum return attained by a1-d3 over 50 episodes, compared to random exploration, averaged over 5 trials. We find that a1-d1 tend to sometimes
Fig. 6: LLM agents can generate new trajectories with increasing returns for a _Marker in Cup_ task (right). Performance varies with different ways of building the context (left).
Fig. 7: Average maximum return for LLM agents a1-d3 on _Grid_ compared to random exploration (_r_).
"exploit" the suboptimal behaviors represented in the context (which initially contains trajectories with rewards ranging from 6-78), whereas d3 can consistently find a solution to _Grid_ within 50 episodes.
**Result: discovering a simple _CartPole controller._**
We show that using LLMs as agents in an online, closed-loop setting can discover a stabilizing controller for the _CartPole_ environment (where observation tokens consist of pole angle and velocity, normalized to 0-100, actions are 0 (left) and 1 (right), maximum time horizon is 200). Fig. 8 (left) shows that the total reward (number of timesteps the CartPole is kept upright) improves on average across various LLMs over 100 episodes (where the first 100 are generated by random exploration). Fig. 8 (right) shows the evolution of trajectories over episodes of d3, demonstrating that it discovers "oscillatory" behaviors to keep the CartPole upright.
**Result: online human-guided trajectory optimization.** LLMs can also react to sparse binary reward signals (e.g., subjectively provided by a human) to adjust trajectories online. This is analogous to an implementation of "clicker training" [75, 76] used for training dogs, but instead applied to robots. In this setup, at every time step (2s), the robot executes an action corresponding to a movement of its end-effector in a particular direction. The human observes the action and chooses whether to give a reward (i.e., by using the clicker) to encourage or discourage similar behaviors. Episodes reset after 30 seconds, and the first two episodes are generated by random exploration. The (_reward_, _state_, _action_) tuples are added as in-context examples (with negative example followed by positives, and an equal number of each) to generate the next action based on the current state. An example context format is given in the Appendix. As shown in Fig. 9, applying LLMs' sequence improvement capabilities in this way enables a human to interactively guide the robot to push an object via in-context sequence improvement.
## 7 Discussion
We are excited about the opportunities of LLMs as pattern machines for robotics - from reasoning and extrapolating complex patterns as a prior for control, to online optimization of closed-loop policies via sequence optimization. These capabilities present several implications, including (i) supplementary perspectives on the role of language pretraining for generalist end-to-end robot learning models [31, 32], and (ii) in-context learning of arbitrary patterns as a driving mechanism for policy improvement. LLMs also show promise for mixed autonomy settings - e.g., real-time pattern extrapolation for assistive teleoperation. We expect many of these abilities to continue improving as large models expand from learning the patterns within language-only datasets, to multimodal domains (e.g., images, videos, etc.). While this work investigates the scope of in-context generalization on fairly simple settings without additional data collection or model training, these capabilities presumably may be significantly improved via domain-specific objectives and finetuning [77, 78, 62, 63].
**Limitations & Future Work.** Today, the inference costs (and monetary costs) of using LLMs in the control loop are quite high. Predicting the next token for every sequence, e.g., every dimension of every time step in a trajectory, involves querying an LLM. State-action spaces which are higher dimensional and/or of greater precision also result in longer trajectory representations, and thereby the extent to which they can be extrapolated or sequence optimized is bounded by the context length of models. These limitations may prevent deploying these models on more complex tasks in practice, but could be lifted over time as current efforts in the community continue to drive improvements in LLM quantization [79] and inference
Fig. 8: Different LLM agents (d3 - c1) on average can improve trajectories (total rewards) with more _CartPole_ episodes (left), and discovers “oscillatory behaviors” (right) to keep the CartPole upright (later episodes are brighter).
Fig. 9: LLMs can in-context react to sparse reward signals online to encourage an end effector to reach a desired goal.
efficiency [80]. As with any other language-only model, LLM-based control may (i) be unpredictable, and (ii) lack visual/physical grounding; thus, it is not currently suitable for application outside of constrained lab settings. We leave the exploration of these important topics for future work.
#### Acknowledgments
The authors would like to acknowledge Jie Tan, Peng Xu, Carolina Parada, Alexander Herzog, Jensen Gao, Joey Hejna, and Megha Srivastava for valuable feedback and discussions.
|
2303.07907
|
Weak entanglement improves quantum communication using only product
measurements
|
We show that weakly entangled states can improve communication over a qubit
channel using only separate, interference-free, measurements of individual
photons. We introduce a communication task corresponding to the cryptographic
primitive known as secret sharing and show that all steerable two-qubit
isotropic states provide a quantum advantage in the success rate using only
product measurements. Furthermore, we show that such measurements can even
reveal communication advantages from noisy partially entangled states that
admit no quantum steering. We then go further and consider a stochastic variant
of secret sharing based on more sophisticated, yet standard, partial Bell state
analysers, and show that this reveals advantages also for a range of
unsteerable isotropic states. By preparing polarisation qubits in unsteerable
states, we experimentally demonstrate improved success rates of both secret
sharing tasks beyond the best entanglement-unassisted qubit protocol. Our
results reveal the capability of simple and scalable measurements in
entanglement-assisted quantum communication to overcome large amounts of noise.
|
Amélie Piveteau, Alastair A. Abbott, Sadiq Muhammad, Mohamed Bourennane, Armin Tavakoli
|
2023-03-14T13:48:19Z
|
http://arxiv.org/abs/2303.07907v3
|
# Weak entanglement improves quantum communication using only passive linear optics
###### Abstract
We show that noisy entangled states, that cannot violate any Bell inequality, can be used to improve quantum communication when measurements are limited to being compatible with standard, ancilla-free, linear optics. We introduce a communication task inspired by the cryptographic primitive known as secret sharing and show that entanglement that is too weak to permit possible Einstein-Podolsky-Rosen steering can still enhance the success rate when using only standard partial Bell state analysers for decoding. We then go further and show that even the simplest type of decoding, namely product measurements, which require no optical interference at all, can still lead to an advantage when the entanglement is steerable but still Bell-local. We demonstrate the former advantage by preparing polarisation qubits in an unsteerable entangled state and by using only beam-splitters and phase-shifters, observing a boost in the success rate beyond the best entanglement-unassisted qubit protocol.
_Introduction.--_Entanglement is well-known to be the crucial resource for a wide variety of quantum information applications. A major domain of application is in quantum communication where it can, e.g., increase the classical capacity of a quantum channel [1] or reduce classical communication complexity [2]. However, not all forms of entanglement are evidently useful. For instance, while entanglement that is strong enough to generate nonlocality has been found to improve noiseless classical communication beyond its conventional limitations (see, e.g., [3; 4; 5; 6; 7; 8]), weaker entangled states that cannot violate any Bell inequality do not have that ability [9]. However, if the system communicated is itself quantum, e.g. a qubit instead of a bit, then some weaker forms of entanglement also become useful. This can be seen in the celebrated dense coding protocol [10] where a maximally entangled state, \(|\phi^{+}\rangle=\frac{|00\rangle+|11\rangle}{\sqrt{2}}\), is exploited to allow a qubit message to transmit two bits instead of one bit. Then, if the maximally entangled state is replaced with an isotropic state
\[\rho_{v}=v|\phi^{+}\rangle\!\langle\phi^{+}|+\frac{1-v}{4}\openone, \tag{1}\]
for some visibility \(v\in[0,1]\), then an advantage over an unassisted qubit message exists whenever \(v>\frac{1}{3}\)[11; 12]. This coincides with the visibility at which the state becomes separable, \(v_{\rm sep}=\frac{1}{3}\), and is considerably lower than the critical visibility for Einstein-Podolsky-Rosen steering under general projective measurements, \(v_{\rm unsteer}=\frac{1}{2}\)[13]. Moreover, it is even further below the largest known visibility at which the isotropic state (1) cannot violate any Bell inequality, \(v_{\rm local}\approx 0.6875\)[14].
However, to harness the dense coding advantage one must measure in the Bell basis \(\{\Phi^{+},\Phi^{-},\Psi^{+},\Psi^{-}\}\), where \(\Phi^{\pm}=|\phi^{\pm}\rangle\!\langle\phi^{\pm}|\) and \(\Psi^{\pm}=|\psi^{\pm}\rangle\!\langle\psi^{\pm}|\) are the projectors onto the states \(|\phi^{\pm}\rangle=\frac{|00\rangle\pm|11\rangle}{\sqrt{2}}\) and \(|\psi^{\pm}\rangle=\frac{|01\rangle\pm|10\rangle}{\sqrt{2}}\). In optical systems, which are the most relevant for quantum communication, it is impossible to implement a linear optics Bell basis measurement on separate photons without the use of auxiliary photons [15]. While dense coding experiments have been reported [16; 17; 18; 19; 20], implementation of the Bell basis is not expected to be scalable in optical systems in the near future. Nevertheless, it has recently been found that entanglement can yield advantages in one-shot quantum communication scenarios by using much more limited, yet much less experimentally demanding, optical measurements that are compatible with passive linear optics [21]. However, the schemes considered thus far come with greater requirements on the quality of entanglement, in particular demanding states that can be used to violate a Bell inequality.
Here, we show that when restricting to simple optical measurements, compatible with ancilla-free linear optics, weak forms of entanglement still constitute a resource for enhancing communication. To this end, we introduce a communication task which can be interpreted as a stochastic version of the cryptographic primitive known as secret sharing. In secret sharing a secret is distributed between two parties in such a way that they must cooperate to reconstruct it [22; 23]. This task is of considerable interest for quantum cryptography, and has consequently received much attention (see e.g. [24; 25; 26; 27; 28]). We prove that for our task there exist entangled but unsteerable isotropic states which enable an advantage over unassisted qubits using only conventional optical partial Bell state analysers [29; 30]. Using polarisation qubits generated via spontaneous parametric down-conversion, we use both a maximally entangled state to experimentally demonstrate an optimal stochastic secret sharing protocol, and an unsteerable isotropic state to outperform the best possible entanglement-unassisted qubit protocol for the task. We also consider whether an advantage could be obtained using the simplest type of measurements, namely product measurements of separate photons that require no two-photon interference in the decoding procedure. Even with such measurements, we find that some isotropic states that are steerable but cannot violate any Bell inequality still enable an advantage over unassisted qubit communication.
_Stochastic secret sharing.--_Suppose that Alice wishes to generate a random secret bit \(a\in\{0,1\}\) that is shared with Bob and Charlie in such a way that they individually have no knowledge of \(a\) but can learn its value if they cooperate. To this end, we consider the scenario illustrated in Fig. 1.
In this scenario, Bob and Charlie each privately select two uniformly random bits \(x\equiv(x_{0},x_{1})\in\{0,1\}^{2}\) and \(y\equiv(y_{0},y_{1})\in\{0,1\}^{2}\), respectively. Given their respective inputs, Bob and Charlie each send a qubit message to Alice. Alice privately selects a binary input \(z\in\{0,1\}\) and accordingly decodes the two incoming messages. The decoding yields one of three possible values \(a\in\{0,1,\bot\}\). The values \(a\in\{0,1\}\) are interpreted as Alice's secret bit, whereas the value \(a=\bot\) is associated with rejecting and discarding the round of the secret sharing protocol (after which it can be re-initiated). Specifically, the conditions for successfully completing the task are as follows. When Alice selects input \(z\), the binary value of \(x_{z}\oplus y_{z}\) determines whether the round contributes to the secret sharing or is a "discarding round". If it is a discarding round (\(x_{z}\oplus y_{z}=0\)), then the round is deemed successful if \(a=\bot\). Otherwise, if it is a secret sharing round (\(x_{z}\oplus y_{z}=1\)), then it is successful if Bob and Charlie can reconstruct Alice's output through the relation \(a=x_{\bar{z}}\oplus y_{\bar{z}}\), where \(\bar{z}=z\oplus 1\). Thus, by announcing \(z\), Alice informs Bob and Charlie which of their bits hold the shared secret. The average success probabilities in the discarding rounds and the secret sharing rounds, respectively, become
\[\begin{split}&\mathcal{S}_{\text{discard}}=\frac{1}{16}\sum_{z}\sum_{\begin{subarray}{c}x,y:\\ x_{z}\oplus y_{z}=0\end{subarray}}p(a=\bot\,|x,y,z),\\ &\mathcal{S}_{\text{secret}}=\frac{1}{16}\sum_{z}\sum_{\begin{subarray}{c}x,y:\\ x_{z}\oplus y_{z}=1\end{subarray}}p(a=x_{\bar{z}}\oplus y_{\bar{z}}|x,y,z).\end{split} \tag{2}\]
Note that this task could be equally well formulated using only the secret sharing rounds, i.e., by removing the outcome \(a=\bot\). However, including \(\mathcal{S}_{\text{discard}}\) in the analysis turns out to allow advantages to be obtained from even more weakly entangled states, while still admitting a simple interpretation. One may think of \(\mathcal{S}_{\text{discard}}\) as a control parameter which certifies the nonclassical nature of the secret sharing correlations. While we may consider the pair \((\mathcal{S}_{\text{discard}},\mathcal{S}_{\text{secret}})\), it is both simpler and sufficient for our purposes to instead consider the single success metric obtained by averaging the two,
\[\mathcal{S}=\frac{1}{2}\left(\mathcal{S}_{\text{discard}}+\mathcal{S}_{\text{ secret}}\right). \tag{3}\]
Naturally, in order to draw meaningful conclusions, the parties must have some physical limitations. On the one hand, we are interested in the situation in which Bob and Charlie share no prior entanglement and simply send messages encoded in qubit states \(\beta_{x}\) and \(\gamma_{y}\), respectively, while Alice can decode using a general quantum measurement \(\{M_{a|z}\}_{a}\). The observed correlations are then given by
\[p_{\text{qubit}}(a|x,y,z)=\operatorname{tr}\left[(\beta_{x}\otimes\gamma_{y} )M_{a|z}\right], \tag{4}\]
where \(\{M_{a|z}\}_{a}\) are positive operator-valued measures for each \(z\). On the other hand, we also consider the situation illustrated in Fig. 1, where Bob and Charlie may additionally share a two-qubit entangled state \(\rho\) and then encode their qubit messages through local quantum channels \(\Lambda_{x}^{B}\) and \(\Lambda_{y}^{C}\), respectively. In this entanglement assisted case, the correlations are given by
\[p_{\text{EA qubit}}(a|x,y,z)=\operatorname{tr}\left[\big{(}\Lambda_{x}^{B}\otimes\Lambda_{y}^{C}\left(\rho\right)\big{)}M_{a|z}\right]. \tag{5}\]
We now show that there exists an entanglement-assisted quantum protocol that simultaneously achieves a perfect discarding rate and a perfect secret sharing rate, while requiring no more sophisticated measurements than those compatible with standard linear optics.
_Ideal entanglement-assisted and unassisted protocols.--_ Consider a maximally entangled state \(\rho=|\phi^{+}\rangle\!\langle\phi^{+}|\) and let Bob's and Charlie's local channels correspond to implementing the four unitaries \(U_{x}^{B}=\sigma_{X}^{x_{0}}\sigma_{Z}^{x_{1}}\) and \(U_{y}^{C}=\sigma_{X}^{y_{0}}\sigma_{Z}^{y_{1}}\) respectively, where \(\sigma_{X}\) and \(\sigma_{Z}\) are the Pauli bit-flip and phase-flip operators. This means that the two-qubit state arriving at Alice is one of the four Bell states. When \(z=0\), Alice performs a three-outcome measurement corresponding to a projection onto \(\{\Psi^{+},\Psi^{-},\Phi^{+}+\Phi^{-}\}\), i.e., she discriminates the states \(\Psi^{\pm}\). When \(z=1\), she instead discriminates the states \(\Phi^{-}\) and \(\Psi^{-}\) by projecting the two qubits onto \(\{\Phi^{-},\Psi^{-},\Phi^{+}+\Psi^{+}\}\). A direct calculation then shows that \(\mathcal{S}_{\text{discard}}=\mathcal{S}_{\text{secret}}=1\) and hence \(\mathcal{S}_{\text{EA qubit}}=1\). Importantly, the employed measurements may be seen as partial Bell state analysers and it is known that passive linear optics can discriminate no more than two of the four Bell states [31]. This is indeed the case in our quantum protocol.
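This protocol can be checked numerically. The following NumPy sketch (ours, not part of the original analysis) evaluates \(\mathcal{S}\) of Eqs. (2)-(3) for the encoding \(U_{x}^{B}\otimes U_{y}^{C}\) followed by the two partial Bell-state analysers, taking the isotropic state of Eq. (1) as input; setting \(v=1\) reproduces \(\mathcal{S}=1\), and general \(v\) reproduces the value \((3+5v)/8\) quoted further below.

```python
import numpy as np
from itertools import product

X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])

def ket(*bits):
    v = np.zeros(2 ** len(bits)); v[int("".join(map(str, bits)), 2)] = 1.0
    return v

phi_p = (ket(0, 0) + ket(1, 1)) / np.sqrt(2); phi_m = (ket(0, 0) - ket(1, 1)) / np.sqrt(2)
psi_p = (ket(0, 1) + ket(1, 0)) / np.sqrt(2); psi_m = (ket(0, 1) - ket(1, 0)) / np.sqrt(2)
proj = lambda v: np.outer(v, v)

def success(v):
    rho = v * proj(phi_p) + (1 - v) * np.eye(4) / 4          # isotropic state, Eq. (1)
    # Alice's two partial Bell-state analysers: [outcome a=0, outcome a=1, discard]
    M = {0: [proj(psi_p), proj(psi_m), proj(phi_p) + proj(phi_m)],
         1: [proj(phi_m), proj(psi_m), proj(phi_p) + proj(psi_p)]}
    S_disc = S_sec = 0.0
    for z in (0, 1):
        for x0, x1, y0, y1 in product((0, 1), repeat=4):
            U = np.kron(np.linalg.matrix_power(X, x0) @ np.linalg.matrix_power(Z, x1),
                        np.linalg.matrix_power(X, y0) @ np.linalg.matrix_power(Z, y1))
            sigma = U @ rho @ U.conj().T
            p = [np.trace(E @ sigma).real for E in M[z]]
            xz, yz = (x0, y0) if z == 0 else (x1, y1)        # bits indexed by z
            xb, yb = (x1, y1) if z == 0 else (x0, y0)        # bits indexed by z-bar
            if (xz + yz) % 2 == 0:
                S_disc += p[2] / 16                          # discarding round: a = bot
            else:
                S_sec += p[(xb + yb) % 2] / 16               # secret round: a = x_zbar XOR y_zbar
    return 0.5 * (S_disc + S_sec)

print(success(1.0))   # -> 1.0, the ideal protocol
print(success(0.47))  # -> (3 + 5*0.47)/8 = 0.66875
```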
In contrast, in the scenario when entanglement is absent, it is no longer possible to succeed deterministically. Indeed, we were able to determine the maximum value of \(\mathcal{S}\) achievable with entanglement-unassisted qubits and general projective measurements for Alice. To this end, we used a straightforward modification of a hierarchy of semidefinite programs for bounding dimensionally-restricted quantum correlations [32]. In order to obtain sufficiently tight upper bounds on \(\mathcal{S}\) by solving numerically the semidefinite programs, we needed to consider terms appearing in the first three levels of this hierarchy (the full third level being too computationally difficult to implement) [33]. To render these intensive calculations more readily tractable on a standard desktop computer, we
Figure 1: The secret sharing scenario: Bob and Charlie select private inputs and perform separate transformations on a shared two-qubit entangled state. They relay their respective output qubits to Alice. Alice selects an input \(z\) and performs a corresponding measurement with outcome \(a\). Depending on the value of her outcome, the round counts either towards secret sharing or towards a re-initialisation of the experiment which in itself serves as a control parameter for the advantages of entanglement.
employed recently developed symmetrisation techniques [34]. The symmetrisation is based on the observation that the objective function \(\mathcal{S}\) remains invariant under i) simultaneous bit-flip of \(x_{0}\) and \(y_{0}\), ii) simultaneous bit-flips of \(x_{1}\) and \(y_{1}\) and iii) simultaneous swaps \(x_{0}\leftrightarrow x_{1}\) and \(y_{0}\leftrightarrow y_{1}\). Up to solver precision, we determine the upper bound \(\mathcal{S}_{\text{qubit}}\leq\frac{5}{8}\). This bound in fact holds even if all three parties share some pre-agreed classical randomness.
Although the approach described provides only an upper bound on \(\mathcal{S}_{\text{qubit}}\), we demonstrate its tightness by exhibiting a strategy--in fact, an entirely classical one--which saturates it, thereby showing that there exists no quantum-over-classical advantage without using entanglement. To this end, consider the strategy in which Bob and Charlie relay the bits \(x_{0}\) and \(y_{0}\), respectively, to Alice. Alice then outputs \(a=\perp\) unless \(z=0\) and she receives \((0,1)\) or \((1,0)\), in which case she simply outputs \(a=0\). A direct calculation gives \(\mathcal{S}_{\text{discard}}=1\) and \(\mathcal{S}_{\text{secret}}=\frac{1}{4}\), giving an overall winning probability of \(\mathcal{S}_{\text{classical}}=\frac{5}{8}\). We conclude that any value in the range \(\frac{5}{8}<\mathcal{S}\leq 1\) implies an advantage over both the best classical and the best entanglement-unassisted quantum model, and hence is powered by the consumption of entanglement.
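A brute-force enumeration of this relay strategy against Eqs. (2)-(3), sketched below purely for illustration, reproduces these numbers.

```python
from itertools import product

def classical_score():
    """Score the classical relay strategy of the text against Eqs. (2)-(3)."""
    S_disc = S_sec = 0.0
    for z, x0, x1, y0, y1 in product((0, 1), repeat=5):
        # Bob relays x0, Charlie relays y0; Alice outputs "bot" unless z = 0
        # and the two received bits differ, in which case she outputs a = 0.
        a = 0 if (z == 0 and x0 != y0) else "bot"
        xz, yz = (x0, y0) if z == 0 else (x1, y1)      # bits indexed by z
        xb, yb = (x1, y1) if z == 0 else (x0, y0)      # bits indexed by z-bar
        if (xz + yz) % 2 == 0:                          # discarding round
            S_disc += (a == "bot") / 16
        else:                                           # secret sharing round
            S_sec += (a == (xb + yb) % 2) / 16
    return S_disc, S_sec, 0.5 * (S_disc + S_sec)

print(classical_score())   # -> (1.0, 0.25, 0.625), i.e. S_classical = 5/8
```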
_Advantage from unsteerable states.--_If, in the above ideal entanglement-assisted protocol, we substitute the maximally entangled state for the isotropic state (1), we find that \(\mathcal{S}=\frac{3+5v}{8}\). Thus, we observe an advantage ascribed to entanglement whenever \(\mathcal{S}>\mathcal{S}_{\text{qubit}}\), which occurs whenever \(v>\frac{2}{5}\). Consequently, when \(v\in\left(\frac{2}{5},\frac{1}{2}\right]\), the isotropic state is both unsteerable and a communication resource with the employed decoding resources.
_Bell-local state and product measurements.--_While the partial Bell state measurements used above are compatible with passive linear optics, they still require Alice to perform precise two-photon interferences. This can become costly to scale to many qubits. It motivates the question of whether the simplest conceivable joint measurements can also reveal entanglement-assisted advantages [21]. This entails separately measuring each of the two qubits and then post-processing the two separate outcomes into the final output \(a\). As we now show, such simple measurements can nonetheless still lead to an advantage from a relatively weak form of entanglement, namely that which is too weak to violate any Bell inequality but still strong enough to manifest Einstein-Podolsky-Rosen steering.
To this end, consider once more the isotropic state (1) and the unitaries \(U_{x}^{B}\) and \(U_{y}^{C}\) for Bob and Charlie. Now, let Alice perform product measurements \(\sigma_{Z}\otimes\sigma_{Z}\) for \(z=0\) and \(\sigma_{X}\otimes\sigma_{X}\) for \(z=1\). The individual qubit measurements yield binary outcomes \(b_{B},b_{C}\in\{0,1\}\). Alice then decides her final outcome by a classical post-processing where she assigns \((b_{B},b_{C})\in\{(0,0),(1,1)\}\) to \(a=\perp\) and \((b_{B},b_{C})\in\{(0,1),(1,0)\}\) to \(a=1\). Evaluating this strategy gives \(\mathcal{S}_{\text{prod}}=\frac{3(1+v)}{8}\) which outperforms the unassisted qubit limit whenever \(v>\frac{2}{3}\). Hence, when \(v\in\left(\frac{2}{3},0.6875\right]\), the isotropic state is both Bell-local and a communication resource even when using only such minimal decoding resources.
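The following self-contained sketch (again ours and purely illustrative) evaluates this product-measurement strategy under Eqs. (2)-(3) and reproduces \(\mathcal{S}_{\text{prod}}=3(1+v)/8\).

```python
import numpy as np
from itertools import product

X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])
P = [np.diag([1., 0.]), np.diag([0., 1.])]                 # sigma_Z eigenprojectors
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
Q = [H @ P[0] @ H, H @ P[1] @ H]                           # sigma_X eigenprojectors
phi_p = np.zeros(4); phi_p[[0, 3]] = 1 / np.sqrt(2)

def success_product(v):
    rho = v * np.outer(phi_p, phi_p) + (1 - v) * np.eye(4) / 4
    meas = {0: P, 1: Q}                                    # z = 0: Z x Z, z = 1: X x X
    S_disc = S_sec = 0.0
    for z, x0, x1, y0, y1 in product((0, 1), repeat=5):
        U = np.kron(np.linalg.matrix_power(X, x0) @ np.linalg.matrix_power(Z, x1),
                    np.linalg.matrix_power(X, y0) @ np.linalg.matrix_power(Z, y1))
        sigma = U @ rho @ U.T
        E = meas[z]
        # probability that the two single-qubit outcomes disagree (assigned to a = 1)
        p_anti = sum(np.trace(np.kron(E[b], E[1 - b]) @ sigma).real for b in (0, 1))
        xz, yz = (x0, y0) if z == 0 else (x1, y1)
        xb, yb = (x1, y1) if z == 0 else (x0, y0)
        if (xz + yz) % 2 == 0:
            S_disc += (1 - p_anti) / 16                    # equal outcomes -> a = bot
        else:
            S_sec += (p_anti if (xb + yb) % 2 == 1 else 0.0) / 16
    return 0.5 * (S_disc + S_sec)

for v in (1.0, 0.7):
    print(v, success_product(v), 3 * (1 + v) / 8)          # the two numbers agree
```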
_Experimental realisation.--_We report here on an experimental implementation of the entanglement-assisted communication advantage in stochastic secret sharing developed in this letter. We give a proof-of-principle demonstration of both the ideal quantum protocol using maximally entangled states encoded in the polarisation of photons, and an advantage from a weakly entangled, unsteerable, state.
To generate the entangled states, ultraviolet light centered at a wavelength of 390 nm is focused onto two 2 mm thick \(\beta\) barium borate (BBO) nonlinear crystals placed in an interferometric configuration to produce photon pairs emitted into two spatial modes through the second order degenerate type-I spontaneous parametric down-conversion process (SPDC). The spectral and temporal distinguishability between the down-converted photons is carefully removed by passing through narrow-bandwidth interference filters and quartz wedges, respectively (see Fig. 2).
To prepare the desired states for the secret sharing protocol, we first prepared the polarisation entangled pairs of photons in the state \(|\phi^{+}\rangle=\frac{|HH\rangle+|VV\rangle}{\sqrt{2}}\), where \(H\) and \(V\) are, respectively, the horizontal and vertical photonic polarisation modes. By taking the standard encoding of \(|0\rangle:=|H\rangle\) and \(|1\rangle:=|V\rangle\), we recover the desired maximally entangled state. In order to prepare an isotropic state, \(\rho_{v}\), \(|\phi^{+}\rangle\) must be mixed with white noise. We achieve this by, with probability \(1-v\), randomly transforming \(|\phi^{+}\rangle\) into one of the four Bell states \(|\phi^{\pm}\rangle,|\psi^{\pm}\rangle\). These transformations were experimentally realised by a motorised rotation of two quarter wave plates (QWP) placed in each of the modes (a) and (b) (see Fig. 2 and Supplemental Material for the QWP settings).
The polarisation measurements are performed using half wave plates (HWP) and polarising beam splitters (PBS), beam splitters (BS) placed at the two output modes of the PBS, and then by single photon detectors (actively quenched Si avalanche photodiodes). The partial Bell analyser is implemented through two-photon interference, using PBS and
Figure 2: Experimental setup. Entangled photon pairs in spatial modes (a) and (b) are generated through the SPDC process. Isotropic states are prepared by randomly transforming the maximally entangled \(|\phi^{+}\rangle\) state into one of the other Bell states using quarter wave plates (QWP). The unitaries of Bob and Charlie are implemented using a combination of quarter wave plate (QWP), half wave plates (HWP) and phase shifters (PS). Alice’s partial Bell state measurements are implemented using HWP, polarising beam splitters (PBS), beam splitters (BS) and then detected by single photon detectors (DET). See main text for further details.
HWPs set at \(22.5^{\circ}\). The two-photon Hong-Ou-Mandel dip visibility is \(99\pm 4\%\), where the substantial statistical error is due to the low two-photon coincidence rate used in the experiment (one per second) and a measurement time of 2400 seconds per point (see Supplemental Material). To switch from a Bell measurement discriminating between \(\{\Psi^{+},\Psi^{-},\Phi^{+}+\Phi^{-}\}\) to \(\{\Phi^{-},\Psi^{-},\Phi^{+}+\Psi^{+}\}\), Alice uses three HWPs placed before the first PBS (see the Supplemental Material for the HWP settings). All single-detection events were registered using a VHDL-programmed multichannel coincidence logic unit, with a time coincidence window of \(1.7\) ns.
Bob's and Charlie's transformations \(U_{x}^{B}\) and \(U_{y}^{C}\), respectively, are performed using two quarter wave plates (QWP), two half wave plates (HWP) and a phase shifter (PS) to change the relative phase between the two modes (a) and (b). Since they have a total of 16 settings but only four qualitatively different global operations on the states we consider, we have considered a simplified setting in which only the latter cases are realised (see Supplemental Material for details and HWP and PS settings).
To perform state tomography of isotropic states \(\rho_{v}\) for different \(v\), we performed tomography of the four Bell states generated in the randomisation procedure. Measurements were made at a rate of one two-coincidence per second over 1400 seconds for each of the nine settings needed to perform the state tomography for each Bell state. These results were then recombined _a posteriori_ at different ratios to establish the density matrices \(\rho_{v}\). We considered the reconstructed states \(\rho_{v}\) for \(v=0.4\) to \(v=0.5\), with a step-size of \(0.01\), corresponding to the resourceful but unsteerable range. Naturally, the reconstructed density matrices are not exactly isotropic states. To ensure the unsteerability of the experimentally realised states, we used the linear programming method of Ref. [35] which allows one to obtain a certificate of unsteerability for an arbitrary two-qubit state. We chose to proceed during the experiment with \(v=0.47\), as this represented a good balance between being below the steering threshold of \(v_{\text{unsteer}}=\frac{1}{2}\) while allowing for good enough statistics to show a significant quantum advantage. The fidelity of the reconstructed state with the target isotropic state for \(v=0.47\) is \(0.9983\pm 0.0004\). Detailed tomography results for the density matrices, the state fidelities, and the certificates of unsteerability are presented in the Supplemental Material.
The protocol was then carried out with the isotropic state at \(v=0.47\) with the noise added by randomly changing the Bell state between each two-photon coincidence while maintaining the necessary ratio between these states. To obtain at most one event per change of Bell state and thus ensure the randomness and therefore unpredictability of each event, we chose to work at a rate of one two-photon detection coincidence per second. The effective measurement time per setting was \(2.8\) hours. We obtained a success probability of \(\mathcal{S}=0.655\pm 0.003\) which goes significantly beyond the theoretical entanglement-unassisted limit of \(\mathcal{S}_{\text{qubit}}\leq\frac{5}{8}\), hence showing an advantage from unsteerable states.
The same experiment was also performed for a maximally entangled state to realise the ideal protocol and show that Bob's and Charlie's unitary rotations coupled with Alice's measurements give the expected results. This amounts to effectively setting \(v=1\) and thus no randomisation over Bell states was required. This allowed the experiment to be performed at an average rate of 800 two-photon detection coincidences per second and a measurement time per setting of 2 hours. The fidelity between the prepared state and the maximally entangled state \(|\phi^{+}\rangle\) was measured to be \(0.9947\pm 0.0009\). We observed a success probability of \(\mathcal{S}=0.9748\pm 0.0001\) in the secret sharing protocol, close to the ideal maximum value of \(\mathcal{S}=1\) and showing a large advantage due to entanglement.
_Discussion.--_ In this letter we demonstrated theoretically, and confirmed experimentally, that one can obtain quantum communication advantages using weakly entangled unsteerable states in a quantum secret sharing task. The advantage is rendered experimentally accessible by its achievability with passive linear optics, notably exploiting a Bell state analyser. Figure 3 summarises the ranges of noise \(v\) for which quantum communication advantages are achievable with two-qubit isotropic states and the relation to the nonlocal properties of the isotropic states. By focusing on easily realisable measurements, we showed that advantages can be obtained using partial Bell state measurements when \(v\geq 2/5\) and hence for some unsteerable isotropic states, and even using the simplest product measurements when \(v\geq 2/3\), a range that still includes some Bell-local states. It remains an open question whether advantages can be obtained in both these cases for smaller visibilities. In the former case, our bound is tight for the task at hand so an advantage would require a different task, whereas in the latter case it remains open whether \(v=2/3\) is tight for product measurements. Of particular relevance is to investigate whether product measurements also can generate advantages from highly noisy multi-qubit states, as this would pave the way for experiments that go beyond proof-of-principle demonstrations.
We finish by noting that, while we focused on the communication advantage itself, obtaining a sufficiently high success probability in the task can also be interpreted as a semi-device-independent certification of the entanglement in the shared resource state in the spirit of [11]. Moreover, if there is a strict separation between the critical visibilities for product
Figure 3: Nonlocal properties (red) and quantum communication advantages (blue) of the two-qubit isotropic state (1). The ranges for separability and unsteerability are tight, whereas the limit for Bell-local models is a lower bound [14]. The range for quantum communication advantages from general entangled measurements follows from dense coding-like protocols [11; 12] and coincides with the range in which the state is entangled. The ranges shown for quantum communication advantages with passive linear optics measurements and product measurements are established in this work and are (potentially sub-optimal) upper bounds on the critical visibilities.
and more general measurements in this task, a sufficiently high success probability in the task would also certify the successful implementation of an entangled measurement in a semi-device-independent manner.
###### Acknowledgements.
A.A.A. and A.T. thank Anthony Martin and Alek Lagarrigue for discussions on an early iteration of the task presented here. This work was supported by the Swedish research council, the Wenner-Gren Foundation and by the Knut and Alice Wallenberg Foundation through the Wallenberg Center for Quantum Technology (WACQT).
|
2310.01296
|
Hybrid light-matter states in topological superconductors coupled to
cavity photons
|
We consider a one-dimensional topological superconductor hosting Majorana
bound states at its ends coupled to a single mode cavity. In the strong
light-matter coupling regime, electronic and photonic degrees of freedom
hybridize resulting in the formation of polaritons. We find the polariton
spectrum by calculating the cavity photon spectral function of the coupled
electron-photon system. In the topological phase the lower in energy polariton
modes are formed by the bulk-Majorana transitions coupled to cavity photons and
are also sensitive to the Majorana parity. In the trivial phase the lower
polariton modes emerge due to the coupling of the bulk-bulk transitions across
the gap to photons. Our work demonstrates the formation of polaritons in
topological superconductors coupled to photons that contain information on the
features of the Majorana bound states.
|
Olesia Dmytruk, Marco Schirò
|
2023-10-02T15:52:57Z
|
http://arxiv.org/abs/2310.01296v2
|
# Hybrid light-matter states in topological superconductors coupled to cavity photons
###### Abstract
We consider a one-dimensional topological superconductor hosting Majorana bound states at its ends coupled to a single mode cavity. In the strong light-matter coupling regime, electronic and photonic degrees of freedom hybridize resulting in the formation of polaritons. We find the polariton spectrum by calculating the cavity photon spectral function of the coupled electron-photon system. In the topological phase the lower in energy polariton modes are formed by the bulk-Majorana transitions coupled to cavity photons and are also sensitive to the Majorana parity. In the trivial phase the lower polariton modes emerge due to the coupling of the bulk-bulk transitions across the gap to photons. Our work demonstrates the formation of polaritons in topological superconductors coupled to photons that contain information on the features of the Majorana bound states.
## I Introduction
Cavity embedding provides a promising avenue to probe and control quantum materials and devices. On the one hand there is the tantalizing possibility of controlling phase transitions and phase diagrams by coupling to a cavity mode, an idea which has received theoretical and experimental attention [1; 2]. Another source of cavity control can arise from the hybridization with finite-frequency modes, leading to new hybrid quasiparticles - polaritons [3], which can then be probed and controlled in novel ways. A wide range of polaritonic modes have been proposed and observed, classified depending on the type of charged particles in the matter component [4].
A particularly appealing scenario arises when the material has a non-trivial topological character, a feature which can then be enhanced or suppressed [5; 6; 7; 8; 9] or even generated by the coupling with a cavity and thus transmitted to the emergent polariton excitations [10]. Among topological phases of matter, topological superconductors hosting zero-energy Majorana bound states [11; 12; 13; 14] hold an especially interesting place for their potential for quantum computing [15]. The prototype system for topological superconductivity is the Kitaev chain model [11] describing a one-dimensional \(p\)-wave superconductor with Majorana bound states emerging at its opposite ends in the topological phase. Promising platforms for the Majorana bound states are superconductor-semiconductor nanowires [16; 17], graphene-like systems [18; 19], and chains of magnetic atoms [20; 21; 22]. Signatures of the Majorana bound states in the form of a zero-bias peak have been experimentally observed in superconductor-semiconductor nanowire platforms [23; 24; 25; 26; 27; 28; 29]. However, theoretical works have demonstrated that the zero-bias peak could arise due to non-Majorana mechanisms [30; 31; 32; 33; 34; 35; 36; 37; 38].
The idea of using cavities to probe and manipulate the Majorana bound states has been explored in different settings [39; 40; 41; 42; 43; 44; 45; 46; 47]. In these cases the cavity plays mainly the role of a non-invasive spectroscopic tool to probe the physics of these modes. A different scenario arises potentially in the strong or ultrastrong light-matter coupling regime where polariton modes are formed, which in the case of a topological superconductor could take the form of Majorana polaritons [48; 49].
In this work we study the hybrid light-matter states that emerge by coupling topological superconductors to a single mode cavity. We consider two models of topological superconductors hosting the Majorana bound states: a prototype Kitaev chain model [11] and a more realistic nanowire model [16; 17]. Hybridization between electronic and photonic states results in the formation of polaritons. We focus specifically on the signatures of these polaritonic modes which emerge in the cavity photon spectral function [7; 50; 51; 52; 53], which is directly measurable in a transmission/reflection experiment [40; 54]. We find that the polariton spectrum is sensitive to the Majorana parity in the topological phase. Moreover, the energies of the polariton modes are different in the trivial and topological phases that could be used to probe the emergence of zero modes in topological superconductor.
The paper is organized as follows. In Sec. II we introduce two tight-binding models for topological superconductors and derive how to couple them to a single mode cavity. Then, in Sec. III we calculate the polariton spectrum of the coupled electron-photon system. Finally, Sec. IV is devoted to conclusions.
## II Coupling topological superconductors to light
We start by discussing how to couple topological superconductors described by a tight-binding model to a single mode cavity. We consider two models for topological superconductors: (1) a prototype Kitaev chain [11] and (2) an experimentally relevant nanowire with spin-orbit interaction and proximity-induced superconductivity subject to magnetic field [16; 17]. Contrary to previously studied tight-binding models for non-superconducting systems [7; 52], the Kitaev chain (nanowire) models contain a \(p\)-wave (\(s\)-wave) superconducting pairing term that
pairs two neighboring sites (opposite spins) in the chain.
### Kitaev chain coupled to cavity
The Hamiltonian for the Kitaev chain reads [11],
\[H_{K} =-\mu\sum_{j=1}^{N}c_{j}^{\dagger}c_{j}-t\sum_{j=1}^{N-1}\left(c_{j} ^{\dagger}c_{j+1}+\text{h.c.}\right)\] \[+\Delta\sum_{j=1}^{N-1}\left(c_{j}c_{j+1}+\text{h.c.}\right), \tag{1}\]
where \(c_{j}^{\dagger}\) (\(c_{j}\)) are fermionic creation (annihilation) operators at site \(j\), \(N\) is the total number of sites in the chain, \(\mu\) is the chemical potential, \(t\) is the hopping amplitude, and \(\Delta\) is a \(p\)-wave superconducting pairing potential. The Kitaev chain is in the topological (trivial) phase if \(|\mu|<2t\) (\(|\mu|>2t\)); in the topological phase it hosts two Majorana bound states described by the operators \(\gamma_{L(R)}=\gamma_{L(R)}^{\dagger}\). These two Majorana operators form a full fermionic state \(c_{M}=(\gamma_{L}-i\gamma_{R})/2\) that gives rise to the Majorana occupation \(n_{M}=\langle c_{M}^{\dagger}c_{M}\rangle\), which determines its parity. The Majorana occupation \(n_{M}\) can be \(0\) or \(1\), corresponding to even (odd) parity.
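For readers who wish to reproduce the single-particle data used below, a minimal NumPy sketch of the Bogoliubov-de Gennes (BdG) matrix associated with Eq. (1) is given here; the construction and variable names are ours. The BdG eigenvalues come in \(\pm\epsilon_{n}\) pairs, and the smallest positive one plays the role of the Majorana energy \(\epsilon_{M}\), which is exponentially small in the topological phase and of the order of the gap in the trivial phase.

```python
import numpy as np

def kitaev_bdg(N, mu, t, delta):
    """BdG matrix of the Kitaev chain, Eq. (1), in the basis (c_1..c_N, c_1^dag..c_N^dag)."""
    h = -mu * np.eye(N)
    for j in range(N - 1):
        h[j, j + 1] = h[j + 1, j] = -t
    D = np.zeros((N, N))
    for j in range(N - 1):
        D[j + 1, j] = delta              # antisymmetric pairing matrix
        D[j, j + 1] = -delta
    return np.block([[h, D], [D.conj().T, -h.T]])

def lowest_excitations(N, mu, t, delta, k=3):
    e = np.linalg.eigvalsh(kitaev_bdg(N, mu, t, delta))
    return np.sort(e[e >= 0])[:k]        # smallest quasiparticle energies epsilon_n

N, t, delta = 100, 1.0, 1.0
print("topological, mu/t = -1.75:", lowest_excitations(N, -1.75 * t, t, delta))
print("trivial,     mu/t = -2.25:", lowest_excitations(N, -2.25 * t, t, delta))
```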
Next, we couple the Kitaev chain to a single mode cavity given by the Hamiltonian \(H_{ph}=\omega_{c}\left(a^{\dagger}a+1/2\right)\), where \(a^{\dagger}\) (\(a\)) is the photonic creation (annihilation) operator and \(\omega_{c}\) is the cavity frequency. The Kitaev chain Hamiltonian \(H_{K}\) is coupled to the electromagnetic field described by a homogeneous photonic vector potential \(\mathbf{A}=\mathbf{u}_{x}\left(g/e\right)\left(a+a^{\dagger}\right)\) via the Peierls substitution, which is equivalent to applying a unitary transformation \(U\) to the electronic Hamiltonian (1) only [7; 52], \(H_{K-ph}=H_{ph}+U^{\dagger}H_{K}U\), with
\[U=e^{i\frac{g}{\sqrt{N}}(a+a^{\dagger})\sum_{j}R_{j}c_{j}^{\dagger}c_{j}}. \tag{2}\]
Here, \(R_{j}=j-l_{0}\), where \(l_{0}=N/2\) for even \(N\). Using that
\[U^{\dagger}c_{m}U=e^{i\frac{g}{\sqrt{N}}(a+a^{\dagger})R_{m}}c_{m}, \tag{3}\]
we find that the superconducting pairing term acquires a site-dependent phase and the full light-matter Hamiltonian reads
\[H_{K-ph}=-\mu\sum_{j=1}^{N}c_{j}^{\dagger}c_{j}-\sum_{j=1}^{N-1}\left(te^{i\frac{g}{\sqrt{N}}(a+a^{\dagger})}c_{j}^{\dagger}c_{j+1}+\text{h.c.}\right)\] \[+\Delta\sum_{j=1}^{N-1}\left(e^{2i\frac{g}{\sqrt{N}}(R_{j}+1/2)(a+a^{\dagger})}c_{j}c_{j+1}+\text{h.c.}\right)+\omega_{c}\left(a^{\dagger}a+\frac{1}{2}\right). \tag{4}\]
Moreover, we note that coupling the superconducting pairing term to light is equivalent to dressing \(\Delta\) with a phase, \(\Delta\rightarrow\Delta e^{i\varphi}\)[55; 41; 56]. The phase \(\varphi\) could be found under the assumption that the \(p\)-wave pairing term in \(H_{K}\) is inherited from the bulk \(s\)-wave superconductor underneath the wire. In this case, we consider that the instantaneous supercurrent flowing through the bulk superconductor vanishes,
\[J_{s}=\frac{2e}{m}|\psi|^{2}(\nabla\varphi-2e\mathbf{A})\equiv 0\,. \tag{5}\]
Here, \(m\), \(|\psi|^{2}\), and \(\varphi\) are the electronic mass, the density of superconducting electrons in the \(s\)-wave superconductor, and its phase, respectively. The solution of the differential equation \(\nabla\varphi=2e\mathbf{A}\) gives us \(\varphi_{j}=2g(a+a^{\dagger})(j-l_{0}+1/2)/\sqrt{N}\). Here, \(\varphi_{j}\) is chosen such that \(\varphi_{1}=-\varphi_{N}\)[54]. We note that these two approaches result in the same light-matter Hamiltonian given by Eq. (4). Alternatively, light-matter coupling could be included in the problem by starting with a semiconducting nanowire tunnel coupled to a bulk \(s\)-wave superconductor and assuming that the tunneling hopping is dressed with the Peierls phase [40].
### Superconductor-semiconductor nanowire coupled to cavity
We now consider a more realistic model of a topological superconductor coupled to a photonic cavity. The tight-binding Hamiltonian composed of \(N\) sites that describes a semiconducting nanowire with Rashba spin-orbit interaction and proximity-induced superconductivity subject to magnetic field reads [57]
\[H_{nw} =\sum_{j,\sigma,\sigma^{\prime}}\left[c_{j+1,\sigma}^{\dagger} \left(-t\delta_{\sigma\sigma^{\prime}}+i\alpha\sigma_{\sigma\sigma^{\prime}}^ {y}\right)c_{j,\sigma^{\prime}}+\Delta c_{j,\uparrow}^{\dagger}c_{j,\downarrow}^ {\dagger}\right.\] \[+\left.\frac{1}{2}c_{j,\sigma}^{\dagger}\left[\left(2t-\mu \right)\delta_{\sigma\sigma^{\prime}}+V_{Z}\sigma_{\sigma\sigma^{\prime}}^{x} \right]c_{j,\sigma^{\prime}}+\text{h.c.}\right]\!, \tag{6}\]
where \(c_{j,\sigma}^{\dagger}(c_{j,\sigma})\) is the creation (annihilation) operator acting on electrons with spin \(\sigma\) located at site \(j\), \(\sigma_{x(y)}\) is the \(x\) (\(y\)) Pauli matrix acting in the spin space, and \(t=\hbar^{2}/\left(2m^{*}a_{l}^{2}\right)\) is the hopping amplitude, with \(m^{*}\) the effective mass and \(a_{l}\) the lattice constant. Here, \(\alpha\) is the spin-orbit coupling, \(\Delta\) is the proximity-induced superconducting pairing potential, \(\mu\) is the chemical potential, and \(V_{Z}=g^{*}\mu_{B}B/2\) is the Zeeman energy, with \(g^{*}\) the \(g\)-factor of the nanowire and \(\mu_{B}\) the Bohr magneton. The nanowire hosts Majorana bound states emerging at the opposite ends of the one-dimensional system if \(V_{Z}>\sqrt{\Delta^{2}+\mu^{2}}\)[16; 17].
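A completely analogous BdG construction applies to the nanowire model of Eq. (6). The sketch below (ours, with illustrative parameter values chosen to match those used later in Fig. 4) contrasts the lowest quasiparticle energy in the topological and trivial regimes set by the criterion \(V_{Z}>\sqrt{\Delta^{2}+\mu^{2}}\).

```python
import numpy as np

s0 = np.eye(2); sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])

def nanowire_bdg(N, mu, t, alpha, delta, Vz):
    """BdG matrix of the nanowire model, Eq. (6); basis (c_{j,up}, c_{j,dn}), then daggers."""
    h = np.zeros((2 * N, 2 * N), dtype=complex)
    D = np.zeros((2 * N, 2 * N), dtype=complex)
    hop = -t * s0 + 1j * alpha * sy                 # c_{j+1}^dag (...) c_j block
    onsite = (2 * t - mu) * s0 + Vz * sx
    for j in range(N):
        h[2*j:2*j+2, 2*j:2*j+2] = onsite
        D[2*j, 2*j+1], D[2*j+1, 2*j] = delta, -delta  # Delta c_up^dag c_dn^dag pairing
    for j in range(N - 1):
        h[2*(j+1):2*(j+1)+2, 2*j:2*j+2] = hop
        h[2*j:2*j+2, 2*(j+1):2*(j+1)+2] = hop.conj().T
    return np.block([[h, D], [D.conj().T, -h.T]])

def lowest(N, mu, t, alpha, delta, Vz):
    e = np.linalg.eigvalsh(nanowire_bdg(N, mu, t, alpha, delta, Vz))
    return np.min(np.abs(e))                        # energy of the lowest quasiparticle

N, t, delta, alpha, mu = 100, 1.0, 0.1, 0.4, 0.0
print("V_Z/Delta = 1.8 (topological):", lowest(N, mu, t, alpha, delta, 1.8 * delta))
print("V_Z/Delta = 0.2 (trivial):    ", lowest(N, mu, t, alpha, delta, 0.2 * delta))
```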
Similarly to the Kitaev chain, the light-matter Hamiltonian for the nanowire coupled to a single mode cavity could be obtained by performing the unitary transformation \(H_{nw-ph}=H_{ph}+U^{\dagger}H_{nw}U\), with
\[U=e^{i\frac{g}{\sqrt{N}}(a+a^{\dagger})\sum_{j\sigma}\chi_{j}c_{j\sigma}^{\dagger}c_{j\sigma}}. \tag{7}\]
Here, \(\chi_{j}=j-j_{0}\) is chosen such that \(\chi_{1}=-\chi_{N}\)[54], with \(j_{0}=\left(N+1\right)/2\) for even \(N\). Using that
\[U^{\dagger}c_{m\sigma^{\prime}}U=e^{i\frac{g}{\sqrt{N}}(a+a^{\dagger})\chi_{m}}c_{m\sigma^{\prime}}, \tag{8}\]
we find that the total light-matter Hamiltonian becomes
\[H_{nw-ph}=\sum_{j,\sigma,\sigma^{\prime}}\Big{[}c_{j+1,\sigma}^{\dagger}\Big{(}-te^{-i\frac{g}{\sqrt{N}}(a+a^{\dagger})}\delta_{\sigma\sigma^{\prime}}\] \[+i\alpha e^{-i\frac{g}{\sqrt{N}}(a+a^{\dagger})}\sigma_{\sigma\sigma^{\prime}}^{y}\Big{)}c_{j,\sigma^{\prime}}+\Delta e^{-2i\frac{g}{\sqrt{N}}\chi_{j}(a+a^{\dagger})}c_{j,\uparrow}^{\dagger}c_{j,\downarrow}^{\dagger}\] \[+\frac{1}{2}c_{j,\sigma}^{\dagger}\left[\left(2t-\mu\right)\delta_{\sigma\sigma^{\prime}}+V_{Z}\sigma_{\sigma\sigma^{\prime}}^{x}\right]c_{j,\sigma^{\prime}}+\text{h.c.}\Big{]}\] \[+\omega_{c}\left(a^{\dagger}a+\frac{1}{2}\right). \tag{9}\]
In the next section we will discuss the cavity photon spectral function for the two models in Eqs. (4) and (9) and highlight the emergence of polariton excitations and their topological signatures.
## III Polariton spectrum
In the strong light-matter coupling regime, the electronic and photonic states hybridize giving rise to the formation of new hybrid quasiparticles - polaritons. The polariton spectrum can be obtained by computing the cavity photon spectral function
\[A(\omega)=-\frac{1}{\pi}\text{Im}\int dte^{-i\omega t}\left(-i \theta(t)\right)\langle\left[a(t),a^{\dagger}\right]\rangle\,. \tag{10}\]
To compute this quantity we follow Refs. [50; 51; 52; 7], write down the action for the electron-photon problem which we evaluate at the saddle point plus Gaussian fluctuations in the cavity field. Due to gauge-invariance the photon remains incoherent in the presence of a uniform vector potential [58; 59; 60; 52; 56; 57]. The light-matter coupling however gives rise to a self-energy correction for the cavity mode arising from current-current fluctuations of the electronic system. As a result the cavity spectral function takes the form [52; 7]
\[A(\omega)=-\frac{1}{\pi}\frac{\chi^{\prime\prime}(\omega)(\omega+ \omega_{c})^{2}}{(\omega^{2}-\omega_{c}^{2}-2\omega_{c}\chi^{\prime}(\omega))^ {2}+(2\omega_{c}\chi^{\prime\prime}(\omega))^{2}}, \tag{11}\]
where \(\chi(\omega)=K(\omega)-\langle J_{d}\rangle\) is the current-current correlation function, with
\[K(t-t^{\prime})=-i\theta(t-t^{\prime})\langle[J_{p}(t),J_{p}(t^{ \prime})]\rangle. \tag{12}\]
Here, \(J_{p}\) (\(J_{d}\)) are paramagnetic (diamagnetic) current operators that could be defined from the second-order expansion in \(g\)[52; 7]
\[H_{K(nw)-ph}\approx\omega_{c}a^{\dagger}a+H_{K(nw)}+\left(a+a^{ \dagger}\right)J_{p}-\frac{\left(a+a^{\dagger}\right)^{2}}{2}J_{d} \tag{13}\]
and \(\theta(t-t^{\prime})\) is the Heaviside step function.
The polariton spectrum is approximately given by the solutions of the equation [60; 52]
\[\omega^{2}\approx\omega_{c}^{2}+2\omega_{c}\chi^{\prime}(\omega). \tag{14}\]
For \(g=0\) the topological superconductor and cavity photons are fully decoupled and there is a single solution of Eq. (14) given by \(\omega=\omega_{c}\). For finite light-matter coupling \(g\neq 0\) electrons and photons are coupled resulting in multiple solutions that depend both on cavity frequency \(\omega_{c}\) and parameters of the electronic system through the real part of the current-current correlation function \(\chi^{\prime}(\omega)\). Therefore, the resulting polariton energies are sensitive to the properties of the topological superconductor.
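Numerically, once \(\chi(\omega)\) is known on a frequency grid, \(A(\omega)\) of Eq. (11) and the approximate polariton energies of Eq. (14) follow directly. The sketch below is purely illustrative: a toy single-resonance correlator stands in for the full \(\chi(\omega)\) of the chain, and the roots of Eq. (14) are located by sign changes on the grid (the middle root sits at the strongly damped bare electronic resonance).

```python
import numpy as np

def cavity_spectral_function(omega, chi, omega_c):
    """A(omega) of Eq. (11) from the current-current correlator chi(omega)."""
    num = -chi.imag * (omega + omega_c) ** 2 / np.pi
    den = (omega**2 - omega_c**2 - 2 * omega_c * chi.real) ** 2 \
          + (2 * omega_c * chi.imag) ** 2
    return num / den

def polariton_frequencies(omega, chi_real, omega_c):
    """Approximate polariton energies: roots of omega^2 - omega_c^2 - 2 omega_c chi'(omega)."""
    f = omega**2 - omega_c**2 - 2 * omega_c * chi_real
    idx = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    # linear interpolation of each root between neighbouring grid points
    return omega[idx] - f[idx] * (omega[idx + 1] - omega[idx]) / (f[idx + 1] - f[idx])

# toy correlator with a single electronic resonance at omega_0 (illustrative only)
omega = np.linspace(0.01, 3.0, 30000)
omega_0, weight, eta, omega_c = 1.0, 0.02, 1e-3, 1.0
chi = weight * (1.0 / (omega - omega_0 + 1j * eta) - 1.0 / (omega + omega_0 + 1j * eta))

A = cavity_spectral_function(omega, chi, omega_c)
print("polariton peaks:", polariton_frequencies(omega, chi.real, omega_c))
print("max of A(omega) at:", omega[np.argmax(A)])
```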
We start by deriving the general expression for the current-current correlation function \(\chi(\omega)\). Coupling between the topological superconductor and cavity photons induces transitions between the Majorana and bulk states in the chain [40; 46; 47]. These Majorana-bulk transitions could be directly seen as peaks in the imaginary part of the correlation function \(K(\omega)\), Eq. (12). To evaluate \(K(\omega)\), we rewrite the fermionic operators \(c_{j}\) (\(c_{j}^{\dagger}\)) in terms of the annihilation (creation) operators \(\tilde{c}_{n}\) (\(\tilde{c}_{n}^{\dagger}\)) for the Bogoliubov quasiparticles [40; 46]
\[c_{j}=\sum_{n}\left(u_{j,n}\tilde{c}_{n}+v_{j,n}\tilde{c}_{n}^{ \dagger}\right), \tag{15}\]
so that the electronic Hamiltonian (1) (6) becomes diagonal \(\tilde{H}_{el}=\sum_{n}\epsilon_{n}\left(\tilde{c}_{n}^{\dagger}\tilde{c}_{n}-1/2\right)\). Here, \(u_{j,n}\) (\(v_{j,n}\)) are the electron (hole) components of the eigenvectors and \(\epsilon_{n}\) are the corresponding eigenvalues of the electronic Hamiltonian, with \(n=1...N\) for the Kitaev chain Hamiltonian (1) and \(n=1...2N\) for the superconductor-semiconductor nanowire Hamiltonian (6). To calculate the expectation value of the diamagnetic current operator \(\langle J_{d}\rangle\) over a bare electronic Hamiltonian (1) (6) we rewrite \(J_{d}\) in terms of \(\tilde{c}_{n}\) (\(\tilde{c}_{n}^{\dagger}\)) operators and use that \(\langle\tilde{c}_{n}^{\dagger}\tilde{c}_{m}\rangle=f(\epsilon_{n})\delta_{n,m}\), with \(f(\epsilon_{m})\) being the Fermi distribution function. Assuming zero temperature, \(f(\epsilon_{m})\) reduces to the occupation number \(n_{m}\) that can take values \(0\) or \(1\) for empty or occupied state. Under this assumption, we arrive at the following expression
\[\langle J_{d}\rangle=\sum_{m}j_{m}^{d}n_{m}, \tag{16}\]
where \(j_{m}^{d}\) is the diagonal matrix element for the diamagnetic current operator between eigenstates corresponding to the eigenvalues \(\epsilon_{m}\). Defining the Fourier transformation as \(K(\omega)=\int dt\,e^{i\omega t}K(t)\) and using that \(\tilde{c}_{m}(t)=\tilde{c}_{m}(0)e^{-i\epsilon_{m}t}\), we find the general expression for the paramagnetic current correlation function at zero temperature
\[K(\omega)=\sum_{l,m}|j_{l,m}^{p}|^{2}\frac{n_{l}-n_{m}}{\omega+ \epsilon_{l}-\epsilon_{m}+i\eta}. \tag{17}\]
Here, \(j^{p}_{l,m}\) are the matrix elements of the paramagnetic current and \(\eta>0\) is the linewidth of the energy levels. At zero temperature only bulk states with negative energies are occupied, while \(n_{l}\equiv n_{M}=0,1\) for the Majorana states. We note that \(K(\omega)=0\) for \(l=m\) making it fully off-diagonal in contrast to \(\langle J_{d}\rangle\). When the system is in the topological phase the paramagnetic current correlation function given by Eq. (17) can be rewritten as a sum of three contributions \(K(\omega)=K_{BB}(\omega)+K_{BM}(\omega)+K_{MM}(\omega)\), corresponding respectively to transitions between bulk states only (\(K_{BB}\)), between Majorana and bulk states (\(K_{BM}\)) and between Majorana states only (\(K_{MM}\)). We note that \(K_{MM}(\omega)=0\) since Majorana parity remains conserved in the presence of coupling to photons [40]. The bulk only contribution in the topological phase (or the total paramagnetic current correlation function in the trivial phase) could be further simplified to
\[K_{BB}(\omega)=\sum_{\epsilon_{l},\epsilon_{m}>0}\left(\frac{1}{\omega-\omega_{b}+i\eta}-\frac{1}{\omega+\omega_{b}+i\eta}\right)\] \[\times|j^{p}_{l,-m}|^{2}, \tag{18}\]
where \(\omega_{b}=\epsilon_{l}+\epsilon_{m}\) is the transition frequency between the bulk states \(l\) and \(m\), and \(j^{p}_{l,-m}\) is the matrix element between the bulk states with energies \(\epsilon_{l}\) and \(-\epsilon_{m}\). The peaks in the imaginary part of the bulk contribution appear at transition frequencies \(\omega_{b}>2\Delta_{g}\), where \(\Delta_{g}\) is the effective gap in the electronic energy spectrum.
Furthermore, the bulk-Majorana transitions are included in the \(K_{BM}(\omega)\) term given by
\[K_{BM}(\omega)=\sum_{\epsilon_{l}>0}\left(\frac{1}{\omega-\omega_{e(o)}+i\eta}-\frac{1}{\omega+\omega_{e(o)}+i\eta}\right)\] \[\times\left[|j^{p}_{l,o}|^{2}(n_{M}-n_{l})+|j^{p}_{l,e}|^{2}(1-n_{l}-n_{M})\right], \tag{19}\]
where \(\omega_{e(o)}=\epsilon_{l}\pm\epsilon_{M}\) is the transition frequency between bulk state with occupation number \(n_{l}=0\) and Majorana state with occupation number \(n_{M}=0(1)\) corresponding to even (odd) parity, and \(j^{p}_{l,e(o)}\) is the matrix element between bulk state \(l\) and even \(e\) (odd \(o\)) parity Majorana state. The imaginary part of the paramagnetic current correlation function \(K^{\prime\prime}_{BM}(\omega)\) calculated for even parity with \(n_{M}=0\) has multiple peaks at frequency \(\omega_{e}>\Delta_{g}\) with the amplitude given by \(|j^{p}_{l,e}|^{2}\), while for \(n_{M}=1\) the peaks are at \(\omega_{o}\) with the amplitude given by \(|j^{p}_{l,o}|^{2}\). Moreover, even in the absence of the overlap between two Majorana bound states \(\epsilon_{M}\approx 0\) the correlation function \(K_{BM}(\omega)\) distinguishes between different Majorana parities through the matrix elements \(j^{p}_{l,e(o)}\)[46].
In the topological phase the cavity spectral function \(A(\omega)\) given by Eq. (11) depends on the Majorana parity through the different matrix elements entering in the current-current correlation function \(\chi(\omega)\) and, therefore, the polariton spectrum could be used to probe Majorana properties. Comparing Eqs. (19) and (18) we note that the lowest-energy peaks in the topological and trivial phases appear at frequencies \(\omega_{e(o)}\approx\Delta_{g}\) and \(\omega_{b}\approx 2\Delta_{g}\), respectively, suggesting that the cavity spectral function could also be used to differentiate between two phases.
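For completeness, the evaluation of Eqs. (16) and (17) from quasiparticle data is sketched below; the three-level input (one near-zero Majorana-like level and two bulk levels with invented matrix elements) is purely illustrative and not taken from the chain models.

```python
import numpy as np

def chi_correlator(eps, occ, jp, jd_diag, omega, eta=4e-3):
    """chi(omega) = K(omega) - <J_d> from quasiparticle data, Eqs. (16)-(17).

    eps: quasiparticle energies epsilon_n; occ: occupations n_n (0 or 1);
    jp[l, m]: paramagnetic matrix elements j^p_{l,m} (the l = m terms do not contribute);
    jd_diag[m]: diagonal diamagnetic matrix elements j^d_m.
    """
    Jd = np.sum(jd_diag * occ)                                    # Eq. (16)
    K = np.zeros_like(omega, dtype=complex)
    M = len(eps)
    for l in range(M):
        for m in range(M):
            if l == m:
                continue
            K += np.abs(jp[l, m]) ** 2 * (occ[l] - occ[m]) \
                 / (omega + eps[l] - eps[m] + 1j * eta)           # Eq. (17)
    return K - Jd

# toy three-level example: a Majorana-like level near zero and two bulk levels
eps = np.array([1e-6, 0.8, -0.8])
occ = np.array([0.0, 0.0, 1.0])          # even Majorana parity, negative bulk level filled
jp = np.array([[0.0, 0.1, 0.2], [0.1, 0.0, 0.05], [0.2, 0.05, 0.0]])
jd = np.array([0.0, -0.05, -0.05])
omega = np.linspace(0.0, 2.0, 4000)
chi = chi_correlator(eps, occ, jp, jd, omega)
print(omega[np.argmin(chi.imag)])        # strongest peak near the bulk-Majorana transition ~0.8
```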
### Polaritons in Kitaev chain coupled to photons
We start by discussing the cavity spectral function for the Kitaev chain, Eq. (4). In this case the paramagnetic and diamagnetic current operators could be found from Eq. (4):
\[J_{p}=i\frac{g}{\sqrt{N}}\sum_{j}\Big{[}-tc^{\dagger}_{j}c_{j+1}+2\Delta(R_{ j}+1/2)c_{j}c_{j+1}-\text{h.c.}\Big{]} \tag{20}\]
Figure 1: (a) Energy spectrum of the Kitaev chain (1) as a function of the chemical potential \(\mu/t\) (red solid lines). Vertical dashed lines indicate the transition frequencies \(\omega_{b}\) and \(\omega_{e(o)}\). For the chosen set of parameters Majorana energy \(\epsilon_{M}=7.44\times 10^{-7}\) and \(\omega_{e}\approx\omega_{o}\). (b) Real part of the current-current correlation function \(\chi^{\prime}(\omega)\) as function of frequency \(\omega/t\). Red solid (black dashed) lines correspond to even (odd) Majorana parity. The transition frequencies \(\omega_{e}\) are shown in gray vertical dotted lines. (c) Imaginary part of the current-current correlation function \(\chi^{\prime\prime}(\omega)\) as function of frequency \(\omega/t\) for even (odd) Majorana parity shown in red solid (black dashed) lines. Gray vertical dotted lines correspond to the transition frequencies and indicate the position of the peaks. Parameters are chosen as \(N=100\), \(\Delta/t=1\), \(\mu/t=-1.75\) (except in panel (a)), \(\eta=4\times 10^{-3}\).
and
\[J_{d}=\frac{g^{2}}{N}\sum_{j}\Big{[}-tc_{j}^{\dagger}c_{j+1}+4\Delta(R_{j}+1/2)^{ 2}c_{j}c_{j+1}+\text{h.c.}\Big{]}, \tag{21}\]
where we see that in addition to the usual contribution from single particle hopping there is also a term coming from the superconducting pairing. We emphasize that this current is not associated with a conserved charge in the Kitaev model, which only enjoys a discrete \(Z_{2}\) parity symmetry. However, it is the natural object entering the response of the system to the cavity vector potential, see Eq. (13).
To find the cavity spectral function we first calculate the current-current correlation function using Eqs. (16) and (17). In Fig. 1 (b) we plot the real part of the correlation function \(\chi^{\prime}(\omega)\) as a function of frequency \(\omega\). Vertical dotted lines indicate the bulk-Majorana transition frequencies \(\omega_{e(o)}\). For the Kitaev chain in the topological phase \(\epsilon_{M}\approx 0\) and therefore \(\omega_{e}\approx\omega_{o}\). We find that \(\chi^{\prime}(\omega)\) has different oscillation amplitudes for even and odd Majorana parities stemming from the difference in the matrix elements \(j_{l,e(o)}^{p}\). Next, we numerically evaluate the imaginary part of the correlation function \(\chi^{\prime\prime}(\omega)\) (see Fig. 1 (c)). The function \(\chi^{\prime\prime}(\omega)\) has multiple peaks at resonant frequencies \(\omega_{e(o)}\) that differ for two parities, similarly to the features present in \(\chi^{\prime}(\omega)\). Therefore, the current-current correlation function \(\chi(\omega)\) is a good marker to distinguish between two Majorana parities in the topological phase.
Given the above results for the current-current correlator we can now focus on the cavity photon spectral function (11). We plot \(A(\omega)\) as a function of frequency in Fig. 2 (a) at a fixed light-matter coupling \(g\) for different parities in the topological phase. The current-current correlation function is calculated for a finite-length Kitaev chain and has many resonances (see Fig. 1 (b)); therefore, Eq. (14) has multiple solutions for polariton energies corresponding to peaks in \(A(\omega)\). Moreover, the polariton spectrum in the topological phase depends on the Majorana parity through \(\chi^{\prime}(\omega)\). The cavity spectral function has different patterns for the two parities and can distinguish between the parities. We further compute the cavity spectral function in the trivial phase [see Fig. 2 (b)] for the same light-matter coupling strength \(g\) and the effective gap \(\Delta_{g}\). We find that \(A(\omega)\) has a sharp peak around the cavity frequency \(\omega_{c}\) as in the topologically nontrivial phase. However, we note that contrary to the topological case small peaks emerge at frequencies larger than \(2\Delta_{g}\) corresponding to bulk-bulk transitions across the gap in the system.
In Fig. 3 (a) we plot the cavity spectral function for the Kitaev chain in the topological phase as a function of frequency and light-matter coupling. We consider a cavity frequency in resonance with the first bulk-Majorana transition for the even parity (\(\omega_{c}=\omega_{e}\)). We see that for low frequency there is a broad peak which shifts towards lower frequencies upon increasing \(g\). At higher frequencies on the other hand we recognize sharp features associated with transitions between Majorana and bulk states. Next, we calculate \(A(\omega)\) for the Kitaev chain in the trivial phase [see Fig. 3 (b)]. As discussed for the topological phase there is a broad peak that originates at \(\omega=\omega_{c}\) for \(g=0\) and further broadens as the light-matter coupling strength is increased. However, in the trivial phase the current-current correlation function \(\chi(\omega)\) that enters Eq. (11) has resonances only at frequencies \(\omega_{b}>2\Delta_{g}\). Therefore, other polariton modes appear only at \(\omega>2\Delta_{g}\). Comparing the cavity spectral function calculated in the topological and trivial phases we note the distinct features between the two, namely that the sharp features of the transitions between Majorana (bulk) - bulk states appear at different energy scales of \(\Delta_{g}\) (\(2\Delta_{g}\)). Therefore, the polariton spectrum could be potentially used as a way to probe zero-energy states in topological superconductors.
Figure 2: Cavity spectral function \(A(\omega)\) as function of frequency \(\omega/t\) for \(g=0.1\). (a) In the topological phase (\(\mu/t=-1.75\)) red solid (black dashed) line corresponds to even (odd) Majorana parity. Vertical gray dotted line indicates the cavity frequency \(\omega_{c}\) fixed to be in resonance with the first bulk-Majorana transition. (b) In the trivial phase (\(\mu/t=-2.25\)) there is a large peak emerging at \(\omega_{c}\) and smaller peaks appearing at \(\omega>2\Delta_{g}\). Vertical gray dotted line corresponds to \(\omega_{c}\) and pink dotdashed line indicates the first bulk-bulk transition. Other parameters are the same as in Fig. 1.
### Polaritons in nanowire coupled to photons
We now move to the superconductor-semiconductor nanowire model, Eq. (9), for which the paramagnetic and diamagnetic current operators read respectively
\[J_{p}=i\frac{g}{\sqrt{N}}\sum_{j}\left[t\left(c^{\dagger}_{j+1 \uparrow}c_{j\uparrow}+c^{\dagger}_{j+1\downarrow}c_{j\downarrow}\right)\right.\] \[+\alpha\left(c^{\dagger}_{j+1\uparrow}c_{j\downarrow}-c^{ \dagger}_{j+1\downarrow}c_{j\uparrow}\right)-2\Delta\chi_{j}c^{\dagger}_{j \uparrow}c^{\dagger}_{j\downarrow}-\mathrm{h.c.}\Big{]} \tag{22}\]
and
\[J_{d}=\frac{g^{2}}{N}\sum_{j}\Big{[}-t\left(c^{\dagger}_{j+1 \uparrow}c_{j\uparrow}+c^{\dagger}_{j+1\downarrow}c_{j\downarrow}\right)\] \[+\alpha\left(c^{\dagger}_{j+1\uparrow}c_{j\downarrow}-c^{ \dagger}_{j+1\downarrow}c_{j\uparrow}\right)+4\Delta\chi^{2}_{j}c^{\dagger}_ {j\uparrow}c^{\dagger}_{j\downarrow}+\mathrm{h.c.}\Big{]}. \tag{23}\]
To find the cavity spectral function of the nanowire model, we proceed in the same way as for the Kitaev chain. The real and imaginary parts of \(\chi(\omega)\) have a similar
Figure 3: Spectral function \(A(\omega)\) as a function of \(g\) and \(\omega/t\). Horizontal black dashed line corresponds to frequency \(\omega_{c}\) chosen to be equal to the first bulk-Majorana transition frequency and horizontal pink dotdashed line corresponds to \(\omega_{b}\) in the trivial phase. (a) In the topological phase with \(\mu/t=-1.75\) the lowest polariton branch originating at \(\omega=\omega_{c}\) for \(g=0\) goes down as \(g\) is increased. White horizontal lines correspond to bulk-Majorana transitions coupled with photons and appear at frequencies \(\omega>\Delta_{g}\). (b) In the trivial phase with \(\mu/t=-2.25\), the lowest polariton branch appears at \(\omega=\omega_{c}\). In contrast to the topological phase, white horizontal lines correspond to bulk-bulk transitions and appear at \(\omega>2\Delta_{g}\). In the two phases the white horizontal lines, corresponding to bulk-Majorana (a) and bulk-bulk (b) transitions, emerge at different frequencies, signalling the presence of zero-energy states in the topological phase. Other parameters are the same as in Fig. 1.
Figure 4: Cavity spectral function \(A(\omega)\) of the nanowire as function of frequency \(\omega/t\) for light-matter coupling strength \(g=0.05\). (a) Red solid (black dashed) lines correspond to \(n_{M}=0\) (\(n_{M}=1\)) in the topological phase with \(V_{Z}/\Delta=1.8\). Gray vertical dotted line indicates the cavity frequency \(\omega_{c}\) resonant with the first bulk-Majorana transition at \(\omega_{e}\approx\omega_{o}\) (\(\epsilon_{M}/t=10^{-6}\)). (b) \(A(\omega)\) for the nanowire in the trivial phase with \(V_{Z}/\Delta=0.2\). Vertical gray dotted line indicates \(\omega_{c}\) and pink dotdashed line signals the position of the first bulk-bulk transition frequency \(\omega_{b}\). Other parameters are fixed as \(N=100\), \(\Delta/t=0.1\), \(\mu=0\), \(V_{Z}/\Delta=1.8\), \(\alpha/t=0.4\), and \(\eta/t=10^{-3}\).
structure to that in Fig. 1, but the position and amplitude of the peaks are less homogeneous due to the more involved energy spectrum of the nanowire.
In Fig. 4 we plot the cavity spectral function for the nanowire model as a function of frequency \(\omega\) for a fixed value of the light-matter coupling \(g\). In the topological phase we consider the two parities, depicted by the solid red and dashed black lines. The cavity spectral function has a large peak around the cavity frequency \(\omega_{c}\), resonant with the lowest bulk-Majorana transition frequency \(\omega_{e}\approx\omega_{o}\), and multiple smaller peaks corresponding to higher-energy bulk-Majorana transitions appearing at \(\omega>\Delta_{g}\) [see Fig. 4 (a)]. Considering the superconductor-semiconductor nanowire in the trivial phase coupled to the photonic cavity, we find that the cavity spectral function has a sharp peak originating at the frequency \(\omega_{c}\) and multiple smaller peaks at frequencies \(\omega>2\Delta_{g}\) that stem from the bulk-bulk transitions in the nanowire [see Fig. 4 (b)]. Similar features were found for the Kitaev chain [see Fig. 2 (b)] and allow one to probe the presence of zero-energy modes in the topological superconductor.
Finally, we present \(A(\omega)\) for the nanowire in the topological phase as a function of frequency and light-matter coupling strength in Fig. 5. By choosing the cavity frequency to be equal to the first bulk-Majorana transition frequency, we find the appearance of a broad low-frequency polariton mode that shifts down in \(\omega\) with increasing \(g\). Higher-frequency polariton modes appear due to the coupling between higher bulk-Majorana transitions and photons, showing a dense pattern of modes. Similar behaviour was found for the Kitaev chain (see Fig. 3).
## IV Conclusions
In this work, we studied a topological superconductor coupled to cavity photons. We calculated the cavity spectral function of the electron-photon system, which revealed the polariton spectrum of the hybrid system. The peaks in the cavity spectral function appear at different energy scales for the electronic chain in the trivial and topological phases. Moreover, in the topological phase, which is associated with the presence of the Majorana bound states, the polariton spectrum has a different pattern for the two Majorana parities. Therefore, the cavity spectral function could be used to probe topological properties of the electronic chain.
###### Acknowledgements.
O.D. acknowledges helpful discussions with Jelena Klinovaja, Daniel Loss, Pascal Simon and Mircea Trif. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 892800. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 101002955 -- CONQUER).
|
2304.04271
|
Embarrassingly Simple MixUp for Time-series
|
Labeling time series data is an expensive task because of domain expertise
and dynamic nature of the data. Hence, we often have to deal with limited
labeled data settings. Data augmentation techniques have been successfully
deployed in domains like computer vision to exploit the use of existing labeled
data. We adapt one of the most commonly used technique called MixUp, in the
time series domain. Our proposed, MixUp++ and LatentMixUp++, use simple
modifications to perform interpolation in raw time series and classification
model's latent space, respectively. We also extend these methods with
semi-supervised learning to exploit unlabeled data. We observe significant
improvements of 1\% - 15\% on time series classification on two public
datasets, for both low labeled data as well as high labeled data regimes, with
LatentMixUp++.
|
Karan Aggarwal, Jaideep Srivastava
|
2023-04-09T16:34:06Z
|
http://arxiv.org/abs/2304.04271v1
|
# Embarrassingly Simple MixUp for Time-series
###### Abstract
Labeling time series data is an expensive task because of domain expertise and dynamic nature of the data. Hence, we often have to deal with limited labeled data settings. Data augmentation techniques have been successfully deployed in domains like computer vision to exploit the use of existing labeled data. We adapt one of the most commonly used technique called MixUp, in the time series domain. Our proposed, MixUp++ and LatentMixUp++, use simple modifications to perform interpolation in raw time series and classification model's latent space, respectively. We also extend these methods with semi-supervised learning to exploit unlabeled data. We observe significant improvements of 1% - 15% on time series classification on two public datasets, for both low labeled data as well as high labeled data regimes, with LatentMixUp++.
## 1 Introduction
Time series data is one of the most commonly found data in nature across various domains like healthcare, finance, astronomy, and meteorology. Labeling time series data is quite an intensive process owing to the temporal and dynamic nature of data, unlike other domains like images or text.
Labeling time series data usually requires domain expertise and considerable time. This is a particularly acute problem in domains like healthcare, where labeling has to be quite precise owing to the high stakes. Hence, for many applications, particularly in healthcare, only limited labeled data is available.
Recently, machine learning models, in particular deep learning models, have become prominent in time series analysis; such models require large datasets to perform well. Also, training on small datasets suffers from issues like over-fitting [13]. These issues can significantly impact the performance as well as the generalizability of time series models.
Data Augmentation is a technique to augment a training dataset with artificially created examples. It has been used extensively in computer vision to expand the size of training image datasets [9; 14]. It involves either using data transformations or using generative models for creating artificial training samples. Commonly used data transformations on images are rotating the image, flipping, or modulating colors.
These data transformations are quite simple but specific to the image domain, as analogous transformations can completely change the meaning of a time series. This is compounded by the huge diversity of time series domains, with very few generic data transformations available. There have been limited attempts in the time series domain to come up with such augmentations, mostly for wearables data [18; 15]. These works found permutation- or rotation-based transformations to be most effective.
Deep generative models like Generative Adversarial Networks [5] are used to generate images from a noise distribution or conditioned on a desired style [7] or labels [11]. These have recently been used for time series data augmentation as well [3; 12]. However, these methods are quite challenging to train and to use for generating realistic samples.
MixUp [20] was proposed in computer vision as a training method that interpolates two real data points to generate a synthetic sample. The model is trained on the interpolated data, and not on the original training data. However, it is specific to the image domain: two images can be superimposed over each other, but a linear interpolation of two time series can represent something quite different. It has been adapted to other domains like natural language processing, where sentences are interpolated in latent space for text classification [6].
In this work, we adapt MixUp with simple extensions to time series classification problems; we train the model with the interpolated data as well as the original data, in a true data augmentation fashion. Next, we also perform multiple MixUp steps for each batch of original data. We perform the same operation in latent space, which we call LatentMixUp. Our proposed methods, MixUp++ and LatentMixUp++, outperform the baselines: both non-augmentation based and permutation-based time series augmentation. The proposed methods, especially LatentMixUp++, work particularly well in the low labeled data regime. We further extend MixUp to semi-supervised settings with pseudo-labeling based MixUp. These methods do not need any additional effort or hyper-parameter tuning apart from classifier tuning, while improving performance considerably.
In summary, we make following contributions:
* We propose simple but effective extensions to MixUp, MixUp++ and LatentMixUp++ for time series classification;
* We propose Pseudo-labeling based MixUp for semi-supervised learning to leverage unlabeled data; and
* Our results on two datasets, human activity recognition and sleep staging, show that our methods outperform baselines considerably in both the low labeled data regime and the high labeled data setting.
We organize the rest of this work as follows. Section 2 discusses relevant literature in data augmentation. Section 3 discusses background concepts like MixUp and our proposed methods. Section 4 presents details on our experimental settings. We present our results in Section 5. Finally, we summarize our conclusions in Section 6.
## 2 Related Work
In this section, we give an overview of data augmentation literature and data augmentation in time series.
### Data Augmentation in Machine Learning
Data augmentation is a technique of augmenting existing data with synthetically generated data. This synthetic generation of data can be done through: 1) simple data transformations or, 2) generative machine learning methods.
Simple Data Transformation Augmentations. These simple data transformations encode human knowledge about the data domain. For example, in computer vision tasks like classification, we know that a transformation like rotating an image or adjusting its colors would not change its class. These geometric image transformations like flipping, rotating, cropping, or noise injection have been extensively used in the computer vision domain to increase data set sizes [14]. Even the seminal ImageNet paper [9] used these transformations to increase the training dataset size. Taylor et al. [17] performed an empirical study on the effect of different transformations for data augmentation. AutoAugment [2] is a general augmentation technique for images based on reinforcement learning that learns appropriate augmentations for the given image data from a list of image augmentations. However, these methods are quite limited to images and do not account for characteristics like the temporal nature of time series. In this work we explore MixUp [20], which interpolates two samples, for time-series data augmentation. We further explore MixUp in the latent model embedding space, as has been explored in domains like natural language processing with sentence mixup [6].
Generative Machine Learning Models. Generative machine learning models generate synthetic data from scratch by modeling the distribution of the input data. The most prominent family of models are Generative Adversarial Networks (GANs) [5]. GANs can generate image data from a noise distribution by learning, through adversarial training on real-world images, a function that transforms noise into an image. Various GAN-based methods have been proposed, like Neural Style Transfer [7], which can create synthetic examples by transforming a given input image with another theme, e.g., adding a hat to a person's headshot. Further, Conditional Adversarial Networks (CGANs) [11] have been proposed to generate images conditioned on a class label. Other generative methods include Variational Auto-Encoders (VAEs) [8]. Generative models are, however, quite difficult to train and to use for generating realistic data.
### Data Augmentation for Time Series Data
There is limited work on data augmentation in the time series domain. Since time series data is so diverse and so domain specific, there are very few generic geometric transformations for time series. Um et al. [18] explored data transformations like rotation, scaling, permutation, or distance-based dynamic time warping for data augmentation on wearables data. They found rotation and permutation transformations to be the most effective augmentations. While permutation involves shuffling time series segments to create another sample, rotation involves a 3D rotation of the wearable time series. Steven Eyobu et al. [15] also explored using scaling and jitter for wearable time series data augmentation.
Generative models like GANs have been used for time series data augmentation, e.g., by Esteban et al. [3]. They generate medical time series like EEG using GANs with a recurrent neural network (RNN) backbone for both the generator and the discriminator to model the time series sequence. They showed that synthetic time series performed as well as real world time series data. Ramponi et al. [12] propose Time-Conditional Generative Adversarial Networks (T-CGAN), and use them to study data augmentation for time series. They show considerable improvement on time series datasets by augmenting with GAN-generated time series data. However, these methods are challenging to train and tune to produce desirable results.
In this work, we explore MixUp for time series data augmentation. We compare our method with permutation augmentation proposed by Um et al. [18] and show the effectiveness of our simple method.
## 3 Methodology
In this section, we present the methodology used in this work. We first give a background on MixUp technique, and next discuss our methodology.
### Supervised Setting: MixUp++ and LatentMixUp++
#### 3.1.1 Mixup
MixUp is a simple data augmentation technique proposed by Zhang et al. [20] and used in computer vision. It is based on a simple idea: a linear interpolation of two labeled training samples produces a synthetic training sample. This results in: 1) an increased number of labeled training samples; and 2) regularization in the supervised model's embedding space. It has shown encouraging results by improving the accuracy of supervised algorithms on image classification datasets.
Given two training samples, (\(x_{1}\), \(y_{1}\)) and (\(x_{2}\), \(y_{2}\)), a synthetic interpolated example is given by:
\[\tilde{x} = \lambda x_{1}+(1-\lambda)x_{2} \tag{1}\] \[\tilde{y} = \lambda y_{1}+(1-\lambda)y_{2} \tag{2}\]
where \(\tilde{x}\) and \(\tilde{y}\) are the generated example and label, respectively. \(\lambda\) is the mixing ratio, which is sampled from a beta distribution with parameter \(\alpha\):
\[\lambda \sim \mathrm{Beta}(\alpha,\alpha) \tag{3}\]
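A minimal sketch of this interpolation for time series, assuming one-hot label vectors and an illustrative value of \(\alpha\), is given below.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Interpolate two labeled samples following Eqs. (1)-(3); y1, y2 are one-hot labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                # mixing ratio lambda ~ Beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# toy example: mix two univariate series of length 100 belonging to different classes
T, n_classes = 100, 6
x1 = np.sin(np.linspace(0, 4 * np.pi, T))
x2 = np.cos(np.linspace(0, 2 * np.pi, T))
y1, y2 = np.eye(n_classes)[0], np.eye(n_classes)[3]
x_mix, y_mix = mixup(x1, y1, x2, y2)
```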
#### 3.1.2 LatentMixUp
MixUp performs an interpolation in the raw input space to create a new example. However, while such interpolation is intuitive in the image domain, it is not as intuitive in domains like natural language or even time series. For example, the interpolation of a cosine function and its opposite-phase cosine can produce a constant-valued time series, which might be unlikely to be observed. Hence, recent methods have proposed to perform mixup in the latent space of the classification model, e.g., in natural language processing with SenMixUp [6], where two sentences cannot be interpolated directly. We refer to the analogous operation for time series, mixing two examples in the latent space, as _LatentMixUp_, described as follows.
Let \(f(x)\) be the classifier with \(f:\mathbb{R}^{T}\longrightarrow\mathbb{R}^{|C|}\), where \(C\) is the set of output classes. We can further define \(f(x)=g(h(x))\), where \(h(x)\) is an intermediate neural network representation. \(g(x)\) is the remainder of neural network layers that are built on top of \(h(x)\) to perform the classification. For example, in a 6 layered neural network, \(h(x)\) can be the first 5 layers of the network, and \(g(x)\) is the last layer of the neural network that performs the softmax for the classification task.
Given two training samples, (\(x_{1}\), \(y_{1}\)) and (\(x_{2}\), \(y_{2}\)), a synthetic interpolated example is given by:
\[h(\tilde{x}) = \lambda h(x_{1})+(1-\lambda)h(x_{2}) \tag{4}\] \[\tilde{y} = \lambda y_{1}+(1-\lambda)y_{2} \tag{5}\]
The final prediction on the interpolated \(h(\tilde{x})\) is obtained by passing it through \(g(.)\). Hence, the prediction for the newly interpolated example is \(\tilde{y}_{pred}=g(h(\tilde{x}))\). The key idea behind LatentMixUp is that it performs interpolation in the model's latent space, which is arguably more linear than the raw feature space, making the interpolation operation more representative of the data manifold.
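The following sketch illustrates this split into \(h(\cdot)\) and \(g(\cdot)\) and the latent interpolation of Eqs. (4)-(5); the toy encoder/head architecture and the value of \(\alpha\) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SplitClassifier(nn.Module):
    """Toy classifier written as f(x) = g(h(x)): h is the encoder, g the final layer."""
    def __init__(self, seq_len, n_classes, hidden=64):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(seq_len, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.g = nn.Linear(hidden, n_classes)

    def forward(self, x):
        return self.g(self.h(x))

def latent_mixup(model, x1, y1, x2, y2, alpha=0.2):
    """LatentMixUp (Eqs. (4)-(5)): mix in the latent space, then classify with g."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    h_mix = lam * model.h(x1) + (1 - lam) * model.h(x2)   # interpolate representations
    y_mix = lam * y1 + (1 - lam) * y2                     # interpolate (soft) labels
    return model.g(h_mix), y_mix

# usage: logits, targets = latent_mixup(SplitClassifier(100, 6), x1_batch, y1, x2_batch, y2)
```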
#### 3.1.3 Proposed: MixUp++ and LatentMixUp++
In MixUp and LatentMixUp, the model is trained _only_ on the interpolated data, while the original data is not used. We hypothesize that this could hurt model performance, as the original training data is real world data and is important for forming the data manifold on which the decision boundary is created. Further, MixUp is done by randomly permuting a training batch and interpolating it with the examples in the given batch, using a single sampled value of \(\lambda\). While, with enough epochs of training, the model could get to see enough pairs from all possible pairs with a good range of mixing coefficients \(\lambda\), this is not efficient since the number of epochs is usually limited.
We propose two embarrassingly simple but effective additions to the MixUp training:
* **Train with original data**: We keep the original data during training instead of discarding the original data in MixUp [20].
* **Train with multiple MixUps for a single data batch**: We perform mixup, \(k\in\mathbb{N}\) times for a single training data batch, each with different sampled values of mixing coefficient \(\lambda\) to increase the possible mixup examples model is trained on.
We hypothesize that training entirely on interpolated examples is not optimal for time series data, as interpolated examples can represent an entirely different phenomenon for time series. Hence, it is important to use the original real world data during training, and use the interpolated data as a regularizer.
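A sketch of these two modifications is given below: each original batch is kept, and \(k\) mixed-up copies of it are generated with independently sampled \(\lambda\) values. The use of NumPy arrays with one-hot labels and the default values of \(k\) and \(\alpha\) are illustrative assumptions.

```python
import numpy as np

def mixup_plus_plus_batches(x_batch, y_batch, k=2, alpha=0.2, rng=None):
    """Yield the original batch plus k mixed-up copies of it, one lambda per copy."""
    rng = rng or np.random.default_rng()
    yield x_batch, y_batch                      # 1) keep training on the real data
    n = len(x_batch)
    for _ in range(k):                          # 2) k mixup passes per original batch
        lam = rng.beta(alpha, alpha)
        perm = rng.permutation(n)
        x_mix = lam * x_batch + (1 - lam) * x_batch[perm]
        y_mix = lam * y_batch + (1 - lam) * y_batch[perm]
        yield x_mix, y_mix

# during training, every (x, y) pair yielded here is passed to the usual update step:
# for x, y in mixup_plus_plus_batches(x_batch, y_batch, k=2): train_step(x, y)
```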
### Semi-supervised Setting
While MixUp is formulated entirely in a supervised learning setting, we further extend the notion of MixUp to the semi-supervised setting. For this purpose, we use pseudo-labeling, a simple but commonly used semi-supervised learning technique.
#### 3.2.1 Pseudo-Labeling
Pseudo-labeling [10] is a simple technique to leverage unlabeled data in the presence of limited labeled data. The idea behind pseudo-labeling is simple: we train the model on the limited labeled data and perform inference on unlabeled data to choose the examples the model is quite confident on. These examples are then added to the training set with model-predicted labels for the next training iteration. This process is continued until the training stabilises or the maximum number of epochs is reached.
More formally, let the labeled data be given as \(\mathcal{D}_{l}=\{x_{i}^{l},y_{i}^{l}\}_{i=1}^{n}\), where \(n\) is the number of labeled samples. Let \(\mathcal{D}_{u}=\{x_{j}^{u}\}_{j=1}^{m}\) be an unlabeled dataset, with \(m\) being the number of unlabeled samples. We train a classifier \(f(x)\) with labeled and unlabeled examples. First, labeled examples are used to train the model in each epoch; the model is then used to do inference over each unlabeled example \(x_{j}^{u}\). If the prediction \(\hat{y}^{u}=f(x^{u})\) has a maximum class probability of at least \(\tau\), then the pair \((x^{u},y^{u})\) is used to train the model further. \(\tau\) is the confidence threshold used to select only highly confident examples for training. \(y^{u}\in\mathbb{R}^{|C|}\) is the one-hot encoding of the highest-probability class above the confidence threshold \(\tau\), given by:
\[y_{c}^{u}=\left\{\begin{array}{rl}1,&\text{if }c=\operatorname{argmax}_{c^{\prime}\in C}\hat{y}^{u}_{c^{\prime}}\\ 0,&\text{otherwise}\end{array}\right. \tag{6}\]
Hence, we assign the label \(y^{u}\), corresponding to the most confident class, to an unlabeled sample \(x^{u}\), which is then used for training the model. The pseudo-label classification loss can be written as follows:
\[\mathcal{L}(f,\mathcal{D}_{l},\mathcal{D}_{u})=\sum_{x^{l}\in\mathcal{D}_{l}}l (f,x^{l},y^{l})+\sum_{x^{u}\in\mathcal{D}_{u}}\mathbb{I}(f(x^{u})\geq\tau)l(f,x^{u},y^{u}) \tag{7}\]
where \(l(f,x,y)\) is the standard cross-entropy classification loss. Note that the value of the threshold \(\tau\) is usually set to be high. We use \(\tau=0.99\) in this work. A low value of \(\tau\) would allow wrongly labeled samples from the unlabeled data to find their way into the training data. This can destabilize the model, an effect further reinforced by subsequent pseudo-labeling rounds.
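A sketch of this confidence-based selection step (the indicator and one-hot assignment of Eqs. (6)-(7), without the loss computation) could look as follows; the array shapes and values are illustrative.

```python
import numpy as np

def select_pseudo_labels(probs, tau=0.99):
    """Keep unlabeled samples whose top class probability exceeds tau; one-hot their labels."""
    keep = probs.max(axis=1) >= tau
    y_pseudo = np.eye(probs.shape[1])[probs.argmax(axis=1)]
    return keep, y_pseudo

# probs: softmax outputs of the classifier on unlabeled data, shape (m, |C|)
probs = np.array([[0.995, 0.003, 0.002],
                  [0.600, 0.300, 0.100]])
keep, y_pseudo = select_pseudo_labels(probs, tau=0.99)   # only the first sample is kept
```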
Figure 1: Schematic view of Pseudo-labeling with MixUp. First, we get the most confident samples from the unlabeled data, called pseudo-labels. We use these pseudo-labels and labeled data for MixUp. Finally, we train the model on three sources: labeled data, pseudo-labeled data, and MixUp data.
#### 3.2.2 Pseudo-Labeling with MixUp
We propose to use MixUp during pseudo-labeling in order to leverage the unlabeled data. The idea is simple: we add highly confident samples from the unlabeled dataset, together with their pseudo-labels, to the pool of examples MixUp has access to. This way, MixUp interpolates between the labeled samples as well as the pseudo-labeled samples from the unlabeled data. Pseudo-Labeling with MixUp is shown in Figure 1.
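A sketch of this combination, reusing the selection helper and the batch generator sketched above, might look as follows; the pooling strategy is an illustrative simplification.

```python
import numpy as np

def pseudo_label_mixup_pool(x_lab, y_lab, x_unlab, probs, tau=0.99):
    """Pool labeled data with confident pseudo-labeled data; MixUp then draws from this pool."""
    keep, y_pseudo = select_pseudo_labels(probs, tau)
    x_pool = np.concatenate([x_lab, x_unlab[keep]], axis=0)
    y_pool = np.concatenate([y_lab, y_pseudo[keep]], axis=0)
    return x_pool, y_pool   # e.g., fed to mixup_plus_plus_batches(x_pool, y_pool, k=2)
```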
Next, we discuss our experimental settings to evaluate the effectiveness of our proposed methods: _MixUp++_ and _LatentMixUp++_.
## 4 Experimental Settings
In this section, we present our experimental settings: datasets used, data augmentation baselines, and hyper-parameters tuned.
### Datasets
We use the following two datasets in this work related to human activity recognition and sleep staging:
* **Sleep EDF2**: The Sleep-EDF dataset [4] contains polysomnography (PSG) recordings for 20 subjects. We follow Supratak et al. [16] in using the EEG channel (Fpz-Cz) sampled at 100 Hz for sleep stage classification. The goal is to classify every 30-second window into Awake (W), sleep stages N1, N2, N3, and Rapid Eye Movement (REM) sleep. This data has 25612 training and 8910 test samples. Note that the training and test sets are independent in terms of subjects, following the implementation in DeepSleepNet3. Footnote 2: [https://physionet.org/challenge/2012/](https://physionet.org/challenge/2012/)
* **UCI HAR4**: The UCI HAR dataset [1] contains multivariate time series across 6 channels of motion recordings of 30 volunteers, aged 19-48 years. It tracks 6 kinds of activities: walking, walking upstairs, walking downstairs, lying, sitting, and standing up. Data was collected using a waist-mounted Galaxy SII device with a sampling rate of 50 Hz. The task is to detect the state of the person for each 2.56-second window. This splitting gives about 7352 training set windows and 2947 test set windows. Footnote 3: [https://github.com/akaraspt/deepsleepnet](https://github.com/akaraspt/deepsleepnet)
### Baselines Used
We use the following data augmentation methods as a comparison with our proposed approach:
* **Supervised Learning**: Baseline model without using any data augmentation, trained only on original real world labeled data.
* **PermuteAugment**: In this data augmentation technique, we permute segments of the time series, following Um _et al._ [18], who found permutation and rotation to be the most effective augmentation methods for time-series data. Since rotation is specific to the dataset they used, we use permutation as the augmentation method, as implemented in their code5. Footnote 4: [https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones](https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones)
* **MixUp**: We compare MixUp++ with the conventional MixUp algorithm.
* **LatentMixup**: This is just the conventional MixUp in latent space as described in Section 3. We implement this on the last layer before the softmax of our transformer neural network, described next.
For all these baselines, we use the same classifier, a transformer neural network [19]. Using the same classifier ensures that we can confidently attribute performance differences to the data augmentation methods themselves.
### Hyper-parameters
We use the same train-test split as provided with the datasets. For hyper-parameter tuning we randomly sample 80% of the train set for training and use the remaining 20% for validation. For the Sleep-EDF dataset, we use a subject-stratified split to ensure no overlap of subjects between the training and validation sets. For the transformer architecture, we tried the number of layers \(\in\{1,2,3,4,5,6\}\), and selected 5 layers and 5 as the number of heads. For the layer size, we searched \(\in\{32,64,100,128,160\}\) and selected 100, with a dropout of 0.15. We use the Adam optimizer with a learning rate of \(0.0002\). We searched for \(\tau\in\{0.95,0.97,0.98,0.99,0.995\}\) and selected 0.99 as the confidence threshold for pseudo labels. Further, all experiments were performed 10 times with 10 different seeds to report the means and standard deviations of all the results in Section 5.
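For illustration, a classifier along these lines could be sketched as follows; the encoder layout, the temporal mean pooling, and the feed-forward width are assumptions, with only the tuned values quoted above (5 layers, 5 heads, width 100, dropout 0.15, Adam with learning rate 0.0002) taken from the text.

```python
import torch
import torch.nn as nn

class TimeSeriesTransformer(nn.Module):
    """Sketch of a transformer encoder classifier for time series windows."""
    def __init__(self, n_channels, n_classes,
                 d_model=100, n_heads=5, n_layers=5, dropout=0.15):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1))   # pool over time, then classify

model = TimeSeriesTransformer(n_channels=6, n_classes=6)      # e.g., the HAR setting
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
```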
## 5 Results and Analysis
In this section, we present results on the time series classification for two datasets: HAR and Sleep-EDF. Firstly, we present results in a fully supervised setting. Next, we show ablation results of performance as the proportion of training data. Finally, we show the performance of semi-supervised pseudo-labeling mixup.
### Data Augmentation in Supervised Setting
Classification results for the two datasets are presented in Table 1. We observe that LatentMixUp++ with two batches of mixup shows the best results for both datasets, compared with the supervised baseline and the other data augmentation baselines.
Table 1: Time series classification results with various data augmentation methods, measured by Accuracy/F1 micro, F1 Macro, and Cohen's Kappa for two datasets: HAR and Sleep-EDF. Note that the differences between Supervised and LatentMixUp++ are statistically significant (\(p<0.01\), Student's \(t\)-test).

| Method | Accuracy (%) | F1 Macro (%) | Kappa (%) |
| --- | --- | --- | --- |
| _HAR_ | | | |
| Supervised | 92.95 ± 0.83 | 92.99 ± 0.89 | 91.52 ± 1.0 |
| PermuteAugment | 91.8 ± 0.91 | 91.97 ± 0.92 | 90.14 ± 1.09 |
| PermuteAugment++ | 93.1 ± 0.43 | 93.26 ± 0.46 | 91.71 ± 0.51 |
| MixUp | 92.63 ± 0.56 | 92.71 ± 0.65 | 91.14 ± 0.67 |
| MixUp++ | 93.29 ± 0.80 | 93.38 ± 0.85 | 91.94 ± 0.97 |
| MixUp++ (w/ 2 Iterations) | 93.45 ± 0.41 | 93.57 ± 0.45 | 92.13 ± 0.50 |
| LatentMixUp | 94.07 ± 0.70 | 94.17 ± 0.73 | 92.87 ± 0.84 |
| LatentMixUp++ | 94.41 ± 0.92 | 94.46 ± 0.95 | 93.28 ± 1.1 |
| LatentMixUp++ (w/ 2 Iterations) | **94.44** ± 0.72 | **94.52** ± 0.75 | **93.32** ± 0.87 |
| _Sleep-EDF_ | | | |
| Supervised | 80.57 ± 0.34 | 73.52 ± 0.85 | 73.48 ± 0.42 |
| PermuteAugment | 74.21 ± 2.09 | 67.59 ± 2.31 | 64.3 ± 2.71 |
| PermuteAugment++ | 78.89 ± 0.35 | 71.75 ± 0.52 | 71.63 ± 0.43 |
| MixUp | 79.14 ± 0.96 | 66.3 ± 0.91 | 70.52 ± 1.47 |
| MixUp++ | 80.47 ± 0.70 | 70.82 ± 1.62 | 72.8 ± 0.92 |
| MixUp++ (w/ 2 Iterations) | 80.00 ± 0.57 | 68.7 ± 1.00 | 72.13 ± 0.67 |
| LatentMixUp | 80.83 ± 0.82 | 72.71 ± 1.04 | 73.56 ± 1.09 |
| LatentMixUp++ | 81.08 ± 0.56 | 73.74 ± 1.05 | 73.89 ± 0.65 |
| LatentMixUp++ (w/ 2 Iterations) | **81.12** ± 0.47 | **73.79** ± 0.82 | **73.94** ± 0.62 |
The original MixUp [20] performs even worse than the vanilla supervised baseline, while vanilla LatentMixUp performs only slightly better than it. The performance difference is especially stark on the Sleep-EDF dataset.
Adding batches of MixUp alongside the original data helps MixUp, though all MixUp variants perform below or similar to the supervised baseline. LatentMixUp standalone performs better than the supervised baseline, while also benefiting significantly from the addition of original data in the LatentMixUp++ variants. We can attribute this difference between MixUp and LatentMixUp to the fact that the addition of two time series in the raw time domain could produce something that is completely unrecognizable, _e.g.,_ the addition of two out-of-sync cosine functions could produce a zero-valued time series. Hence, mixing time series in the latent space, with features more relevant to the time series class, is expected to produce better examples. Our hypothesis is confirmed by the empirical results.
### Ablation: Performance as a proportion of training Data
We experiment to see the performance of our methods as a function of the amount of training data, as shown in Figure 2 and Figure 3. We report MixUp++ and LatentMixUp++ performance with 2 mixup batches. We observe that LatentMixUp++ still beats all the baseline methods for all the percentages of training data. **Additionally, LatentMixUp++ has a much higher F1 score differential in the low labeled data regime (1% or 5%)** than in the high labeled data regime. This shows the effectiveness of LatentMixUp in regularizing the latent space for better classification performance. We note that MixUp++ is only marginally better than the purely supervised method in the low data regime for the HAR dataset, but
Figure 3: Classification Performance on Sleep-EDF dataset as a function of percentage of training data. Note, MixUp++ and LatentMixUp++ performance is reported for models with 2 mixup batches.
Figure 2: Classification Performance on HAR dataset as a function of percentage of training data. Note, MixUp++ and LatentMixUp++ performance is reported for models with 2 mixup batches.
significantly better for the Sleep-EDF dataset. We attribute this to the label regularization, which mitigates the over-fitting that low data regimes are prone to.
### Data Augmentation in Semi-Supervised Setting with Pseudo-labeling Mixup
Next, we experiment with semi-supervised learning using pseudo-labeling MixUp, as described in Section 3. We used a confidence threshold of 0.99 to select highly confident examples for pseudo-labeling MixUp. Results are shown in Figure 4 and Figure 5 for the HAR and sleep datasets, respectively. We report MixUp++ and LatentMixUp++ performance with 2 mixup batches. LatentMixUp++ still outperforms all other semi-supervised baselines, especially in the low labeled data regime, with a 6-7% absolute increase in F1 score over traditional pseudo-labeling for the 1% labeled data scenario on the two datasets. MixUp++ performs similarly to the pseudo-labeling baseline. The out-performance reduces to 0.5% - 1% in higher labeled data regimes like 50%.
Comparing the performance of the semi-supervised pseudo-labeling scenarios with the purely supervised scenarios, we see a drastic jump in low labeled data scenarios, especially for the Sleep-EDF dataset. However, we do notice a drop in performance versus the purely supervised method on the Sleep-EDF dataset as we increase labeled data to 25%. This can be attributed to issues with pseudo-labeling, as highly confident examples can bias the dataset and propagate errors from the initial training [21].
Figure 4: Classification Performance on HAR dataset as a function of percentage of labeled data in the semi-supervised pseudo-labeling setting. Note, MixUp++ and LatentMixUp++ performance is reported for models with 2 mixup batches.
Figure 5: Classification Performance on Sleep-EDF dataset as a function of percentage of labeled data in the semi-supervised pseudo-labeling setting. Note, MixUp++ and LatentMixUp++ performance is reported for models with 2 mixup batches.
Based on these results, we can conclude that LatentMixUp++ with pseudo-labeling MixUp improves performance considerably in semi-supervised settings, especially for low labeled data regime.
### Ablation Study: Number of MixUp Batches
In the previously shown results, we used MixUp methods with 2 batches of mixup data per original data batch. We present an ablation study in Figure 6, on the HAR dataset, as a function of the number of mixup data batches per original data batch. We observe a maximum when the number of batches is equal to two; performance drops subsequently as we increase the number of mixup batches. While this is an empirical observation, we speculate that it is due to degradation of the decision boundary in the data manifold of the base model as the number of synthetic mixed-up examples far exceeds the number of real examples.
## 6 Conclusions
Time series are ubiquitous in nature and commonly found across various domains. Labeling time series data is a challenging task, as it needs domain expertise and more effort owing to the dynamic and temporal nature of the data. This often leads to situations with limited labeled data and limited scope for getting more labeled data. Machine learning models are particularly data intensive. Data augmentation techniques have been used extensively in computer vision to overcome these issues. In this work, we extend MixUp to time series classification. MixUp is a commonly used, simple technique in computer vision to interpolate input samples. We propose MixUp++ and LatentMixUp++, which use simple modifications over MixUp to perform interpolation in the raw time series and in the classification model's latent space, respectively. We also propose an extension of our methods to semi-supervised learning with pseudo-labeling. Our results on two public datasets indicate considerable improvement in performance with LatentMixUp++, in both the low labeled data regime as well as the high labeled data regime.
|
2307.12143
|
Emergence of Adaptive Circadian Rhythms in Deep Reinforcement Learning
|
Adapting to regularities of the environment is critical for biological
organisms to anticipate events and plan. A prominent example is the circadian
rhythm corresponding to the internalization by organisms of the $24$-hour
period of the Earth's rotation. In this work, we study the emergence of
circadian-like rhythms in deep reinforcement learning agents. In particular, we
deployed agents in an environment with a reliable periodic variation while
solving a foraging task. We systematically characterize the agent's behavior
during learning and demonstrate the emergence of a rhythm that is endogenous
and entrainable. Interestingly, the internal rhythm adapts to shifts in the
phase of the environmental signal without any re-training. Furthermore, we show
via bifurcation and phase response curve analyses how artificial neurons
develop dynamics to support the internalization of the environmental rhythm.
From a dynamical systems view, we demonstrate that the adaptation proceeds by
the emergence of a stable periodic orbit in the neuron dynamics with a phase
response that allows an optimal phase synchronisation between the agent's
dynamics and the environmental rhythm.
|
Aqeel Labash, Florian Fletzer, Daniel Majoral, Raul Vicente
|
2023-07-22T18:47:18Z
|
http://arxiv.org/abs/2307.12143v1
|
# Emergence of Adaptive Circadian Rhythms in Deep Reinforcement Learning
###### Abstract
Adapting to regularities of the environment is critical for biological organisms to anticipate events and plan. A prominent example is the circadian rhythm corresponding to the internalization by organisms of the \(24\)-hour period of the Earth's rotation. In this work, we study the emergence of circadian-like rhythms in deep reinforcement learning agents. In particular, we deployed agents in an environment with a reliable periodic variation while solving a foraging task. We systematically characterize the agent's behavior during learning and demonstrate the emergence of a rhythm that is endogenous and entrainable. Interestingly, the internal rhythm adapts to shifts in the phase of the environmental signal without any re-training. Furthermore, we show via bifurcation and phase response curve analyses how artificial neurons develop dynamics to support the internalization of the environmental rhythm. From a dynamical systems view, we demonstrate that the adaptation proceeds by the emergence of a stable periodic orbit in the neuron dynamics with a phase response that allows an optimal phase synchronisation between the agent's dynamics and the environmental rhythm.
Keywords: Circadian rhythm, Deep reinforcement learning, Dynamical systems, Synchronisation
## 1 Introduction
Circadian rhythms represent a major and well-studied adaptation of almost all terrestrial organisms to the \(24\)-hour rotation of the Earth [1, 2, 3, 4]. This endogenous rhythm regulates in a periodic manner the physiology of the organism, including obvious behavioral patterns such as the sleep and wakefulness cycle [5]. At the physiological level, the circadian rhythm is best understood in Drosophila [6]. The biochemical mechanism involves several transcription-translation feedback loops in which the transcription of genes is regulated by its protein products. These feedback loops induce the expression of so-called "clock" genes and protein levels to oscillate with a period of roughly \(24\) hours [6]. At the functional level, one of the key advantages of exhibiting an endogenous and entrainable rhythm is the possibility to anticipate regular events from the environment [7]. In addition, endogenous rhythms can also synchronise interdependent physiological processes and interactions with other organisms [8].
Like their biological counterparts, artificial learning agents also need to adapt to the statistical regularities of their environments. In particular, reinforcement learning agents can develop long-term strategies to explore and exploit the structure and reward signals in complex environments [9]. Intuitively, the agent's success is explained in terms of a certain adaptation or internalization by the agent of regularities of the environment, including the environment's response to the agent's actions. Indeed, the internalization of environmental dynamics into the agent's internal states has been related to the degree of autonomy of an agent [10]. Thus, higher levels of autonomy indicate that the agent acts prompted by its internal state rather than being purely reactive to environmental transitions [10, 11].
In this work, we study the specific mechanisms by which a learning agent internalizes a periodic variation in the environment. In particular, we explore how endogenous and entrainable rhythms emerge in reinforcement learning agents controlled by an artificial neural network. To this end, we deployed an agent in an environment with a reliable periodic variation while the agent learns to solve a foraging task by reinforcement learning. After characterizing the agent's behavior, we tested whether the rhythm exhibited by the agent after learning is endogenous and entrainable. Interestingly, the agent's rhythm quickly adapted to shifts in the phase of the environmental rhythm without any re-training. Using tools from dynamical systems theory, we describe how individual neurons in the network develop a stable (locally attracting) periodic orbit through a bifurcation [12]. Such neural dynamics are essential to sustain the endogenous rhythm internalized by the agent. Furthermore, we compute phase response curves of these periodic orbits and explain how they help to synchronize the internalized rhythm to external inputs at an optimal phase difference.
We remark that the model studied is not intended as a model of biological circadian rhythms. Rather, the study takes inspiration from biological rhythms to select the task and assess whether and how artificial RL agents also internalize environmental regularities. It is our understanding that the internalization of environmental regularities is a general phenomenon in reinforcement learning. We study the circadian rhythm to understand in detail how an RL agent performs such internalization for the case in which the agent internalizes a simple periodic signal. That allows us to study how the internalization emerges in mechanistic terms and how the stability properties of the learned attractor endow the agent with generalization properties not foreseeable from the environment or experiences alone.
## 2 Methods
### Criteria for Circadian Rhythms
It is generally accepted that a biological rhythm with a period of roughly \(24\) hours is called circadian if the following criteria are fulfilled [5]:
1. **Endogeneity:** A rhythm is called _endogenous_ if it persists without an external periodic input. That is, the rhythm must be driven by an internal mechanism of the considered organism. Specifically, circadian rhythms preserve their \(24\)-hour period in the absence of an external light or temperature signal that would provide a cue of the daytime.
2. **Entrainability:** A rhythm is _entrainable_ if it is able to adapt its phase to an external signal. Although circadian rhythms preserve a period of roughly \(24\) hours even in an artificial constant environment, external cues (daylight, temperature) are necessary for readjusting the rhythm to the exact daytime. Even if a significant phase shift occurs, e.g., a change of time zones, the entrainability of circadian clocks ensures a readjustment to the environmental phase. The process of readjustment by an external signal is called _entrainment_. (Note that entrainment should not be confused with training.)
3. **Temperature compensation:** The rhythm is sustained across a wide range of temperature changes. While biological processes are often accelerated by higher temperatures, circadian rhythms maintain their approximate \(24\)-hour period independently of the temperature of the environment.
In this work, we study an agent acting in an environment in which one variable (daylight) is periodically modulated. We specifically test whether the agent's rhythm has properties that correspond to the above-mentioned criteria for circadian rhythms. Our simulated environment does not contain a temperature component. Instead, we restrict ourselves to the question whether the agent's rhythmic behavior is endogenous and entrainable.
### Foraging Task and Environment
In our experimental setup, the environment comprises an alternating _daylight signal_ implementing a daytime-night-cycle. The agent's task is to collect food, which is randomly placed within a specific area, the _food area_. The episode reward increases with each consumed food item. Hence, ideally the agent should try to collect as much food as possible. The placement of the food does not depend on the time of the day, i.e., the agent can find the same amount of food during daytime and night. However, at nighttime, we impose a negative reward if the agent leaves a specific safe location outside the food area. We refer to this safe location as the _home location_. By choosing appropriate reward values, we ensure that the penalty for leaving the home location at night outweighs the potential reward for food collected at night. Therefore, the optimal strategy for the agent to maximize its reward is to forage in the food area at daytime and to stay at home at night. That is, the agent needs to learn to adapt its behavior to the environmental daytime-night-cycle, which is determined by a daylight binary signal.
We simulate the task and environment using the Artificial Primate Environment Simulator (APES), a customizable 2D grid world simulator [13]. APES allows to define environments with multiple elements that interact with user-defined reward functions. For our experiments, we use a \(5\times 5\) grid world environment, illustrated in Fig. 1. The food area
consists of \(3\times 3\) grid cells indicated with a green background color. A food object is randomly placed within the food area and remains until it is collected by the agent. As soon as the food is collected, a new food item will be placed randomly within the food zone. The home location is the bottom right grid cell. The white grid cells represent a _transit zone_ separating the home location and the food area. No food will be placed in the transit zone, but the agent will still be punished for being in the transit zone at night. Only the home location is safe for the agent to shelter at night.
The time of the environment is discrete. One daytime-night-cycle consists of \(40\) time steps. Accordingly, the daylight signal is periodic with a period of \(40\) time steps. It has the value \(1\) (daytime) for the first \(20\) time steps of a daytime-night-cycle and the value 0 (night) for the following \(20\) time steps. We specifically designed the foraging task such that the agent needs to anticipate the progression of time, i.e., the phase of the daylight signal. The agent perceives the current state of the daylight signal, i.e., it knows at every time step whether it is currently daytime or night. However, the current state of the daylight signal does not provide information about its phase. That is, solely based on the current state, it is impossible to determine how many time steps are left until the next change from daytime to night, or vice versa. However, the agent needs at least four time steps to return from the food area to the home location. Therefore, for earning a high reward, the agent must leave the food area a few time steps before the night begins. Thus, the agent needs to develop the ability to anticipate the onset of the night to avoid penalty scores.
We call the information that the agent receives at the time step \(t\) an _observation_ and denote it by \(o_{t}\). Besides the daylight signal, the observation \(o_{t}\) comprises the current location of the agent, the agent's orientation, and the location where the food is currently placed (Fig. A.1). Since the daylight signal is periodic, the state \(s_{t}\) of the environment is entirely described by the observation and the phase of the daylight signal. In other words, the environment is a partially observable process, where the phase of the daylight signal is a hidden variable [14, 15].
For each time step, the agent performs one of five possible actions: moving up, down, right, left, or standing still. The agent earns a reward of \(+1\) for each consumed food object, and is penalized with a value of \(-2.5\) for each time step spent outside the home location at night.
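To make the setup concrete, the following is a minimal sketch of such an environment; the exact cell coordinates, the placement of the food area, the omission of the agent's orientation, and the step/reward ordering are simplifying assumptions, and the actual experiments use the APES simulator [13].

```python
import numpy as np

class ForagingGridWorld:
    """Minimal sketch of the 5x5 foraging task with a 40-step daytime-night cycle."""

    DAY_LEN = 40                                            # 20 daytime + 20 night steps
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]    # up, down, left, right, stand still

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.home = (4, 4)                                  # bottom-right cell (assumed)
        self.food_cells = [(r, c) for r in range(3) for c in range(3)]  # 3x3 food area (assumed)
        self.reset()

    def reset(self):
        self.t = 0
        self.agent = self.home
        self._place_food()
        return self._obs()

    def _place_food(self):
        self.food = self.food_cells[self.rng.integers(len(self.food_cells))]

    def daylight(self):
        return 1 if (self.t % self.DAY_LEN) < 20 else 0

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        self.agent = (min(max(self.agent[0] + dr, 0), 4),
                      min(max(self.agent[1] + dc, 0), 4))
        reward = 0.0
        if self.agent == self.food:                         # +1 per consumed food item
            reward += 1.0
            self._place_food()
        if self.daylight() == 0 and self.agent != self.home:
            reward -= 2.5                                   # penalty for being out at night
        self.t += 1
        return self._obs(), reward

    def _obs(self):
        # partial observation: positions and the current daylight bit (the phase stays hidden)
        return np.array([*self.agent, *self.food, self.daylight()], dtype=np.float32)
```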
### Architecture and Training
We train an agent to perform the foraging task by deep reinforcement learning (DRL) [9, 16, 17, 18] using a dueling Q-network [19]1. The network estimates so-called Q-values of state-action pairs to enable the agent to choose the best action for the current state. As described above, we differentiate between state \(s\), which completely describes the environment, and observation \(o\), which is the part of the state that is perceived by the agent. The network input at each time step \(t\) is the observation \(o_{t}\) containing the spatial information from the environment (location of the agent and food) and the current state of the daylight signal. For optimal decisions, however, we actually need to consider the complete state \(s\): the information about the progression of the day (remaining time steps until the end of the daytime) is relevant for the agent to decide when to leave the food area to return home. Although this information is not contained in \(o_{t}\) (only in \(s_{t}\)), it can be extracted from the _history of observations_\(h_{t}=\{o_{t},o_{t-1},o_{t-2},\ldots\}\). Therefore, we equip our network with an LSTM layer [20, 21, 15] to represent information from past inputs in its internal state.
Figure 1: Environment for the foraging task. The green cells mark the food area. At the bottom right corner is the safe home location (marked by a rabbit hole), where the agent (depicted as a rabbit) must shelter at night to avoid penalties.
We use experience replay to achieve stable training and an \(\varepsilon\)-greedy policy for exploration. We train the network for \(37500\) training episodes, each consisting of \(160\) time steps (four full days).
Details of the network architecture are in Appendix A. For an explanation of Q-learning with dueling Q-networks, we refer to Appendix B. For a details about training process and an overview of the used hyperparameters, we refer to Appendix C. The evolution of the reward during training is shown in Fig. D.2 in Appendix D.1.
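For concreteness, the sketch below illustrates one way such a recurrent dueling Q-network could be organized; the layer sizes, the single LSTM layer, and the mean-subtracted dueling combination are assumptions for illustration, while the actual architecture and hyperparameters are those specified in Appendices A and C.

```python
import torch
import torch.nn as nn

class DuelingRecurrentQNet(nn.Module):
    """Dueling Q-network with an LSTM over the observation history h_t."""
    def __init__(self, obs_dim, n_actions=5, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.value = nn.Linear(hidden, 1)              # state-value stream V
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A

    def forward(self, obs_seq, hidden_state=None):
        # obs_seq: (batch, time, obs_dim) -- the history of observations
        z, hidden_state = self.lstm(self.encoder(obs_seq), hidden_state)
        v, a = self.value(z), self.advantage(z)
        q = v + a - a.mean(dim=-1, keepdim=True)       # dueling combination of V and A
        return q, hidden_state

# greedy action from the Q-values at the last time step of a history:
# q, _ = net(history); action = q[:, -1].argmax(dim=-1)
```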
## 3 Results
### The Agent's Behavior
As described in Sec. 2.2, the grid world environment comprises three distinct areas: first, the food area, where the agent can gain reward by collecting food; second, the home location, where the agent is protected from receiving penalty scores at night; and third, the transit zone, which must be crossed by the agent when moving from the home location to the food area or vice versa. To move from the food area to the home location, the agent needs at least four time steps. That is, to avoid a penalty at night, the agent must plan at daytime to reach home ideally exactly at the \(21\)st time step of the day (first time step of the night).
To evaluate the agent's behavior, we characterize how it navigates the environment by the timing of its most salient actions: leaving the home location, entering the food area, leaving the food area, and entering the home location. We trained a randomly initialized model and performed \(1000\) test runs for which we captured the time steps of these events. The results are shown in Fig. 2: for each day of the test phase, we indicate the frequency of each event type by colored histograms. The green histograms in panel a) show at which time steps the agent leaves the home location. During the first four days of the test runs, the agent typically leaves home at the first time step of the daytime, which is ideal for maximizing the reward. For the remaining four days, the agent leaves the home location a few time steps later (e.g., during day 8 the agent leaves the home area at the fourth time step). This can be explained by the fact that our training episodes include only four days, whereas the test runs comprise eight days. The network must generalize from the training data to make suitable decisions after the fourth day. Indeed, as we note below, the average LSTM activation decreases considerably during the night of the fourth day. This is likely due to the lack of pressure from the training (which consisted only of 4 days) to maintain any particular activation range for the LSTM right before the episode would end (during training the agent does not need to anticipate any subsequent event after day 4). Nevertheless, the agent's actions and LSTM activations continue with a daily regularity from day 5 onwards. The agent's behavior reveals that this generalization is not perfect but sufficient to obtain near maximal reward. The time points at which the agent enters the food location are shown by blue histograms in panel b). As expected, this happens a few time steps after leaving home. The red histograms in panel c) show the time points at which the agent leaves the food area. The agent has to make this decision in anticipation of the approaching night to enter the home location on time and avoid a large penalty. The purple histograms in panel d) show when the agent reaches the home location after leaving the food area. For all eight days, the agent arrives almost always on time or with a delay of at most four time steps. This shows that the generalization from the shorter training episodes (lasting only four days) works well for learning to predict the onset of the night.
Figure 2: Timing of the agent entering and leaving the home location and food area. The angular plots show histograms of the agent’s timing for eight full-day cycles. The following events are shown: (a) the agent leaves the home location, (b) the agent enters the food area, (c) the agent leaves the food area, (d) the agent returns to the home location. The grey grid lines mark event probabilities of \(0.2\), \(0.4\), \(0.6\), etc. Daytime and night are indicated by white and grey areas. The histogram values were obtained from \(1000\) test runs.
### Testing the Rhythm Endogeneity
One of the main characteristics of circadian rhythms is their endogeneity, i.e., their property to maintain their period even in the absence of an external periodic drive signal. During training, we use a periodic daylight signal, which has the value \(1\) at daytime, i.e., from time step \(1\) to \(20\) of each day cycle, and the value \(0\) at night (time steps \(21\) to \(40\)). To demonstrate the endogeneity of the observed circadian behavioral rhythm (as opposed to just being a sequence of reactions to the external daylight cues), we need to consider cases where the daylight signal is clamped to a constant value, \(1\) or \(0\), to model permanent daytime or permanent night, respectively. In the following, we describe the results of such tests under constant conditions. Further, we perform a bifurcation analysis of the activity of the LSTM units in the network.
#### 3.2.1 Tests During Constant Conditions
To confirm the endogeneity of the agent's rhythm, we use test runs in which, after a certain time, the daylight signal is clamped to a constant value. Hence, we apply the usual periodic daylight signal for four day cycles (\(160\) time steps) to ensure that the agent's rhythm is present and phase-adjusted to the environmental time. For the remainder of the test run (time steps 160 to 320) a constant daylight signal was applied (either constant daytime or constant night). Figure 3 shows the mean activation of the LSTM neurons and the timing of the agent's behavior for \(1000\) test runs under these conditions. The activation pattern shows that the rhythm persists with a period of roughly one day (\(40\) time steps). We can conclude that the observed rhythm is endogenous: for days \(5\) to \(8\), the oscillation of the LSTM activations and the behavior of the agent are not forced externally. During this phase, the oscillation is purely the result of the dynamical properties of the trained neural network. In other words, the LSTM layer internalized the environmental periodic rhythm.
#### 3.2.2 Bifurcation analysis
Without training, the LSTM activations do not oscillate if we apply a constant daylight signal. The internal rhythm arises during the initial training phase. That is, the step-wise parameter change during training causes the system to undergo a bifurcation (a sudden qualitative change in a system behavior due to a small change in the system parameters). This bifurcation is illustrated by Fig. 4a, which shows the activation of one arbitrarily chosen LSTM neuron plotted against the delayed activation (delayed by \(10\) time steps, i.e., \(1/4\) period) of the same neuron during the training episodes \(33\) to \(132\). Although the plot depicts the activation state of a single neuron, other neurons in the LSTM layer act in a qualitatively equivalent way. The plot shows that the neuron's activation state remains approximately zero for the initial training episodes. However, after roughly \(55\) episodes, we observe a rhythmic behavior, which indicates the onset of a stable periodic orbit in the system's dynamics. Figure 4b depicts the amplitude (here defined as the difference between maximum and minimum) of the neuron's activation. Between episode \(55\) and \(100\), we observe cycles with increasing
Figure 3: Agent behavior and LSTM activation for a clamped daylight signal: (a) permanent daytime and (b) permanent night after the fourth day. Shown are average values over \(1000\) test simulations. Daytime and night are represented by white and grey areas. The average activation of the LSTM neurons is plotted in blue. The activation pattern is shaped differently after day 4, but it retains its period. The red bars are histograms counting the agent’s exits of the food area at the respective time step.
amplitude. After episode \(100\), the amplitude remains nearly constant. The period of the neuron's activation is always roughly one day (\(40\) time steps). The fact that the amplitude grows from zero after the system crosses the bifurcation, whereas the frequency immediately jumps to a positive value, suggests that the observed stable periodic orbit emerges as a result of a supercritical Neimark-Sacker bifurcation [22].
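The delay-embedding view used in Fig. 4 can be reproduced with a few lines of analysis code. The sketch below assumes the per-episode activation traces of a single neuron have already been collected into an array; the array name and shapes are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def delay_plot_and_amplitude(activations, delay=10):
    """activations: array of shape (n_episodes, n_steps) for one LSTM neuron,
    one test run per training episode (a stand-in for the data behind Fig. 4)."""
    amplitudes = []
    for a in activations:
        plt.plot(a[delay:], a[:-delay], alpha=0.3)  # activation vs. delayed activation
        amplitudes.append(a.max() - a.min())        # amplitude = max - min
    plt.xlabel("a(t)")
    plt.ylabel(f"a(t - {delay})")
    return np.asarray(amplitudes)
```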
The above observations are further supported by the spectrograms shown in Fig. 5. The upper row of the figure contains spectrograms of the activation of an arbitrarily chosen LSTM neuron for constant daylight signal: panel (a) illustrates the permanent daytime case, and panel (b) the permanent night case. The color represents the power spectral density of the neuron's activation. The spectrograms exhibit a frequency peak near \(1/\text{day}\) beginning after \(60\) to \(80\) training episodes, which validates our above observation of a frequency jump at the bifurcation point. Panels (c) and (d) show the logarithm of the power spectral density of the same neuron as in panels (a) and (b). Plotting the logarithmic values enables us to see subtle frequency peaks: higher order resonances of the base frequency show up, and, in particular for panel (c), it is revealed that the frequency peak at \(1/\text{day}\) emerges at an earlier period than visible in panel (a). Finally, panels (e) and (f) in Fig. 5 show the logarithm of the mean of the power spectra of all \(128\) LSTM neurons for permanent daytime and permanent night, respectively. The single neuron spectra shown in panels (a) to (d) are very similar to the average spectra shown in panels (e) and (f). In fact, all LSTM neurons reveal a similar spectral pattern, and hence a similar frequency content.
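A minimal sketch of how such spectrograms can be computed with standard signal-processing tools is given below; the sampling convention (40 steps per day, so frequencies are expressed in cycles per day) follows the setup above, while the window length is an illustrative choice.

```python
import numpy as np
from scipy.signal import spectrogram

def lstm_spectrogram(activation, steps_per_day=40):
    """Power spectral density of one neuron's activation across training episodes.

    `activation` is a 1-D trace (one test run per episode, concatenated); choosing
    fs = steps_per_day makes the frequency axis read in cycles per day.
    """
    f, t, Sxx = spectrogram(activation, fs=steps_per_day, nperseg=4 * steps_per_day)
    log_Sxx = np.log(Sxx + 1e-12)   # log scale reveals subtle peaks and harmonics
    return f, t, Sxx, log_Sxx
```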
Additionally, in Appendix D.2, we plot spectrograms (Fig. D.3) for the whole training procedure of \(37500\) episodes. It is revealed that in the course of the training, the system may undergo further bifurcations. Once learned, the internalized rhythm is in fact persistent for the whole training phase and can always be observed if the daylight signal is clamped to 1 (permanent daytime) during the test runs. However, the rhythm can switch between being active and inactive if the daylight signal is clamped to \(0\) (permanent night) as shown in Fig. D.3b.
### Tests for Rhythm Entrainability
Entrainability is the ability of an oscillating system to synchronize to an oscillating input signal or environment. It is one of the defining properties of circadian rhythms and ensures that the internal circadian clock is continuously adjusted
Figure 4: Panel (a) shows the activation of an arbitrarily chosen LSTM neuron for a constant daylight signal (permanent daytime) plotted against itself delayed by \(1/4\) period. Shown are data obtained by one test run after each training episode from \(33\) (blue) to \(132\) (red). Panel (b) shows the amplitude of the neuron’s activation.
to the environmental clock time. Moreover, it enables the circadian rhythm to readjust to sudden phase shifts of the environmental rhythm. A prominent example is the ability of humans and other biological organisms to adapt to time differences when travelling across time zones.
#### 3.3.1 Jet lag experiments
As with all the experiments in this paper we did not re-train the model. To confirm and study the entrainability of the network's rhythm, we simulated time zone shifts by altering the length of a single daytime period or night during test runs with the trained model. Figure 6 shows how the time shifts affect the average activation pattern of the LSTM layer and the timing of the agent's behavior. We altered the length of the second day by extending the daytime (Fig. 6b) or the night (Fig. 6c) by \(50\) percent. For comparison, we show the results for the unaltered case (Fig. 6a). For the cases with extended daytime or night, we observe a "jet lag" effect on day \(3\): the agent exits the food area earlier than necessary. This jet lag effect is slightly stronger for the extended night case. In both cases, the agent progressively adapts its food area exit time to the new environmental time on day \(4\) and day \(5\), which indicates that the controlling neural network is re-synchronising its internal clock to the changed environmental time. On day \(6\) and later, the agent's internal rhythm is again synchronised with the environment. Further experiments with phase perturbations, including complete reversal of daytime and night, are described in section D.3 in the Appendix. This shows that the neural network dynamics is able to compensate within three days for strong shifts of the environmental clock.
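The altered daylight schedules used for these jet-lag tests are easy to construct; the sketch below builds such a signal under illustrative argument names (the 50 percent extension corresponds to 10 extra time steps).

```python
import numpy as np

def daylight_schedule(n_days=8, day_len=40, extend_day=None, extend_night=None, extra=10):
    """Build a daylight signal (1 = day, 0 = night) with an optionally extended
    daytime or night on one selected day, as in the jet-lag experiments."""
    signal = []
    for d in range(n_days):
        day = [1.0] * (day_len // 2 + (extra if extend_day == d else 0))
        night = [0.0] * (day_len // 2 + (extra if extend_night == d else 0))
        signal += day + night
    return np.array(signal)

# extend the second night by 50% (10 extra steps), as in Fig. 6c:
# s = daylight_schedule(extend_night=1)
```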
#### 3.3.2 Phase response curve analysis
The effect of environmental light on the circadian rhythms of humans has been studied by measuring the change (phase response) of the human rhythm as a reaction to light exposure at different times of the day [23]. To obtain these measurements, test persons stayed within an environment with controlled light conditions for a couple of days and were exposed to bright light during certain time periods. The light exposure resulted in a phase shift of the circadian clock of the test persons, which could be determined by measuring their body-temperature curve throughout the day. These observations can be visualized with a phase response curve (PRC), which plots the phase shift in reaction to a light exposure (the phase response) against the phase when the light exposure occurred. The PRC studies in humans provide several insights. Light in the evening or early night leads to a negative phase response, i.e., a delay of the circadian clock. On the contrary, light in the late night and morning causes a positive phase response, i.e., the circadian clock is advanced. Consequently, the PRC in an average human crosses the x-axis at two points: at night with a positive slope,
Figure 5: Power spectra of the LSTM activation for the initial training phase. The top row contains power spectra of one arbitrary LSTM neuron for (a) permanent daytime and (b) permanent night. Panels (c) and (d) show the logarithm of the same spectral data. Applying the logarithm emphasizes subtle frequency pattern such as the higher order resonances or the pattern at early episodes. The bottom row shows the logarithm of the mean power spectra of all \(128\) LSTM neurons for (e) permanent daytime and (f) permanent night. All spectral data were calculated based on a single test simulation.
and at daytime with a negative slope. The phase of the circadian rhythm is in a stable equilibrium if the time of the light exposure is correctly adjusted with the negative slope zero-crossing of the PRC. This observation describes the regulation of the human circadian clock by light from a dynamical systems viewpoint.
Motivated by these studies of the human PRC, we plotted PRCs for the LSTM layer of our trained neural network. We ran a series of test simulations with the usual alternating daylight signal on the first four days, and a constant daylight signal for the subsequent days, i.e., either permanent daytime or permanent night. On day \(5\), we inverted the daylight signal for one time step, i.e., we applied a daylight pulse if the signal was permanent night and vice versa. Then we measured the resulting phase shifts on day \(6\). The initial period of four days with periodic daylight signal sets the neural network's inner clock. Therefore, we can interpret the measured phase shifts at given times of day \(5\) as the network's reaction to light (or darkness) at the corresponding phase of its internal clock.
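One simple way to turn these measurements into a PRC point is to estimate the lag that best aligns the perturbed activation trace with an unperturbed baseline. The sketch below is one such estimator under the assumptions stated in its docstring; it is not necessarily the exact procedure used for Fig. 7.

```python
import numpy as np

def phase_response(baseline, perturbed, steps_per_day=40):
    """Estimate the phase shift (in time steps) of a perturbed activation trace
    relative to an unperturbed baseline, both restricted to day 6.

    A circular cross-correlation picks the lag that best aligns the two traces;
    mapping lags to [-20, 20] and reading positive lags as phase advances is a
    convention assumed here for illustration.
    """
    day6 = slice(5 * steps_per_day, 6 * steps_per_day)
    b = baseline[day6] - baseline[day6].mean()
    p = perturbed[day6] - perturbed[day6].mean()
    corr = [np.dot(np.roll(p, lag), b) for lag in range(steps_per_day)]
    lag = int(np.argmax(corr))
    return lag if lag <= steps_per_day // 2 else lag - steps_per_day
```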
Figure 6: Effect of a daylight signal phase shift on the agent’s behavior and the LSTM activation. Shown are (a) a base case without phase shift, and two dephased cases, where (b) the second daytime period, or (c) the second night is extended by \(10\) time steps. All values are averages of \(1000\) test simulations. Daytime and night are represented by areas with white or grey background. The average activation of the LSTM neurons is plotted in blue. The red bars are histograms counting the agent’s exits of the food area at the respective time step. Both the LSTM activation and the behavioral pattern adapt to the phase shift within three days.
Figure 7: Average phase response to light exposure (red) and darkness (blue). Panels (a) and (b) show the PRCs for two independently initialized and trained models. The light red and light blue curves are the PRCs for the \(128\) individual neurons of the LSTM layer obtained by averaging over \(200\) test runs. The dark red and dark blue curves show the mean phase response of all LSTM neurons.
Figure 7 shows the PRCs of the LSTM neurons reacting to a light impulse (red curves) and to darkness (blue curves) for two randomly initialized and independently trained models (panels (a) and (b)). In case (a), the agent reacts with a positive phase shift to light and a negative phase shift to darkness during the early morning time around time step 1. This corresponds to a phase advance for more light in the early morning and a phase delay for darkness at this time. At evening time (near time step \(20\)) this pattern is inverted and can be interpreted equivalently. In panel (a), the model response to light (red curve) is similar to the human PRC for light exposure. In case (b), the most significant phase shift occurs if light is absent in the morning (late nighttime and early daytime). This suggests that the corresponding trained model resets its inner clock mainly during morning time. The PRCs shown in panels (a) and (b) differ significantly, which indicates different learned phase adjustment mechanisms. This difference between the trained models indicates that entrainment can be realized by different strategies and that the neural network is able to learn more than one of these strategies.
### Robustness of the emergence of circadian-like rhythms
So far the emergence of a stable periodic orbit representing the internalisation of an environmental rhythm has been studied for a particular architecture and type of recurrent layer (LSTM). Hence, a natural question concerns the generality of the phenomenon. To explore this question we ran a series of variations on the network architecture, type of recurrent layer, and learning algorithm. In particular, we considered variations in the optimization algorithm (SGD and RMSprop), weight regularization (L1 + L2 norms), weight initialization (He normal), type of recurrent layer (vanilla RNN and GRU), as well as the width of the recurrent layer (32 and 96 neurons) and fully connected layer (8 units).
For each case, we tested whether the variations also resulted in the endogeneity and entrainability of a rhythm emerging in the recurrent layer of the network. Appendix D.4 shows the results for the constant conditions and jet-lag experiments for all the variations. Overall, the emergence and entrainability of the internal rhythm were robust to most of the variations explored.
We also noted that different seeds might lead to different strategies, an effect well known in reinforcement learning. While it is not possible to fully explore and characterize the conditions under which one type of solution or another emerges, the results indicate that the circadian-like properties of the emergent rhythm are not particular to a specific set of parameters or network architecture but rather they occur in a wide range of conditions.
## 4 Related Work
Studying circadian rhythms was a natural choice and inspiration since they constitute a clear and almost universal internalization of a simple environmental rhythm, and one for which different techniques for assessing the internalization of the rhythm were available (tests for endogeneity and entrainment could be borrowed from the biological literature). In particular, we studied how the system dynamics supports the internalization and its adaptation properties. More generally, the environmental periodic signal that we considered represents a specific case of non-stationarity in the environmental properties (cyclo-stationarity). How an optimal agent adapts to this and other non-stationarities (including those due to the adaptation of other agents [24]) is a question of theoretical and practical interest for continual learning in RL [25]. Interestingly, the internalization and stability properties of the periodic orbit by the agent's neurons endowed the agent with robustness to perturbations (jet-lag experiments) not foreseeable from the training experiences.
The internalization of environmental correlations has been related to the degree of autonomy of an agent [10]. In [11] a partial information decomposition was used to quantify the degree of internalization in tasks with a Markovian dynamics. The estimated index was a global measure of internalization that cannot distinguish what dynamics is being internalized nor its mechanisms. In the present work, we explicitly demonstrated the internalization of an environmental rhythm by a reinforcement learning agent, and how this happened via a bifurcation in the LSTM units. This bifurcation endowed the network with a stable periodic orbit with phase entrainable dynamics.
Bifurcations of random RNNs and their node synchronisation have been studied in [26] with an emphasis on how they impact their computational properties. For an attractor view on RNNs training, see also [27].
The dynamical systems view of neural networks has been influential in the development of neural ordinary differential equations (ODEs) and their variants. In particular, dynamical stability of vanilla RNNs and its relation to vanishing and exploding gradients was addressed in [28], where a neutral stability condition was imposed by restricting weight matrices to be anti-symmetric. Numerical results demonstrated that this condition improved the training and generalization of the network in several tasks. Early work on considering RNNs as dynamical systems and how their stability can impact the training can be found in [29]. In our work, we did not impose any stability condition; rather, we characterized how the phase stability of an emergent attractor explained the adaptability of the agent to the perturbations of the external rhythm that were never experienced during the training episodes.
To our knowledge, studies of how artificial neural networks develop attractors and how the attractor characteristics relate to the training and generalisation properties have been conducted in the supervised setting. For example, in [30] the authors observed that smaller intrinsic dimensions of representations in the final layers of vision networks correlated with higher accuracy in a classification task. Line attractor dynamics (continuous attractors with one direction of neutral stability) have been reported in sentiment analysis networks [31].
The ability of LSTM units to distinguish precise timing in temporal patterns is well known and it was demonstrated in early work [32]. Here we were interested in the type of solution adopted by the agent. We note that the agent trained in our study could potentially have developed a simple event-driven mechanism that counted time steps triggered by the external daylight transitions to solve the task without internalising any rhythm. This was not the case in our study where a rhythm of appropriate periodicity was clearly internalized. While a simple counter triggered by the environment transitions is one of the optimal solutions for the training of the foraging task, the presence of a sustained rhythm in the agent under constant environmental conditions revealed that the internalization of the rhythm was the actual solution adopted during learning. For other network internalizations (or tasks), it is possible that simple counting mechanisms emerge as possible solutions. What are the exact factors that determine the emergence of one or another type of solution is a matter of further investigation.
Models of circadian rhythms abound in the mathematical biology literature. These models often consist of coupled differential equations describing concentration of molecules, gene expression levels, or multi-cellular changes [33]. No learning or reinforcement mechanisms are included in these studies where the model parameters are fixed or scanned. For a recent application of artificial circadian rhythms in robotics, see [34].
## 5 Discussion
We have investigated the emergence of circadian-like rhythms in deep learning agents trained by reinforcement in a foraging task. The results show that a reinforcement learning agent equipped with LSTM units can internalize an external rhythm and use it to anticipate and exploit an environmental regularity. In particular, the timing of the agent's actions was critically controlled by the internalized rhythm. We conducted extensive experiments to determine the properties of the agent's rhythm. Tests under constant conditions and jet lag experiments confirmed that the rhythm was endogenous and entrainable in a similar way as circadian rhythms exhibited by biological organisms. Furthermore, bifurcation and phase response curve analyses were conducted to characterize the emergence of the rhythm and its synchronization properties. We observed the emergence of a stable periodic orbit in the LSTM dynamics via a bifurcation as the training progressed. Since the periodic orbit emerges with a smoothly increasing amplitude and an instantaneous jump of the frequency, we conjecture a supercritical Neimark-Sacker bifurcation. The phase response curves illustrate how the phase of the agent's internal clock is dynamically attracted by the phase of the environmental rhythm via phase-dependent reactions to the daylight signal. This stability property ensures that the agent can adapt to phase shifts in the environment. Interestingly, the phase stability emerged, although the agent has not experienced phase perturbations during training (which always consisted of four regular daytime-night-cycles). Moreover, the phase response curves reveal that the agent is not limited to learning one specific strategy. A comparison of two independently trained models shows significant differences in the phase response of the periodic orbits. This observation raises the question whether the observed periodic orbit may stem from different types of bifurcations.
Our results are in line with the view that learning agents can develop long-term strategies by internalizing correlations in the environment dynamics and the agent-environment interactions. Planning ahead often requires a simulation or unfolding in time of the dynamics to be predicted. In the case that we studied, such an internalization led to the emergence of a periodic trajectory of LSTM units that enabled the agent to anticipate the environmental dynamics.
As mentioned above, we can understand the adaption and internalization of the circadian-like rhythm by the agent as the effect of a bifurcation, i.e., a parameter change in a dynamical system (LSTM units) which causes topologically different trajectories and attractors as the training progresses. The neural network controller of the agent developed a periodic orbit with stability properties that endowed the agent with an endogenous and entrainable rhythm. More generally, this observation raises the question whether agents trained in different tasks and environments also benefit from developing attractors whose topology and stability support appropriate computations and policies. From this perspective, successful learning is directly related to changing parameters of the model to induce the appropriate bifurcations and attractors to support the representation and computations of appropriate variables. This is already a successful research direction in neuroscience [35, 36] that could be transferred to the study of how artificial learning agents represent and process information by exploiting attractor dynamics.
## Acknowledgements
We are thankful to Jaan Aru, Tambet Matiisen, Meelis Kull, and three anonymous reviewers for constructive comments on the manuscript.
This research was supported by the Estonian Research Council Grant PRG1604, the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 952060 (Trust AI), the Estonian Centre of Excellence in IT (EXCITE) Project Number TK148, and the Project CardioStressCI (ERA-CVDJTC2020-015) from the European Union's ERA-CVD Joint Transnational Call 2020.
|
2305.04158
|
Koopman-type inverse operator for linear non-minimum phase systems with
disturbances
|
In this paper, a novel Koopman-type inverse operator for linear
time-invariant non-minimum phase systems with stochastic disturbances is
proposed. This operator employs functions of the desired output to directly
calculate the input. Furthermore, it can be applied as a data-driven approach
for systems with unknown parameters yet a known relative degree, which is a
departure from the majority of existing data-driven methods that are only
applicable to minimum phase systems. Based on this foundation, we use the Monte
Carlo approach to develop an improved Koopman-type method for addressing the
issue of inaccurate parameter estimation in data-driven systems with large
disturbances. The simulation results justify the tracking accuracy of
Koopman-type operator.
|
Yuhan Li, Xiaoqiang Ji
|
2023-05-07T01:40:07Z
|
http://arxiv.org/abs/2305.04158v1
|
# Koopman-type inverse operator for linear non-minimum phase systems with disturbances*
###### Abstract
In this paper, a novel Koopman-type inverse operator for linear time-invariant non-minimum phase systems with stochastic disturbances is proposed. This operator employs functions of the desired output to directly calculate the input. Furthermore, it can be applied as a data-driven approach for systems with unknown parameters yet a known relative degree, which is a departure from the majority of existing data-driven methods that are only applicable to minimum phase systems. Based on this foundation, we use the Monte Carlo approach to develop an improved Koopman-type method for addressing the issue of inaccurate parameter estimation in data-driven systems with large disturbances. The simulation results justify the tracking accuracy of Koopman-type operator.
## I Introduction
Numerous studies have demonstrated that output tracking for non-minimum phase (NMP) systems poses more significant challenges than minimum phase (MP) systems. A system is considered to have non-minimum phase (or possess unstable zeros in the linear case) if there exists a (nonlinear) state feedback capable of maintaining the system output at an identical zero level, while simultaneously causing the internal dynamics to become unstable (Isidori and Alberto, 1985). For linear systems, the inverse of the system will make the unstable zeros of the original system transfer function become the poles, which cause the instability of the inverse (Butterworth et al., 2018).
Various approaches have been proposed to tackle the difficulties in NMP systems, including decomposing the system into external and internal dynamics and solving for the internal dynamics (Devasia et al., 1996; Devasia and Paden, 1998; Estrada, 2021). Berget and Tomas (2020) proposed a method for constructing a new output that eliminates the unstable part of the zero dynamics, while Zundert et al. (2019) have decomposed NMP systems into a stable part and an unstable part for separate control. Ma et al. (2020) solved the problem of random perturbations in NMP systems by using two cost functions that possess the dual property. However, one of the main disadvantages of model-based control methods is their reliance on accurate models of the systems, which can be difficult and time-consuming to develop and validate for complex NMP systems.
In recent years, data-driven approaches have garnered attention for their ability to address the challenge of obtaining accurate models for model-based control strategies. Among the classic methods for non-minimum phase systems are virtual reference feedback tuning (VRFT) (Campi et al., 2002) and correlation-based tuning (CBT) (Van Heusden et al., 2011). However, CBT and VRFT require a comprehensive dataset to ensure model accuracy and may rely on the selection of certain empirical parameters, as noted by Rallo et al. (2016). To overcome these limitations, Suresh Kumar et al. (2022) developed a data-driven control method that integrates internal model control and VRFT. Markovsky et al. (2022) proposed and solved the data-driven dynamic interpolation and approximation problem. However, most data-driven NMP control methods often require large amounts of data for training, and their applicability conditions and control accuracy have not been mathematically proven. Mamakoukas et al. (2021) designed a generalizable methodology for data-driven identification of nonlinear dynamics based on the Koopman operator.
In this paper, we propose a data-driven method that requires very few parameters and training data and provide a proof for the tracking accuracy. For systems without disturbances, only a small period of input-output data is needed to determine the parameters of our method.
The Koopman method, initially introduced by Koopman (1931), has proven to be a valuable mathematical tool in transforming complex nonlinear systems into higher-dimensional linear systems. This transformation enables the application of established linear system theory to control nonlinear systems. In recent years, the Koopman method has gained significant attention in various fields, including cybernetics and machine learning. Mauroy et al. (2021) introduced the use of Koopman operators in control systems. Mamakoukas et al. (2020) demonstrated the effectiveness of the Koopman operator for a robotic system at the experimental level. Additionally, Klus et al. (2020) presented an application of the Koopman operator in data-driven methods, and Leon and Devasia (2022) proposed a Koopman-type data-driven approach to control linear MP systems. Further research on the Koopman method holds the potential to revolutionize the field of nonlinear control systems.
The Koopman method has been widely studied in control systems, with research falling into two main categories: selecting eigenfunctions of the Koopman operator experimentally and obtaining desired coefficients using machine
learning methods, or introducing a Koopman method under an MP system and proving its applicability and effectiveness. However, there is currently no proof of the practicality of the Koopman method for NMP systems. In this paper, we propose a Koopman-type control method for linear NMP systems with stochastic disturbances and prove its applicability conditions and tracking effects. The contributions of this paper are twofold: first, to the best of our knowledge, this paper is the first to use the Koopman method for the control of linear NMP systems and provide a complete proof; second, we demonstrate that our Koopman-type operator is still applicable for systems with random perturbations.
The remainder of the paper is organized as follows: basic notation and the Koopman method are introduced in Section II, the problem of NMP system control is stated in Section III, and the Koopman-type method is proposed in Section IV together with the theoretical analysis and the demonstration of the tracking error. Section V includes simulation results that justify the tracking accuracy theorem presented in Section IV. A brief conclusion is given in Section VI.
## II Preliminaries
### Notation
Throughout this paper, let \(\mathbb{R}\) denote the set of real numbers, \(\mathbb{R}^{m\times n}\) the set of \(m\times n\) matrices, and \(\mathbb{Z}^{+}\) the set of positive integers. For vectors \(\mathbf{a}\), \(\mathbf{b}\), \(||\mathbf{a}||\) is the \(l_{2}\) norm, dim(\(\mathbf{a}\)) is the dimension of vector \(\mathbf{a}\), and \(<\mathbf{a},\mathbf{b}>\) is the inner product. For a matrix \(\mathbf{A}\), \(||\mathbf{A}||\) is the Frobenius norm, \(\mathbf{A}^{T}\) is the transpose of \(\mathbf{A}\), and \(\mathbf{A}^{\dagger}\) is the Moore-Penrose pseudoinverse of \(\mathbf{A}\). \(U[x]\) is a uniform distribution on the interval \([-x,x]\) for \(x\in\mathbb{R}^{+}\). For any matrix \(\mathbf{A}=(a_{ij})\) whose entries \(a_{ij}\) are random variables, \(E[\mathbf{A}]=(E[a_{ij}])\) is the mathematical expectation of \(\mathbf{A}\). \(y^{(r)}(t)\) is the \(r\)-th derivative of \(y\) with respect to \(t\).
### Koopman method
Consider a (Banach) space \(\mathscr{F}\) of observables \(f:X\rightarrow\mathbb{C}\). The Koopman operator \(U_{s}:\mathscr{F}\rightarrow\mathscr{F}\) associated with the map \(S:X\to X\) is defined through the composition (Mauroy et al., 2020)
\[\boldsymbol{U_{s}f}=\boldsymbol{f}\circ\boldsymbol{S}\quad\boldsymbol{f}\in \mathscr{F} \tag{1}\]
\(U_{s}\) is the Koopman operator, which converts nonlinear systems into finite- or infinite-dimensional linear systems through the decomposition of the eigenfunctions and eigenvalues of \(U_{s}\), treating the nonlinear system as a higher-order linear system.
The conventional Koopman method necessitates the identification of specific observables such that they constitute a linear system. Subsequently, the linear system composed of these observables is analyzed to derive the control strategy for the linear system. The final step involves solving the inverse of the observables to obtain the desired parameters of the original system, such as the input function \(u(t)\). A valuable idea, inspired by Yan and Devasia (2022) and along the lines of the Koopman approach, is the possibility of obtaining the desired parameters directly and linearly through the observables. We call this modified Koopman method the Koopman-type method.
## III Problem Statement
Consider a non-minimum phase (NMP) stochastic linear time-invariant (LTI) system defined by
\[\dot{\mathbf{x}}(t) =\mathbf{A}\mathbf{x}(t)+\mathbf{B}u(t)+\mathbf{G}w(t) \tag{2}\] \[y(t) =\mathbf{C}\mathbf{x}(t)+h(t)\]
Where the states \(\mathbf{x}(t)\in\mathbb{R}^{n}\), the output \(y(t)\in\mathbb{R}\), \(h(t)\) and \(w(t)\) are stochastic disturbance functions, \(\mathbf{A}\in\mathbb{R}^{n\times n}\), \(\mathbf{B},\mathbf{G}\in\mathbb{R}^{n\times 1}\), and \(\mathbf{C}\in\mathbb{R}^{1\times n}\).
**Assumption 1**: \(h(t)\)_, \(w(t)\) are bounded, i.e. \(\sup\limits_{t\in\mathbb{R}}|h(t)|<\infty\) and \(\sup\limits_{t\in\mathbb{R}}|w(t)|<\infty\)_
Performing the Laplace transform on (2),
\[Y(s)= [\mathbf{C}(s\mathbf{I}_{n}-\mathbf{A})^{-1}\mathbf{B}]U(s)+ \tag{3}\] \[[\mathbf{C}(s\mathbf{I}_{n}-\mathbf{A})^{-1}\mathbf{G}]W(s)+H(s)\]
Write the first part of (3) in the form
\[\mathbf{C}(s\mathbf{I}_{n}-\mathbf{A})^{-1}\mathbf{B}= \tag{4}\] \[k\cdot\frac{s^{n-r}+b_{n-r-1}\cdot s^{n-r-1}+...+b_{0}}{s^{n}+a_{ n-1}s^{n-1}+...+a_{1}s+a_{0}}\]
**Assumption 2**: _The relative degree \(r\leq n\) is known and the exact value of matrix \(\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{G}\) are unknown._
**Assumption 3**: _The polynomial in the numerator of (4) has roots with real parts greater than zero but no roots with real parts equal to zero, while the roots of the denominator polynomial have all real parts less than zero._
Take \(y_{e}=y(t)-h(t)\), \(\xi=(y_{e},\dot{y}_{e},\ddot{y}_{e},...,y_{e}^{(r-1)})^{T}\), \(\eta=(x_{1},x_{2},...,x_{n-r})^{T}\); we call \(\eta\) the internal states, where \(x_{i}\) is the \(i\)-th element of \(\mathbf{x}\). Using the same linear transformations as in Hendricks et al. (2008), the system can be written as
\[\dot{\xi}(t) =\mathbf{A}_{1}\xi(t)+\mathbf{A}_{2}\eta(t)+\mathbf{B}_{1}u(t)+ \mathbf{G}_{1}w(t) \tag{5}\] \[\dot{\eta}(t) =\mathbf{A}_{3}y(t)+\mathbf{A}_{4}\eta(t)+\mathbf{G}_{2}w(t)\]
Where
\[\mathbf{A}_{3} =(0,0,...,1)^{T}\in\mathbb{R}^{n\times 1} \tag{6}\] \[\mathbf{B}_{1} =(0,0,...,k)^{T}\in\mathbb{R}^{n\times 1}\] \[\mathbf{G}_{1} =(0,0,...,g)^{T}\in\mathbb{R}^{n\times 1}\]
and
\[\mathbf{A}_{1} =\left(\begin{array}{cccc}0&\cdots&0\\ \vdots&\vdots&\vdots\\ -&\mathbf{r}^{T}&-\end{array}\right)\mathbf{A}_{2}=\left(\begin{array}{ ccc}0&\cdots&0\\ \vdots&\vdots&\vdots\\ -&\mathbf{s}^{T}&-\end{array}\right) \tag{7}\] \[\mathbf{A}_{4} =\left(\begin{array}{cccc}0&1&0&\cdots\\ \vdots&\ddots&\ddots&\vdots\\ 0&\cdots&0&1\\ -b_{0}&-b_{1}&\cdots&-b_{n-r-1}\end{array}\right)\]
Denote \(y_{d}(t)\) as the desired output, the task of this paper is to find a Koopman-type operator to calculate input \(u(t)\)
directly from \(y_{d}\) such that, under this input, the actual output \(y(t)\) differs from \(y_{d}(t)\) by an acceptably small error as time grows.
**Assumption 4**: \(y_{d}\) _is \(r\)-times continuously differentiable and there is \(M>0\) such that \(|y_{d}(t)|\leq M\) for any \(t\in\mathbb{R}\)._
## IV Main Results
In this section, we describe the Koopman-type operator in detail. In part A, we present the specific formulation of the Koopman-type method and conduct a theoretical analysis of its tracking accuracy. In part B, we introduce a data-driven approach for determining the parameters required by the Koopman-type method. In part C, we designed an improved Koopman-type method by incorporating the Monte Carlo technique to improve the accuracy of parameters estimation in data-driven processes for systems with large disturbances.
### Koopman-type operator and its performance analysis
**Definition 1** Koopman-type operator
\[\begin{split}\mathcal{A}:=&\{q^{-d\cdot\Delta t} y_{d}(t)|d\in\{0,\pm 1\cdots\pm N-1,N\}\}\\ \mathcal{D}:=&\{y_{d}^{(i)}|i\in\{1,2\cdots r\} \}\end{split} \tag{8}\]
Where \(q^{-d\cdot\Delta t}\) is a pure time delay of \(d\cdot\Delta t\). Taking \(\Phi\) to be a column vector containing every function in \(\mathcal{A}\) and \(\mathcal{D}\), we define the Koopman-type operator in the form
\[u(t)=<\mathbf{K},\Phi(t)> \tag{9}\]
Where \(\mathbf{K}\) is a constant column vector with dimension \(2N+r\).
**Remark 1**: \(\mathbf{K}\) _is an undetermined parameter of the Koopman-type operator, for different systems we need to determine different \(\mathbf{K}\) to improve the accuracy of tracking._
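For concreteness, the observable vector \(\Phi(t)\) of Definition 1 can be assembled as follows. This is a minimal sketch assuming \(y_{d}\) and its first \(r\) derivatives are available as callables; all function names are illustrative.

```python
import numpy as np

def phi(t, y_d, y_d_derivs, N, dt):
    """Assemble the observable vector Phi(t) of Definition 1.

    y_d        : callable, the desired output
    y_d_derivs : list of callables [y_d', ..., y_d^{(r)}]
    The delays d*dt run over d in {-(N-1), ..., 0, ..., N} (2N entries), so that
    the resulting vector has dimension 2N + r and u(t) = <K, Phi(t)>.
    """
    delays = [y_d(t - d * dt) for d in range(-(N - 1), N + 1)]
    derivs = [g(t) for g in y_d_derivs]
    return np.array(delays + derivs)

# example matching the simulation section: y_d(t) = sin(0.1 t), r = 2, N = 20, dt = 0.5
# phi(5.0, lambda t: np.sin(0.1 * t),
#     [lambda t: 0.1 * np.cos(0.1 * t), lambda t: -0.01 * np.sin(0.1 * t)], N=20, dt=0.5)
```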
**Theorem 1**: _There exists \(\delta>0\), for any \(\epsilon>0\), there exist \(\hat{N}\), \(\Delta\hat{t}\) and \(\mathbf{K}\), if \(N>\hat{N}\), and \(\Delta t<\Delta\hat{t}\), then from the Koopman-type operator, the output of system (1) satisfies_
\[|y(t)-y_{d}(t)|\leq\epsilon+\delta\sup_{t\in\mathbb{R}}|w(t)|+\sup_{t\in \mathbb{R}}|h(t)| \tag{10}\]
_for large enough t._
**Proof** By a linear variable substitution of \(\mathbf{x}\), as in Zou and Devasia (1999), the second equation of system (5) can be written as
\[\dot{\eta^{\prime}}(t)=\mathbf{A}_{3^{\prime}}y_{e}(t)+\mathbf{A}_{4^{\prime} }\eta^{\prime}(t)+\mathbf{G}_{2^{\prime}}w(t) \tag{11}\]
Where
\[\mathbf{A}_{4^{\prime}}=\begin{bmatrix}\mathbf{A}_{4^{\prime}-}&0\\ 0&\mathbf{A}_{4^{\prime}+}\end{bmatrix} \tag{12}\]
Where all eigenvalues of \(\mathbf{A}_{4^{\prime}-}\) (\(\mathbf{A}_{4^{\prime}+}\)) have negative (positive) real parts. Without loss of generality, assume \(\mathbf{A}_{4}\) is directly in this form. From Devasia (1996), the unique solution of the second equation in system (5) under the assumption \(\eta(\pm\infty)=0\) for given \(y_{d}\) is
\[\eta(t)=\int_{-\infty}^{+\infty}\phi(t-\tau)(\mathbf{A}_{3}y_{d}(\tau)+ \mathbf{G}_{2}w(\tau))d\tau \tag{13}\]
Where
\[\phi(t)=\begin{bmatrix}1(t)e^{\mathbf{A}_{4-}}&0\\ 0&-1(-t)e^{\mathbf{A}_{4+}}\end{bmatrix} \tag{14}\]
**Lemma 1**: _There exist positive scalars \(\alpha>0,\beta>0,\gamma>0\) such that_
\[||\eta(t)-\eta_{N}(t)||\leq\beta e^{-\alpha N\Delta t}+\gamma\sup_{t\in \mathbb{R}}|w(t)| \tag{15}\]
_where \(\eta(t)\) is calculated in (13), and_
\[\eta_{N}(t)=\int_{t-N\Delta t}^{t+N\Delta t}\phi(t-\tau)(\mathbf{A}_{3}y_{d} (\tau)+\mathbf{G}_{2}w(\tau))d\tau \tag{16}\]
**Proof** The eigenvalues of \(\mathbf{A}_{4-}\) and \(-\mathbf{A}_{4+}\) have negative real parts, so there exist positive scalars \(\kappa_{1}>0\), \(\kappa_{2}>0\), \(\kappa_{3}>0\), \(\alpha_{1}>0\), \(\alpha_{2}>0\), \(\alpha_{3}>0\) such that (see Desoer and Vidyasagar (2009))
\[\begin{split}||\phi(t)||&\leq\kappa_{1}e^{-\alpha_{1} t}\\ ||\phi(-t)||&\leq\kappa_{2}e^{-\alpha_{2}t}\\ ||e^{\mathbf{A}t}||&\leq\kappa_{3}e^{-\alpha_{3}t} \end{split} \tag{17}\]
Then
\[\begin{split}||\eta(t)-\eta_{N}(t)||=&||\int_{- \infty}^{t-N\Delta t}\phi(t-\tau)\mathbf{A}_{3}y_{d}(\tau)d\tau+\\ &\int_{t+N\Delta t}^{\infty}\phi(t-\tau)\mathbf{A}_{3}y_{d}(\tau)d \tau+\\ &\int_{-\infty}^{+\infty}\phi(t-\tau)\mathbf{G}_{2}w(\tau)d \tau||\\ \leq& M||\mathbf{A}_{3}||(||\int_{-\infty}^{t-N \Delta t}\kappa_{1}e^{-\alpha_{1}(t-\tau)}d\tau||+\\ &||\int_{t+N\Delta t}^{\infty}\kappa_{2}e^{-\alpha_{2}(t-\tau)}d \tau||)+\\ &\sup_{t\in\mathbb{R}}|w(t)|\cdot||\mathbf{G}_{2}||\\ &(||\int_{-\infty}^{t}\kappa_{1}e^{-\alpha_{1}(t-\tau)}d\tau||+\\ &||\int_{t}^{\infty}\kappa_{2}e^{-\alpha_{2}(t-\tau)}d\tau||)\\ =& M||\mathbf{A}_{3}||(\frac{\kappa_{1}}{\alpha_{1}}e^{- \alpha_{1}N\Delta t}+\frac{\kappa_{2}}{\alpha_{2}}e^{-\alpha_{2}N\Delta t})+ \\ &\sup_{t\in\mathbb{R}}|w(t)|\cdot||\mathbf{A}_{2}||(\frac{\kappa_{1 }}{\alpha_{1}}+\frac{\kappa_{2}}{\alpha_{2}})\end{split} \tag{18}\]
Take \(\alpha=\min\alpha_{1},\alpha_{2}\), \(\beta=2M||\mathbf{A}_{3}||\max\frac{\kappa_{1}}{\alpha_{1}},\frac{\kappa_{2}} {\alpha_{2}}\), \(\gamma=||\mathbf{G}_{2}||(\frac{\kappa_{1}}{\alpha_{1}}+\frac{\kappa_{2}}{ \alpha_{2}})\)
By the definition of integration, we can write the integral as a partial sum
\[\begin{split}\eta_{N}(t)=&\int_{t-N\Delta t}^{t+N \Delta t}\phi(t-\tau)\mathbf{A}_{3}y(\tau)d\tau\\ \approx&\Delta t\cdot\sum_{\tau=0}^{2N-1}\phi((-N+ \tau)\Delta t)\mathbf{A}_{3}q^{(N-\tau-1)\Delta t}y_{d}(t)\\ \triangleq&\hat{\eta}_{N}(t)\end{split} \tag{19}\]
For any \(\epsilon_{1}>0\), we can find a small enough \(\Delta\hat{t}\) such that for \(\Delta t\leq\Delta\hat{t}\), \(\sup\limits_{t\in\mathbb{R}}||\eta_{N}(t)-\hat{\eta}_{N}(t)||<\epsilon_{1}\); then
\[||\eta(t)-\hat{\eta}_{N}(t)|| \leq||\eta_{N}(t)-\hat{\eta}_{N}(t)||+||\eta(t)-\eta_{N}(t)|| \tag{19}\] \[\leq\beta e^{-\alpha N\Delta t}+\gamma\sup\limits_{t\in\mathbb{R }}|w(t)|+\epsilon_{1}\]
Take
\[\begin{split} u(t)\triangleq&\frac{1}{k}[y_{d}^{(r )}(t)-\mathbf{r}^{T}\xi_{d}(t)-\mathbf{s}^{T}\eta(t)-g\cdot w(t)]\\ \hat{u}(t)\triangleq&\frac{1}{k}[y_{d}^{(r)}(t)- \mathbf{r}^{T}\xi_{d}(t)-\mathbf{s}^{T}\hat{\eta}_{N}(t)]\end{split} \tag{20}\]
Then
\[\begin{split}|u(t)-\hat{u}(t)|<&\frac{1}{k}(|| \mathbf{s}||(\beta e^{-\alpha N\Delta t}+\epsilon_{1})+\\ &(||\mathbf{s}||\gamma+g)\sup\limits_{t\in\mathbb{R}}|w(t)|)\end{split} \tag{21}\]
For some initial state \(\mathbf{x}(0)\), under the input \(u(t)\), the output \(y(t)\) can be made strictly equal to \(y_{d}(t)-h(t)\); then
\[\begin{split} y_{d}(t)&=\mathbf{C}\mathbf{x}(t) \\ &=\mathbf{C}(e^{\mathbf{At}}\mathbf{x}(0)+\int_{0}^{t}e^{\mathbf{ A}(t-\tau)}(\mathbf{B}u(\tau)+\mathbf{G}w(\tau))d\tau)\end{split} \tag{22}\]
Assume the true system has initial state \(\hat{\mathbf{x}}(0)\); under the input \(\hat{u}(t)\), take
\[\begin{split}\hat{y}_{d}(t)&=C\mathbf{x}(t)\\ &=C(e^{At}\hat{\mathbf{x}}(0)+\int_{0}^{t}e^{A(t-\tau)}(B\hat{u}( \tau)+Gw(\tau))d\tau)\end{split} \tag{23}\]
Then
\[\begin{split}|\hat{y}_{d}(t)-y_{d}(t)|=&|\mathbf{C} (e^{\mathbf{At}}(\hat{\mathbf{x}}(0)-\mathbf{x}(0))+\\ &\int_{0}^{t}e^{\mathbf{A}(t-\tau)}(\hat{u}(\tau)-u(\tau))d\tau| \\ \leq&||\mathbf{C}||(\beta_{3}e^{-t\alpha_{3}}\cdot|| \mathbf{x}(0)-\hat{\mathbf{x}}(0)||+\\ &\frac{1}{k}(||\mathbf{s}||(\beta e^{-\alpha N\Delta t}+\epsilon _{1})+\\ &(||s||\gamma+g)\sup\limits_{t\in\mathbb{R}}|w(t)|)\frac{\beta_{3 }}{\alpha_{3}}(1-e^{-\alpha_{3}t}))\end{split}\]
The true output
\[y(t)=\hat{y}_{d}(t)+h(t) \tag{24}\]
\[\begin{split}&\lim_{t\to+\infty}|y(t)-y_{d}(t)|\\ &\leq\lim_{t\to+\infty}|y(t)-\hat{y}_{d}(t)|+|\hat{y}_{d}(t)-y_{d} (t)|\\ \leq&\sup\limits_{t\in\mathbb{R}}|h(t)|+||\mathbf{C} ||\frac{||\mathbf{s}||\beta_{3}}{k\alpha_{3}}(\epsilon_{1}+\beta e^{-\alpha N \Delta t})+\\ &\frac{\beta_{3}}{\alpha_{3}}(||\mathbf{s}||\gamma+g)\sup\limits _{t\in\mathbb{R}}|w(t)|\end{split}\]
Take \(\epsilon_{1}\) small enough and \(\hat{N}\) such that \(||\mathbf{C}||\frac{||\mathbf{s}||\beta_{3}}{k\alpha_{3}}(\epsilon_{1}+\beta e ^{-\alpha N\Delta t})<\epsilon\) and \(\delta=\frac{\beta_{3}}{\alpha_{3}}(||\mathbf{s}||\gamma+g)\)\(\blacksquare\)
### Data-driven pseudoinverse approach for Koopman-type operator parameter optimization design
Theorem 1 illustrates that, for the system with disturbances, if a suitable \(\mathbf{K}\) can be found, then our Koopman-type operator can control this system with an error depending on the disturbances. In this part we show how to find \(\mathbf{K}\) by a data-driven pseudoinverse method. We use every function \(\phi_{i}\) in \(\Phi\) as the input to the system and measure the corresponding output \(y_{i}\) of the system. Denote the output vector by \(\mathbb{O}_{i}\)
\[\begin{split}\mathbb{O}_{i}&=[\phi_{i}(t_{1}),\phi_ {i}(t_{2})\cdots\phi_{i}(t_{j})],\\ j&\in\mathbb{Z}^{+}and\ t_{1}<t_{2}\cdots<t_{j}\end{split} \tag{25}\]
_Remark 2:_ For the different input functions \(\phi_{i}\), the sampling times \(t_{1},t_{2},...,t_{j}\) are the same, and \(t_{1}\) should be large enough to reduce the effect of the initial dynamics.
Take
\[\mathbb{O}=[\mathbb{O}_{1}^{T},\mathbb{O}_{2}^{T},...]^{T} \tag{26}\]
\[\mathbb{O}_{d}=[y_{d}(t_{1}),y_{d}(t_{2}),...,y_{d}(t_{j})] \tag{27}\]
We calculate \(\mathbf{K}\) by minimizing
\[||\mathbb{O}_{d}-\mathbf{K}^{T}\mathbb{O}|| \tag{28}\]
and the solution to this optimal problem can be written in exact form
\[\mathbf{K}^{T}=\mathbb{O}_{d}\mathbb{O}^{\dagger} \tag{29}\]
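A minimal numerical sketch of (29): assuming the measured output vectors have been stacked into a matrix \(\mathbb{O}\) and the desired-output samples into \(\mathbb{O}_{d}\), the gain follows from a single pseudoinverse.

```python
import numpy as np

def fit_koopman_gain(O, O_d):
    """Solve (29): K^T = O_d O^+.

    O   : array of shape (2N + r, j) whose rows are the measured output vectors O_i
    O_d : length-j array of desired-output samples y_d(t_1), ..., y_d(t_j)
    Returns K^T as a length-(2N + r) vector.
    """
    return O_d @ np.linalg.pinv(O)   # Moore-Penrose pseudoinverse
```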
### Improved Koopman-type method to handle large disturbances
Monte Carlo methods tend to perform well in dealing with stochastic problems. For systems with large perturbations, the parameters of the Koopman-type operator obtained by the data-driven method may be very inaccurate; in this section we prove that the errors in the data-driven process can be reduced using Monte Carlo methods.
**Theorem 2:** Assume \(E[\mathbb{O}]\) is the expectation of the output matrix, and \(\mathbb{O}^{1},\mathbb{O}^{2},...\) are the output matrices obtained from system (5); take \(\overline{\mathbb{O}}_{N}=\frac{1}{N}\sum_{n=1}^{N}\mathbb{O}^{n}\). For given \(\Phi\), if \(E[r(t)]=E[w(t)]=0\) and \(E[\int_{0}^{t}w^{2}(\tau)d\tau]<\infty\) for any \(t\in\mathbb{R}\), then there exists \(\sigma>0\) such that, with probability \(1-p\), \(||\overline{\mathbb{O}}_{N}-E[\mathbb{O}]||_{\infty}<\frac{\sigma}{\sqrt{Np}}\).
**Proof** Denote \(y_{s}\) as the actual output of system (1)
\[\begin{split}& Var(y_{s}(t)-E[y_{s}(t)])\\ &=Var(\int_{0}^{t}e^{\mathbf{A}(t-\tau)}\mathbf{G}w(\tau)d\tau)+ Var(r(t))\\ &=E[(\int_{0}^{t}e^{\mathbf{A}(t-\tau)}\mathbf{G}w(\tau)d\tau)^{2}] +Var(r(t))\\ &\leq||\mathbf{G}||^{2}(\int_{0}^{t}(\kappa_{3}e^{-\alpha_{3}(t-\tau) })^{2}d\tau)E[\int_{0}^{t}w^{2}(\tau)d\tau]+\\ & Var(r(t))\\ &\leq\frac{\kappa_{3}^{2}||\mathbf{G}||^{2}}{2\alpha_{3}}E[\int_{0 }^{t}w^{2}(\tau)d\tau]+Var(r(t))\end{split} \tag{30}\]
The second to the last inequality hold by Cauchy Inequality. Take
\[V^{2}=\frac{\kappa_{3}^{2}||\mathbf{G}||^{2}}{2\alpha_{3}}\sup_{t\in[0,t_{j}]}E[ \int_{0}^{t}w^{2}(\tau)d\tau]+\sup_{t\in\mathbb{R}}Var(r(t)) \tag{31}\]
Then \(Var(y_{s}(t)-E[y(t)])\leq V^{2}\) for all \(t\in[0,t_{j}]\); by Chebyshev's Inequality, with probability \(1-sp\), where \(s\) is the number of elements in matrix \(\mathbb{O}\),
\[||\overline{\mathbb{O}}_{N}-E[\mathbb{O}]||_{\infty} =||\frac{1}{N}\sum_{n=1}^{N}(\mathbb{O}^{n}-E[\mathbb{O}])||_{\infty} \tag{32}\] \[\leq\frac{V}{\sqrt{Np}}\]
Take \(\sigma=V\sqrt{s}\)\(\blacksquare\)
_Remark 3_ According to this theorem, for systems subjected to substantial disturbances, an effective strategy to mitigate the error and attain an accurate estimation of \(\mathbf{K}\) is to employ averaging over multiple measurements.
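In practice, Remark 3 amounts to averaging repeated data-collection passes before applying the pseudoinverse of (29); a minimal sketch (with a placeholder for the data-collection routine) is given below.

```python
def averaged_output_matrix(collect_output_matrix, n_runs):
    """Average repeated measurements of the output matrix, as suggested by Theorem 2.

    `collect_output_matrix` is a placeholder for one noisy data-collection pass that
    returns a matrix O^n; averaging N runs shrinks the deviation from E[O] at rate
    O(1/sqrt(N)).
    """
    return sum(collect_output_matrix() for _ in range(n_runs)) / n_runs
```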
## V Simulation Results
In this section, we present a linear NMP system and validate the results in Section IV through simulation. In Part A, the system's parameters and the simulation methodology are designed. In Part B, the data-driven approach from Section IV, Part B is applied to determine the Koopman-type operator's parameters. Then the input function provided by the Koopman-type operator is applied to the system to confirm the tracking accuracy demonstrated in Section IV, Part A. Lastly, in Part C, we corroborate that the improved Koopman-type method proposed in Section IV, Part C is more effective in systems with substantial perturbations.
### Simulation system and parameter design
The simulation system we choose is
\[\dot{\mathbf{x}} =\begin{pmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ -1&-4&-5.5&-3.5\end{pmatrix}\mathbf{x}+\begin{pmatrix}0\\ 0\\ 0\\ 1\end{pmatrix}u(t)+\mathbf{G}w \tag{33}\] \[y =\begin{pmatrix}-2&1&1&0\end{pmatrix}x+h\]
Where \(\mathbf{G}\) is a \(4\times 1\) matrix and \(w,h\) are bounded disturbances whose characteristics will be determined according to the specific simulation requirements. The system is non-minimum phase, and it has relative degree 2. The initial state at \(t=0\) is set to \(\mathbf{x}(0)=\mathbf{0}\).
The function being tracked is set to \(y_{d}=\sin(0.1t)\). We take \(T=N\Delta t=10\), \(\Delta t=0.5\) in Theorem 1. The output vectors \(\mathbb{O}_{i}\) are generated by using \(y_{d}(t+T-i\Delta t)=\sin(0.1(t+10-0.5i))\) as the input of the system for \(i\in\{1,2,\cdots,40\}\), and \(\mathbb{O}_{41}\), \(\mathbb{O}_{42}\) are generated by \(\dot{y}_{d}=0.1\cos(0.1t)\) and \(\ddot{y}_{d}=-0.01\sin(0.1t)\). All these output vectors have \(t_{j}=50+0.5j\) for \(j\in\{1,2,\cdots,100\}\) in (25). The parameters of the Koopman-type operator are calculated by (29).
For a given input function \(u(t)\), we calculate the value of \(y(t)\) by
\[y(t)=\mathbf{C}\int_{0}^{t}e^{\mathbf{A}(t-\tau)}(\mathbf{B}u(\tau)+\mathbf{G}w(\tau))\,d\tau+h(t) \tag{34}\]
For the calculation of the integral we use a partition method; to ensure accuracy without occupying too much memory, the partition length is set to 0.01, which is accurate compared to \(\Delta=0.05\).
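A sketch of this partition (Riemann-sum) evaluation of (34) is shown below; the matrix names mirror (33), the step size is the 0.01 partition length mentioned above, and \(u\), \(w\), \(h\) are assumed to be callables.

```python
import numpy as np
from scipy.linalg import expm

def simulate_output(A, B, C, G, u, w, h, t, dt=0.01):
    """Riemann-sum evaluation of (34) with x(0) = 0 and partition length dt.

    A: (n, n) array; B, C, G: length-n arrays; u, w, h: scalar-valued callables.
    """
    x = np.zeros(A.shape[0])
    for tau in np.arange(0.0, t, dt):
        x = x + expm(A * (t - tau)) @ (B * u(tau) + G * w(tau)) * dt
    return float(C @ x) + h(t)
```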
### _Performance of the Koopman-type operator in a disturbed system under the data-driven pseudoinverse approach_
In this part, random disturbances are added to both the input and the output. We take \(\mathbf{G}=(0\ 0\ 0\ 1)^{T}\), \(w(t)=U[0.05|u(t)|]\), \(h(t)=U[0.05|y(t)|]\). The Koopman-type operator parameters are solved for by the data-driven method, and the simulations are performed on the perturbed system. Figures 1 and 2 show the simulation results and errors for this disturbed system; the plots vary slightly from simulation to simulation due to the presence of random functions in the system. From Figure 2, the tracking error is less than \(0.05\max(y_{d}(t))=0.05\).
### _Performance of improved Koopman-type operator_
In this part, we introduce a simulated scenario in which large disturbances are added to both the input and the output of the system during the data-driven approach for solving the Koopman-type operator coefficients. Specifically, we set \(\mathbf{G}\) to be \((0\ 0\ 0\ 1)^{T}\), and adopt \(w(t)=U[0.2|u(t)|]\) and \(h(t)=U[0.2|y(t)|]\) to model the larger measurement errors that are often encountered in practical applications. To obtain different values of \(\mathbf{K}\), we select values of \(N\) from 1, 5, and 10 in Theorem 2. After the data-driven process, we obtain a parameter vector \(\mathbf{K}\); we then insert this \(\mathbf{K}\) into the Koopman-type operator, use the resulting input function as the input to the unperturbed system, and compute the error with respect to the tracked function. Figure 3 illustrates the impact of varying the number of Monte Carlo samples on the resulting tracking performance.
## VI Conclusions
In this paper, we design a data-driven Koopman-type operator for linear non-minimum phase (NMP) systems and give a theoretical proof of the control accuracy of NMP systems using the Koopman-type method; to our knowledge, this is the first such proof for a Koopman-type method on NMP systems. We demonstrate that the tracking error of our Koopman-type operator is proportional to the bound of the NMP system perturbations, given sufficiently long causal and non-causal desired outputs and appropriately small sampling intervals. Additionally, applying the Monte Carlo method helps mitigate the impact of perturbations on measurement results during the data-driven process. Our ongoing research efforts focus on extending this approach to address linear time-varying non-minimum phase systems.
|
2304.14062
|
Comparison of Optimization-Based Methods for Energy-Optimal Quadrotor
Motion Planning
|
Quadrotors are agile flying robots that are challenging to control.
Considering the full dynamics of quadrotors during motion planning is crucial
to achieving good solution quality and small tracking errors during flight.
Optimization-based methods scale well with high-dimensional state spaces and
can handle dynamic constraints directly, therefore they are often used in these
scenarios. The resulting optimization problem is notoriously difficult to solve
due to its nonconvex constraints. In this work, we present an analysis of four
solvers for nonlinear trajectory optimization (KOMO, direct collocation with
SCvx, direct collocation with CasADi, Crocoddyl) and evaluate their performance
in scenarios where the solvers are tasked to find minimum-effort solutions to
geometrically complex problems and problems requiring highly dynamic solutions.
Benchmarking these methods helps to determine the best algorithm structures for
these kinds of problems.
|
Welf Rehberg, Joaquim Ortiz-Haro, Marc Toussaint, Wolfgang Hönig
|
2023-04-27T09:46:05Z
|
http://arxiv.org/abs/2304.14062v1
|
# Comparison of Optimization-Based Methods for Energy-Optimal Quadrotor Motion Planning
###### Abstract
Quadrotors are agile flying robots that are challenging to control. Considering the full dynamics of quadrotors during motion planning is crucial to achieving good solution quality and small tracking errors during flight. Optimization-based methods scale well with high-dimensional state spaces and can handle dynamic constraints directly, therefore they are often used in these scenarios. The resulting optimization problem is notoriously difficult to solve due to its nonconvex constraints. In this work, we present an analysis of four solvers for nonlinear trajectory optimization (KOMO, direct collocation with SCvx, direct collocation with CasADi, Crocoddyl) and evaluate their performance in scenarios where the solvers are tasked to find minimum-effort solutions to geometrically complex problems and problems requiring highly dynamic solutions. Benchmarking these methods helps to determine the best algorithm structures for these kinds of problems.
## I Introduction
In recent years, multirotors have risen in popularity in academia and industry due to their exceptional agility and simple mechanical design. However, motion planning for these under-actuated systems requires considering the dynamical constraints and is computationally expensive. While sampling- and search-based approaches to motion planning have strong theoretical guarantees regarding completeness, they suffer from the curse of dimensionality and scale exponentially with the dimension of the state space. Optimization-based methods, on the other hand, often provide speed advantages in high-dimensional state spaces (only polynomial scaling with the number of state space dimensions) and better quality of the found solutions. Since the motion planning problem is notoriously difficult to solve, approximate solutions are frequently applied. Common approximations are using the double integrator model of a point mass [1] or the differential-flatness property of the quadrotor model [2], which allows computing and following splines as trajectories. Unfortunately, such approaches neglect the real model dynamics completely or cannot take the limited motor forces of a real quadrotor into account and produce conservative motions. In this paper we consider the full nonlinear dynamic model of the system to formulate a minimum-effort control problem and benchmark four different optimization-based methods on four scenarios, including geometrically challenging obstacle avoidance and highly dynamic maneuvers.
## II Approach
### _Quadrotor Model_
We consider a quadrotor model with state vector \(x=[p,v,q,\omega_{B}]^{T}\in\mathbb{R}^{13}\), where \(p\in\mathbb{R}^{3}\) is the position, \(v\in\mathbb{R}^{3}\) is the velocity (both in the inertial frame), \(q\in\mathbb{H}\) is the unit quaternion rotation (parametrizing the rotation matrix \(\mathbf{R}(q)\in SO(3)\)), and \(\omega_{B}\in\mathbb{R}^{3}\) is the rotational velocity in the body frame. The dynamic model is derived from the Newton-Euler equations for rigid bodies with 6 degrees of freedom (DoF) [3]:
\[\dot{p}=v \tag{1}\]
\[\dot{v}=\frac{1}{m}\mathbf{R}(q)f_{T}+g \tag{2}\]
\[\dot{q}=\frac{1}{2}q\otimes\left(\begin{array}{c}0\\ \omega_{B}\end{array}\right) \tag{3}\]
\[\dot{\omega}_{B}=\mathbf{J}^{-1}(\tau-\omega_{B}\times\mathbf{J}\omega_{B}). \tag{4}\]
Here, \(\mathbf{J}\) denotes the inertia matrix of the multirotor (in body frame), \(m\) its mass, \(g\) the gravity vector, \(f_{T}\) the applied combined thrust, \(\tau\) the applied torque, and \(\otimes\) denotes the quaternion product.
The total thrust vector \(f_{T}\) and the acting torque \(\tau\) result from the multirotor geometry and the acting forces generated by the propellers as follows:
\[f_{T}=\left(\begin{array}{c}0\\ 0\\ \sum_{i}f_{i}\end{array}\right),\qquad\tau=\left(\begin{array}{c}\frac{\sqrt{2}}{2}l(-f_{1}-f_{2}+f_{3}+f_{4})\\ \frac{\sqrt{2}}{2}l(-f_{1}+f_{2}+f_{3}-f_{4})\\ \kappa_{\tau}(f_{1}-f_{2}+f_{3}-f_{4})\end{array}\right),\]
where \(\kappa_{\tau}\) is the torque constant and \(l\) is the arm length of the multirotor. The forces \(f_{i}\) are related to the controllable rotor speed \(\omega_{i}\) by the thrust coefficient \(\kappa_{f}\) according to \(f_{i}=\kappa_{f}\omega_{i}^{2}\). We use \(u=[f_{1},\dots,f_{4}]^{T}\) as controls.
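As a sanity check of the mixing above, the map from motor forces to collective thrust and body torque can be written out directly. The sketch below simply transcribes the expressions stated above (the \(\sqrt{2}/2\cdot l\) factors correspond to the assumed 'x' motor arrangement); all names are illustrative.

```python
import numpy as np

def thrust_torque(f, l, kappa_tau):
    """Map the four motor forces f = [f1, f2, f3, f4] to collective thrust f_T
    and body torque tau, following the force-to-wrench expressions above."""
    f1, f2, f3, f4 = f
    f_T = np.array([0.0, 0.0, f1 + f2 + f3 + f4])
    tau = np.array([
        np.sqrt(2) / 2 * l * (-f1 - f2 + f3 + f4),
        np.sqrt(2) / 2 * l * (-f1 + f2 + f3 - f4),
        kappa_tau * (f1 - f2 + f3 - f4),
    ])
    return f_T, tau
```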
### _Non-linear Program (NLP) Formulation_
We formulate the following discrete-time optimization problem over \(N\) steps.
\[\min_{\hat{x},\hat{u}} \sum_{k=0}^{N-1}\|u_{k}\|^{2},\] (5) subject to \[x_{k+1}=step(x_{k},u_{k}), \tag{6}\] \[S_{k}\cap O_{i}=\emptyset,\forall i\in\{1,...,n_{obs}\},\] (7) \[x_{0}=x_{s},x_{N}=x_{f},\] (8) \[x_{k_{m,i}}=x_{m,i},\forall i\in\{1,...,n_{is}\},\] (9) \[x_{k}\leq x_{\max},x_{k}\geq x_{\min},\] (10) \[u_{k}\leq u_{\max},u_{k}\geq u_{\min}. \tag{11}\]
The states are \(\hat{x}=(x_{0},x_{1},...,x_{N})\) with \(x_{k}\in\mathbb{R}^{13}\), and the controls are \(\hat{u}=(u_{0},u_{1},...,u_{N-1})\) with \(u_{k}\in\mathbb{R}^{4}\). Here, (6) captures the dynamics according to (1) - (4); (7) avoids collisions (the multirotor, approximated by the sphere \(S_{k}\), does not intersect with the obstacles \(O_{i}\,\in\,\mathcal{O}\), \(i\,\in\,\{1,...,n_{obs}\}\)); (8) enforces that the trajectory starts in an initial state \(x_{s}\) and ends in a final state \(x_{f}\); (9) enforces intermediate states \(x_{m,i}\) with \(i\in\{1,...,n_{is}\}\); (10) limits the states to be within user-specified bounds; (11) limits the motor force magnitudes (for the highly dynamic scenarios, the optimal maneuvers require reaching the limits); and (5) minimizes the required force as an approximation of the used energy.
### _Used Methods_
The discrete problem is implemented in four different trajectory optimization frameworks. These use different transcriptions and algorithms for solving the nonlinear problem. All methods discretize the continuous-time problem at \(N=100\) time points.
#### Iii-C1 Direct Collocation (DC)
Two of the compared frameworks use direct collocation.
#### Iii-C1 Sequential Convex Programming (SCP)
SCP methods approximate the non-convex discrete optimization problem iteratively as a convex sub-problem updating the approximation according to newly obtained sub-problem solutions. Advantages of SCP methods include that they have meaningful theoretical guarantees regarding algorithmic complexity and performance [4] and that they are agnostic to the choice of the convex solver. As SCP method we use SCvx [5], which we implement in Python following [4]. Positions, orientations, and linear and angular velocities as well as motor forces are introduced as optimization variables and the continuous dynamics are discretized using an explicit Euler integration scheme (\(step\)-function). The collision avoidance constraints are formulated using signed distances calculated with the flexible collision library (fcl) [6]. We formulate the discrete and convex sub-problem in every iteration using CVXPY [7] and solve it with ECOS [8]. ECOS is a solver specialized in solving convex problems, is written in C, uses a log-barrier method to formulate a series of unconstrained problems, and employs Newton's method as an inner loop.
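For illustration, a single convex sub-problem of this kind can be posed in CVXPY as sketched below. This is a simplified stand-in, not the exact SCvx formulation of [4, 5]: the linearized dynamics matrices, the trust-region form, and the omission of the collision-avoidance constraints are all assumptions made for brevity.

```python
import cvxpy as cp

def scvx_subproblem(A_lin, B_lin, c_lin, x_prev, u_prev, x_s, x_f, u_max, trust_radius):
    """One convex sub-problem: linearized dynamics, box control limits, and a
    trust region around the previous iterate (collision constraints omitted).

    A_lin, B_lin, c_lin are lists of Jacobians/offsets of the discretized dynamics
    around (x_prev, u_prev); all names here are illustrative.
    """
    N, nx, nu = len(A_lin), x_prev.shape[1], u_prev.shape[1]
    x = cp.Variable((N + 1, nx))
    u = cp.Variable((N, nu))
    cons = [x[0] == x_s, x[N] == x_f]
    for k in range(N):
        cons += [x[k + 1] == A_lin[k] @ x[k] + B_lin[k] @ u[k] + c_lin[k],
                 u[k] >= 0, u[k] <= u_max,
                 cp.norm(x[k] - x_prev[k], "inf") <= trust_radius]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u)), cons)
    prob.solve(solver=cp.ECOS)   # ECOS handles the SOCP reformulation produced by CVXPY
    return x.value, u.value
```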
#### III-C3 K-Order Markov Optimization (KOMO)
KOMO is a method for efficiently solving robot motion planning problems originally introduced in 2014 [11]. In comparison to other motion planning methods, KOMO represents the trajectory only in configuration space \((p,q)\) instead of \((p,v,q,\omega)\), computing differential quantities by finite differences of consecutive configurations. Optimization variables are therefore only the positions and orientations of the quadcopter and the motor forces. Due to the structure introduced by the short-term dependency of the Markov property of the trajectory optimization problem, the Jacobian and the pseudo-Hessian of the problem result in banded and banded-symmetric matrices which are efficient to compute, store, and factorize. The resulting dependency between the states at consecutive time points and forces is that of an implicit Euler integration scheme (\(step\)-function). Similar to SCvx, the formulated collision avoidance constraints are based on signed distances which are calculated using fcl. The formulated constrained NLP is solved using Augmented Lagrangian which internally uses Newton's method with line search to solve the unconstrained optimization problem in each iteration. The problem was implemented using the Python bindings of the RAI-Framework1, which is implemented in C++.
Footnote 1: [https://github.com/MarcToussaint/rai-python](https://github.com/MarcToussaint/rai-python)
#### III-C4 Differential Dynamic Programming (DDP)
DDP is an algorithm for continuous optimal control based on Bellman's principle of optimality. In each iteration, it uses a backward pass (in time) to build a local quadratic approximation of the cost-to-go value, and a forward pass to update the state and control trajectory. DDP directly accounts for the cost and the dynamic constraints in the forward-backward pass. The dynamics are discretized using an explicit Euler integration scheme (\(step\)-function). The goal, collision and state constraints are added using a squared penalty method. In contrast to the previously mentioned methods, DDP introduces only motor forces as decision variables, optimizing the state trajectory implicitly. The collision avoidance constraints are formulated using signed distances and fcl as well. Specifically, we use BOX-FDDP [12, 13], which considers control limits directly in the forward-backward pass (instead of using a squared penalty) and can be efficiently warm-started using an infeasible initial guess, offering better globalization capabilities. The main advantage of DDP is that the Markov structure of the trajectory optimization problem (like KOMO) can directly be exploited to compute second-order search directions, which is considerably faster than using general-purpose matrix factorization techniques with sparse matrices. Moreover, when the dynamical constraints are highly non-linear, the DDP recursion is often more efficient than adding these constraints in constrained optimization methods (e.g., Augmented Lagrangian). In our benchmark, we use Crocoddyl [12], an open-source DDP solver that provides an efficient C++ implementation (e.g., no dynamic memory allocation, fast linear algebra operations using Eigen).
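For intuition, a bare-bones Gauss-Newton (iLQR-style) variant of the DDP backward and forward passes is sketched below. It omits control limits, constraints, line search, and regularization scheduling, so it is not the BOX-FDDP solver used in the benchmark; the cost and dynamics derivative callbacks are assumed to be user-supplied, and the state-cost terms are assumed to depend on the state only.

```python
import numpy as np

def ddp_iteration(xs, us, f, fx, fu, lx, lu, lxx, luu, lux, reg=1e-6):
    """One backward/forward sweep of an iLQR-style DDP recursion (illustrative sketch)."""
    N, nu = len(us), us[0].shape[0]
    Vx, Vxx = lx(xs[-1]), lxx(xs[-1])            # terminal value-function expansion
    ks, Ks = [None] * N, [None] * N

    # Backward pass: local quadratic model of the cost-to-go.
    for k in reversed(range(N)):
        A, B = fx(xs[k], us[k]), fu(xs[k], us[k])
        Qx = lx(xs[k]) + A.T @ Vx
        Qu = lu(xs[k], us[k]) + B.T @ Vx
        Qxx = lxx(xs[k]) + A.T @ Vxx @ A
        Quu = luu(xs[k], us[k]) + B.T @ Vxx @ B + reg * np.eye(nu)
        Qux = lux(xs[k], us[k]) + B.T @ Vxx @ A
        ks[k] = -np.linalg.solve(Quu, Qu)        # feedforward term
        Ks[k] = -np.linalg.solve(Quu, Qux)       # feedback gain
        Vx = Qx + Ks[k].T @ Quu @ ks[k] + Ks[k].T @ Qu + Qux.T @ ks[k]
        Vxx = Qxx + Ks[k].T @ Quu @ Ks[k] + Ks[k].T @ Qux + Qux.T @ Ks[k]

    # Forward pass: roll out the updated policy through the dynamics f.
    xs_new, us_new = [xs[0]], []
    for k in range(N):
        du = ks[k] + Ks[k] @ (xs_new[k] - xs[k])
        us_new.append(us[k] + du)
        xs_new.append(f(xs_new[k], us_new[k]))
    return xs_new, us_new
```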
## III Experimental Results
We evaluate the four different solvers in four scenarios. All scenarios were solved 30 times on a workstation (AMD Ryzen Threadripper PRO 5975WX @ 3.6 GHz, 64 GB RAM, Ubuntu 22.04), each with a different initial guess.
### _Evaluated Scenarios_
The selected scenarios are shown in Fig. 1 and include geometrically complex problems and problems requiring highly dynamic solutions. Here the red arrows indicate the \(z\)-axis of the body frame. In scenario 1, a trajectory has to be found leading through an environment cluttered with spherical obstacles. For scenario 2, the quadcopter has to recover from an upside-down position. In scenario 3, the optimizers have to find a trajectory following 4 waypoints. The intermediate constraints in scenario 3 restrict only the position of the quadcopter. For scenario 4 an intermediate constraint is introduced forcing the orientation to be upside-down in the middle of the trajectory.
### _Initial Guess_
For scenarios 1 and 3, the initial guess was calculated by linear interpolation between the initial, intermediate and final states for positions. For the orientations, the initial guess was obtained by spherical linear interpolation and the motor forces were initialized such that the gravitational force was compensated (hover condition). The rotational and linear velocities were initialized with zero. Gaussian noise related to the scaled maximum range of each quantity was added to all states and forces. Note that all solvers received the same initial guess. For scenarios 2 and 4 the solvers were initialized with the initial position and the hovering orientation. The forces and the linear and rotational velocities were initialized in the same fashion as for scenarios 1 and 3 and the noise was added following the same pattern as well.
### _Results_
Throughout the experiments, all solvers found a feasible solution in all 30 runs. To evaluate the solver's solution quality, we report the converged values of the objective function. Figure 2 shows the histograms of the objective function values. For the geometrically challenging scenarios (1,3), all solvers converge to similar optimal solutions, with a difference between the objective values of less than 1%. For the highly dynamic scenarios (2,4), the solvers converge to different optimal values, with KOMO having in general the lowest optimal value and DDP the highest. The distributions of the optimal values for each solver in this scenario are narrow, meaning that the solvers converge to the same optimal value regardless of the noise in the initial estimate.
To evaluate the effort required to solve the problems, the number of Newton method iterations for DC with SCvx, KOMO, and DC with CasADi, and the number of DDP iterations are given in Fig. 3. For algorithms using Newton's method, the number of Newton iterations is a reasonable measure of computational effort, since computing the step direction by solving a linear system of equations is the most computationally expensive operation. Note that a DDP iteration is approximately equivalent to a Newton iteration regarding computational effort. Throughout the experiments, DDP required an order of magnitude fewer iterations to converge to an optimum in all runs compared to the other solvers. For scenarios 3 and 4, DDP terminates due to an iteration limit in every run. But even with more iterations, the final objective value does not change. In the highly dynamic scenarios (2,4), direct collocation with CasADi required the most Newton iterations to converge in all runs. In general, it can be observed that the variance of the number of required iterations is highest for DC with CasADi.
In addition, we evaluated the time required in the optimizer to solve the NLP. The results are shown in Table I. In all scenarios, DDP takes the least time by one to two orders of magnitude and solves all problems in less than one second. DC with CasADi takes the most time in all scenarios. In highly dynamic scenarios (2,4), DC with CasADi takes almost an order of magnitude longer than DC with SCvx and KOMO. Note that the comparison regarding runtime does not indicate algorithmic advantages, since the solvers are partly written in different programming languages.
## IV Conclusion and Future Work
To objectively compare and evaluate different trajectory optimization techniques we need standard benchmarks. To this end, we benchmark four different optimization-based solvers on dynamically and geometrically challenging scenarios of multirotor flight. We tune each solver, since the performance highly depends on the choice of user-defined weights and the parameters of the algorithms. Our results show that KOMO achieves the lowest objective function values across all scenarios, while DDP requires the least amount of time and iterations. We conclude that solvers which leverage the structure of trajectory optimization problems (KOMO and DDP) are preferable to formulating the problem as a standard NLP, as done for DC with CasADi.

Fig. 1: Example trajectories for the chosen scenarios. The black quadrotors represent a possible trajectory solving the problem and the red arrows indicate the \(z\)-axis of the model.
Regarding the obtained optimal values, the cause of DDP's consistently higher-cost solutions in Scenario 4 should be investigated. One conjecture is that this is related to small differences in the constraint formulation and slight violations of some constraints in the other solvers. Regarding the computational effort, comparing the number of iterations can be misleading, since the effort for DDP iterations and Newton iterations is not identical and we do not account for the number of line search iterations within Newton's method. While we compare specific implementations, it remains an open issue whether to attribute the performance differences to better algorithmic design or a better implementation (e.g., programming language, efficiency of the linear algebra operations).
|
2308.03813
|
High-Resolution Cranial Defect Reconstruction by Iterative,
Low-Resolution, Point Cloud Completion Transformers
|
Each year thousands of people suffer from various types of cranial injuries
and require personalized implants whose manual design is expensive and
time-consuming. Therefore, an automatic, dedicated system to increase the
availability of personalized cranial reconstruction is highly desirable. The
problem of the automatic cranial defect reconstruction can be formulated as the
shape completion task and solved using dedicated deep networks. Currently, the
most common approach is to use the volumetric representation and apply deep
networks dedicated to image segmentation. However, this approach has several
limitations and does not scale well into high-resolution volumes, nor takes
into account the data sparsity. In our work, we reformulate the problem into a
point cloud completion task. We propose an iterative, transformer-based method
to reconstruct the cranial defect at any resolution while also being fast and
resource-efficient during training and inference. We compare the proposed
methods to the state-of-the-art volumetric approaches and show superior
performance in terms of GPU memory consumption while maintaining high-quality
of the reconstructed defects.
|
Marek Wodzinski, Mateusz Daniol, Daria Hemmerling, Miroslaw Socha
|
2023-08-07T10:39:23Z
|
http://arxiv.org/abs/2308.03813v2
|
High-Resolution Cranial Defect Reconstruction by Iterative, Low-Resolution, Point Cloud Completion Transformers
###### Abstract
Each year thousands of people suffer from various types of cranial injuries and require personalized implants whose manual design is expensive and time-consuming. Therefore, an automatic, dedicated system to increase the availability of personalized cranial reconstruction is highly desirable. The problem of the automatic cranial defect reconstruction can be formulated as the shape completion task and solved using dedicated deep networks. Currently, the most common approach is to use the volumetric representation and apply deep networks dedicated to image segmentation. However, this approach has several limitations and does not scale well into high-resolution volumes, nor takes into account the data sparsity. In our work, we reformulate the problem into a point cloud completion task. We propose an iterative, transformer-based method to reconstruct the cranial defect at any resolution while also being fast and resource-efficient during training and inference. We compare the proposed methods to the state-of-the-art volumetric approaches and show superior performance in terms of GPU memory consumption while maintaining high-quality of the reconstructed defects.
Keywords:Cranial Implant Design Deep Learning Shape Completion Point Cloud Completion SkullBreak SkullFix Transformers
## 1 Introduction
The cranial damages are a common outcome of traffic accidents, neurosurgery, and warfare. Each year, thousands of patients require personalized cranial implants [2]. Nevertheless, the design and production of personalized implants are
expensive and time-consuming. Nowadays, it requires trained employees working with computer-aided design (CAD) software [11]. However, one part of the design pipeline, namely defect reconstruction, can be directly improved by the use of deep learning algorithms [8, 7].
The problem can be formulated as a shape completion task and solved by dedicated neural networks. Its importance motivated researchers to organize two editions of the AutoImplant challenge, during which researchers proposed several unique contributions [8, 7]. The winning contributions from the first [3] and second editions [17] proposed heavily-augmented U-Net-based networks and treated the problem as segmentation of missing skull fragment. They have shown that data augmentation is crucial to obtain reasonable results. Other researchers proposed similar encoder-decoder approaches, however, without significant augmentation and thus limited performance [10, 14]. Another group of contributions attempted to address not only the raw performance but also the computational efficiency and hardware requirements. One contribution proposed an RNN-based approach using 2-D slices taking into account adjacent slices to enforce the continuity of the segmentation mask [19]. The contribution by Li _et al._ has taken into account the data sparsity and proposed a method for voxel rearrangement in coarse representation using the high-resolution templates [6]. The method was able to substantially reduce memory usage while maintaining reasonable results. Another contribution by Kroviakov _et al._ proposed an approach based on sparse convolutional neural networks [5] using Minkowski engine [1]. The method excluded the empty voxels from the input volume and decreased the number of the required convolutions. The work by Yu _et al._ proposed an approach based on principal component analysis with great generalizability, yet limited raw performance [21]. Interestingly, methods addressing the computational efficiency could not compete, in terms of the reconstruction quality, with the resource-inefficient methods using dense volumetric representation [7].
The current state-of-the-art solutions, even though they reconstruct the defects accurately, share some common disadvantages. First, they operate in the volumetric domain and require significant computational resources. The GPU memory consumption scales cubically with the volume size. Second, the most successful solutions do not take into account data sparsity. The segmented skulls are binary and occupy only a limited part of the input volume. Thus, using methods dedicated to 3-D multi-channel volumes is resource-inefficient. Third, the final goal of the defect reconstruction is to propose models ready for 3-D printing. Working with volumetric representation requires further postprocessing to transfer the reconstructed defect into a printable model.
Another approach, yet still unexplored, to cranial defect reconstruction is the use of deep networks dedicated to point clouds (PCs) processing. Since the introduction of PointNet [15] and PointNet++[16], the number of contributions in the area of deep learning for PC processing exploded. Several notable contributions, like PCNet [24], PoinTr [22], AdaPoinTr [23], 3DSGrasp [12], MaS [9], have been proposed directly to the PC completion task. The goal of the PC completion is to predict a missing part of an incomplete PC.
The problem of cranial defect reconstruction can be reformulated into PC completion, which has several advantages. First, the representation is sparse, and thus requires significantly less memory than the volumetric one. Second, PCs are unordered collections and can be easily split and combined, enabling further optimizations. Nevertheless, the current PC completion methods focus mostly on data representing object surfaces and do not explore large-scale PCs representing solid objects.
In this work, we reformulate the problem from volumetric segmentation into PC completion. We propose a dedicated method to complete large-scale PCs representing solid objects. We extend the geometric aware transformers [22] and propose an iterative pipeline to maintain low memory consumption. We compare the proposed approach to the state-of-the-art networks for volumetric segmentation and PC completion. Our approach provides high-quality reconstructions while maintaining computational efficiency and good generalizability into previously unseen cases.
## 2 Methods
### Overview
The input is a 3-D binary volume representing the defective skull. The output is a PC representing the missing skull fragment and (optionally) its meshed and voxelized representation. The processing pipeline consists of: (i) creating the PC from the binary volume, (ii) splitting the PC into a group of coarse PCs, (iii) calculating the missing PC by the geometric aware transformer for each group, (iv) merging the reconstructed coarse PCs, (v) optional voxelization and postprocessing for evaluation. The pipeline is shown in Figure 1.
### Preprocessing
The preprocessing starts with converting the binary volume to a PC. The PC is built from the coordinates of the positive voxels, i.e., only the voxels representing the skull. The PC is normalized to the [0-1] range, randomly permuted, and split into \(N\) equal groups, where \(N\) is chosen based on the number of points in the input PC such that each group contains 32768 points and outputs 16384 points.

Figure 1: Visualization of the processing pipeline.
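A rough NumPy sketch of this preprocessing step (assuming a 0/1 skull volume) could look as follows; the group size of 32768 points follows the description above, while the normalization and splitting details are simplified.

```python
import numpy as np

def volume_to_point_groups(volume, group_size=32768, seed=0):
    """Convert a binary skull volume to a normalized, shuffled PC split into groups."""
    rng = np.random.default_rng(seed)
    pc = np.argwhere(volume > 0).astype(np.float32)      # coordinates of skull voxels
    pc = (pc - pc.min(0)) / (pc.max(0) - pc.min(0))      # normalize to the [0, 1] range
    rng.shuffle(pc)                                      # random permutation of the points
    n_groups = max(1, len(pc) // group_size)             # N groups of ~32768 points each
    return np.array_split(pc[: n_groups * group_size], n_groups)
```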
### Network Architecture - Point Cloud Completion Transformer
We adapt and modify the geometry-aware transformers (PoinTr) [22]. The PoinTr method was proposed and evaluated on coarse PCs representing object surfaces. The full description of the PoinTr architecture is available in [22].
We modify the network by replacing the FoldingNet [20] decoder working on 2-D grids with a folding decoder operating on a 3-D representation. The original formulation deforms the 2-D grid into the surface of a 3-D object, while the proposed method focuses on solid 3-D models. Moreover, we modify the original k-NN implementation (whose memory consumption grows quadratically with the input size) to an iterative one, to further decrease and stabilize the GPU memory consumption.
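A hedged PyTorch sketch of such an iterative (chunked) k-NN computation is shown below; it only illustrates the idea of bounding memory by processing query points in chunks and is not the authors' implementation.

```python
import torch

def knn_indices(points, k, chunk=2048):
    """points: (N, 3) tensor; returns (N, k) indices of the k nearest neighbours.
    Distances are computed chunk by chunk so memory grows with the chunk size,
    not quadratically with the point-cloud size."""
    idx = []
    for start in range(0, points.shape[0], chunk):
        q = points[start:start + chunk]                          # (c, 3) query chunk
        d = torch.cdist(q, points)                               # (c, N) pairwise distances
        idx.append(d.topk(k + 1, largest=False).indices[:, 1:])  # drop the self-match
    return torch.cat(idx, dim=0)
```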
### Objective Function
We train the network in a supervised manner, where the ground truth is represented by PCs created from the skull defects. In contrast to other PC completion methods, we employ the Density Aware Chamfer Distance (DACD) [18]. The objective function enforces the uniform density of the output and handles the unpredictable ratio between the input/output PC sizes. We further extend the DACD by calculating the distances between the nearest neighbours of each point and enforcing these distances to be equal. The final objective function is:
\[O(P_{r},P_{gt})=DACD(P_{r},P_{gt})+\frac{\alpha}{S}\sum_{i=0}^{S}\sum_{j=0}^{k}\sum_{l=0}^{k}\Big{|}\,|P_{r}(i)-P_{r}(j)|-|P_{r}(i)-P_{r}(l)|\,\Big{|}, \tag{1}\]
where \(P_{r},P_{gt}\) are the reconstructed and ground-truth PCs respectively, \(S\) is the number of points in \(P_{r}\), \(k\) is the number of nearest neighbours of point \(i\), and \(\alpha\) is the weighting parameter. We apply this objective function to all PC ablation studies unless explicitly stated otherwise. The volumetric ablation studies use the soft Dice score.
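The k-NN distance-uniformity term in (1) can be illustrated with a short PyTorch sketch; `dacd` is assumed to be an available implementation of the DACD loss, and the normalization convention and hyperparameter values are illustrative rather than those used in the paper.

```python
import torch

def knn_uniformity(pr, k=4):
    """Penalize differences between each reconstructed point's distances to its k nearest neighbours."""
    d = torch.cdist(pr, pr)                                   # (S, S) pairwise distances
    nn_d = d.topk(k + 1, largest=False).values[:, 1:]         # (S, k) neighbour distances (drop self)
    diff = (nn_d.unsqueeze(2) - nn_d.unsqueeze(1)).abs()      # (S, k, k) pairwise distance gaps
    return diff.mean()

def objective(pr, pgt, dacd, alpha=1.0, k=4):
    """Sketch of O(P_r, P_gt) = DACD + alpha * uniformity term (up to normalization)."""
    return dacd(pr, pgt) + alpha * knn_uniformity(pr, k)
```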
The traditional objective functions like Chamfer Distance (CD) [18], Extended Chamfer Distance (ECD) [20], or Earth Mover's Distance (EMD) [9] are not well suited for the discussed application. The CD/ECD provide suboptimal performance for point clouds with uniform density or a substantially different number of samples, tend to collapse, and result in noisy training. The EMD is more stable; however, it explicitly assumes a bijective mapping (requiring knowledge of the desired number of points) and has high computational complexity.
### Iterative Completion
The coarse PCs are processed by the network separately. Afterwards, the reconstructed PCs are combined into the final reconstruction. To improve the results, the process may be repeated \(M\) times with a different initial PC split and a small Gaussian noise added. The procedure improves the method's performance and closes empty holes in the voxelized representation. The optional multi-step completion is performed only during the inference.
The iterative completion allows one to significantly reduce the GPU memory usage and the number of network parameters. The PCs are unordered collections and can be easily split and merged. There is no need to process large PCs in one shot, resulting in the linear growth of inference time and almost constant GPU memory consumption.
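Schematically, the iterative completion can be written as follows, where `complete_group` stands for a forward pass of the completion network and `split_fn` for the grouping described in the preprocessing; the number of repetitions and the noise level are illustrative.

```python
import numpy as np

def iterative_completion(pc, complete_group, split_fn, M=3, sigma=1e-3, seed=0):
    """Re-split the input PC M times with a small Gaussian jitter, complete each
    group independently, and merge all predicted missing points."""
    rng = np.random.default_rng(seed)
    merged = []
    for _ in range(M):
        jittered = pc + rng.normal(0.0, sigma, pc.shape)
        for group in split_fn(jittered):              # e.g. a volume_to_point_groups-style split
            merged.append(complete_group(group))      # predicted missing points for this group
    return np.concatenate(merged, axis=0)
```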
### Postprocessing
The reconstructed PCs are converted to a mesh and voxelized back to the volumetric representation, mainly for evaluation purposes. The mesh is created by a ball pivoting algorithm using the Open3D library [25]. The voxelization is also performed using Open3D, by renormalizing the PC and assigning positive values to voxels that contain points in their interior. The voxelized representation is further postprocessed by binary closing and connected component analysis to keep only the largest volume. Then, the overlap between the reconstructed defect and the defective input is subtracted from the reconstructed defect by logical operations.
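The voxelization and morphological cleanup part of this postprocessing can be sketched with NumPy/SciPy as below (the ball-pivoting meshing step is omitted); the grid size and structuring element are illustrative.

```python
import numpy as np
from scipy import ndimage

def voxelize_and_clean(points, defective_volume, shape=(512, 512, 512)):
    """Map normalized points back to the voxel grid, close holes, keep the largest
    connected component, and remove the overlap with the defective input skull."""
    vox = np.zeros(shape, dtype=bool)
    grid = np.array(shape) - 1
    ijk = np.clip((points * grid).round().astype(int), 0, grid)   # renormalize to voxel indices
    vox[ijk[:, 0], ijk[:, 1], ijk[:, 2]] = True
    vox = ndimage.binary_closing(vox, structure=np.ones((3, 3, 3)))
    labels, n = ndimage.label(vox)
    if n > 1:                                                      # keep only the largest component
        sizes = ndimage.sum(vox, labels, range(1, n + 1))
        vox = labels == (np.argmax(sizes) + 1)
    return vox & ~defective_volume.astype(bool)                    # subtract overlap with the input
```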
### Dataset and Experimental Setup
We use the SkullBreak and SkullFix datasets [4] for evaluation. The datasets were used during the AutoImplant I and II challenges and enable comparison to other reconstruction algorithms. The SkullBreak dataset contains 114 high-resolution skulls for training and 20 skulls for testing, each with 5 accompanying defects from various classes, resulting in 570 training and 100 testing cases. All volumes in the SkullBreak dataset are 512 x 512 x 512. The SkullFix dataset is represented by 100 training cases mostly located in the back of the skull with a similar appearance, and additional 110 testing cases. The volumes in the SkullFix dataset are 512 x 512 x Z where Z is the number of axial slices. The SkullBreak provides more heterogeneity while the SkullFix is better explored and enables direct comparison to other methods.
We perform several ablation studies. We check the influence of the input physical spacing on the reconstruction quality, training time, and GPU memory consumption. Moreover, we check the generalizability by measuring the gap between the results on the training and the external testing set for each method. We compare our method to the methods dedicated to PC completion: (i) PC-Net [24], (ii) PoinTr [22], (iii) AdaPoinTr [23], as well as to methods dedicated to volumetric defect reconstruction: (i) 3-D VNet, and (ii) 3-D Residual U-Net.
Moreover, we compare the reconstruction quality to results reported by other state-of-the-art methods.
We trained our network separately on the SkullBreak and SkullFix datasets. The results are reported for the external test set containing 100 cases for SkullBreak and 110 cases for the SkullFix datasets, the same as in the methods used for comparison. The models are implemented in PyTorch [13], trained using a single RTX GeForce 3090. We augment the input PCs by random permutation, cropping, rotation, and translation. The volumetric ablation studies use random rotation and translation with the same parameters as for the PCs. All the methods were trained until convergence. The hyperparameters are reported in the associated repository.
## 3 Results
The comparison in terms of the Dice coefficient (DSC), boundary Dice coefficient (BDSC), 95th percentile of the Hausdorff distance (HD95), and Chamfer distance (CD) is shown in Table 1. An exemplary visualization, presenting both the PC and volumetric outcomes, is shown in Figure 2.
The results of the ablation studies showing the influence of the input size, generalizability, objective function, and the effect of repeating the iterative refinement are presented in Table 2.
Figure 2: Exemplary visualization of the reconstructed point clouds / volumes for a case from the SkullBreak dataset. The PCs are shown for the defect only (reconstructed vs ground-truth) for the presentation clarity.
## 4 Discussion
The reconstruction quality of the method is comparable to the volumetric networks, as shown in Table 1. Meanwhile, the proposed method takes into account the data sparsity, does not require significant computational resources, and scales well with the input size. The proposed method has good generalizability. The gap between the training and testing set is negligible, unlike for the volumetric methods, which easily overfit and require strong augmentation for practical use. The DACD, as well as the proposed extension, improves the reconstruction quality compared to the CD or ECD by taking into account the uniformity of the expected PC. The original PC completion methods do not scale well with the increase of the PC size. A possible reason is the noisy kNN graph construction when dealing with large PCs; increasing the number of neighbours is unacceptable from a computational point of view. The proposed method has almost constant memory usage, independent of the input shape, in contrast to both the volumetric methods and the PC completion methods without the iterative approach. Interestingly, the proposed method also outperforms the other methods that take the data sparsity into account. The inference speed is slightly lower than for the volumetric methods; however, this application does not require real-time processing, and anything in the range of seconds is acceptable.
\begin{table}
\begin{tabular}{l c c c c c c c c c} Method & \multicolumn{4}{c}{SkullBreak} & \multicolumn{4}{c}{SkullFix} & GPU Mem [GB] \\ & DSC & BDSC & HD95 & CD & DSC & BDSC & HD95 & CD & \\ \hline \multicolumn{10}{c}{Point Cloud Completion} \\ \hline Proposed & 0.87 & 0.85 & 1.91 & 0.31 & 0.90 & 0.89 & 1.71 & 0.29 & \(\sim\)2.78 \\ PCNet [24] & 0.61 & 0.58 & 5.77 & 1.18 & 0.77 & 0.75 & 3.22 & 0.41 & \(\sim\)2.37 \\ PoinTr [22] & 0.67 & 0.66 & 5.17 & 0.82 & 0.82 & 0.81 & 3.02 & 0.36 & \(\sim\)3.11 \\ AdaPoinTr [23] & 0.66 & 0.64 & 5.29 & 0.84 & 0.81 & 0.81 & 3.05 & 0.36 & \(\sim\)3.14 \\ \hline \multicolumn{10}{c}{Volumetric Segmentation} \\ \hline
3-D VNet & 0.87 & 0.90 & 1.87 & 0.21 & 0.91 & 0.93 & 1.66 & 0.11 & 21.89 \\
3-D RUNet & 0.89 & 0.91 & 1.79 & 0.18 & 0.91 & 0.92 & 1.67 & 0.09 & 22.47 \\ \hline \multicolumn{8}{c}{State-of-the-art} \\ \hline Kroviakov _et al._[5] & - & - & - & - & 0.85 & 0.94 & 2.65 & - & \(<\) 6.00 \\ Li _et al._[6] & - & - & - & - & 0.81 & - & - & - & - \\ Mahdi _et al._[10] & 0.78 & 0.81 & 3.42 & - & 0.88 & 0.92 & 3.59 & - & \(<\) 6.00 \\ Pathak _et al._[14] & - & - & - & - & 0.90 & 0.95 & 2.02 & - & - \\ Wodzinski _et al._[17] & 0.91 & 0.95 & 1.60 & - & 0.93 & 0.95 & 1.48 & - & \(<\) 40.00 \\ Yu _et al._[21] & - & - & - & - & 0.77 & 0.77 & 3.68 & - & CPU \\ Ellis _et al._[3] & - & - & - & - & 0.94 & - & 3.60 & - & - \\ Yang _et al._[19] & 0.85 & 0.89 & 3.52 & - & - & - & - & - & - \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative results on the SkullBreak and SkullFix datasets. The final results are reported for original resolution using the DACD + kNN objective function and 3 iterative refinements (- denotes that results were not reported). The methods used for comparison are reported for the most successful setup (see Table 2).
The disadvantages of the proposed algorithm are connected to long training time, noise at the object boundaries, and holes in the voxelized output. The FoldingNet-based decoder requires a significant number of iterations to converge, thus resulting in training time comparable or even longer than the volumetric methods. Moreover, the voxelization of PCs results in noisy edges and holes that require further morphological postprocessing.
In future work, we plan to further reformulate the problem and, similarly to Kroviakov _et al._[5], use only the skull contours. Since the ultimate goal is to propose models ready for 3-D printing, the interior of the skull defect is not required to create the mesh and STL file. Another research direction is connected to the PC augmentation to further increase the network generalizability since it was shown that heavy augmentation is crucial to obtain competitive results [3, 17].
To conclude, we proposed a method for cranial defect reconstruction by formulating the problem as the PC completion task. The proposed algorithm achieves comparable results to the best-performing volumetric methods while
\begin{table}
\begin{tabular}{l c c c c c c} Method & DSC & BDSC & HD95 [mm] & CD [mm] & GPU Mem [GB] & Gen. Gap [\% DSC] \\ \hline \hline \multicolumn{7}{c}{Input Size (uniform voxel spacing)} \\ \hline Proposed: original & 0.87 & 0.85 & 1.91 & 0.31 & \(\sim\)2.78 & 4.18 \\ Proposed: 1 mm & 0.83 & 0.77 & 2.64 & 0.46 & \(\sim\)2.69 & 4.05 \\ Proposed: 2 mm & 0.74 & 0.71 & 3.89 & 0.67 & \(\sim\)2.64 & 4.57 \\ Proposed: 4 mm & 0.69 & 0.64 & 5.12 & 0.79 & \(\sim\)2.63 & 3.12 \\ \hline PCNet: original & - & - & - & - & \(>\) 24 & - \\ PCNet: 1 mm & 0.37 & 0.33 & 10.57 & 1.76 & \(\sim\)13.22 & 1.13 \\ PCNet: 2 mm & 0.57 & 0.53 & 7.18 & 1.37 & \(\sim\)5.37 & 1.33 \\ PCNet: 4 mm & 0.61 & 0.58 & 5.77 & 1.18 & \(\sim\)2.37 & 3.07 \\ \hline PoinTr: original & - & - & - & - & \(>\) 24 & - \\ PoinTr: 1 mm & 0.58 & 0.55 & 6.82 & 1.39 & \(\sim\)21.41 & 1.89 \\ PoinTr: 2 mm & 0.65 & 0.64 & 5.28 & 0.94 & \(\sim\)6.48 & 2.19 \\ PoinTr: 4 mm & 0.67 & 0.66 & 5.17 & 0.82 & \(\sim\)3.11 & 3.98 \\ \hline
3-D RUNet: original & - & - & - & - & \(>\) 24 & - \\
3-D RUNet: 1 mm & 0.89 & 0.91 & 1.79 & 0.18 & 22.47 & 10.11 \\
3-D RUNet: 2 mm & 0.85 & 0.85 & 2.09 & 0.25 & 7.84 & 14.51 \\
3-D RUNet: 4 mm & 0.76 & 0.77 & 2.89 & 0.63 & 3.78 & 17.48 \\ \hline \multicolumn{7}{c}{Objective Function (proposed method, original size, 3 iters)} \\ \hline DACD + kNN & 0.87 & 0.85 & 1.91 & 0.31 & \(\sim\)2.78 & 4.18 \\ DACD & 0.85 & 0.81 & 2.78 & 0.42 & \(\sim\)2.72 & 4.22 \\ ECD & 0.75 & 0.71 & 4.11 & 0.68 & \(\sim\)2.72 & 3.99 \\ CD & 0.83 & 0.78 & 2.98 & 0.28 & \(\sim\)2.72 & 4.58 \\ \hline \multicolumn{7}{c}{No. Refinements (proposed method, original size, DACD + kNN)} \\ \hline
1 iters & 0.85 & 0.81 & 2.51 & 0.41 & \(\sim\)2.78 & 4.51 \\
2 iters & 0.87 & 0.83 & 1.98 & 0.32 & \(\sim\)2.78 & 4.18 \\
3 iters & 0.87 & 0.85 & 1.91 & 0.31 & \(\sim\)2.78 & 4.18 \\
4 iters & 0.87 & 0.85 & 1.90 & 0.30 & \(\sim\)2.78 & 4.18 \\ \hline \end{tabular}
\end{table}
Table 2: The ablation studies related to the input size, the objective function, and the number of refinements. The results are reported for the SkullBreak dataset at the original scale (except the CD). The Gen. Gap denotes the difference between the training and testing set in terms of the DSC.
requiring significantly less computational resources. We plan to further optimize the model by working directly at the skull contour and heavily augmenting the PCs.
## Acknowledgements
The project was funded by The National Centre for Research and Development, Poland under Lider Grant no: LIDER13/0038/2022 (DeepImplant). We gratefully acknowledge Polish HPC infrastructure PLGrid support within computational grant no. PLG/2023/016239.
|
2302.07390
|
Perspective: strain and strain gradient engineering in membranes of
quantum materials
|
Strain is powerful for discovery and manipulation of new phases of matter;
however, the elastic strains accessible to epitaxial films and bulk crystals
are typically limited to small ($<2\%$), uniform, and often discrete values.
This Perspective highlights new directions for strain and strain gradient
engineering in free-standing single crystalline membranes of quantum materials.
Membranes enable large ($\sim 10\%$), continuously tunable strains and strain
gradients via bending and rippling. Moreover, strain gradients break inversion
symmetry to activate polar distortions, ferroelectricity, chiral spin textures,
novel superconductivity, and topological states. Recent advances in membrane
synthesis by remote epitaxy and sacrificial etch layers enable extreme strains
in new materials, including transition metal oxides and Heusler compounds,
compared to natively van der Waals (vdW) materials like graphene. We highlight
new opportunities and challenges for strain and strain gradient engineering in
membranes of non-vdW materials.
|
Dongxue Du, Jiamian Hu, Jason K Kawasaki
|
2023-02-14T23:22:04Z
|
http://arxiv.org/abs/2302.07390v1
|
# Perspective: strain and strain gradient engineering in membranes of quantum materials
###### Abstract
Strain is powerful for discovery and manipulation of new phases of matter; however, the elastic strains accessible to epitaxial films and bulk crystals are typically limited to small (\(<2\%\)), uniform, and often discrete values. This Perspective highlights new directions for strain and strain gradient engineering in free-standing single crystalline membranes of quantum materials. Membranes enable large (\(\sim 10\%\)), continuously tunable strains and strain gradients via bending and rippling. Moreover, strain gradients break inversion symmetry to activate polar distortions, ferroelectricity, chiral spin textures, novel superconductivity, and topological states. Recent advances in membrane synthesis by remote epitaxy and sacrificial etch layers enable extreme strains in new materials, including transition metal oxides and Heusler compounds, compared to natively van der Waals (vdW) materials like graphene. We highlight new opportunities and challenges for strain and strain gradient engineering in membranes of non-vdW materials.
## I Introduction
The properties of quantum materials with highly localized \(d\) and \(f\) orbitals can be highly sensitive to changes in bond lengths, bond angles, local coordination, and symmetry. Strain is a powerful knob for tuning these parameters, with striking examples including strain-induced superconductivity in epitaxial RuO\({}_{2}\) films [1], strain-induced ferroelectricity in SrTiO\({}_{3}\)[2], and strain-induced changes in magnetic ordering in magnetic shape memory alloys [3]. However, the strains accessible to bulk materials and epitaxial films are typically limited to \(<2\%\) before relaxation via dislocations [4; 5; 6]. Moreover, in epitaxial films the strain is static and discrete, based on the lattice mismatch between particular film and substrate combinations. As a result, many quantum properties remain out of reach.
This Perspective highlights new opportunities for strain and strain gradient engineering in single crystalline membranes of quantum materials, beyond natively vdW materials. Free-standing membranes enable two regimes that are inaccessible in films and bulk crystals (Fig. 1). First, membranes and other free-standing nanostructures can sustain much larger elastic strains (8% in (La,Ca)MnO\({}_{3}\) membranes [7] and 10% in BaTiO\({}_{3}\)[8]), compared to the \(\sim 2\%\) limit for films and bulk crystals. Second, membranes enable controlled strain gradients via bending and rippling [9]. Whereas uniform strain breaks rotational and translational symmetries (Fig. 1a), strain gradients break inversion symmetry. Inversion breaking is the necessary ingredient for ferroelectric polar distortions, nonlinear optical responses, Dzyaloshinskii-Moriya interaction (DMI)-induced chiral spin textures, and Rashba splitting (Fig. 1b).
Recent advances in remote epitaxy [10; 11; 12; 13] and etch release layers [7; 14] enable the synthesis of ultrathin membranes of quantum materials, including Heusler compounds and transition metal oxides. These synthesis advancements enable extreme strain to be applied to new classes of ultrathin membranes, which until recently were mainly limited to easily exfoliable van der Waals (vdW) materials like graphene and transition metal dichalcogenides [15; 16]. We highlight opportunities for discovery of new properties via large strains and strain gradients in these materials (Section II). Realization of these properties relies on new approaches for single crystalline membrane synthesis, understanding and controlling their extreme mechanical properties, and feedback from computational modelling (Section III). We conclude with an outlook on static and nonequilibrium strains (Section IV).
## II Opportunities
### Magnetism, flexomagnetism, and skyrmions
Homogeneous strains couple strongly to magnetism via piezomagnetism (\(M\propto\epsilon\)) and magnetostriction (\(M^{2}\propto\epsilon\)). Microscopically, strain tunes magnetic exchange via the bond lengths and bond angles, and tunes the band degeneracies and occupancies via change in symmetry [17]. Many Heusler compounds and transition metal oxides have rich magnetic properties [18; 19; 20]. The larger strains in membranes have the potential to tune magnetic properties more substantially. As an example, \(>5\%\) strains in (La,Ca)MnO\({}_{3}\) membranes induce a ferromagnetic metal to insulator transition [7] (Fig. 1c).
Magnetism can also couple strongly to strain gradients, which is termed flexomagnetism (\(M\propto\nabla\epsilon\)) [21; 22; 23]. Whereas strain gradients are difficult to control in bulk crystals and epitaxially clamped films, we recently demonstrated an antiferromagnetic to ferro/ferrimagnetic transition upon rippling in GdPtSb membranes, in the first experimental example of flexomagnetism [9] (Fig. 1d). Although the microscopic mechanism is not well understood, we speculate that strain gradients enhance the DMI, leading to canted ferrimagnetism in the rippled GdPtSb membranes. Microscopic measurements and theory are required to understand the flexomagnetic response.
Inversion-breaking strain gradients can also tune or induce chiral spin textures such as skyrmions, via tuning the DMI. As proof of concept, recent experiments on partially relaxed (La,Sr)MnO\({}_{3}\) (LSMO) films show signatures of skyrmions, induced by inversion-breaking strain gradients along the growth direction [24]. We anticipate even greater control of skyrmions in bent LSMO membranes, which allow the strain gradient to be tuned more precisely and continuously, rather than relying on spontaneous strain relaxation in LSMO films.
Theory predicts skyrmions in other bent systems. Mesoscale calculations predict highly tunable skyrmions in bent membrane heterostructures of simple metals and a flexo-Hall effect induced by bending [25]. More complex cyclical states are predicted for curved nanotubes of CrI\({}_{3}\), due to periodic boundary conditions along the circumference of the nanotube [26]. Strain gradients are also predicted to control skyrmion motion [27; 28; 29], which could be controlled dynamically on a flexible membrane platform. We anticipate these concepts to apply broadly to membranes of non-vdW materials, e.g. rare earth Heusler compounds, magnetic oxides, and chiral intermetallics.
### Ferroelectricity, flexoelectricity, and polar metals
Ferroelectricity requires crystals with broken inversion symmetry that have a unique polar axis. Although homogeneous strain alone does not break inversion, it can tune ferroelectricity in systems that are already ferroelectric or induce ferroelectricity in materials on the verge of being ferroelectric. For example, uniaxial tensile strain induces ferroelectricity in membranes of the quantum paraelectric SrTiO\({}_{3}\), by suppressing quantum fluctuations [30].
Figure 1: Symmetry breaking and properties induced by large strains and strain gradients. (a) Homogeneous strain breaks rotational and translational symmetry, to lift band degeneracies and tune bond lengths and angles. These parameters can tune magnetic exchange and electron correlations. (b) Strain gradients break inversion, providing access to polar distortions, tunable Dzaloshinskii-Moriya interaction (DMI), tunable Weyl nodes, and Rashba splitting. (c) Extremely large strain induced Metal to Insulator transition and magnetic phase transition in La\({}_{0.7}\)Ca\({}_{0.3}\)MnO3 membrane. From Hong, et. al. Science 368, 71 (2020) (Ref [7]). Reprinted with permission from AAAS. (d) Flexomagnetism induced by strain gradients in rippled GdPtSb membranes. Reproduced from Du et. al. Nature Communications, 12, 2494 (2021) (Ref [9]), under Creative Commons licence.
The absence of strain (clamping) in membranes can also be important: free-standing BaTiO\({}_{3}\) membranes display faster switching than epitaxial BaTiO\({}_{3}\) films, due to the release from substrate clamping effects [31].
Strain gradients, which break inversion, are even more powerful because they can induce polar distortions in materials that were originally centrosymmetric. The general coupling between ferroelectricity and strain gradients is termed flexoelectricity [32; 33; 34; 35]. Early experiments quantified the flexoelectric coefficients for few millimeter thick cantilevers of lead magnesium niobate [36] and lead zirconate titanate (PZT) [37]. More recent experiments suggest that 10 nm thick BaTiO\({}_{3}\) membranes released from graphene/Ge display an enhanced flexoelectric response compared to bulk [14]. We anticipate broader opportunities for flexoelectricity in ultrathin membranes, where the enhanced elasticity in the ultrathin limit provides access to new regimes for large strains and strain gradients.
Flexoelectric coupling may also enable the switching of polar metals. Unlike ferroelectric insulators, in which the electric polarization can be switched via an applied electric field, in polar metals the electric field is screened out by free carriers. Bending-induced strain gradients provide a means of switching a polar metal without application of an electric field [38]. First-principles calculations identified LiOsO\({}_{3}\) as a promising material for switching via strain gradients [38]. Other materials, including the high conductivity polar metals LaAuGe and GdAuGe [12; 39], may also be good candidates.
Finally, ultrathin ferroelectric membranes provide opportunities for mechanically active materials, due to their extreme superelastic responses, large strains, and 180 degree bending. For BiFeO\({}_{3}\) membranes, 180 degree bending with 1 micron radius of curvature and reversible elastic strains up to 5.4% are accommodated by a reversible rhombohedral-tetragonal phase transformation [40]. For BaTiO\({}_{3}\) membranes, large bending strains of 10 percent are reported, which are enabled by continuous dipole rotations of ferroelectric domains [8]. Similar arguments based on phase transformations and domain reorientations may apply for membranes of martensitic materials like shape memory alloys. These materials provide new opportunities for tuning stimuli-responsive materials that undergo large ferroelectric and ferroelastic phase transitions.
### Superconductivity.
Strain and strain gradients in membranes provide opportunities to enhance the critical temperature \(T_{c}\) and critical fields of known superconductors, induce superconductivity in new materials, and tune the pairing symmetry and coupling to other electronic states such as topological states and ferroelectricity.
Strain can enhance the \(T_{c}\) of known superconductors including iron based superconductors and cuprates [41]. For example, epitaxial strain enhances the upper critical field of Fe based superconductors [42]. In this family of materials, the strength of electronic correlations and \(T_{c}\) are highly dependent on the X-Fe-X bond angle (X = pnictogen or chalcogen), with a maximum Tc when the bond angle is near 109 degrees [43; 44]. This bond angle is typically tuned by alloying, doping, or intercalation [43], which introduces disorder. Freestanding membranes provide a path to cleanly and continuously tune the X-Fe-X bond angle via strain and bending-induced strain gradients. Decoupling a monolayer FeSe film from a SrTiO\({}_{3}\) substrate also enables the specific effects of interfacial-enhanced superconductivity to be tested [45; 46; 47].
Strained membranes may provide similar opportunities for cuprates and other superconducting oxides. In cuprates, 0.5 % compressive epitaxial strain nearly doubles the \(T_{c}\) of (La,Sr)CuO\({}_{4}\) films from 25 K to 49 K [48]. One challenge for cuprates is that the Tc is also highly sensitive to oxygen stoichiometry [49], making it challenging to compare across separate samples. Membranes provide a possible solution for deconvolving stoichiometry from strain effects by allowing continuous tuning of strain on the same sample. Moreover, membranes enable larger strains and strain gradients.
The large strains and strain gradients in membranes also provide opportunities to induce superconductivity in new materials. Anisotropic strain induces superconductivity in RuO\({}_{2}\) films grown on TiO\({}_{2}\) (110) [1]. Membranes provide further tunability for anisotropic strain, since the strain is not limited to particular film-substrate combinations and the strain in different crystallographic directions can be tuned independently. Inversion-breaking strain gradients may also be an important new tool: for SrTiO\({}_{3}\), ferroelectric polar distortions are thought to stabilize superconductivity [50], and strain has been shown to tune the \(T_{c}\)[51]. Tunable inversion breaking in membranes allows this idea to be tested in other classes of materials, beyond the few existing quantum paraelectrics [52; 53].
Finally, inversion breaking strain gradients may find use for tuning the superconducting pairing symmetry [54]. In noncentrosymmetric superconductors, mixtures of spin-singlet and spin-triplet pairing are allowed [55], and interesting topological and magnetoelectric properties are expected [54]. Membranes provide a means to continuously tune the crystalline symmetry of a superconducting material, to distinguish the effects of strain from those of disorder.
### Topological states.
Membranes provide an opportunity to tune topological band inversion and band gaps via large strains that are difficult to access in bulk materials or films. For example, FeSe\({}_{x}\)Te\({}_{1-x}\) is suggested by ARPES and STM to be a topological superconductor for \(x=0.45\)[56; 57; 58], due to the strong spin-orbit coupling of Te and \(p-d\) hybridization [59; 60]. Large strains in Fe(Se,Te) membranes may provide extended control of the band inversion and \(p-d\) hybridization beyond what can be achieved by Te
alloying alone. As another example, whereas many rare earth half Heusler compounds are topological semimetals with overlapping valence and conduction bands [61; 62; 63; 64], it would be more attractive to have a material with a bulk bandgap. DFT calculations for LaPtBi suggest that a very large strain of 7% is required to open a bulk band gap between the overlapping \(\Gamma_{8}\) and \(\Gamma_{6}\) bands while preserving the band inversion [65; 66]. This magnitude of strain is not possible in epitaxial films, which typically relax below 2% strain, but may be accessible in free-standing membranes.
Strain gradient in membranes can also tune topological states via pseudo magnetic fields. While homogeneous strains have zero gauge field, spatially varying strains in materials produce a pseudo magnetic field **B**=\(\nabla\times\)**A**[67; 68]. A previous study showed that the pseudo field created by dislocation arrays can flatten the bands near the Dirac points to create helical surface states [69]. Membranes provide an alternative path to more controllably create inhomogeneous strain fields and their associated pseudo magnetic fields, borrowing techniques that have been developed for inducing pseudo B fields in graphene [67; 68]. The pseudo magnetic fields are also powerful for tuning the k-space spacing between Weyl nodes [70], which act as sources and sinks of Berry curvature.
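As a toy illustration of this idea, the following NumPy sketch estimates the pseudo field \(B=\nabla\times A\) produced by a rippled membrane, using the graphene-like coupling \(A\propto(\epsilon_{xx}-\epsilon_{yy},-2\epsilon_{xy})\); the prefactor, ripple profile, and grid are illustrative only, and other materials would require their own strain-to-gauge-field coupling constants.

```python
import numpy as np

# Toy ripple: a Gaussian out-of-plane bump u_z(x, y) on a 100 nm window (illustrative values).
L, n = 100e-9, 256
x = y = np.linspace(-L / 2, L / 2, n)
X, Y = np.meshgrid(x, y, indexing="ij")
u_z = 1e-9 * np.exp(-(X**2 + Y**2) / (20e-9) ** 2)

# Membrane-like strain from the ripple: eps_ij ~ (1/2) (d_i u_z)(d_j u_z).
dz_dx, dz_dy = np.gradient(u_z, x, y, edge_order=2)
eps_xx, eps_yy, eps_xy = 0.5 * dz_dx**2, 0.5 * dz_dy**2, 0.5 * dz_dx * dz_dy

# Graphene-like gauge field; beta and the lattice constant set an illustrative prefactor.
beta_over_a = 2.0 / 0.142e-9
A_x = beta_over_a * (eps_xx - eps_yy)
A_y = beta_over_a * (-2.0 * eps_xy)

# Pseudo magnetic field (in units of hbar/e): B = dA_y/dx - dA_x/dy.
B_pseudo = np.gradient(A_y, x, axis=0) - np.gradient(A_x, y, axis=1)
```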
## III Why Now?
The new science and strain engineering of single crystalline membranes is driven by recent advances in membrane synthesis, demonstrations of extreme and tunable strains, and integrated computational modelling from atomistic to mesoscale.
### New membrane synthesis
Epitaxial growth and release from a sacrificial etch layer is a leading membrane synthesis strategy (Fig. 2a,b). This approach was first developed for semiconductor membranes, including SiGe membranes by etching the oxide from silicon on insulator (SOI) [71], and GaAs/AlAs membranes by selective etches for GaAs or AlAs layers [72]. It has been extended to other materials that lattice match to semiconductors, including the shape memory alloy Ni\({}_{2}\)MnGa fabricated via epitaxial growth on AlGaAs and subsequent etching [73].
New water soluble oxide layers enable the release of free-standing perovskite transition metal oxide membranes. These release layers include (Ca,Sr,Ba)\({}_{3}\)Al\({}_{2}\)O\({}_{6}\), which allows the lattice parameter to be tuned from 3.819 A to 4.124 A [7; 74; 75], SrVO\({}_{3}\)[76], and BaO [77]. These layers are typically grown by pulsed laser deposition (PLD)[7] or molecular beam epitaxy (MBE) [74]. A significant challenge for epitaxial etch release is that not all materials combinations have selective etch chemistries that can etch the lattice matched release layer without damaging the membrane layer.
Remote epitaxy and exfoliation provide an etch-free alternative (Fig. 2c,d). In this approach, an epitaxial film is grown on a graphene (or other 2D material) covered substrate [10]. Epitaxial registry between film and substrate is thought to occur via remote interactions that permeate through graphene [10; 78], although a pinhole-seeded mechanism can also produce exfoliatable membranes [79]. The weak van der Waals interactions of graphene allow film exfoliation to produce a freestanding membrane, similar to exfoliation of vdW materials like graphene and transition metal dichalcogenides. First demonstrated for the compound semiconductors [10], growth and exfoliation from graphene has been demonstrated for transition metal oxides [11; 13], halide perovskites [80], simple metals [81], and Heusler compounds [12; 9].
Several challenges exist for remote epitaxy. First, the quality of remote epitaxial film growth and ability to exfoliate depend on the quality of the starting 2D material covered substrate. In most cases, this starting surface is prepared by layer transfer because graphene and other 2D materials cannot be grown directly on arbitrary substrates. This transfer can introduce wrinkles, tears, and interfacial contaminants that introduce defects in the subsequent membrane growth [79], and in extreme cases can affect the ability to exfoliate [82; 83].
Figure 2: Synthesis of single-crystalline membranes. (a) Epitaxial etch release. (b) Transmission electron microscopy (TEM) image of a BiFeO\({}_{3}\) (BFO) film grown on a Sr\({}_{3}\)Al\({}_{2}\)O\({}_{6}\) (SAO) sacrificial etch layer. From Peng et. al., Sci. Adv., 6, aba5847 (2020) (Ref. [40]). Reproduced with permission from AAAS under Creative Commons License. (c) Remote epitaxy and exfoliation from graphene. (d) Epitaxy of GdPtSb on graphene/Al\({}_{2}\)O\({}_{3}\)(0001), reproduced under Creative Commons License from Ref. [9]. Inset photos show the GdPtSb membrane and the graphene/Al\({}_{2}\)O\({}_{3}\) substrate after exfoliation.
A cleaner alternative strategy is to use graphene directly grown on the substrate of interest. Recently, epitaxial BaTiO\({}_{3}\) membranes were grown on graphene/Ge (110) [14], where the graphene was grown directly on Ge. Further advancements in remote epitaxy may require the development of graphene growth directly on new substrates of interest.
A second challenge is that the atomic-scale mechanisms for remote epitaxy remain unclear. Clear experimental evidence for a remote mechanism remains elusive. In most experiments, the primary evidence for a remote mechanism is that the films are epitaxial to the underlying substrate (rather than to graphene) and can be exfoliated. Recent in-situ surface science measurements, however, demonstrate that a pinhole-seeded lateral epitaxy mechanism can also produce epitaxial, exfoliatable membranes [79]. In this growth mode, few nanometer diameter pinholes in the graphene serve as sites for selective nucleation at the substrate, followed by lateral overgrowth and coalescence of a continuous film. Since the pinholes are small and sparse, membranes can still be exfoliated. Moreover, the pinholes are easy to overlook because they do not appear after the graphene transfer step. Instead they only appear immediately prior to film growth because they are created by interfacial oxide desorption at pre-growth sample annealing temperatures.
Careful microscopic measurements at multiple steps during the growth process are required to understand the growth mechanisms on graphene. The development of graphene grown directly on substrates of interest, e.g. graphene on Ge, avoids the interfacial oxide-induced pinholes and may allow the intrinsic mechanisms for remote epitaxy to be tested. Alternative forms of evidence may also shed light on the mechanisms: for GdPtSb films grown on graphene/Al\({}_{2}\)O\({}_{3}\) (0001), a 30\({}^{\circ}\) rotated superstructure forms that cannot be explained by pinholes [12]. Is this superstructure evidence for an intrinsic remote epitaxy mechanism? A microscopic understanding of the mechanisms, whether intrinsic remote epitaxy or extrinsic pinholes, is required to understand the limits and new applications for epitaxy and exfoliation from graphene.
### Extreme strain manipulation
Released membranes enable the application of extreme strains. To date, strain is typically applied via top-down methods. Using micropositioners, strains of 8% have been demonstrated in few nanometer thick (La,Ca)MnO\({}_{3}\) membranes in tension [7], and 5.4% for BiFeO\({}_{3}\) [40] and \(\sim 10\%\) for BaTiO\({}_{3}\) membranes [8] in bending. A flexible polymer handle can aid in the handling of ultrathin membranes, and the use of polymer handles cooled below the glass transition temperature can lock in the desired strain state [7].
Strain gradients can be produced by bending and rippling. Methods include local bending using a scanning probe or micropositioners [84; 87], Fig. 3 (a,b), rippling via lateral compression on a polymer handle [9; 88], and transferring membranes to a patterned surface [68], Fig. 3 (d,e). Local bending by micropositioners have demonstrated elastic recoverable 180 degree bends with 1 micron radius of curvature, as is shown in Fig. 3 (a) [40].
Bottom-up strategies provide new opportunities for fine strain gradient control. Strain sharing bilayers, in which one layer is compressive and the other is tensile, spontaneously roll up into nanotubes upon release. This strategy has been successfully implemented to make semiconductor nanotubes [89; 90] and curved oxide membranes [85] (Fig. 3 (c)). Another strategy is spontaneous rippling in lattice-mismatched lateral heterostructures. First implemented for WS\({}_{2}\)/WSe\({}_{2}\) heterostructures, these materials relax via rippling out of plane due to the weak van der Waals interaction with the substrate [86], as is shown in Fig. 3 (f). We envision similar lateral heterostructures of non vdW membranes, grown by remote epitaxy on graphene, may experience out of plane rippling.
### Why can membranes sustain much larger strains than clamped films or bulk materials?
We offer several possible reasons, based on surface science [91] and the mechanics of 1D metallic whiskers [92; 93; 94] and semiconductor and metallic nanowires [95; 96; 97; 98; 99].
First, membranes are not clamped to a rigid substrate. In epitaxial films, dislocations form when the strain energy exceeds the energy cost to form a misfit dislocation at the film/substrate interface. This criterion, which can be expressed in terms of an energy balance (People and Bean [5], van der Merwe [6]) or a force balance (Matthews and Blakeslee [4]), typically limits strains to \(\sim 2\%\). Otherwise a film relaxes at a critical thickness below one unit cell. For a free-standing membrane, there is no interfacial bonding between film and substrate to create a misfit dislocation. Thus dislocations must nucleate from the bulk or from the top or bottom surface.
Second, ultrathin membranes are dominated by their surfaces. At surfaces, atoms have decreased local coordination and increased degrees of freedom for relaxation compared to bulk. In response to external stresses, surface atoms can relax out-of-plane or reconstruct in-plane. Surface contributions [100; 101] are invoked to explain the elasticity of few nanometer diameter nanowires, which can also sustain elastic strains of order \(\sim 10\%\)[95; 96; 97]. Similar arguments may explain why a 6 nm thick (La,Ca)MnO\({}_{3}\) membrane can sustain 8% elastic strain, whereas thicker membranes (\(>20\) nm) undergo fracture below 2% strain [7].
Interestingly, novel phase transitions and domain reorientations have been observed by transmission electron microscopy in bent ultrathin membranes of the ferroelectric materials BaTiO\({}_{3}\)[8] and BiFeO\({}_{3}\)[40], and a continuous face centered cubic to body centered tetragonal transition has been detected in few nanometer diameter Cu nanowires [102]. These studies indicate that elastic deformations within the _interior_ of a thin membrane, and not just within the few surface layers, can be different than the bulk. Further microscopic studies are needed in order to understand the relaxations and reconstructions at the surface and near surface region of strained ultrathin membranes.
Third, the mechanisms for generation, motion, and pinning of defects are length scale dependent [103]. Activation and suppression of these mechanisms has been invoked to explain the size-dependent elastic properties of few micron diameter metallic whiskers [92, 93, 94] and micropillars [103]. Similar arguments may describe the mechanics of membranes at intermediate thicknesses of tens to hundreds of nanometers.
### New developments and challenges in modeling.
Accurate modeling and prediction of the physical properties of strained membranes requires theory and computation at multiple scales. Of central importance is the accurate treatment of the spatially inhomogeneous strain (e.g., strain gradient), which has been challenging to address through first-principles density functional theory (DFT) calculations. This is because inhomogeneous strain often creates a non-periodic crystal structure (incommensurate lattice distortion), yet the supercell used in DFT calculations often needs to be periodic. Thanks to recent advances in density-functional perturbation theory (DFPT), it is now possible to accurately compute the microscopic response (both linear and non-linear) of a system to an arbitrary inhomogeneous strain. Perhaps the most prominent example is the development of a first-principles theory of flexoelectricity [104] and its application to compute the flexoelectric tensor [105, 106, 107, 108, 109], which can then be utilized to inform mesoscale/continuum materials modeling [108].
Despite these exciting developments, significant challenges still remain. For example, the properties of strained membranes, like most practical materials, depend on the formation and evolution of mesoscale patterns (e.g., magnetic/ferroelectric/ferroelastic domains, electronic phase separation) at finite temperature, which go beyond the capability of conventional DFT calculations. However, research into the prediction of mesoscale pattern formation under extreme strain conditions is still at an early stage, with many open questions remaining. Taking ferroelectrics as an example, large bending can significantly change the bandgap of the domain wall [110] and hence lead to redistribution of the ionic and electronic defects and even an insulator-to-metal transition [111].
Figure 3: Methods of generating strain gradients. (a) Bending membrane ribbons with nano-manipulator. From Peng, et. al. Science Advances 6, eaba5847 (2020) (Ref [40]). Reprinted under Creative Commons License. (b) Stretching membranes by AFM tips. Reprinted (adapted) with permission from V. Harbola et. al., Nano Lett. 21, 6, 2470–2475 (2021) (Ref. [84]). Copyright 2021 American Chemical Society. (c) Rolling up of SrTiO\({}_{3}\)/Si/SiGe membrane via strain relaxation. From Prakash, et. al., Small 18, 2105424 (2022) (Ref. [85]). Reprinted with permission from WILEY. (d) Transferring membranes to prestrained polymer tape and generating ripples by expanding and shrinking of the tape. (e) Rippling membranes by transferring them to soft support and imprinting patterns to membranes with stamps. (f) Generating ripples in in-plane heterostructures via strain relaxation. From Xie, et. al., Science 359, 1131 (2018) (Ref. [86]). Reprinted with permission from AAAS.
How does the strain-induced ionic/electronic defect redistribution interact with the domain structure evolution under extreme strain conditions [99]? How does the defect distribution influence the strain-induced ferroelectric/ferroelastic phase transition? How can the contributions of flexoelectricity, piezoelectricity, and electrostriction to mesoscale pattern formation be disentangled? In addition to these fundamental challenges, there also exist technical challenges in the different computational methods.
Modern atomistic methods such as effective Hamiltonian-based methods [112; 113; 114; 115] and second-principles calculations [116; 117; 118] can predict mesoscale pattern formation with atom-resolved spatial resolution, and permit taking input directly from DFT calculations without the need for parameterization. However, it is still challenging to consider the realistic mechanical boundary conditions for the application of strain and strain gradients (Fig. 3), and their application to practical-sized (e.g., hundreds of micrometers to millimeters) materials systems would currently require unrealistically large computational resources.
Mesoscale materials modeling methods such as phase-field modeling cannot predict pattern formation and evolution at scales below one unit cell, but can conveniently consider the complexity arising from the actual mechanical boundary condition upon the application of strain (gradient) [98; 89; 119; 8], and incorporate the role of 0D (point defects such as oxygen vacancies [99]), 1D (dislocations [120]), 2D (grain boundaries [121; 122; 123]), and 3D (e.g., precipitates [124] and cracks [125]) defects. In particular, the phase-field model has the additional versatility of modeling the formation and co-evolution of different types of coupled patterns, for example, the coupled magnetic and structural domains [126; 127; 128]. With input from ab initio and/or experimental measurements, the predicted mesoscale patterns can often be utilized for a side-by-side comparison to experiments, not only for understanding and interpreting the results, but also for providing insights into how to access these patterns and manipulate them to realize exotic phenomena or enhanced responses [129; 130; 131].
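To make the phase-field idea concrete, the following minimal sketch relaxes a single, non-conserved order parameter under Allen-Cahn dynamics on a periodic grid. It is a generic toy illustration with arbitrary parameters, not any of the specific ferroelectric or magnetic models cited above.

```python
import numpy as np

# Toy Allen-Cahn phase-field relaxation: an order parameter phi evolves to lower
# a Ginzburg-Landau free energy with a double-well term (phi^2 - 1)^2 / 4 and a
# gradient penalty, producing coarsening domain patterns. Grid spacing is 1.
N, dt, kappa, mobility, steps = 128, 0.05, 1.0, 1.0, 2000
rng = np.random.default_rng(0)
phi = 0.1 * rng.standard_normal((N, N))   # small random initial fluctuations

def laplacian(f):
    """Five-point Laplacian with periodic boundary conditions."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

for _ in range(steps):
    dF_dphi = phi**3 - phi                # derivative of the double-well term
    phi -= dt * mobility * (dF_dphi - kappa * laplacian(phi))

print("fraction of +/- domains:", (phi > 0).mean(), (phi < 0).mean())
```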
## IV Outlook: Beyond Static Strains
Large strains and strain gradients provide unique opportunities for inducing new properties in membranes of quantum materials. This Perspective highlighted static strain tuning of magnetism, superconductivity, ferroelectricity, and topological states.
Exciting opportunities also lie in dynamic and nonequilibrium properties. Nonlinear phononics, in which ultrafast optical pulses resonantly excite phonon modes, is a powerful approach for revealing nonequilibrium properties that arise from photon-phonon-spin or photon-phonon-electron couplings. Examples include ultrafast antiferromagnetic-ferrimagnetic switching [132], metastable ferroelectricity [133], and possible nonequilibrium superconductivity [134]. The general applicability of nonlinear phononics, however, is limited since these complex couplings are often weak, difficult to tune, and difficult to apply beyond a narrow set of materials that obey the required symmetry constraints. We anticipate that the strong symmetry-breaking strains and strain gradients in membranes may solve this challenge, by enhancing the quasiparticle coupling strengths via strain and breaking symmetries to activate new phonon modes for resonant excitation. The absence of substrate clamping is also beneficial since larger amplitude lattice vibrations can be accessed. Strain and strain gradients, both in static and dynamic forms, provide powerful tuning knobs for unleashing hidden properties in quantum materials membranes.
## V Acknowledgements
We thank Cyrus Dreyer, Jun Xiao, Daniel Rhodes, Ying Wang, and Uwe Bergmann for discussions. JKK and DD acknowledge the Air Force Office of Scientific Research (FA9550-21-0127) and the National Science Foundation (DMR-1752797). All authors acknowledge the National Science Foundation through the University of Wisconsin Materials Research Science and Engineering Center (MRSEC) Grant No. DMR-1720415.
|
2304.03278
|
How Do US Congress Members Advertise Climate Change: An Analysis Of Ads
Run On Meta's Platforms
|
Ensuring transparency and integrity in political communication on climate
change has arguably never been more important than today. Yet we know little
about how politicians focus on, talk about, and portray climate change on
social media. Here we study it from the perspective of political advertisement.
We use Meta's Ad Library to collect 602,546 ads that have been issued by US
Congress members since mid-2018. Out of those only 19,176 (3.2%) are
climate-related. Analyzing this data, we find that Democrats focus
substantially more on climate change than Republicans, with 99.7% of all
climate-related ads stemming from Democratic politicians. In particular, we
find this is driven by a small core of Democratic politicians, where 72% of all
impressions can be attributed to 10 politicians. Interestingly, we find a
significant difference in the average amount of impressions generated per
dollar spent between the two parties. Republicans generate on average 188% more
impressions with their climate ads for the same money spent as Democrats. We
build models to explain the differences and find that demographic factors only
partially explain the variance. Our results demonstrate differences of
climate-related advertisements of US congress members and reveal differences in
advertising characteristics between the two political parties. We anticipate
our work to be a starting point for further studies about climate-related ads
on Meta's platforms.
|
Laurenz Aisenpreis, Gustav Gyrst, Vedran Sekara
|
2023-04-06T17:58:41Z
|
http://arxiv.org/abs/2304.03278v1
|
# How Do US Congress Members Advertise Climate Change: An Analysis Of Ads Run On Meta's Platforms
###### Abstract
Ensuring transparency and integrity in political communication on climate change has arguably never been more important than today. Yet we know little about how politicians focus on, talk about, and portray climate change on social media. Here we study it from the perspective of political advertisement. We use Meta's Ad Library to collect 602,546 ads that have been issued by US Congress members since mid-2018. Out of those only 19,176 (3.2%) are climate-related. Analyzing this data, we find that Democrats focus substantially more on climate change than Republicans, with 99.7% of all climate-related ads stemming from Democrats. In particular, we find this is driven by a small core of Democrats, where 72% of all impressions can be attributed to 10 politicians. Interestingly, we find a significant difference in the average amount of impressions generated per dollar between the two parties. Republicans generate on average 188% more impressions with their climate ads for the same money spent as Democrats. We build models to explain the differences and find that demographic factors only partially explain the variance. Our results demonstrate differences of climate-related advertisements of US congress members and reveal differences in advertising characteristics between the two political parties. We anticipate our work to be a starting point for further studies about climate-related ads on Meta's platforms.
1 IT University of Copenhagen
[email protected], [email protected], [email protected]
\({}^{\dagger}\) These authors contributed equally
\({}^{\dagger}\) Corresponding author
## Introduction
Climate change is considered one of the biggest, if not the greatest, challenges of our time (IPCC 2021; Watts et al. 2018; United Nations 2021). Despite the large scientific consensus about the causes of climate change (IPCC 2021), it is still considered a complex policy issue, without any agreement on how to address it (Victor 2015).
Social media has in the last two decades been used to communicate political messages, and is today considered an integral part of the political toolbox (Owen 2019). For instance, social media has been found to have been instrumental in both Barack Obama's and Donald Trump's presidential campaigns (Cogburn and Espinoza-Vasquez 2011; Allcott and Gentzkow 2017). It has changed traditional political activities and enabled political actors to: publicly express their opinion, reach different and broader networks (Tarai et al. 2015; Nott 2020), engage with their audience in new ways (Kearney 2017), and raise funds for campaigns (Auter and Fine 2018). In addition, social media has also changed how political campaigns are conducted, allowing politicians to micro-target potential voters and frame their messaging strategically (Sahly, Shao, and Kwon 2019). Fowler et al. (2021) found that US politicians' Facebook ads occur earlier in campaigns, are less negative, less issue-focused, and more partisan than television advertising.
Today, parties and politicians spend extensive amounts of money on social media campaigns. Political online advertising spending has more than quadrupled from 2018 to 2020, and political actors spent $2.3 billion on Meta and Google alone in 2019 and 2020 (Tech For Campaigns 2020). Unlike traditional means of political communication, social media is not a 'one-to-many' channel, but rather a two-sided'many-to-many' communication channel. On the one hand, political actors can respond to the priorities of potential voters, evaluate engagement, and embed feedback and insights from online campaigns to optimize their political strategies (Ensner-Jedenastik et al. 2022). On the other hand, recipients of political communication can be influenced, and potentially be informed by the political content they are exposed to (Bode 2016). In spite of the relevancy of social media, we still know little about how political communication on climate change is characterized on social media. Existing literature focuses on social media discussions of climate change, and less on the political discourse around it and advertisement about it (Williams et al. 2015; Pearce et al. 2019; Sarewitz 2011). As such, there is a gap in research on how politicians advertise climate change on social media.
Here we focus on Meta's platforms (Facebook, Messenger, Instagram, WhatsApp), with Facebook being the largest social media platform in terms of users (Statista 2021d). Facebook is the preferred channel for people to get their news from; for instance, a third of American adults receive their news through the platform (Matsa and Walker 2021). We leverage Meta's Ad Library1 data to gain insights into the advertisement activities of Congress members, and specifically to understand how they speak about climate change. In particular, we focus on analyzing relevant metrics such as spend, impressions, geographic, and
demographic coverage of their climate and non-climate related ads. We focus on US Congress members for two reasons: 1) as the world's leading economy, the United States will play a vital role in addressing climate change, and 2) Congress is the legislative branch of the US, with the power to shape future policies for the country.
The rest of the paper is organized as follows. First, we outline the state-of-the-art research within the field. Second, we describe the process of collecting our comprehensive dataset of ads for Congress members. Third, we summarize our findings, and lastly, we discuss limitations and possible extensions of our work.
## Related literature
Social media is an important medium for spreading news and shaping opinions, and thus an interesting avenue for researchers to examine the gap between the scientific, public, and political opinions on climate change. The existing literature on climate change predominantly focuses on discussions on Twitter, which can be divided into three main categories: public (e.g. users' knowledge and views on the topic), themes (e.g. studies based on thematic data sets such as hashtags or keywords), and professional communications (e.g. communication of climate scientists on social media) [13]. Cody et al. (2015) measure sentiment to analyze how Twitter users respond to different climate change news, events, and natural disasters. They find that climate change topics related to natural disasters, climate bills, and oil-drilling decrease happiness, while topics such as climate rallies increase happiness. Further, they find that Twitter serves as a valuable tool to spread awareness of climate change, and that the voice of activists has a larger presence online than skeptics and deniers [1]. Williams et al. (2015) find there is high polarization on the topic of climate change among users on Twitter, where users with strong opinions (i.e. skeptics and activists) are the most vocal. When it comes to US politicians talking about climate change, Yu et al. (2021) find that the likelihood of a Democrat tweeting about the topic is associated with the existing public opinion, not the climate change risks that their constituency is faced with. Thus they present empirical evidence to support the _riding the wave_ theory [1]. The theory suggests that political campaigns are more successful when following the topics that are currently in the public eye. This implies that the definition of a political agenda is a bottom-up process in which political actors respond to the priorities of potential voters [15]. Brooks et al. (2020) study how gender equality, girl power, and climate change are framed in 150 ads. They found that 84.4% of climate change ads use a 'loss frame', highlighting the consequence of inaction rather than the gains of action.
Understanding climate change related issues on Meta's platforms, however, is a more sparse topic in the literature. This is likely related to Meta's restrictive policies on data transparency. As such, when it comes to Facebook, studies have focused on: how climate change denial circulates within public pages [1], how NGOs frame the issue [23], or small online experiments to test interventions that address fake news regarding climate change and other topics [10]. Unlike Twitter, Meta did not have an API for researchers to investigate the content on their platform until the release of the Ad Library [16]. However, only ads relating to social issues, elections or politics are required to include information about who paid for them. While the Ad Library has yet to be used to study climate change advertisement, it has been explored for various issues and used to study adjacent policy issues such as COVID-19 and immigration. Fowler et al. (2021), in particular, analyse the difference in how US politicians advertise on Facebook compared to television and study a range of different political issues. Their findings suggest that politicians are less likely to talk about controversial public policies on Facebook but instead prioritise promotional, valence-oriented ads to activate their existing supporters. Edelson et al. (2019) were some of the first to use the Ad Library and studied how political advertisers in the US disseminate political messages. They further compared the results from the Meta Ad Library to ads disseminated on platforms such as Google and Twitter and pointed out how advertisers can 'intentionally or accidentally' bypass the political advertising archive. Similar studies have also been performed for ads run by political advertisers in Germany [10]. Other studies have taken further steps and analyzed the effects of political online advertising by combining Meta's Ad Library data with additional data sources. For instance, Jamison et al. (2020) investigated vaccination-related advertising prior to COVID-19 and found that anti-vaccine advertisers were successful in publishing advertisements with low costs but high user impressions. Similarly, Mejova and Kalimeri (2020) examined narratives around COVID-19 through ads promoted on Facebook, and found several instances of possible disinformation and misinformation, ranging from conspiracy theories regarding bioweapons to unverifiable claims made by politicians. Lastly, Capozzi et al. (2021) used data from the Ad Library to study immigration stances in Italy, and built a pro- and anti-immigration classifier to dig into which audiences these ads target.
Our work aims to transfer the established methodologies for analyzing ads collected from the Meta Ad Library to the domain of climate change, thereby bridging an important gap in the existing literature, namely the study of political communication about climate change on Meta's platforms.
## Methodology
This section describes how we collect and clean ads from the Meta Ad Library, and how we identify ads relating to climate change topics.
### Data collection
Meta's Ad Library API allows us to query data for each politician or political organization by specifying the _page IDs_ of the desired Meta pages. To identify Congress members we use a publicly available data set [10]
that lists the current members of Congress2 and includes further information such as party affiliation and social media accounts. To find the Meta ad account _page IDs_ of the politicians we use Meta's own Ad Library report, which lists all political Meta ad accounts since 2018 (Facebook 2021a). We retrieve _page IDs_ by searching the Ad Library using the names of individual politicians. If a politician has several ad accounts registered3, we take the page with the highest ad spend. We use this filtering process to minimize errors and identify the main ad accounts in use by politicians. We ensure the integrity of the pages by filtering out pages that do not have Meta's official page verification badge (Facebook 2021b)4. During the manual revision, we removed, replaced, and added several Meta page IDs of politicians. Overall, we were able to collect the page IDs of _520_ Congress members5. Fifteen members of Congress did not, according to the Ad Library report, have a Meta ad account linked to their Meta social media accounts.
Footnote 2: In fact, the data set lists 538 members of Congress and not the precise number of 535 currently serving members. The length is somewhat arbitrary, because members that left Congress may not be updated immediately.
Footnote 3: For instance, Congressman French Hill has pages ”Congressman French Hill” and ”French Hill for Arkansas”.
Footnote 4: Through manual revision we filtered out third-party pages such as “Friends of Derek Kilmer” and ”Corrupt Thom Tillis”.
Footnote 5: A list can be found on [https://github.com/lrnz-asnprs/political-ad-pi/blob/main/src/data_sets/US_legislators_page_ids_2021.csv](https://github.com/lrnz-asnprs/political-ad-pi/blob/main/src/data_sets/US_legislators_page_ids_2021.csv).
With this final list of politician page IDs, we crawl the Ad Library API to collect all ads run on these pages from May 2018 until November 2021. This results in a total of 602,546 ads over all of Meta's platforms. Even though Meta officially announced that the labeling of political advertisements started in May 2018 and is not backdated (TechCrunch 2021), our data set contains advertisements prior to this date (692 ads in total). We keep these in the data set but disregard them when analyzing temporal trends.
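A minimal crawler for this step could look as follows. The endpoint and parameter names follow Meta's public Ad Library API documentation, but the API version, field list, and pagination details shown here are assumptions rather than our exact configuration.

```python
import requests

AD_ARCHIVE_URL = "https://graph.facebook.com/v15.0/ads_archive"  # version is illustrative

def fetch_ads(page_id, access_token):
    """Collect all archived ads for one politician's Meta page."""
    params = {
        "search_page_ids": page_id,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": '["US"]',
        "fields": ("id,page_id,ad_creative_body,ad_delivery_start_time,"
                   "spend,impressions,demographic_distribution,region_distribution"),
        "limit": 250,
        "access_token": access_token,
    }
    ads, url = [], AD_ARCHIVE_URL
    while url:
        response = requests.get(url, params=params).json()
        ads.extend(response.get("data", []))
        url = response.get("paging", {}).get("next")  # cursor URL for the next page, if any
        params = {}                                   # the cursor URL already carries the query
    return ads
```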
### Filtering for climate-related ads
In order to determine which advertisements were concerned with the topic of climate change, we adopt a keyword filtering approach that has been used in previous studies (Cody et al., 2015; Yu et al., 2021). In particular, we filter the collected ads according to a _query_ suggested by Yu et al. (2021). This query has previously been applied to identify Tweets about climate change. The query consists of several words related to climate change linked by logical operators: _"climate OR (global AND warming) NOT (business climate OR economic climate OR biz climate OR tax climate OR regulatory climate)"_. Applying the query to our dataset identifies 19,176 advertisements (\(\sim 3\%\)) to be related to climate change. In total, 153 unique Meta page IDs of politicians are included in this subset. Hence, 367 out of the 520 Congress members do not advertise about climate-related topics.
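As an illustration, one straightforward reading of this boolean query as a substring filter over the ad text is sketched below; the file name and column name are assumptions, and the actual implementation may differ.

```python
import pandas as pd

EXCLUDED = ["business climate", "economic climate", "biz climate",
            "tax climate", "regulatory climate"]

def is_climate_related(text) -> bool:
    """One reading of the boolean keyword query described above."""
    t = str(text).lower()
    mentions_climate = "climate" in t or ("global" in t and "warming" in t)
    mentions_excluded = any(phrase in t for phrase in EXCLUDED)
    return mentions_climate and not mentions_excluded

# Assumed: a CSV of collected ads with the ad text in an `ad_creative_body` column.
ads = pd.read_csv("congress_ads.csv")
climate_ads = ads[ads["ad_creative_body"].apply(is_climate_related)]
print(f"{len(climate_ads)} of {len(ads)} ads flagged as climate-related")
```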
To validate the accuracy of our keyword-filtering approach we compare it to a machine-learning tagging method. We apply a natural language inference-based zero-shot text classifier6 (Lewis et al., 2019) to our total set of ads. We do so by specifying the label "climate" and setting a probability score threshold of 0.85. We find that the zero-shot approach classifies 98.4% of the 19,176 ads identified by the keyword approach as climate-related. The classifier also finds 17 additional climate ads that the keyword approach did not identify; however, after a manual inspection we deem none of them to be climate-related. As such, we continue using the ads identified by the keyword approach.
Footnote 6: The zero-shot classifier can be found on [https://huggingface.co/facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli).
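This validation step can be reproduced in spirit with the Hugging Face zero-shot classification pipeline; the snippet below is only a sketch of that check, and the example ad text is invented for illustration.

```python
from transformers import pipeline

# Zero-shot NLI classifier used as a sanity check of the keyword filter.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def zero_shot_is_climate(text: str, threshold: float = 0.85) -> bool:
    result = classifier(text, candidate_labels=["climate"])
    return result["scores"][0] >= threshold

example = "We must act now to cut carbon emissions and protect our planet."
print(zero_shot_is_climate(example))
```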
## Results
We analyse in total 602,546 ads and focus on understanding: 1) which politicians drive the topic of climate change, and how the temporal dynamics and characteristics of ads run by Democrats and Republicans differ; 2) how the content of ads differs; and 3) which factors (demographic, political, and geographical) impact ad performance.
### Characterizing the total data set
We begin by describing the characteristics of the total dataset before turning to the subset of climate-related ads. Out of our data set of 520 Congress members, 264 are Democrats (\(50.7\%\)), 254 are Republicans (\(48.8\%\)) and two are classified as Independent (\(0.5\%\), Senators Bernie Sanders and Angus King). Out of the total ads, 481,144 (\(80\%\)) are linked to Democrats (here we count Bernie Sanders and Angus King as Democrats as they both caucus with the Democratic Party) and 121,402 (\(20\%\)) belong to Republicans. For each ad, Meta provides the number of impressions (how many times an ad has been seen) and how much money has been spent on the ad. Unfortunately, Meta does not give us the exact number of impressions or money spent per ad; instead, they provide ranges. Impressions are grouped into eight groups (\(\leq\) 1 K, 1-5 K, 5-10 K, 10-50 K, 50-100 K, 100-500 K, 500 K - 1 M, and \(\geq\) 1 M impressions), while money spent is given in the ranges (\(\leq\) $100, $100-$499, $500-$999, $1000-$5000, $5000-$10,000, $10,000-$50,000, $50,000-$100,000, and \(\geq\) $100,000). To understand the total spend and impressions generated by individual politicians we aggregate their ads by taking the sum of the average endpoints of each individual ad. For the bottom and upper brackets (e.g. spend brackets \(\leq\) $100 and \(\geq\) $100,000) we respectively take the upper or lower limits. For example, if a politician has one ad in the \(\leq\) $100 bracket and three ads in the bracket $100-$499 we calculate the total spend as \(\$100+3\cdot(\$499+\$100)/2=\$998.5\).
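In code, this midpoint aggregation rule amounts to the following sketch; the tuple representation of the brackets is ours, chosen only to reproduce the worked example above.

```python
def bracket_midpoint(lower, upper):
    """Average endpoint of a reported range; open-ended brackets fall back to
    their single known bound (upper for the bottom bracket, lower for the top
    bracket), mirroring the rule described in the text."""
    if lower is None:          # bottom bracket, e.g. "<= $100"
        return upper
    if upper is None:          # top bracket, e.g. ">= $100,000"
        return lower
    return (lower + upper) / 2

# One ad in "<= $100" and three ads in "$100-$499":
# 100 + 3 * (100 + 499) / 2 = 998.5
ads_spend_brackets = [(None, 100), (100, 499), (100, 499), (100, 499)]
total_spend = sum(bracket_midpoint(lo, hi) for lo, hi in ads_spend_brackets)
print(total_spend)   # 998.5
```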
Fig. 1 depicts the overall top 10 advertisers by total spend and impressions. The top 10 advertisers by spend are dominated by Democrats (\(n_{D}\) = 7), with only two Republicans among the top advertisers. Similarly, the politicians that received the most impressions are also Democrats. Bernie Sanders is the Congress member who spent the most money on ads (\(\sim\) $15M) and generated the most impressions (\(\sim\) 800M) during our observation period. However, comparing spend to impressions we find that there is not necessarily a linear relationship between the two. For instance, the Republican Ted Cruz appears among the top 10 with the most impressions,
although he is not among the top spenders--his ads effectively generate more impressions per dollar. There can be multiple explanations for this, he can have a bigger follower-base, his ads can be 'better', or his ads can be amplified more strongly by Meta's algorithms. With the current data from the Ad Library, it is impossible to pinpoint what underlying factors contribute to this difference.
### Politicians driving the topic of climate change
Next, we turn to advertisements that are related to climate change. The subset of climate ads accounts for \(\sim 3\%\) (\(n_{C}\)=19,176) of all ads, with only 153 (29%) members of Congress advertising about climate-related issues. The large majority of climate-related ads generate few impressions and are inexpensive (see Table 1). In fact, 96% of climate ads cost less than $500 and 67% of ads generated fewer than 1,000 impressions. However, there are 8 climate ads with more than 1 million impressions, while no ad costs more than $100,000. Comparing these numbers to non-climate related ads, we find that 91.6% of ads cost less than $500, while 54% of ads generated fewer than 1,000 impressions (a 13% difference from climate-related ads). Overall, politicians tend to spend less money on climate-related ads, which subsequently generate fewer impressions.
Out of the 153 Congress members that run climate-change related ads, 140 are Democrats (\(92\%\)), while only 13 are Republicans (\(8\%\)). Fig. 2 shows that the top 10 climate advertisers differ from the top 10 overall advertisers. Most notably, there are no Republicans among the top climate advertisers. All of the top 10 politicians according to spend also appear in the top 10 according to impressions, however in a different order. While Bernie Sanders, for instance, ranks third by spend he is the politician with the highest number of impressions for climate-related ads. Further, when we compare the impressions generated by the top 10 climate advertisers we find that they account for 72% of the total number of impressions for climate ads.
Looking at what proportion climate ads account for out of the total number of ads for each politician we find that only \(15\%\) of politicians that talk about climate change have a share of climate ads higher than \(10\%\). The three politicians (with at least 100 ads) with the highest fraction of climate change related ads are Jimmy Panetta (40%), Ed Perlmutter (37%), and Jared Huffman (36%). This reveals that only a small share of politicians emphasize the topic, and none run exclusively on a climate platform. Overall, few politicians run climate-related ads, and out of those, climate ads only account for a small fraction of their total ads.
### Temporal dynamics of climate ads
To further quantify the ecosystem around climate-related ads we focus on understanding when individual ads are run, how many impressions they generate, and how funds are spent over time. First, we split climate-related ads according to political parties and find that Democrats account for 99.6% (19,107) of all climate-related ads, with Republicans having, in total, only run 69 ads (0.4%) in the period from March 2018 to November 2021. Fig. 3 (left panel) shows the temporal dynamics of climate-related ads and illustrates significant differences between Democrats and Republicans. While Democrats have cumulatively spent more than $3 million since mid-2018, Republicans have only spent $21,000 on climate-related ads--the difference is more than two
|  | Climate ads (\(n_{C}\)=19,176) | Non-climate ads (\(n=583,368\)) |
| --- | --- | --- |
| _Impressions_ |  |  |
| \(\leq\) 1 K | 67% (12,909) | 54% (314,993) |
| 1-5 K | 19% (3,681) | 22% (129,169) |
| 5-10 K | 5% (953) | 7% (43,435) |
| 10-50 K | 7% (1,251) | 11% (66,434) |
| 50-100 K | 1% (199) | 3% (14,638) |
| 100-500 K | 0.8% (160) | 2% (12,849) |
| 500 K - 1 M | 0.08% (15) | 0.2% (1,258) |
| \(\geq\) 1 M | 0.04% (8) | 0.1% (592) |
| _Spend_ |  |  |
| \(\leq\) $100 | 86% (16,455) | 76% (443,869) |
| $100-$499 | 10% (1,940) | 15% (90,290) |
| $500-$999 | 2% (372) | 4% (21,975) |
| $1000-$5000 | 2% (339) | 4% (22,697) |
| $5000-$10,000 | 0.2% (35) | 0.5% (2,817) |
| $10,000-$50,000 | 0.2% (34) | 0.3% (1,615) |
| $50,000-$100,000 | 0.005% (1) | 0.02% (94) |
| \(\geq\) $100,000 | 0% (0) | 0.004% (21) |

Table 1: Distribution of climate ads and all ads in ranges of impressions (top) and spend (bottom). Numbers are summarized over individual ads.
Figure 1: Top 10 advertisers of the total dataset sorted by total spend [$] (top) and cumulative impressions (bottom). Politicians are colored according to their status in Congress.
orders of magnitude. The difference in impressions shows a similar picture (Fig. 3, right panel). Democrats have generated a cumulative amount of 112 million impressions, while Republicans have generated a total of 1.17 million impressions. Similar findings have been reported for Twitter, where Democrats have been found to tweet more frequently about climate change related topics [21].
Looking at the temporal dynamics, we observe a sudden increase in Republican spend and impressions in the months prior to the 2020 presidential and congressional elections. Overall, Republican candidates spent more than half of their total cumulative funds within a two-month period. However, the jump in spend is exaggerated by the logarithmic scale of the graph. While Democratic candidates also spend more prior to the 2020 elections, their increase is less visible. Democrats spent 18% of their total funds during the same period prior to the 2020 elections. For Democrats, the relative share of climate ads impressions has grown over time. Pre-2021, impressions stemming from climate ads constituted on average less than 3% of all impressions per month, while from 2021 and onward, they generated, on average, 6% of monthly impressions. Interestingly, there is relatively higher spend and impressions on climate ads after election dates.
Spend and impressions alone do not give us a complete image of the dynamics, as there are large differences in how many ads candidates from the two parties run (\(n_{D}=19,107\), \(n_{R}=69\)). Instead, to quantify differences in how the two parties advertise and how their ads perform, we look at the average spend per ad and the average number of impressions generated per dollar. Overall, Democrats spend on average $161 per climate ad, whereas Republicans spend $310, although there is no significant difference between the two means (two-sample t-test, \(t=-1.23\), \(p=0.22\)). However, we find a significant difference in the average impressions per dollar between the two parties (two-sample t-test, \(t=-10.98\), \(p<0.001\)). Republicans generate on average 72 impressions per dollar compared to 25 for the Democrats. In other words, although Republicans advertise much less about climate-related topics they are more successful at generating impressions--they outperform Democrats in generating impressions per dollar by 188%.
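The comparison is a standard two-sample t-test on the per-ad impressions-per-dollar values. In the sketch below the arrays are placeholders drawn from an arbitrary distribution; in the actual analysis each entry is one ad's impression midpoint divided by its spend midpoint.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
dem_ipd = rng.gamma(shape=2.0, scale=12.5, size=19107)   # placeholder Democrat values
rep_ipd = rng.gamma(shape=2.0, scale=36.0, size=69)      # placeholder Republican values

t_stat, p_value = stats.ttest_ind(rep_ipd, dem_ipd)
print(f"two-sample t-test: t = {t_stat:.2f}, p = {p_value:.3g}")
```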
### Sentiment of ads
To understand if differences in impressions are caused by how politicians talk about climate change we analyze the content of their ads. Here, we focus on the sentiment of ads, which we calculate using the VADER library [13]. Ads which have no associated text (e.g. they only contain a video or image) are disregarded (less than 1% of ads). Ads can consist of multiple sentences; to quantify sentiment we score each individual sentence and take the average compound score. As such, an ad is only negative (or positive) if it consistently uses negative (or positive) language.
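A sketch of this scoring procedure is shown below; splitting sentences on periods is a simplification, since the exact sentence splitter is not specified above, and the example ad text is invented.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def ad_sentiment(text: str) -> float:
    """Average VADER compound score over the sentences of an ad."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if not sentences:
        return 0.0
    scores = [analyzer.polarity_scores(s)["compound"] for s in sentences]
    return sum(scores) / len(scores)

print(ad_sentiment("Climate change is a crisis. We will fight for bold action."))
```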
Fig. 4, left panel, shows the sentiment distributions for all ads. While the average values are very similar and positive for both parties, 0.15 for Democrats, and 0.12 for Republicans, the distributions are different. The sentiment distribution for Republican ads is broader, indicating that their ads are more likely to be 'extreme', either overly negative or overly positive (tails of the distribution). Further, calculating the average sentiment for ads in each of the 8 impression groups (see Table 1) we find that Republican ads are, on average, always less positive than Democrat ads.
For climate ads (Fig. 4, right panel), we see that both parties use nuanced sentiment to talk about climate topics. However, Republican ads are, overall, shifted towards more negative values and have, on average, negative sentiment values (-0.05). Democrat climate ads are slightly positive with a mean value of 0.04. As such, lower, or more negative, sentiment values can be one explanation to why Republican ads generate more impressions per dollar. This is in line with recent studies which have shown that negative comments generate more engagement on Facebook [13], and that algorithmic amplification of this content can lead to great reach [14].
### Audience demographics
Spend and sentiment might not be the only factors that determine the 'performance' of an ad. Targeting demographics, i.e. which audiences the politicians target their ads towards, might also explain some of the differences. In the following, we focus on the set of ads for which there is demographic and geographic information--this covers 13,466 (70\%) climate-related ads and 463,402 ads in total (77\%). Meta's Ad Library contains demographic information about the age, gender (binary), and geographical location (US states) of people who viewed each ad. Fig. 5 shows what age and gender segments viewed climate-related ads.
Figure 2: Top 10 climate advertisers sorted by total spend [$] (top) and the cumulative number of impressions (bottom).
Figure 4: Sentiment distributions. Left, Distributions for all political ads. Dotted lines show the average sentiment. Right, sentiment distributions for ads related to climate change topics. The Republican distribution is more jagged due to a lower number of climate related ads.
Figure 5: Audience age and gender distributions of climate-related ads run by Democrats (left) and Republicans (right), measured by the number of impressions. The distributions for non-climate ads are similar.
Figure 3: Temporal dynamics of climate-related ads. (Left) Cumulative spend [$] by political parties since the launch of Meta’s Ad Library in 2018. Here we have assigned Bernie Sanders and Angus King as Democrats. Note that we show the lower, average, and upper bounds of the spend range. The upper and lower bounds are estimated by summing up all upper- and lower-range values. Further, please note that due to the large differences between parties we use logarithmic scaling on the y-axis. (Right) Cumulative impressions, where logarithmic scaling is used to compare the political parties.
Ads run by Democrats tend to be viewed by younger audiences in comparison to their counterparts, in particular 25-34-year-olds, whereas Republicans have an older audience (i.e. older than 55). The demographic data for Democrats reflect an hour-glass shaped distribution, with a higher fraction of users in the segments 25-34 and +65, while for the Republicans the distribution takes the shape of a reverse pyramid. However, similar patterns are present in the demographic distributions for all ads. As such, climate-related ads do not deviate from the demographic distributions of all other ads run by politicians from the two parties. A general trait of the demographic distributions is that there are more female impressions. This can be explained by the overall user base of Meta's platforms being skewed towards female demographics [21, 2]. Further, it is interesting that both parties have a high fraction of their impressions in the 65+ year segment, despite the fact that this segment only accounts for about 5% of all Facebook users and 2.1% of Instagram users [21, 2].
To understand the effects of demographic factors in more detail we use the information to build a model. Here, we only focus on ads for which the lower bounds of impressions and spend are non-zero, and ads for which we have demographic information. We do this because we are interested in knowing which factors cause an ad to be successful, not which factors cause an ad to be unsuccessful. In total, this leaves us with a dataset of approximately 25% of all ads. Further, to account for heterogeneity in ads we build two models, one for each political party. As model type we focus on linear models (LASSO) due to their interpretability.
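A sketch of this modeling step is shown below. The feature names, regularization strength, and the synthetic placeholder data are assumptions made only for illustration; the 100-sample bootstrap mirrors the error bars reported in Figs. 6 and 7.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample

# Placeholder design matrix: in the analysis each row is an ad and the columns
# are its audience shares (age buckets, gender, state category), sentiment, etc.
rng = np.random.default_rng(1)
n = 500
X = pd.DataFrame(rng.random((n, 5)),
                 columns=["share_18_24", "share_65_plus", "share_male",
                          "share_rep_states", "sentiment"])
y = rng.random(n) * 100                      # impressions per dollar (placeholder)

X_std = StandardScaler().fit_transform(X)

# Bootstrapped LASSO coefficients (100 resamples) for confidence intervals.
coefs = []
for _ in range(100):
    Xb, yb = resample(X_std, y)
    coefs.append(Lasso(alpha=0.1).fit(Xb, yb).coef_)
coefs = np.array(coefs)

for name, lo, hi in zip(X.columns,
                        np.percentile(coefs, 2.5, axis=0),
                        np.percentile(coefs, 97.5, axis=0)):
    print(f"{name}: 95% CI [{lo:.3f}, {hi:.3f}]")
```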
Fig. 6 shows coefficient weights for the models. We find that the models do not have strong predictive power (correlation coefficient r = 0.476 for the Democrat model and r = 0.297 for the Republican model). However, the goal is not to get a perfect prediction; rather, we want to understand the factors that drive impressions per dollar. We find that impressions for Republican ads are mainly driven by male audiences, while for Democrats it is 18-24 year olds, followed by male audiences. (Female audiences have zero weight in the model as this variable is redundant. Female and male audiences sum to one, so the 'female audience' variable does not, model-wise, contain new information.) Interestingly, the factors limiting impressions per dollar are, for both models, 65+ year old demographics. Geographic regions only play a minor role, but Republican ads perform worse in Democrat states, while Democrat ads have a positive contribution from Republican states. (We have chosen to aggregate US states into three categories: Democrat, Republican and swing states. We define a swing state as a state where either party got less than a 5% win margin in the 2020 presidential election. Swing states are: Arizona, Florida, Georgia, Michigan, Nevada, North Carolina, Pennsylvania, and Wisconsin.) When it comes to ad sentiment the models show that positive sentiment has only a minor positive contribution for Democrat ads, while the weight is negative for Republican ads. This means that Republican ads with a positive sentiment score perform worse; the more negative an ad is the better it performs. Lastly, climate related ads also have a minor contribution to ad performance.
Similar to the models in Fig. 6, we look at which factors explain impressions per dollar for climate ads (Fig. 7). Here, we only build one model because there are so few Republican climate ads. This model has a better fit (correlation coefficient r = 0.521), and reveals that the factors which drive impressions per dollar for climate ads are predominantly 18-24 year old and male audiences. The main limiting factor is again 65+ year old audiences. However, the model also reveals that climate ads viewed in Republican states and ads run by Republicans get more impressions per dollar.
## Discussion
To our knowledge, our work presents the first quantitative study of how US politicians advertise climate change-related topics. We focus on ads run on Meta's (formerly Facebook) platforms in the period from May 2018 to November 2021 and find almost 20,000 climate-related ads, indicating that climate-change topics only play a minor role in political advertisement (\(\sim 3\%\) of all ads).
Our work is limited by multiple factors, including data issues, and algorithmic confounders. First, both impressions and spend are reported as range values instead of being specified as a precise number, which introduces a factor of inaccuracy. We have tried to account for it by calculating averages, and showing the range of possible spend (e.g. Fig. 3).
A second shortcoming is that the Ad Library does not contain any information about the intended targeting of the advertisers; it only shows which demographics ultimately saw the ads. These two are not the same, especially given the myriad of ways advertisers can target their audiences on Meta, and how algorithms on Meta decide who to show ads to [1].
Figure 6: Linear models for understanding factors behind impressions per dollar. Figure shows the coefficient weights for individual party models covering all ads. Error-bars show 95% confidence intervals, estimated from 100 bootstrapped samples.
Having this information at hand would allow us to investigate _to whom_ politicians want to speak about climate change and eventually allow us to draw more relevant conclusions about the rationale behind their advertising strategies. Additionally, it could help us quantify the skew 'algorithmic targeting' (Thorson et al., 2021) introduces in the data. Meta has previously released such data (Facebook, 2022). In the wake of the 2020 presidential elections, they released a dataset that included targeting information of all political ads that were run in the 90-day period preceding the elections on November 3rd, 2020. As such, it raises the question of why Meta is not revealing targeting specifications for all political advertisements. Even though such data might reveal campaign strategies of various political actors, the gain in transparency for research and policy-making would be immense.
One major limitation is that the keyword filtering method is limited to textual data, and while a majority of Meta ads contain text, some only contain multimedia content such as videos or images. Future studies could include these ads by transcribing the content using speech-to-text or image-to-text methods. Further, the keyword filtering method also identifies ads that deny, or are critical of, climate change research as climate-related ads (Fig. 8). By manually going through a subsample of the ads we found several cases of Republicans being against climate measures and some even denying climate change, calling global warming a 'theory'. We have chosen to keep these in the data as they relate to climate-related topics, just from an opposite perspective.
Lastly, the only metric we have access to in the Ad Library which quantifies the 'performance' or 'effectiveness' of an ad is the number of _impressions_. However, that is often not the best way of estimating ad performance, other metrics such as engagements (e.g. number of likes), click-through rates, and relevance scores (calculated by Meta based on the positive and negative feedback they expect an ad to receive from its target audience) would make it easier to understand the impact of climate-related ads. All these metrics are otherwise available to advertisers through Meta's Ad Manager (Facebook, 2022).
Despite these limitations, our work reveals interesting insights about the climate-related focus of Congress members. Our findings show that Democrats noticeably dominate online advertisements about climate change on Meta's platforms. In fact, there is a substantial difference, as almost no Republicans advertise about climate change (\(<0.4\%\) of all climate ads in our data). While the imbalance between the parties makes it difficult to draw general conclusions about advertising strategies, it indicates the perceived relevance each party and its potential voters ascribe to the topic of climate change. Unlike TV ads, where the audiences are more politically diverse, Meta ads are targeted and can, therefore, be used by Congress members to convince individuals who share an ideological affinity (Fowler et al., 2021).
The top 10 politicians that talk about climate change are all Democrats, and their ads account for 72% of all climate-related ad impressions. This means that even within the Democratic party, only a few actors focus on climate-related topics.
Comparing the spend and impressions generated by climate ads to all ads, we find that a majority of climate ads fall within the categories of low spend (\(<\)$100) and low impressions (\(<\)1 K). However, we find that Republican ads not only generate higher average impressions per dollar than Democrats when looking at all ads but also outperform their counterpart when specifically looking at climate-related ads. This could stem from the fact that Republican ads in general are less positive than Democrat ads. Manually combing through Republican ads, we also found several cases of politicians being against climate policies and some even denying climate change (Fig. 8). These examples indicate a political divide of opinions between Democrats and
Figure 8: Selected examples of Republican ads marked as climate-related in our dataset, which deny or are critical of human-induced climate change and global warming.
Figure 7: Coefficient weights for linear models that explains impressions per dollar for climate ads. Error-bars show 95% confidence intervals, estimated from 100 bootstrapped samples.
Republicans. Advertisements that contain polarizing, extreme, and divisive content have been found to generate more attention on Meta's platform [1], and on Twitter, where content by the mainstream political right has been found to achieve higher algorithmic amplification than the mainstream political left [12].
Looking at audience demographics, we find that Republicans have more impressions in the older segments, while Democrats have a higher share of impressions in younger segments (Fig. 5). However, our models reveal that these demographics do not necessarily drive the number of impressions ads generate per dollar. In fact, we find that Republican ad impressions are mainly driven by male audiences, while Democrat ads are predominantly driven by younger audiences (18-24 year olds). For both parties, older audiences (65+ year olds) actually reduce the number of impressions gained per dollar. Impressions for climate ads are similarly driven by male audiences, but we also find that audiences from Republican states drive impressions, and that Republican-sponsored ads perform better. However, the models are not perfect; their predictive power is limited by their simplicity, and by the granularity of available data. Gaining a deeper understanding of ad impressions with respect to social and economic inequalities will require different approaches.
Our work presents new insights and limitations of using Meta's Ad Library for studying how politicians talk about climate change. While we mainly focused on the US ecosystem around climate ads, one possible avenue of future work could be a deeper analysis of the content of ads. Natural Language Processing techniques can be used to analyze which narratives are used, and whether cases of misinformation about climate change are present in the political advertisement. A second avenue of future research could be to increase the scope of who is included in the dataset. While our focus is on Congress, it would be interesting to include a broader set of politicians (e.g. local politicians), and include NGOs and other active voices in the climate debate in further research. Extending the scope to other countries could also be beneficial. Lastly, comparing how politicians advertise across Meta's different platforms (WhatsApp, Instagram, Facebook) could bring interesting insights.
## Ethical Statement
The data in this paper is derived from the Meta Ad Library. It contains publicly accessible ads run on Meta platforms by US politicians. Working with social media data carries risks of privacy issues and the right to be forgotten. However, our data analysis is limited to aggregated data presentations and only concerns ads published by public figures.
|
2305.13342
|
On the Limitations of Simulating Active Learning
|
Active learning (AL) is a human-and-model-in-the-loop paradigm that
iteratively selects informative unlabeled data for human annotation, aiming to
improve over random sampling. However, performing AL experiments with human
annotations on-the-fly is a laborious and expensive process, thus unrealistic
for academic research. An easy fix to this impediment is to simulate AL, by
treating an already labeled and publicly available dataset as the pool of
unlabeled data. In this position paper, we first survey recent literature and
highlight the challenges across all different steps within the AL loop. We
further unveil neglected caveats in the experimental setup that can
significantly affect the quality of AL research. We continue with an
exploration of how the simulation setting can govern empirical findings,
arguing that it might be one of the answers behind the ever posed question
``why do active learning algorithms sometimes fail to outperform random
sampling?''. We argue that evaluating AL algorithms on available labeled
datasets might provide a lower bound as to their effectiveness in real data. We
believe it is essential to collectively shape the best practices for AL
research, particularly as engineering advancements in LLMs push the research
focus towards data-driven approaches (e.g., data efficiency, alignment,
fairness). In light of this, we have developed guidelines for future work. Our
aim is to draw attention to these limitations within the community, in the hope
of finding ways to address them.
|
Katerina Margatina, Nikolaos Aletras
|
2023-05-21T22:52:13Z
|
http://arxiv.org/abs/2305.13342v1
|
# On the Limitations of _Simulating_ Active Learning
###### Abstract
Active learning (AL) is a _human-and-model-in-the-loop_ paradigm that iteratively selects informative unlabeled data for human annotation, aiming to improve over random sampling. However, performing AL experiments with human annotations on-the-fly is a laborious and expensive process, thus unrealistic for academic research. An easy fix to this impediment is to _simulate_ AL, by treating an _already_ labeled and publicly available dataset as the pool of _unlabeled_ data. In this position paper, we first survey recent literature and highlight the challenges across all different steps within the AL loop. We further unveil neglected caveats in the experimental setup that can significantly affect the quality of AL research. We continue with an exploration of how the _simulation_ setting can govern empirical findings, arguing that it might be one of the answers behind the ever posed question "_why do active learning algorithms sometimes fail to outperform random sampling?_". We argue that evaluating AL algorithms on available labeled datasets might provide a _lower bound_ as to their effectiveness in real data. We believe it is essential to collectively shape the best practices for AL research, particularly as engineering advancements in LLMs push the research focus towards data-driven approaches (e.g., data efficiency, alignment, fairness). In light of this, we have developed guidelines for future work. Our aim is to draw attention to these limitations within the community, in the hope of finding ways to address them.
## 1 Introduction
Based on the assumption that "_not all data is equal_", active learning (AL) Cohn et al. (1996); Settles (2009) aims to identify the most informative data for annotation from a pool (or a stream) of unlabeled data (i.e., data acquisition). With multiple rounds of model training, data acquisition and human annotation (Figure 1), the goal is to achieve _data efficiency_. A data efficient AL algorithm entails that a model achieves satisfactory performance on a held-out test set, by being trained with only a fraction of the acquired data.
AL has traditionally attracted wide attention in the Natural Language Processing (NLP) community. It has been explored for machine translation Haffari et al. (2009); Dara et al. (2014); Miura et al. (2016); Zhao et al. (2020), text classification Ein-Dor et al. (2020); Schroder and Niekler (2020); Margatina et al. (2022); Schroder et al. (2023), part-of-speech tagging Chaudhary et al. (2021), coreference Yuan et al. (2022) and entity resolution Qian et al. (2017); Kasai et al. (2019), named entity recognition Erdmann et al. (2019); Shen et al. (2017); Wei et al. (2019), and natural language inference Snijders et al. (2023), _inter alia_. Still, its potential value is on the rise Zhang et al. (2022), as the current language model pretraining paradigm continues to advance the state-of-the-art Tamkin et al. (2022). Under the initial "not all data is equal" assumption, it is logical to assume that researchers would try to find the "most useful" data to pretrain or adapt their LLMs.
The usual pool-based AL setting is to acquire data from an unlabeled pool, label it, and use it to train a supervised model that, hopefully, obtains
Figure 1: High-level overview of the _train-acquire-annotate_ steps of the active learning loop.
satisfactory performance on a test set for the task at hand. This is very similar to the general model-in-the-loop paradigm (Karmakharm et al., 2019; Bartolo et al., 2020, 2022; Kiela et al., 2021; Wallace et al., 2022), with the main difference being the AL-based data acquisition stage. The assumption is that, by iteratively selecting data for annotation according to an informativeness criterion, it will result into better model predictive performance compared to randomly sampling and annotate data of the same size.
However, this does not always seem to be the case. A body of work has shown that AL algorithms, that make use of uncertainty (Lewis and Gale, 1994; Cohn et al., 1996; Houlsby et al., 2011; Gal et al., 2017), diversity sampling (Brinker, 2003; Bodo et al., 2011; Sener and Savarese, 2018) or even more complex acquisition strategies (Ducoffe and Precioso, 2018; Ash et al., 2020; Yuan et al., 2020; Margatina et al., 2021), often fail to improve over a simple random sampling baseline (Balridge and Palmer, 2009; Ducoffe and Precioso, 2018; Lowell et al., 2019; Kees et al., 2021; Karamcheti et al., 2021; Snijders et al., 2023). Such findings pose a serious question on the practical usefulness of AL, as they do not corroborate its initial core hypothesis that _not all data is equally useful for training a model_. In other words, if we cannot show that one subset of the data is "better"1 than another, why do AL in the first place?
Footnote 1: We consider a labeled dataset \(A\subset C\) to be “better” than a labeled dataset \(B\subset C\), both sampled from a corpus C and \(|A|=|B|\), if a model \(M_{A}\) trained on \(A\) yields higher performance on a test set compared to \(M_{B}\), where both models are identical in terms of architecture, training procedure, etc.
Only a small body of work has attempted to explore the pain points of AL. For instance, Karamcheti et al. (2021), leveraging visualisations from _data maps_(Swayamdipta et al., 2020), show that AL algorithms tend to acquire _collective outliers_ (i.e. groups of examples that deviate from the rest of the examples but cluster together), thus explaining the utter failure of eight AL algorithms to outperform random sampling in visual question answering. Building on this work, more recently Snijders et al. (2023) corroborate these findings for the task of natural language inference and further show that uncertainty-based AL methods recover and even surpass random selection when hard-to-learn data points are removed from the pool. Lowell et al. (2019) show that the benefits of AL with certain models and domains do not generalize reliably across models and tasks. This could be problematic since, in practice, one might not have the means to explore and compare alternative AL strategies. They also show that a dataset actively acquired using a certain model-in-the-loop may be disadvantageous for training models of a different family, raising the issue of whether the downsides inherent to AL are worth the modest and inconsistent performance gains it tends to afford.
In this paper, we aim to explore all possible limitations that researchers and practitioners currently face when doing research on AL (Zhang et al., 2022). We first describe the process of pool-based AL (Figure 1) and identify challenges in every step of the iterative process (SS2). Next, we unearth obscure details that are often left unstated and under-explored (SS3). We then delve into a more philosophical discussion of the role of simulation and its connection to real practical applications (SS4). Finally, we provide guidelines for future work (SS5) and conclusions (SS6), aspiring to promote neglected, but valuable, ideas to improve the direction of research in active learning.
## 2 Challenges in the Active Learning Loop
We first introduce the typical steps in the pool-based AL setting (Lewis and Gale, 1994) and identify several challenges that an AL practitioner has to deal with, across all steps (Figure 2).2
Footnote 2: We point the reader to the comprehensive survey of Zhang et al. (2022) for a more in-depth exploration of recent literature in AL.
### Problem Definition
Consider the experimental scenario where we want to model a specific NLP task for which we do not yet have any labeled data, but we have access to a large pool of unlabeled data \(\mathcal{D}_{\text{pool}}\). We assume that it is unrealistic (e.g., laborious, expensive) to have humans annotating all of it. \(\mathcal{D}_{\text{pool}}\) constitutes the textual corpus from which we want to sample a fraction of the most _useful_ (e.g., informative, representative) data points for human annotation. In order to perform active learning, we need an initial labeled dataset \(\mathcal{D}_{\text{lab}}\), often called "seed" dataset, to be used for training a task-specific model with supervised learning. To evaluate the model, we need a usually small validation set for model selection \(\mathcal{D}_{\text{val}}\) and a held out test set \(\mathcal{D}_{\text{test}}\) to evaluate the model's generalization. We use \(\mathcal{D}_{\text{lab}}\) and \(\mathcal{D}_{\text{val}}\) to train the first model and then test it on \(\mathcal{D}_{\text{test}}\).
In this stage, we start acquiring labeled data for model training. Data points are sampled from \(\mathcal{D}_{\text{pool}}\) via an acquisition strategy and subsequently passed to human annotators for labeling. The acquisition function selects a batch of data \(Q\subset\mathcal{D}_{\text{pool}}\) according to some informativeness criterion and can either use the model-in-the-loop or not. We employ crowdsourcing or expert annotators to label the selected batch \(Q\) which then is appended to the labeled dataset \(\mathcal{D}_{\text{lab}}\).
Now that we have augmented the seed dataset with more data, we re-train the model on the new training dataset, \(\mathcal{D}_{\text{lab}}\). We test the new model on \(\mathcal{D}_{\text{test}}\) and we stop if we obtain satisfactory performance or if the budget for annotation has run out (or using any other stopping criterion). If we do not want to stop, we use the acquisition function to select more unlabeled data from \(\mathcal{D}_{\text{pool}}\), which we annotate and append to \(\mathcal{D}_{\text{lab}}\), etc. This is the AL loop shown in Figure 2.
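To make the loop concrete, the sketch below simulates the _train-acquire-annotate_ cycle of Figure 1 on a synthetic pool, with a logistic regression model-in-the-loop and a random-sampling acquisition baseline. It is a minimal, hypothetical illustration rather than the setup of any work cited here: the toy data, model, seed size, batch size and budget are arbitrary choices, and the "annotation" step simply reveals labels that the simulation already holds.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy simulated pool: the labels exist, but are only "revealed" once acquired.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
test_idx = rng.choice(len(X), size=500, replace=False)          # D_test
pool_idx = np.setdiff1d(np.arange(len(X)), test_idx)            # D_pool
labeled_idx = rng.choice(pool_idx, size=20, replace=False)      # seed D_lab
pool_idx = np.setdiff1d(pool_idx, labeled_idx)

def acquire_random(model, X_pool, k):
    """Random-sampling baseline: ignores the model-in-the-loop."""
    return rng.choice(len(X_pool), size=k, replace=False)

budget, batch, acquire = 200, 20, acquire_random
while len(labeled_idx) < budget:
    # Steps 2-3: (re)train the model-in-the-loop on D_lab and evaluate on D_test.
    model = LogisticRegression(max_iter=1000).fit(X[labeled_idx], y[labeled_idx])
    acc = accuracy_score(y[test_idx], model.predict(X[test_idx]))
    print(f"|D_lab| = {len(labeled_idx):3d}  test accuracy = {acc:.3f}")
    # Steps 4-6: acquire a batch Q from the pool, "annotate" it, append to D_lab.
    picked = acquire(model, X[pool_idx], batch)
    labeled_idx = np.concatenate([labeled_idx, pool_idx[picked]])
    pool_idx = np.delete(pool_idx, picked)
```

Swapping `acquire_random` for an informativeness-based function (a sketch is given under Data Acquisition below) is the only change needed to compare acquisition strategies under otherwise identical conditions.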
### Active Learning Design
Seed datasetWe start the AL loop (SS2.1) by defining an initial labeled "seed dataset" (Figure 2: 1). The seed dataset plays an important role, as it will be used to train the first model-in-the-loop Tomanek et al. (2009); Horbach and Palmer (2016). In AL research, we typically address the cold-start problem by sampling from \(\mathcal{D}_{\text{pool}}\) with a uniform distribution for each class, either retaining the true label distribution or choosing data that form a balanced label distribution.3 This is merely a convenient design choice, as it is simple and easy to implement. However, sampling the seed dataset this way does not really reflect a real-world setting, where the label distribution of the (unlabeled data of the) pool is actually unknown.
Footnote 3: In AL research, a fully labeled dataset is typically _treated_ as an _unlabeled_\(\mathcal{D}_{\text{pool}}\) by entirely ignoring its labels, while in reality we _do_ have access to them. Hence, the labels implicitly play a role in the design of the AL experiment. We analyze our criticism to this seemingly “random sampling” approach to form the seed dataset in §4.2.
Prabhu et al. (2019) performed a study of such sampling bias in AL, showing that the choice of seed dataset had no effect across the considered methods. Ein-Dor et al. (2020) also experimented with different imbalanced seed datasets, showing that AL improves over random sampling in the settings with the highest imbalance.
Furthermore, the choice of the seed dataset has a direct effect on the entire AL design because the first model-in-the-loop marks the reference point of the performance in \(\mathcal{D}_{\text{test}}\). In other words, the performance of the first model is essentially the baseline, according to which a practitioner will plan the AL loop based on the goal performance and the available budget. It is thus essential to revisit existing approaches on choosing the seed dataset Kang et al. (2004); Vlachos (2006); Hu et al. (2010); Yuan et al. (2020) and evaluate them towards a realistic simulation of an AL experiment.
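As an illustration of the two common choices above, the hypothetical helper below draws a seed dataset either by uniform sampling (roughly retaining the pool's label distribution) or by enforcing a balanced one; note that both variants peek at the pool's labels, which, as discussed above, would not be available in a real deployment.

```python
import numpy as np

def sample_seed(pool_labels, size, balanced=False, seed=0):
    """Return indices of a seed dataset drawn from the simulated pool."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(pool_labels)
    if not balanced:
        # Uniform sampling: roughly retains the pool's (in reality hidden) label distribution.
        return rng.choice(len(labels), size=size, replace=False)
    # Balanced sampling: the same number of examples per class.
    classes = np.unique(labels)
    per_class = size // len(classes)
    chosen = [rng.choice(np.where(labels == c)[0], size=per_class, replace=False)
              for c in classes]
    return np.concatenate(chosen)
```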
Number of iterations & acquisition budgetAfter choosing the seed dataset it is natural to decide the number of iterations, the acquisition size (the size of the acquired batch \(\mathcal{Q}\)) and the budget (the size of the actively collected \(\mathcal{D}_{\text{lab}}\)) of the AL experiment. This is another part where literature does not offer concrete explanations on the design choice. Papers that address the cold-start problem would naturally focus on the very few first AL iterations Yuan et al. (2020), while others might simulate AL until a certain percentage of the pool has been annotated Prabhu et al. (2019); Lowell et al. (2019); Zhao et al. (2020); Zhang and Plank (2021); Margatina et al. (2022) or until a certain fixed and predefined number of examples has been annotated Ein-Dor et al. (2020); Kirsch et al. (2021).
### Model Training
We now train the model-in-the-loop with the available labeled dataset \(\mathcal{D}_{\text{lab}}\) (Figure 2: 2). Interestingly, there are not many studies that explore how we should properly train the model in the low-data-resource setting of AL. Existing approaches include semi-supervised learning McCallum and Nigam (1998); Tomanek and Hahn (2009); Dasgupta and Ng (2009); Yu et al. (2022), weak supervision Ni et al. (2019); Qian et al. (2020); Brantley et al. (2020); Zhang et al. (2022) and data augmentation Zhang et al. (2020); Zhao et al. (2020); Hu and Neubig (2021), with the most prevalent approach currently being transfer learning from pretrained language models Ein-Dor et al. (2020); Margatina et al. (2021); Tamkin et al. (2022). Recently, Margatina et al. (2022) showed large performance gains by adapting the pretrained language model to the task using the unlabeled data of the pool (i.e., task adaptive pretraining by Gururangan et al. (2020)). The authors also proposed an adaptive fine-tuning technique to account for the varying size of \(\mathcal{D}_{\text{lab}}\), showing a further increase in \(\mathcal{D}_{\text{test}}\) performance.
Still, there is room for improvement in this rather
under-explored area. Especially now, state-of-the-art NLP pretrained language models consist of many millions or even billions of parameters. In AL we often deal with a small \(\mathcal{D}_{\text{lab}}\) of a few hundred examples, thus adapting the training strategy is not trivial.
### Data Acquisition
The data acquisition step (Figure 2: 4) is probably the core of the AL process and can be performed in various ways.4
Footnote 4: In literature, the terms _data selection method_, _query strategy_ and _acquisition function_ are often used interchangeably.
Zhang et al. (2022c) provide a thorough literature review of query strategies, dividing them into two broad families. The first is based on _informativeness_, and methods in this family treat each candidate instance individually, assign a score and select the top (or bottom) instances based on the ranking of the scores. Major sub-categories of methods that belong in the informativeness family are uncertainty sampling Lewis and Gale (1994); Culotta and Mccallum (2005); Zhang and Plank (2021); Schroder et al. (2022), divergence-based algorithms Ducoffe and Precioso (2018); Margatina et al. (2021); Zhang et al. (2022b), disagreement-based Seung et al. (1992); Houlsby et al. (2011); Gal et al. (2017); Siddhant and Lipton (2018); Kirsch et al. (2019); Zeng and Zubiaga (2023), gradient-based Settles et al. (2007); Settles and Craven (2008) and performance prediction Roy and Mccallum (2001); Konyushkova et al. (2017); Bachman et al. (2017); Liu et al. (2018).
The second family is _representativeness_, which takes into account how instances of the pool correlate with each other, in order to avoid the sampling bias that can arise from treating each instance individually. Density-based methods choose the most representative instances of the unlabeled pool Ambati et al. (2010); Zhao et al. (2020); Zhu et al. (2008), while others opt for discriminative data points that differ from the already labeled dataset Gissin and Shalev-Shwartz (2019); Erdmann et al. (2019). A commonly adopted category in this family is batch diversity, where algorithms select a batch of diverse data points from the pool at each iteration Brinker (2003); Bodo et al. (2011); Zhu et al. (2008); Geifman and El-Yaniv (2017); Zhdanov (2019); Yu et al. (2022), with core-set Sener and Savarese (2018) being the most common approach.
Naturally, there are hybrid acquisition functions that combine informativeness and representativeness Yuan et al. (2020); Ash et al. (2020); Shi et al. (2021). Still, among the aforementioned methods there is not a universally superior acquisition function that consistently outperforms all others. Thus, which data to acquire is an active area of research.
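To make the two families concrete, the snippet below sketches two informativeness scores (least confidence and predictive entropy) and a simple clustering-based selector as a rough stand-in for batch-diversity methods such as core-set. It is not a faithful reimplementation of any cited algorithm; it assumes the model-in-the-loop exposes class probabilities and that fixed-size embeddings are available for the pool.

```python
import numpy as np
from sklearn.cluster import KMeans

def least_confidence(probs):
    """Uncertainty sampling score: 1 - max predicted class probability."""
    return 1.0 - probs.max(axis=1)

def predictive_entropy(probs):
    """Entropy of the predictive distribution; higher means more uncertain."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def select_top_k(scores, k):
    """Informativeness family: score each instance individually, take the top-k."""
    return np.argsort(-scores)[:k]

def diversity_by_clustering(embeddings, k, seed=0):
    """Representativeness family (rough stand-in for batch-diversity methods):
    cluster the pool embeddings and return the point nearest each centre."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(embeddings)
    dists = np.linalg.norm(
        embeddings[:, None, :] - km.cluster_centers_[None, :, :], axis=-1)
    return dists.argmin(axis=0)  # one pool index per cluster (duplicates are possible)

# Usage with a fitted probabilistic classifier `model`:
#   query = select_top_k(least_confidence(model.predict_proba(X_pool)), k=20)
```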
### Data Annotation
After selecting a subset \(Q\) from \(\mathcal{D}_{\text{pool}}\) with an acquisition function, we send the acquired unlabeled data to humans for annotation (Figure 2: 5). In the simulation AL setting, we do not focus on this part at all, as we _already_ have the labels of the actively acquired batch. However, a question that naturally arises is: _Are all examples equally easy to annotate?_ In simulation, all instances take equally long to label. This does not account for the fact that hard instances for the classifier are often hard for humans as well [10, 11], therefore the current experimental setting is limiting and research for cost-aware selection strategies [12, 13, 14] is required. This would include explicit exploration of the synergies between random or actively acquired data and annotator expertise [11].

Figure 2: Distinct steps of the active learning loop (\(1\)–\(6\)). We use blue for the unlabeled data, purple for the labeled data and red for the (labeled) test data.
### Stopping Criterion
Finally, another active area of research is to develop effective methods for stopping AL (Figure 2: 3). In simulation, we typically decide as a budget a number of examples or a percentage of \(\mathcal{D}_{\text{pool}}\) up to which we "afford" to annotate. However, in both research and real-world applications, it is not clear if the model performance has reached a plateau. The stopping criterion should not be pre-defined by a heuristic, but rather be a product of a well-designed experimental setting [23, 14, 15, 16].
## 3 The Fine Print
Previously, we presented specific challenges across different steps in the AL loop that researchers and practitioners need to address. Still, these challenges have long been attracting the attention of the research community. Interestingly, there are more caveats, that someone with no AL experience might have never encountered or even imagined. Hence, in this section we aim to unveil several such small details that still remain unexplored.
### Hyperparameter Tuning
A possibly major issue of the current academic status quo in AL is that researchers often do not tune the models-in-the-loop. This is mostly due to limitations related to time and compute constraints. For instance, a paper that proposes a new acquisition function would be required to run experiments for multiple baselines, iterations, random seeds and datasets. For example, a modest experiment including \(a=5\) acquisition functions, \(i=10\) AL iterations, \(n=5\) random seeds and \(d=5\) datasets would require a minimum of \(a\times i\times n\times d=1,250\) trained models in total. This makes it rather hard to perform hyperparameter tuning of all these models in every AL loop, so it is the norm to use the same model architecture and hyperparameters to train all models.
In reality, practitioners that want to use AL apply it _once_. Therefore, they can most likely afford to tune the one and only model-in-the-loop. The question that then arises is "_do the findings of AL experiments that do not tune the models generalize to scenarios where all models-in-the-loop are tuned_"? In other words, if an AL algorithm \(A\) performs better than \(B\) according to an experimental finding, would this be the case if we applied hyperparameter tuning to the models of both algorithms? Wouldn't it be possible that, with another configuration of hyperparameters, \(B\) performed better in the end?
### Model Stability
In parallel, another undisclosed detail is what researchers do when the models-in-the-loop are unstable (i.e., _crash_). This essentially means that for some reason the optimisation of the model might fail and the model never converges leading to extremely poor predictive performance. Perhaps before the deep learning era such a problem did not exist, but now it is a likely phenomenon.
Dodge et al. (2020) showed that many fine-tuning experiments diverged part of the way through training especially on small datasets. AL is by definition connected with low-data resource settings, as the gains of data efficiency are meaningful in the scenario when labeled data is scarce.
In light of this challenge, there is no consensus as to what an AL researcher or practitioner should do to alleviate this problem. One can choose to re-train the model with a different random seed, or do nothing. However, it is non-trivial to decide under which conditions one should re-train the model, since test performance does not always improve from one AL iteration to the next.
Furthermore, there is currently no study that explores how much AL algorithms, that use the model-in-the-loop for acquisition, suffer by this problem. For instance, consider an uncertainty-based AL algorithm that uses the predictive probability distribution of the model to select the most uncertain data points from the pool. If the model
crashes, then its uncertainty estimates are not meaningful, thus the data acquisition function does not work as expected. In effect, the sampling method turns to a uniform distribution (i.e., the random sampling baseline).
### Active Learning Evaluation
Another important challenge is the evaluation framework for AL. Evaluating the _actual_ contribution of an AL method against its competitors would require to perform the same iterative _train-acquire-annotate_ experiment (Figure 1) for all AL methods in the exact same data setting and with real human annotations. Certainly, such a laborious and expensive process is prohibitive for academic research, which is why we perform simulations by treating an _already_ labeled and open-source dataset as a pool of unlabeled data.
Still, even if we were able to perform the experiments in real life, it is not trivial how to properly define when one method is better than another. This is because AL experiments include multiple rounds of annotation, thus multiple trained models and multiple scores in the test set(s). In cases with no clear difference between the algorithms compared, how should we do a fair comparison?
Previous work presents tables comparing the test set performance of the last model, often ignoring performance in previous loops (Prabhu et al., 2019; Mussmann et al., 2020). The vast majority of previous work though uses plots to visualize the performance over the AL iterations (Lowell et al., 2019; Ein-Dor et al., 2020) and in some cases offer a more detailed visualization with the variance due to the random seeds (Yuan et al., 2020; Kirsch et al., 2021; Margatina et al., 2021).
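One lightweight presentation that covers both aspects is to plot, for each strategy, the mean test score across random seeds at every AL iteration together with its variance, optionally summarising each curve by the area under it. The sketch below assumes results are stored as arrays of shape (n_seeds, n_iterations); it is one possible convention, not a prescribed evaluation protocol.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_learning_curves(results, acquired_sizes):
    """results: dict mapping strategy name -> array of shape (n_seeds, n_iterations)
    with the test score after each AL iteration; acquired_sizes: |D_lab| per iteration."""
    for name, scores in results.items():
        mean, std = scores.mean(axis=0), scores.std(axis=0)
        plt.plot(acquired_sizes, mean, label=name)
        plt.fill_between(acquired_sizes, mean - std, mean + std, alpha=0.2)  # seed variance
    plt.xlabel("Number of acquired (annotated) examples")
    plt.ylabel("Test performance")
    plt.legend()
    plt.show()

def area_under_learning_curve(scores, acquired_sizes):
    """A single aggregate per strategy; complements, rather than replaces, the full curve."""
    return np.trapz(scores.mean(axis=0), acquired_sizes)
```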
### The Test of Time
Settles (2009) eloquently defines the "test of time" problem that AL faces: "_A training set built in cooperation with an active learner is inherently tied to the model that was used to generate it (i.e., the class of the model selecting the queries). Therefore, the labeled instances are a biased distribution, not drawn i.i.d. from the underlying natural density. If one were to change model classes--as we often do in machine learning when the state of the art advances--this training set may no longer be as useful to the new model class_".
Several years later, in the deep learning era, Lowell et al. (2019) indeed corroborate this concern. They demonstrate that a model from a certain family (e.g., convolutional neural networks) might perform better when trained with a random subset of a pool than with a dataset actively acquired using a model-in-the-loop of a different family (e.g., recurrent neural networks). Related to the "test of time" challenge, it is rarely investigated whether the training data actively acquired with one model will confer benefits if used to train a second model (as compared to randomly sampled data from the same pool). Given that datasets often outlive learning algorithms, this is an important practical consideration (Baldridge and Osborne, 2004; Lowell et al., 2019; Shelmanov et al., 2021).
## 4 Active Learning in _Simulated_ vs. Real World Settings
_Is it truly logical to consider an already cleaned (preprocessed), typically published open-source labeled dataset as an unlabeled data pool for pool-based active learning simulation, with the expectation that any conclusions drawn will be applicable to real-world scenarios?_
The convenience and scalability of simulation make it an undoubtedly appealing approach for advancing machine learning research. In NLP, when tackling a specific task, for instance summarization, researchers often experiment with the few labeled summarization datasets that are available, aiming to gain valuable insights and improve summarization models across various domains and languages. While this approach may not be ideal, it is a practical solution. _What makes the sub-field of active learning different?_
Admittedly, progress has, and will be made in AL research by leveraging simulation environments, similar to other areas within machine learning. Thus, there is no inherent requirement for a radically different approach in AL. We believe that simulating AL is indispensable for developing new methods and advancing the state-of-the-art.
Nonetheless, we argue that a slight distinction should be taken into account. AL is an iterative process that aims to obtain the smallest possible amount of labeled _data_ given a substantially larger pool of unlabeled data for maximizing predictive performance on a given task. The significant difference between developing models and constructing datasets lies in the fact that if a model is poorly trained, it can simply be retrained. Conversely, in
AL, there exists a finite budget for acquiring annotations, and once it is expended, _there is no going back_. Consequently, we must have confidence that the AL state-of-the-art established through research simulations will perform equally well in practical applications.
Given these considerations, we advocate for a more critical approach to conducting simulation AL experiments. We should be addressing all the challenges (SS2) and the experimental limitations (SS3) discussed previously, while acknowledging the disparities between the simulation environment and real-world applications (SS4.1). Given that datasets tend to outlast models (Lowell et al., 2019), we firmly believe that it is crucial to ensure the trustworthiness of AL research findings and their generalizability to real-world active data collection. This will contribute to the generation of high-quality datasets that stand the test of time (SS3.4).
### Simulation as a _Lower_ Bound of Active Learning
The distribution gap between benchmark datasets in common ML tasks and data encountered in a real world production setting is well known (Bengio et al., 2020; Koh et al., 2021; Wang and Deng, 2018; Yin et al., 2021).
High Quality DataIt is common practice for researchers to carefully curate the data to be labeled properly, often collecting multiple human annotations per example and discarding instances with disagreeing labels. When datasets are introduced in papers published in prestigious conferences or journals, it is expected that they should be of the highest quality, with an in-depth analysis of its data collection procedure, label distribution and other statistics. Nonetheless, it is important to acknowledge that such datasets may not encompass the entire spectrum of language variations encountered in real-world environments (Yin et al., 2021). Consequently, it remains uncertain whether an AL algorithm would generalize effectively to unfiltered raw data. Specifically, we hypothesize that the filtered data would be largely _more homogeneous_ than the initial "pool". Assuming that the simulation \(\mathcal{D}_{\text{pool}}\) is a somewhat homogeneous dataset, we can expect that _any_ subset of data points drawn from it would, consequently, be more or less identical.6 Therefore, if we train a model in each such subset, we would expect to obtain similar performance on test data due to the similarity between the training sets. From this perspective, random (uniform) sampling from a homogeneous pool can be considered a rudimentary form of diversity sampling.
Footnote 6: Here we do not hint that all textual instances of a dataset are actually identical, but that they are more similar between them compared to the larger pool that they were created from.
Low Quality DataIn contrast, it is possible that a publicly available dataset used for AL research may contain data of inferior quality, characterized by outliers such as repetitive instances, inadequate text filtering, incorrect labels, and implausible examples, among others. In such cases, an AL acquisition strategy, particularly one based on model uncertainty, may consistently select these instances for labeling due to their high level of data difficulty and uncertainty. Previous studies (Karamcheti et al., 2021; Snijders et al., 2023) have demonstrated the occurrence of this phenomenon, which poses a significant challenge as it undermines the potential value of AL. In a real-world AL scenario, it is plausible to have a dedicated team responsible for assessing the quality of acquired data and discarding instances of subpar quality. However, within the confines of a simulation, such data filtering is typically absent from the researcher's perspective, leading to potentially misleading experimental outcomes. Snijders et al. (2023) tried to address this issue in a multi-source setting for the task of natural language inference, and showed that while uncertainty-based strategies perform poorly due to the acquisition of collective outliers, when outliers are removed (from the pool), AL algorithms exhibited a noteworthy recovery and outperformed random baselines.
### Simulation as an _Upper_ Bound of Active Learning
However, one might argue for the exact opposite.
Favored Design ChoicesPreviously, we mentioned that when selecting the seed dataset (SS2.2) we typically randomly sample data from \(\mathcal{D}_{\text{pool}}\), while keeping the label distribution of the true training set.7 Hence, a balanced seed dataset is typically obtained, given that most classification datasets tend to exhibit a balanced label distribution. In effect, the label distribution of \(\mathcal{D}_{\text{pool}}\) would also be balanced, setting a strict constraint on the AL simulation setting, as the actual label distribution of the unlabeled data should in reality be
unknown_. In other words, such subtle choices in the experimental design can introduce bias, making the simulated settings more trivial than more challenging real world AL settings where there is uncertainty as to the quality and the label distribution of data crawled online, that typically constitute the unlabeled pool.
Temporal Drift & Model MismatchDatasets intended for research purposes are often constructed within a fixed timeframe, with minimal consideration for temporal concept drift issues (Rottger and Pierrehumbert, 2021; Lazaridou et al., 2021; Margatina et al., 2023). However, it is important to recognize that this may not align with real-world applications, where the data distribution undergoes changes over time. The utilization of random and standard splits, commonly employed in AL research, can lead to overly optimistic performance estimates (Sogaard et al., 2021), which may not generalize to the challenges presented by real-world scenarios. Consequently, practitioners should consider this limitation when designing their active learning experiments. Lowell et al. (2019) also raise several practical obstacles neglected in AL research, such as the possibility that the acquired dataset may be disadvantageous for training subsequent models, and conclude that academic investigations of AL typically omit key real-world considerations that might overestimate its utility.
### Main Takeaway
In summary, there exist compelling arguments that support both perspectives: simulation can serve as a lower bound by impeding the true advancement of AL methods, or it can implicitly favor AL experimental design, thus providing an upper bound for evaluation. The validity of these arguments likely varies across different cases. We can claim with certainty that this simulation setting, as described in this paper, is a far from perfect framework to evaluate AL algorithms against each other and against random sampling. Nevertheless, we hypothesize that the lower bound argument (SS4.1) is the more plausible one. It is conceivable that AL data selection approaches may exhibit similar performance levels, either due to a lack of variation and diversity in the sampled pool of data or due to the presence of outliers that are not eliminated during the iterations. Hence, we contend that _simulation can be perceived as a lower bound for AL performance_, which helps explain why AL methods struggle to surpass the performance of random sampling. We undoubtedly believe that we can only obtain such answers by _exploring the AL simulation space in depth and by performing thorough analysis and extensive experiments to contrast the two theories._
### Active Learning in the LLMs Era
The field of active learning holds considerable importance in the context of the current era of Large Language Models (LLMs). AL is inherently intertwined with data-driven approaches that underpin recent advancements in artificial intelligence, such as reinforcement learning from human feedback (RLHF) (Christiano et al., 2023; OpenAI, 2022, 2023; Bai et al., 2022). AL and RLHF represent two distinct approaches that tackle diverse aspects of the overarching problem of AI alignment (Askell et al., 2021). AL primarily focuses on optimizing the data acquisition process by selectively choosing informative instances for labeling, primarily within supervised or semi-supervised learning paradigms. On the other hand, RLHF aims to train reinforcement learning agents by utilizing human feedback as a means to surmount challenges associated with traditional reward signals. Despite their disparate methodologies, both AL and RLHF emphasize the criticality of incorporating human involvement to enhance the performance of machine learning and AI systems. Through active engagement of humans in the training process, AL and RLHF contribute to the development of AI systems that exhibit greater alignment with human values and demonstrate enhanced accountability (Bai et al., 2022, 2022; Ganguli et al., 2022; Glaese et al., 2022; Sun et al., 2023). Consequently, the synergistic relationship between these two approaches warrants further exploration, as it holds the potential to leverage AL techniques in order to augment the data efficiency and robustness of RLHF methods.
## 5 Guidelines for Future Work
Given the inherent limitations of simulated AL settings, we propose guidelines to improve trustworthiness and robustness in AL research.
TransparencyOur first recommendation is a call for transparency, which essentially means to _report everything_(Dodge et al., 2019). Every detail of the experimental setup, the implementation and the results, would be extremely helpful to properly evaluate the soundness of the experiments. We urge AL researchers to make use of the Appendix
(or other means such as more detailed technical reports) to communicate interesting (or not) findings and problems, so that all details (SS3) are accessible.
Thorough Experimental SettingsWe also hope to incentivize researchers to _properly think about their experimental settings_, with a focus on ethical and practical considerations. We argue that it is important to compare as many algorithms as possible, aiming to have results and findings that generalize across datasets, tasks and domains. Moreover, we endorse research endeavors that aim to simulate more realistic settings for active learning, such as exploration of AL across multiple domains (Longpre et al., 2022; Snijders et al., 2023). Additionally, we advocate for investigations into active learning techniques for languages beyond English, as the prevailing body of research predominantly focuses on English datasets (Bender, 2011).
Evaluation ProtocolWe strongly encourage researchers to prioritize the establishment of fair comparisons among different methods and to provide thorough and extensive presentation of results, including the consideration of variance across different random seeds, in order to ensure robustness and reliability of findings. Generally, we argue that there is room for improvement of the active learning evaluation framework and we should explore approaches from other fields that promote more rigorous experimental and evaluation frameworks (Artetxe et al., 2020).
AnalysisWe place additional emphasis on the essential requirement of conducting comprehensive analysis of active learning results. It is imperative to delve into the nuances of how different AL algorithms diverge and the extent of similarity (or dissimilarity) among the actively acquired datasets. It is incumbent upon AL research papers to extend beyond the results section and include an extensive analysis component, which provides deeper insights and understanding, as in Ein-Dor et al. (2020); Yuan et al. (2020); Margatina et al. (2021); Zhou et al. (2021); Snijders et al. (2023), among others. If we aim to unveil why an AL algorithm fails to outperform another (or the random baseline), we need to understand which data it selected in the first place, and why.
ReproducibilityThe reproducibility of active learning experiments can be challenging due to the complex nature of a typical AL experiment, involving multiple rounds of model training and evaluation, which can be computationally demanding. However, we strongly advocate for AL practitioners and researchers to prioritize the release of their codebase and provide comprehensive instructions for future researchers aiming to build upon their work. By making the code and associated resources available, the research community can foster transparency, facilitate replication, and enable further advancements in AL methodologies.
EfficiencyIn addition, we propose the release of actively acquired datasets generated by different AL algorithms, which would greatly contribute to research focused on the data-centric and interpretability aspects of AL. Particularly in the context of utilizing AL with large-scale models, it becomes crucial to establish the actively acquired data from other studies as baselines, rather than re-running the entire process from the beginning. Such an approach would not only enhance transparency, but also promote efficiency and eco-friendly practices within the research community.
## 6 Conclusion
In this position paper, we examine the numerous challenges encountered throughout the various stages of the active learning pipeline. Additionally, we provide a comprehensive overview of the often-overlooked limitations within the AL research community, with the intention of illuminating obscure experimental design choices. Furthermore, we delve into a thorough exploration of the limitations associated with simulation in AL, engaging in a critical discussion regarding its potential as either a lower or upper bound on AL performance. Lastly, we put forth guidelines for future research directions, aimed at enhancing the robustness and credibility of AL research for effective real-world applications. This perspective is particularly timely, especially considering the notable advancements in modeling within the field of NLP (e.g., ChatGPT8, Claude9). These advancements have resulted in a shift of emphasis towards a more data-centric approach in machine learning research, emphasizing the significance of carefully selecting relevant data to enhance models and ensure their alignment with human values.
Footnote 8: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
Footnote 9: [https://www.anthropic.com/index/introducing-claude](https://www.anthropic.com/index/introducing-claude)
### Limitations
In this position paper, we have strived to provide a comprehensive overview, acknowledging that there may be relevant research papers that have inadvertently escaped our attention. While we have made efforts to include a diverse range of related work from various fields, such as machine learning and computer vision, it is important to note that our analysis predominantly focuses on AL papers presented at NLP conferences. Moreover, it is worth mentioning that the majority, if not all, of the AL papers examined and referenced in this survey are centered around the English language, thereby limiting the generalizability and applicability of our findings and critiques to other languages and contexts. We wish to emphasize that the speculations put forth in this position paper carry no substantial risks, as they are substantiated by peer-reviewed papers, and our hypotheses (SS4) are explicitly stated as such, representing conjectures rather than definitive findings regarding the role of simulation in AL research. We sincerely hope that this paper stimulates robust discussions and undergoes thorough scrutiny by experts in the field, with the ultimate objective of serving as a valuable guideline for AL researchers, particularly graduate students, seeking to engage in active learning research. Above all, _we earnestly urge researchers equipped with the necessary resources to conduct experiments and analyses that evaluate our hypotheses, striving to bridge the gap between research and real-world settings in the context of active learning._
## Acknowledgements
We would like to thank the anonymous reviewers for their insightful feedback. Both authors are supported by an Amazon Alexa Fellowship.
|
2307.05359
|
The Spherical Grasshopper Problem
|
The aim of this essay is to better understand the Grasshopper Problem on the
surface of the unit sphere. The problem is motivated by analysing Bell
inequalities, but can be formulated as a geometric puzzle as follows. Given a
white sphere and a bucket of black paint, one is asked to paint half of the
sphere, such that antipodal pairs of points are oppositely coloured. A
grasshopper lands on the sphere, and jumps a fixed distance in a random
direction. How should the sphere be coloured such that the probability of the
grasshopper landing on the same colour is maximized? Goulko and Kent have
explored this problem on the plane without an antipodality constraint. This
essay gives clear indication that the spherical problem with the antipodality
constraint yields colourings with similar shapes as the planar problem does.
This research has discretised the problem and used a simulated annealing
algorithm to search for the optimal solution. Results are consistent with the
planar results of \cite{goulkokent}. For $0.10\pi\leq\theta\leq0.44\pi$
cogwheel solutions are found to be optimal, with odd integer of cogs $n_o$ such
that $n_o$ is close to $\frac{2\pi}{\theta}$. For $0.45\pi\leq\theta\leq
0.55\pi$ critical solutions are found, in which domains of identical colour
decrease in size towards $0.5\pi$ (moving from either side). For $\theta\geq
0.55$ colourings are found consisting of stripes with cogs. Towards
$\theta=\pi$ colourings are generated that display just stripes that scale in
width with $\pi-\theta$.
|
Boris van Breugel
|
2023-07-08T17:46:41Z
|
http://arxiv.org/abs/2307.05359v1
|
# The Spherical Grasshopper Problem
###### Abstract
The aim of this essay is to better understand the Grasshopper Problem on the surface of the unit sphere. The problem is motivated by analysing Bell inequalities, but can be formulated as a geometric puzzle as follows. Given a white sphere and a bucket of black paint, one is asked to paint half of the sphere, such that antipodal pairs of points are oppositely coloured. A grasshopper lands on the sphere, and jumps a fixed distance in a random direction. How should the sphere be coloured such that the probability of the grasshopper landing on the same colour is maximized? Goulko and Kent have explored this problem on the plane without an antipodality constraint [1]. This essay gives clear indication that the spherical problem with the antipodality constraint yields colourings with similar shapes as the planar problem does.
This research has discretised the problem and used a simulated annealing algorithm to search for the optimal solution. Results are consistent with the planar results of [1]. For \(0.10\pi\leq\theta\leq 0.44\pi\) cogwheel solutions are found to be optimal, with odd integer of cogs \(n_{o}\) such that \(n_{o}\) is close to \(\frac{2\pi}{\theta}\). For \(0.45\pi\leq\theta\leq 0.55\pi\)_critical_ solutions are found, in which domains of identical colour decrease in size towards \(0.5\pi\) (moving from either side). For \(\theta\geq 0.55\) colourings are found consisting of stripes with cogs. Towards \(\theta=\pi\) colourings are generated that display just stripes that scale in width with \(\pi-\theta\).
## 1 Introduction
This research focuses on the Grasshopper Problem [1] on the surface of the unit sphere. The problem is motivated by Bell inequalities. This section will briefly outline the quantum mechanical context, after which the Grasshopper Problem is introduced.
### Bell inequalities
In June 1926 Max Born published a probabilistic interpretation of quantum mechanics in his paper _Quantum Mechanics of Collision Phenomena_. Multiple leading scientists refuted this idea. Most notable is Einstein's famous reply to Born:
Quantum mechanics is very worthy of regard. But an inner voice tells me that this is not yet the right track. The theory yields much, but it hardly brings us closer to the Old One's secrets. I, in any case, am convinced that _He_ does not play dice. [2]
This criticism led to hidden-variable theories: the idea that there are underlying physical principles that deterministically govern the seemingly probabilistic quantum behaviour. Einstein, Podolsky and Rosen (EPR) argued for the incompleteness of quantum mechanics, and proposed a causal local hidden-variable theory (LHVT)[3].
In 1964 John Bell published his famous article _On the Einstein Podolsky Rosen paradox[4]_. In this work Bell inequalities are first introduced. Quantum theory predicts that space-like separated experiments performed on entangled particles can result in outcomes whose correlations would violate these Bell inequalities, whereas the inequalities would have been satisfied if the experiments could be described by an LHVT. Simply put, these inequalities provide an experimental method of determining whether the fundamentals of our world are dictated by either some classical mechanism or the probabilistic theory of quantum mechanics.
Bell inequalities have been used intensively in experimental research to provide proof that quantum mechanics is necessary for understanding the smallest scales of physics. Bell inequalities rely on space-like separated experiments that are difficult to perform in real-life. As a result, a number of notorious loopholes exist: imperfect characteristics of real-life experiments that could invalidate results. Most notable are the locality loophole, detection efficiency loophole and collapse locality loophole. Even taking into account the error bounds resulting from these imperfections, experiments [5, 6, 7, 8, 9] have tested the quantum prediction of non-local causality and the resulting violation of Bell inequalities is in line with quantum mechanics. Thus, the existence of LHVTs can be refuted with high certainty.
Most experiments are based on simple Bell inequalities like the CHSH (Clauser-Horne-Shimony-Holt) inequality [10] with an EPR-Bohm experiment set-up [3, 11]. However it would be valuable for extended research into more general Bell inequalities to be performed, to allow for a greater insight into the world of quantum mechanics, in particular quantum nonlocality. A possible application would be in the field of quantum cryptography. Quantum cryptographic protocols may provide a way of safe communication that can guarantee to give users notice if malevolent eavesdroppers or device manufacturers are after their sensitive data. One of the current challenges of quantum cryptographic protocols is efficiency; we want malicious parties to be detected within a reasonable number of tests. As these tests often rely on the use of Bell inequalities [12], it is of great relevance to further explore the full class of Bell inequalities.
Although this essay will leave the context of quantum mechanics shortly, at this point it is worth sketching the idea of the quantum mechanical experiment that underlies the essay. The Bell inequality explored in this essay is based on the EPR-Bohm experiment. The experiment consists of two observers, Alice and Bob, who possess an entangled singlet state. Alice and Bob independently choose measurement axes \(\mathbf{a}\) and \(\mathbf{b}\) respectively with the constraint that the angle between the axes is \(\theta\). They perform space-like separated measurements, and their outcomes are in \(\{+1,-1\}\). The correlation \(C(\mathbf{a},\mathbf{b})\in[-1,+1]\) is defined as the expected product of their outcomes.
Quantum mechanics predicts a correlation \(C^{Q}(\theta)=-\cos(\theta)\).1 A valid Bell inequality would consist of finding either a lower bound \(C^{L}(\theta)\) or an upper bound \(C^{U}(\theta)\) on the LHV correlation \(C^{LHVT}(\theta)\) such that either \(C^{LHVT}(\theta)\leq C^{U}(\theta)<C^{Q}(\theta)\) or \(C^{Q}(\theta)<C^{L}(\theta)\leq C^{LHVT}(\theta)\). The key difference between this inequality and the standard CHSH inequality is that observers Alice and Bob are free to choose their axes of measurement, with the constraint that the axes are separated by an angle \(\theta\). For the standard CHSH experiment, Alice and Bob are given one axis, and the only freedom they have is choosing the direction in \(\{+1,-1\}\) along which to measure. The new inequality is hence a generalisation of the CHSH inequality.
Footnote 1: Where with a slight abuse of notation we define \(C(\theta)=C(\mathbf{a},\mathbf{b})\), with \(\theta\) the angle between \(\mathbf{a}\) and \(\mathbf{b}\).
### The Grasshopper Problem
Kent and Pitalua-Garcia [12] translates finding tight bounds for the LHVT correlation to a geometric problem called the Grasshopper Problem. Informally this problem can be stated as follows. We are given a white sphere, a bucket of black paint and the task to paint one of every pair of antipodal points.2 A grasshopper lands on the sphere, and subsequently jumps a
fixed angle \(\theta\) in a random direction. The problem is, what colouring of the sphere maximises the probability that the grasshopper lands on the same colour after hopping, and what is this maximum probability as a function of \(\theta\)?
In relation to the quantum context, the expected success probability \(P(\theta)\) of the grasshopper is related to the bounds on the LHV correlation \(C(\theta)\) for a specific case.3 The relation between the grasshopper's success probability and the LHV correlation is \(C(\theta)=1-2P(\theta)\). Consequently, finding upper and lower bounds on \(P(\theta)\) in the Grasshopper Problem provides bounds on the LHV correlation. The rest of this essay will leave the context of quantum mechanics and focus on the Grasshopper Problem. Goulko and Kent [1] have explored the planar problem. Inter alia, they have shown that a disc is not the optimal solution and have found numerical approximations for colourings for a variety of jumping distances. As the planar problem does not account for the antipodality condition, it is unclear whether the found solutions can be easily transferred to the spherical problem.
Footnote 3: Specifically the case that there is perfect anticorrelation between the outcomes of Alice and Bob if they use the same measurements, \(\theta=0\) (see [12]).
This essay investigates the spherical Grasshopper Problem numerically. In Section 2, the problem will be formulated formally. In Section 3 the numerical set-up is discussed, including discretisation of the problem and approximation of the global maximum. In Section 4 the results are presented. In Section 5 these findings are discussed further, recommendations are made for improving the method, and a range of related problems is displayed.
## 2 Problem statement
The \(\mathbb{S}^{2}\) Grasshopper Problem can be formally stated as follows.
### Formal problem statement
Consider a density \(\mu\) on the surface \(\mathbb{S}^{2}\) of the three-dimensional sphere, satisfying \(0\leq\mu(\mathbf{r})\leq 1\) and the antipodality condition \(\mu(\mathbf{r})+\mu(-\mathbf{r})=1\) for all \(\mathbf{r}\). Consequently:
\[\int_{\mathbb{S}^{2}}d^{2}\mathbf{r}\mu(\mathbf{r})=2\pi.\]
The functional \(P_{\mu}(\theta)\) is defined by
\[P_{\mu}(\theta)= \int_{\mathbb{S}^{2}}d^{2}r_{1}\int_{\mathbb{S}^{2}}d^{2}r_{2}\mu (\mathbf{r}_{1})\mu(\mathbf{r}_{2})\delta(|\mathbf{r}_{1}-\mathbf{r}_{2}|- \theta), \tag{1}\]
where \(|\mathbf{r}_{1}-\mathbf{r}_{2}|\) is defined as the central angle between points \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\). The Grasshopper Problem is as follows:
\[\text{Given }\theta,\,\text{find}\operatorname*{arg\,max}_{\mu}P_{\mu}( \theta)\text{ and}\operatorname*{max}_{\mu}P_{\mu}(\theta).\]
The density \(\mu\) denotes the colouring of the sphere. This essay explores the case where \(\mu\) takes either the value \(0\) or \(1\).4
Footnote 4: For the discrete case this is not a constraint on the success probability, as any non-binary colouring can be transformed into a binary colouring with an equal or higher success probability. A proof is given in Appendix A.
### Preliminary analytic results
This problem is very difficult to solve and the probability can only be calculated analytically for very simple colourings. The easiest colouring is a hemisphere colouring, for which the success probability \(P_{hem}(\theta)=1-\frac{\theta}{\pi}\). This can be proven as follows. Consider two points on the
sphere, separated by an angle \(\theta\in[0,\pi]\). Assume a hemisphere colouring. Because half the sphere is coloured, the great circle through both these points will be cut into two equal halves: one coloured and one uncoloured. The probability this happens between the two points is \(\frac{\theta}{\pi}\). Consequently, the probability these two points will be the same colour is \(P_{hem}(\theta)=1-\frac{\theta}{\pi}\).
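This closed-form result is straightforward to verify numerically. The sketch below is a simple Monte Carlo check, independent of the grid-based method of Section 3: it samples a uniformly random starting point and a jump of fixed angle \(\theta\) in a uniformly random tangent direction, using the hemisphere colouring \(z>0\), and should reproduce \(P_{hem}(\theta)=1-\frac{\theta}{\pi}\) up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def hop(r1, theta):
    """Jump a fixed angle theta from each row of r1 in a uniformly random tangent direction."""
    # Build an orthonormal basis {r1, e1, e2} for every starting point.
    helper = np.where(np.abs(r1[:, :1]) < 0.9, [[1.0, 0.0, 0.0]], [[0.0, 1.0, 0.0]])
    e1 = np.cross(r1, helper)
    e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
    e2 = np.cross(r1, e1)
    alpha = rng.uniform(0.0, 2.0 * np.pi, size=(len(r1), 1))
    return np.cos(theta) * r1 + np.sin(theta) * (np.cos(alpha) * e1 + np.sin(alpha) * e2)

def hemisphere_same_colour(theta, n=200_000):
    r1 = random_unit_vectors(n)
    r2 = hop(r1, theta)
    return np.mean(np.sign(r1[:, 2]) == np.sign(r2[:, 2]))  # colouring: z > 0

for theta in (0.2 * np.pi, 0.5 * np.pi, 0.8 * np.pi):
    print(f"theta = {theta:.3f}: MC = {hemisphere_same_colour(theta):.4f}, "
          f"exact = {1 - theta / np.pi:.4f}")
```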
The Grasshopper Problem as defined above seeks bounds on the maximum success probability, but it also provides means for finding bounds on the minimum success probability. Because of the antipodality condition, there is a linear relation between the success probability of colouring \(\mu\) for jumping angle \(\theta\in[0,\pi]\) and angle \(\pi-\theta\), given by \(P_{\mu}(\pi-\theta)=1-P_{\mu}(\theta)\). This is understood as follows. If a point \(A\) correlates with a point \(B\) at angular separation \(\theta\), for angle \(\pi-\theta\) point \(A\) will correlate with point \(B^{\prime}\), the point antipodal to \(B\). Consequently, if \(A\) and \(B\) are the same colour this will contribute positively to \(P_{\mu}(\theta)\), but because this also implies \(B^{\prime}\) is the opposite colour of \(A\), it will contribute to \(P_{\mu}(\pi-\theta)\) being lower. This can be generalised to the whole colouring \(\mu\), giving \(P_{\mu}(\pi-\theta)=1-P_{\mu}(\theta)\) for fixed \(\mu\). This relation means that using a set-up with jumping angle \(\theta\) and maximising the success probability should give a lower bound on the success probability for a set-up with jumping angle \(\pi-\theta\). To summarise, this essay focuses on maximising the success probability for all \(0\leq\theta\leq\pi\), and this simultaneously gives solutions for minimising the success probability.
In general we will need to turn to numerical techniques to solve the problem. The next section presents the numerical method.
## 3 Numerical Set-up
This essay researches the Grasshopper Problem numerically. To do so, the problem has been discretised. The structure of the numerical approach is as follows. First, the sphere is discretised into an almost uniform antipodal5 grid. Secondly, for every pair of antipodal points exactly one is coloured, after which for every point the correlation with all coloured points is calculated (i.e., the probability that the grasshopper lands on the lawn after jumping from that point). Finally, we attempt to find the optimal colouring for the probability function by searching through the space of colourings. For the last step two approaches are explored: a greedy algorithm and simulated annealing.
Footnote 5: That is to say: for every point \(\mathbf{r}\) there is a point \(-\mathbf{r}\) and we call these points an antipodal pair.
### Grid discretisation
To perform numerical computations, the sphere's surface is discretised into an antipodal grid. A rectangular grid based on fixed longitudes and latitudes is unfavourable, since this leads to significant distortion of cell shape or area: square grids do not have equal area, whereas equal-area grids vary in shape from equator to poles. Instead in this essay a geodesic grid based on the icosahedron is used, known as a Goldberg polyhedron [13]. This gives a grid which largely resembles a hexagonal grid. A geodesic grid leads to less distortion, as it overcomes oversampling at the poles and cells can be both minimally distorted in shape and area.
In this research an algorithm by Kurt von Leven [14] is used to create the grid, which is based on the work of Nick A. Teanby [15]. This algorithm divides the triangular faces of the icosahedron into 4 equilateral triangles. This process is repeated \(k-1\) times, after which the vertices are projected onto the unit sphere surface. The projection step induces deformations in the triangle size, which is especially apparent close to the original icosahedron's vertices. Deformations can be reduced by using bubble meshing [16] or by shifting the vertices slightly to minimize area differences [17]. This research does not make use of these methods. Parameter \(k\) is referred to as the triangularisation depth. Define \(N\) as the number of pairs of antipodal points, the total number of points \(2N\) is given by: [14]
\[2N=2+10(4^{k}). \tag{2}\]
Additionally, \(h\) is defined as a measure for the separation distance:
\[h=\sqrt{\frac{2\pi}{N}}. \tag{3}\]
The projected vertices denote the centers of the cells. A vertex's cell is defined as the set of points on the unit sphere that is closest to the vertex. Because the grid of vertices is triangular, the cells are hexagonally shaped. The only exceptions are the 12 cells corresponding to the icosahedron's vertices, which are pentagons [15]. The icosahedron's symmetries ensure that the grid is antipodal.
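For reference, Eqs. (2) and (3) can be evaluated directly for the triangularisation depths considered below; the short snippet (illustrative only, not part of the grid-generation code of [14]) prints \(2N\) and \(h\) for \(k=5,6,7\).

```python
import math

for k in (5, 6, 7):
    total_points = 2 + 10 * 4**k       # Eq. (2): 2N
    N = total_points // 2              # number of antipodal pairs
    h = math.sqrt(2 * math.pi / N)     # Eq. (3)
    print(f"k = {k}: 2N = {total_points}, h = {h:.4f}")
```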
In Figure 1 a comparison for different triangularisation depths is displayed for a hemisphere colouring.6 For \(\theta\geq 0.1\) the results for \(k=6\) and \(k=7\) are within a \(0.25\%\) margin of the theoretical value. Resolution \(h\) scales with \(2^{-k}\) and hence for small \(\theta\) preference is given to \(k=7\) (\(h\approx 0.009\)).
Footnote 6: There is a clear trend in the deviation, for which the reason is unknown. As this research uses \(\theta>0.15\) and the deviation is marginal, this will not concern us.
### Initialisation
For initialisation, \(N\) points should be coloured such that the colouring is antipodal. For convenience let us call the \(N\) points in the upper hemisphere \(U\),7 and the \(N\) points antipodal to these \(L\). A colouring is now uniquely defined by \(\mathbf{s}\in\{0,1\}^{N}\), where \(s_{i}=1\) denotes vertex \(i\) in \(U\) to be coloured and vertex \(i\) in \(L\) uncoloured, and \(s_{i}=0\) vice versa. From a statistical physics perspective the discretised grid denotes a two-state spin system with every pair of antipodal points either spin up or down.8 For initialisation three straightforward options are considered in this essay.
Footnote 7: Pairs on the interface can be split up arbitrarily
Footnote 8: Equivalently one could define \(\mathbf{s}\in\{-1,+1\}^{N}\)
* \(\mathbf{s}=\mathbf{1}\)
* \(\mathbf{s}\in\{0,1\}^{N}\) randomly
* \(\mathbf{s}\) equals the solution of a problem with a different jumping angle \(\theta\).

Figure 1: Deviation of success probability for a hemisphere colouring, relative to theoretical probability \(p_{hem}(\theta)=1-\frac{\theta}{\pi}\) (see Sec. 2.2), as a function of \(\theta\) in the range \(0\leq\theta\leq\frac{\pi}{2}\) and triangularisation depths \(k=5\): \(2N=10242\), \(k=6\): \(2N=40962\) and \(k=7\): \(2N=163842\).
In other words, the first option refers to a hemisphere colouring, the second to a random antipodal colouring and the third to using a colouring of previous calculations with a different jumping angle. In general this research will use a random initialisation, but the other initialisations can be used for verifying reliability of algorithms. See Sec. 4.4 for comparisons.
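For concreteness, the three options can be written down in a few lines; the snippet below is illustrative only, with the depth-\(k=5\) value of \(N\) used as an example. Antipodality is automatic in this encoding, since colouring vertex \(i\) in \(U\) uncolours its antipode in \(L\) and vice versa.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5121  # number of antipodal pairs at triangularisation depth k = 5

s_hemisphere = np.ones(N, dtype=int)      # option 1: hemisphere colouring
s_random = rng.integers(0, 2, size=N)     # option 2: random antipodal colouring
# option 3: reuse the optimised s obtained for a different jumping angle theta.
```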
### Correlation function
Because the grid is discretised, the delta function in the correlation function should be replaced by a smoothed approximation of the delta function. [18] explores the conditions such a function should satisfy. In this research the 4-point cosine function [18] is used:
\[\phi\big{(}\frac{x}{h}\big{)}=\begin{cases}\frac{1}{4}\bigl{(}1+\cos\bigl{(}\frac{\pi x}{2h}\bigr{)}\bigr{)},&\text{if }\frac{|x|}{h}\leq 2\\ 0,&\text{if }\frac{|x|}{h}>2\end{cases} \tag{4}\]
This function smears out the correlation over a \(4h\) range. A different choice for the discretisation of the delta function can be made, of which a good analysis is given in [19]. [1] compares this particular discrete delta function to a different choice in the context of the planar grasshopper problem. For the two tested functions this produces consistent results.
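As an illustration, Eq. 4 translates directly into a short vectorised helper (a sketch; the function name is ours):

```python
import numpy as np

def phi(x, h):
    """4-point cosine approximation of the delta function, smeared over a 4h range (Eq. 4)."""
    r = np.abs(x) / h
    return np.where(r <= 2, 0.25 * (1 + np.cos(np.pi * r / 2)), 0.0)
```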
Let us now get an expression for the success probability of points in the discretised problem. Let \(\mathbf{s}\) denote the colouring and \(\theta\) the jumping angle. Given the discretised delta function, the correlation function between grid points \(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\) is of the form \(\phi(\frac{\Delta\sigma(\mathbf{v}_{i},\mathbf{v}_{j})-\theta}{h})\), where \(\Delta\sigma(\mathbf{v}_{i},\mathbf{v}_{j})\) is the central angle between the points. The success probability of an individual grid cell \(i\) is the probability that a grasshopper, jumping up from grid cell \(i\), lands on the lawn.9 It is defined as the total correlation between point \(i\) and all points in the colouring, normalised by the total correlation with all points on the grid.10 To ease notation, let \(\mathbf{\tilde{s}}=[\mathbf{s}^{\intercal},(\mathbf{1}-\mathbf{s})^{\intercal}]^{\intercal}\);11 the success probability of point \(\mathbf{v}_{i}\) is then defined as: Footnote 9: Note that this is also defined for points that are not part of the lawn.
Footnote 10: This guarantees that the success probability of every point remains between 0 and 1 even if there are irregularities in the grid. The secondary effect is that points will be normalised slightly differently, hence two correlated points will not be correlated to each other exactly the same amount. This will not concern us in further computations.
Footnote 11: This means we double the length of vector \(\mathbf{s}\), such that \(s_{i}=1\Leftrightarrow\tilde{s}_{i+N}=0\) and vice versa.
\[P_{i}=\frac{\sum_{j}\tilde{s}_{j}\phi(\frac{\Delta\sigma(\mathbf{v}_{i}, \mathbf{v}_{j})-\theta}{h})}{\sum_{j=1}^{2N}\phi(\frac{\Delta\sigma(\mathbf{v} _{i},\mathbf{v}_{j})-\theta}{h})}. \tag{5}\]
The success probability of a particular colouring \(\mathbf{s}\) is trivially the average of the probabilities of points in the colouring:
\[\begin{split} P_{\mathbf{s}}(\theta)&=\frac{1}{N} \sum_{i}\tilde{s}_{i}P_{i}\\ &=\frac{1}{N}\sum_{i}\frac{\sum_{j}\tilde{s}_{i}\tilde{s}_{j} \phi(\frac{\Delta\sigma(\mathbf{v}_{i},\mathbf{v}_{j})-\theta}{h})}{\sum_{j} \phi(\frac{\Delta\sigma(\mathbf{v}_{i},\mathbf{v}_{j})-\theta}{h})}.\end{split} \tag{6}\]
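A direct, dense implementation of Eqs. 5 and 6 could look as follows. This is a sketch only: it assumes the \(2N\) grid vertices are ordered so that vertex \(i+N\) is the antipode of vertex \(i\) (matching the definition of \(\mathbf{\tilde{s}}\)), it reuses the `phi` helper above, and it builds the full \(2N\times 2N\) correlation matrix, which is only feasible for coarse grids.

```python
import numpy as np

def success_probability(s, verts, theta, h):
    """Success probability of an antipodal colouring s (length N, values 0/1), Eqs. 5-6."""
    N = len(s)
    s_tilde = np.concatenate([s, 1 - s])           # colouring of all 2N cells
    cos_ang = np.clip(verts @ verts.T, -1.0, 1.0)
    delta_sigma = np.arccos(cos_ang)               # central angles between grid points
    corr = phi(delta_sigma - theta, h)             # smoothed correlation, Eq. 4
    P_i = (corr @ s_tilde) / corr.sum(axis=1)      # per-cell success probability, Eq. 5
    return float(s_tilde @ P_i) / N                # success probability of the colouring, Eq. 6
```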
For calculating the correlation, there seems to be an arbitrariness in the choice for either angular distance \(\psi=\Delta\sigma(\mathbf{v}_{i},\mathbf{v}_{j})\) or \(l^{2}\) norm distance \(d=||\mathbf{v}_{i}-\mathbf{v}_{j}||_{2}\), as a result of the bijection \(d=2\sin\bigl{(}\frac{\psi}{2}\bigr{)}\) for \(0\leq\psi\leq\pi\). However, angular separation has as the clear advantage that it allows the use of a fixed \(h\). To understand why this is the case, consider a circle formed out of equidistant points. Let us now inspect the correlation between points separated by angle \(\psi\). If the correlation function is used as a function of the angular distance, for every angle a point will be correlated to approximately the same number of points. If, however, the correlation function is used with the \(l^{2}\) norm distance \(d\) as input and fixed \(h\), a higher jumping distance would cause more points to fall in the band of correlated points; measuring from one specific point, points
are not equidistantly separated as a function of \(d\). This would also break the antisymmetry relation between set-ups with \(\theta\) and \(\pi-\theta\):12 with \(\theta<\frac{\pi}{2}\), the latter set-up would result in more correlations. As a fixed \(h\) and a fixed width of the correlation band are desirable, the use of angular separation is preferred over \(l^{2}\) norm separation.
Footnote 12: For any colouring and every \(\theta\), theoretically \(P_{\mathbf{s}}(\theta)+P_{\mathbf{s}}(\pi-\theta)=1\) as explained in Sec. 2.2
### Maximizing the probability
As we have acquired an antipodal grid with \(2N\) grid cells, the first \(N\) to be antipodal to the second \(N\) cells, the problem can be stated as finding an optimum over the hypercube \(\left\{0,1\right\}^{N}\). For any reasonably accurate discretisation this space is huge. Two heuristic approaches are considered for trying to find a global maximum: a greedy algorithm and simulated annealing. Using both approaches might give information on the reliability of the methods, as well as insight into the optimization function itself.
#### 3.4.1 Greedy algorithm
The first method is a greedy algorithm. The idea to use this method came from Zach Wissner-Gross's solution to the planar grasshopper problem [20]. From a given colouring, the algorithm 'flips' the pair that induces the highest increase in total probability. Flipping is defined as interchanging the colours of two antipodal points. This process is repeated until no such pair exists anymore. As a flip is only allowed if it increases the total probability, this method results in a monotonically increasing total probability. Consequently, this heuristic is prone to getting stuck in local optima. However, it could find a reasonably close approximation in relatively few steps. To partly circumvent this problem, multiple runs can be initiated using different initialisations. The difference in the resulting colourings is a measure of the algorithm's reliability.
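A minimal sketch of this greedy loop is given below. It treats the colouring-to-probability map as a black-box `objective` (for example the `success_probability` sketch above) and recomputes it for every candidate flip, which is simple but far less efficient than the incremental updates a practical implementation would use.

```python
import numpy as np

def greedy(s, objective, max_iters=10_000):
    """Flip the antipodal pair giving the largest increase of `objective` until no flip helps.

    `s` is the 0/1 colouring vector of length N; `objective(s)` returns its success probability.
    """
    best = objective(s)
    for _ in range(max_iters):
        gains = np.empty(len(s))
        for i in range(len(s)):
            s[i] ^= 1                      # try flipping pair i
            gains[i] = objective(s) - best
            s[i] ^= 1                      # undo
        i_best = int(np.argmax(gains))
        if gains[i_best] <= 0:             # no improving flip left: local optimum reached
            break
        s[i_best] ^= 1
        best += gains[i_best]
    return s, best
```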
#### 3.4.2 Simulated annealing
Simulated annealing [21] is tested as an alternative to the greedy algorithm. This algorithm is also used in [1], and therefore allows a better comparison to the planar results. Simulated annealing (SA) uses randomness to escape from local optima. Whereas a greedy algorithm always chooses the flip that induces the largest increase in success probability, SA allows flips that decrease the probability by less than a specified threshold. In this research an exponential cooling scheme is used. The procedure is as follows:
1. Set initial temperature \(T=T_{0}\), number of steps \(m\) and \(\alpha\in(0,1)\)
2. Choose random \(i\in 1,...,N\), \(p\) from \(U(0,1)\)
3. Calculate total success probability difference \(\Delta P\) induced by flipping pair \(i\)
4. Make flip \(s_{i}:0\longleftrightarrow 1\) if Metropolis acceptance probability \(\min(1,\exp\bigl{\{}\frac{\Delta P}{T}\bigr{\}})\) is larger than \(p\), else pass
5. Set \(T\) to \(\alpha T\) and repeat \(m-1\) times from step 2
Historically, simulated annealing was motivated by finding the lowest energy in physical systems, by gradually decreasing temperature-induced random effects. \(T\) is therefore referred to as temperature, and the decreasing of temperature as cooling. The cooling rate should be slow enough that the probability distribution of the current colouring is near the thermodynamic equilibrium at all times [21]. Note that in step 4 the flip is always accepted if it induces a positive change in the success probability. However, in contrast to the greedy algorithm there is also a chance this flip is accepted if it does not decrease the probability too much. Furthermore, a more negative \(\Delta P\) (a larger decrease in total probability) and a lower \(T\) (a longer run time of the
algorithm) lead to a lower acceptance ratio. When \(T\) is very close to \(0\), a flip is only accepted if it hardly decreases the total success probability. This means at low \(T\) almost all randomly proposed flips will be rejected. Consequently it can be advantageous to use continuous time Monte Carlo updates [22] instead. This entails that in step 2 a pair is randomly selected from the subset of pairs whose probability of being accepted exceeds a set threshold. Subsequently the selected pair is always flipped. This method has not been used in this research. Noteworthy is that for small \(T\) the algorithm does not become the described greedy algorithm, as the chosen flip does not necessarily correspond to the highest increase in probability.
Simulated annealing enables escape from local optima, but this comes at the cost of an additional complexity: a proper cooling algorithm should be chosen. An improper choice leads either to random flips for too long, or to a method that is unable to properly escape local optima. If the cooling starts off too slow, an ordered initialisation colouring will break down and become unordered. This is undesirable when a previous solution is used for initialisation. As the final colouring does not necessarily reflect that this might have happened, simulated annealing requires a close watch over the process itself. Additionally, more steps are required than in the greedy algorithm.
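Putting the five steps of Sec. 3.4.2 together, a minimal exponential-cooling sketch reads as follows. As in the greedy sketch, `objective` stands for the success probability of a colouring, \(\Delta P\) is recomputed from scratch rather than updated incrementally, and the default parameters are illustrative only.

```python
import numpy as np

def simulated_annealing(s, objective, T0=0.4, alpha=0.99999, steps=1_000_000, seed=0):
    """Exponential-cooling simulated annealing over antipodal colourings (Sec. 3.4.2)."""
    rng = np.random.default_rng(seed)
    T, P = T0, objective(s)
    for _ in range(steps):
        i = int(rng.integers(len(s)))          # step 2: pick a random antipodal pair
        p = rng.random()
        s[i] ^= 1                              # tentatively flip it
        dP = objective(s) - P                  # step 3: change in success probability
        if dP >= 0 or np.exp(dP / T) > p:      # step 4: Metropolis acceptance
            P += dP
        else:
            s[i] ^= 1                          # reject: undo the flip
        T *= alpha                             # step 5: exponential cooling
    return s, P
```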
## 4 Numerical Results
In this section results for different settings are presented.13 Three regimes emerge: a cogwheel regime \(0.10\pi\leq\theta\leq 0.41\pi\), a critical regime \(0.41\pi\leq\theta\leq 0.55\pi\) and a cogs and stripes regime \(0.55\pi\leq\theta\leq\pi\). Unless otherwise stated, results presented in this section are found using a grid with \(2N=163842\) points,14 correlation function \(\phi\) given by Eq. 4, a random antipodal colouring as initialisation and a simulated annealing approach.15 Checks were performed for a different number of grid points (\(2N=40962\)), different initialisation methods and using instead the greedy algorithm. These alternative settings produce consistent results. At the end of this section results for different methods are compared.
Footnote 13: The author’s Python implementation is available upon request
Footnote 14: Using Eq. 2 with \(k=7\)
Footnote 15: Initial temperature \(T_{0}=0.4\), \(\alpha=0.99999\) and \(150N\approx 10,000,000\) steps
### Cogwheel regime
For \(0.10\pi\leq\theta\leq 0.41\pi\) colourings exhibit the cogwheel-like behaviour also found in [1]. Figure 2 shows the success probability. For every angle, the success probability is higher than the theoretical success probability of the hemisphere colouring. In Figure 3 the optimal colourings for a range of jumping angles \(\theta\) are shown. The quantity \(c=\frac{2\pi}{\theta}\) is also noted, which in the case that \(c\) is an odd integer16, denotes the expected number of cogs.
Footnote 16: Because colourings are antipodal even number of equidistant cogs are impossible
These colourings are in line with the findings for the planar problem; cogwheel-shaped colourings appear to be optimal in case the angular distance between the cogs is approximately \(\theta\). This preference for cogs spaced at intervals \(\theta\) is reflected in Figure 2. For angles smaller than \(\frac{2\pi}{21}\approx 0.10\pi\) the system generally converges to a hemisphere colouring with irregular boundaries around the equator. As can be seen in Figure 3, for some small angles cogs are found. Although the distance between the cogs is in this case indeed the expected \(\theta\), these colourings are generally irregularly shaped and only appear rarely. Additionally, in contrast to the discretisation errors as displayed in Fig. 1, the relative success probability appears unreliable for low \(\theta\). Possible explanations for the small \(\theta\) discrepancies are given in the discussion.
When \(\theta\) is far from \(\frac{2\pi}{n_{o}}\), the success probability is significantly lower. Set-ups with angles sufficiently close to \(\frac{2\pi}{n_{o}}\) with \(n_{o}\) an odd integer, will generally converge to the closest 'optimal' solution. Set-ups with values close to \(\frac{2\pi}{n_{e}}\) with \(n_{e}\) an even integer, will display more irregular
Figure 2: Success probabilities in the cogwheel regime \(0.10\pi\leq\theta\leq 0.41\pi\), in comparison to the theoretical hemisphere colouring success probability. The vertical lines denote values \(\theta=\frac{2\pi}{n}\), \(n\) integer
Figure 3: Selection of cogwheel colourings for \(\theta\) between \(0.05\pi\) and \(0.45\pi\). Colour denotes success probability of grid cells and the same colourmap is used for all figures. Colourings are rotated into northern hemisphere and the grid points are mapped on a longitude-latitude grid.
behaviour. As can be seen in Figure 4, this results in a lower success probability. The intermediate colourings of one of these transitions are studied further in Section 4.1.1. Although these set-ups converge badly, they still produce higher success probabilities than a hemisphere colouring (see Fig. 2). Conclusively, for \(\theta\) between \(0.10\pi\) and \(0.41\pi\) this essay's results give a clear indication that cogwheel solutions are always more successful than hemisphere colourings.
If angle \(\theta\) is a little larger than \(\frac{2\pi}{n_{o}}\), this actually gives a higher success probability relative to the hemisphere solution. It is difficult to quantify this behaviour. An intuitive argument would be that a slightly larger angle does not change the success probability of cogs much, since the width of the cogs provides a buffer of the like. Consider a cogwheel colouring in which the northern hemisphere is mostly coloured. For points in the middle of the cogs the success probability when jumping longitudinally does not change with a slightly larger angle. The decrease in jumping probability is therefore mostly assigned to jumps with a large negative latitudinal component - jumping downwards from any point close enough to the edge of the cogwheel. However, the resulting loss in probability is similar for the hemisphere colouring, where the downwards component of jumps is the only component responsible for a jump's success. This means the change in success probability by increasing \(\theta\) slightly from an optimal angle, is approximately equal for the hemisphere and cogwheel colouring. Because the hemisphere's success probability is lower for jumping angle \(\frac{2\pi}{n_{o}}\) however, a further increase of jumping angle causes an increase in the cogwheel colouring's relative success probability. The success probability starts to decrease significantly when the success of mostly longitudinal jumps decreases even further; when \(\theta\) starts to deviate too far from \(\frac{2\pi}{n_{o}}\).
#### 4.1.1 Transition
The different cogwheel solutions give convincing numerical proof that a hemisphere colouring is never optimal for \(\theta=\frac{2\pi}{n_{o}}\), \(n_{o}\) an odd integer. Interesting is the behaviour in the transition between these 'optimal' solutions. Even using a slower cooling process,17 solutions often converge badly in between optimal angles, especially for angles \(\theta<0.12\pi\). For the transition between \(n_{o}=7\) and \(n_{o}=9\) cogs, the most successful colourings out of five runs are displayed in Figure 4, and success probability is shown in Figure 2.
Footnote 17: \(T_{0}=0.2\), \(\alpha=0.999995\) and \(150N\) number of steps
For \(\frac{2\pi}{\theta}\) close to an odd integer, the most successful colouring has the odd number of cogs closest to \(\frac{2\pi}{\theta}\). In between these 'optimal' colourings, the cogs look more disorderly. Different colourings show almost the same success probability at these angles, which results in an arbitrariness in the colouring the method produces. The most extreme example of this is shown for angle \(\theta=\frac{2\pi}{7.9}\approx 0.253\pi\). The two colourings both have success probability \(0.750\), yet have a significantly different shape. Note that the number of major extrusions in these two colourings is nonetheless 7 and 9, as predicted.
Multiple simulated annealing runs are used for each angle to find the above colourings. In some runs, methods get stuck in colourings with approximately double the predicted number of cogs. This can be explained as the cogs not correlating with their nearest cogs, but one cog further. [1] predicts this behaviour, as colourings that exhibit the symmetries of a stellated polygon. These solutions have not been found to be more successful than the colouring with number of cogs equal to the odd integer closest to \(\frac{2\pi}{\theta}\),18 which is in agreement with the planar results.
Footnote 18: Although these solutions have been found if \(\theta>\frac{\pi}{2}\), see Sec. 4.3
#### 4.1.2 Comparison to planar results
It is important to note that although the solutions look very similar to the cogwheel solutions of [1], there are three clear distinctions. First off, the colourings are truly antipodal. This results in coloured cogs being identically shaped as their antipodal uncoloured cogs. Secondly, this
Figure 4: Colourings showing behaviour during transition from 7-cog cogwheel to 9-cog cogwheel. Colour denotes success probability of grid cells and the same colourmap is used for all figures.
means the number of cogs is restricted to an odd number. Neither is the case for the planar results of [1] as in the planar problem there was no intuitive way of defining antipodality. The cogs and the 'holed out' parts are not alike in the planar case and even numbers of cogs are allowed. The difference in shape between holes and cogs in the planar case is further increased by the geometry of the disc: as the cogs are further from the disc's center than the holes, the distance between them is stretched compared to the holes. This is not the case for the spherical problem, as the cogs and holes are similarly placed around the equator, and thus are the same distance from the sphere's center. Thirdly, for transitions colourings are found that are different to the odd cog cogwheels. This is not consistent with the planar problem colourings, where systems always converged to a regular cogwheel. It is unknown whether this is the result of the problem's geometry, or a difference in the used methods. Further research into transitions is required.
### Critical regime
In the regime \(\theta\in[0.10\pi,0.41\pi]\) cogwheel solutions are found. If \(\theta\) increases further, we do not get the same transition between a 5-cog colouring and a 3-cog colouring as seen earlier. Instead, more irregular colourings are generated. Figure 6 shows the colourings as \(\theta\) moves from \(0.41\pi\) to \(0.56\pi\). Figure 5 shows the corresponding success probabilities.
The set-up converges to the 5-cogwheel solution if \(\theta\) is smaller than \(\frac{2\pi}{4.5}\approx 0.41\pi\). From this angle onwards, the colourings show chunks of coloured domains, which from \(\theta\approx 0.45\pi\) get smaller and smaller towards \(\theta=\frac{\pi}{2}\). For \(\theta>\frac{\pi}{2}\) the critical regime appears again and the same behaviour is observed: the coloured domains become larger until at \(\theta=0.55\pi\) distinct shapes appear. Note that the absolute success probability is symmetrical around \(\theta=0.5\pi\).
This behaviour is interesting as it resembles the behaviour of a simple Ising model with local interactions. When such a simple spin system is cooled down towards the critical point \(T_{c}\), entropy decreases. As a result, domains will appear with equal spins. As the entropy decreases even further, these domains can become larger until the domain size becomes scale-free. This same
Figure 5: Success probabilities in critical regime \(0.41\pi\leq\theta\leq 0.55\pi\), in comparison to the theoretical success probability of the hemisphere colouring (drawn dotted in a)
Figure 6: Colourings for \(\theta\) between \(0.41\pi\) and \(0.55\pi\). Colour denotes success probability of grid cells and the same colourmap is used for all figures.
behaviour is visible in the Grasshopper Problem colourings, but this happens as the angle \(\theta\) moves away from \(\frac{\pi}{2}\).
The first difference is that in this essay's spherical system, the colouring with the most entropy is found for \(\theta=\frac{\pi}{2}\) and the domains appear when moving away from this critical point. Another difference is the behaviour when moving closer to (further away from) the critical point for the simple infinite Ising lattice (the grasshopper sphere). In the former case, correlated domains will become unbounded in size, whereas in the grasshopper case the cogwheel solution becomes visible. To summarise, the spherical grasshopper system displays behaviour similar to a simple Ising model on an infinite lattice, as close to jumping angle \(\theta=\frac{\pi}{2}\) it acts as if it is ruled by a reduced temperature of the form \(\frac{1}{|\pi-2\theta|}\). This analogy breaks down for \(\theta\approx(0.50\pm 0.05)\pi\).
Notable is the behaviour at exactly \(\frac{\pi}{2}\). For \(\theta=\frac{\pi}{2}\) any colouring should give a probability of \(\frac{1}{2}\), as it correlates with antipodal pairs of points. This means flipping the colouring of a pair of points leads to no difference in the success probability. A simulated annealing method will consequently always accept a flip, hence the resulting colouring is merely the result of the pseudorandom nature of the simulated annealing algorithm.
### Cogs and stripes regime
From \(\theta\geq 0.55\pi\), interesting new forms consisting of cogs and stripes are generated. See Figure 7 for the success probabilities and Figure 8 for a selection of the colourings. For \(0.55\pi\approx\frac{2\pi}{3.55}\leq\theta\leq\frac{2\pi}{3.50}\approx 0.57\pi\) a holed out cogwheel is generated where the number of cogs is 7. The system is found to always converge to this 7-cog cogwheel; a colouring where cogs correlate not with their neighbours, but the next nearest neighbours. For \(\theta\geq\frac{2\pi}{3.45}\approx 0.58\pi\) a colouring with 3 cogs is generated. This 3-cog shape can be regarded as the spherical version of the three-bladed fan from [1]. The difference is that the blades are more circularly shaped and that the three patches around the blades form a pole in the spherical problem. This can also be regarded as a striped configuration as found in [1], mapped onto the periodic spherical surface. The circular bulges are also found in the planar stripes, although in the spherical problem these bulges are
more prominent. Considering the slow transition from rings with cogs to stripes, we will refer to this regime as the "cogs and stripes" regime.
For \(\theta\) between \(0.61\pi\) and \(0.64\pi\) the extrusions of the 3-cog solution become more irregular, until at \(\theta\approx 0.66\pi\) the configuration assumes 5 extrusions. With increasing \(\theta\) the extrusions flatten out, until at \(0.73\pi\) one is left with a disc at the pole and a band in the other hemisphere. With increasing \(\theta\) the number of bands increases; \(0.74\pi\) gives a disc at the pole and two bands, \(0.82\pi\) gives a pole and three bands. Just like in the cogwheel regime, angles where the colourings change correspond to dips in the success probability, see Figure 7. From \(\theta=0.82\pi\) onwards, the number of stripes increases even more and stripe width decreases. From \(0.93\pi\) the bands become irregular and non-circular stripe elements start to appear. Towards \(\theta=\pi\) these line elements become thinner and thinner, while the orientation of the lines becomes nonuniform. At \(\theta=\pi\) the colouring is uniformly spread over the sphere.
The behaviour in the striped regime is most easily understood by relating the \(\theta>\frac{\pi}{2}\) problem to a similar \(\theta<\frac{\pi}{2}\) problem. As explained in Sec. 2.2 a correlation between points separated by an angle \(\theta\in[0,\pi]\) is equivalent to a point's antipodal point being anticorrelated with the points at angle \(\pi-\theta\). Consequently, for \(\theta\geq\frac{\pi}{2}\), the system acts like an Ising model with negative coupling between points separated by angle \(\pi-\theta\leq\frac{\pi}{2}\). Put in the context of the grasshopper, the system with \(\theta>\frac{\pi}{2}\) converges to the least successful colouring for the angle \(\pi-\theta\). This relation explains the generation of stripe colourings and their behaviour towards \(\theta=\pi\). With a stripe width that scales with \(\pi-\theta\), these lines have a high anti-correlation with the empty spaces in between the stripes.
Figure 8: Selection of colourings for \(\theta\) between \(0.55\pi\) and \(\pi\). Colour denotes success probability of grid cells and the same colourmap is used for all figures.
Theoretically at angle \(\theta=\pi\) the success probability should be zero for any antipodal colouring. However, since the delta function in the correlation is smoothed out over a finite interval, a point's success probability is increased if the point is the same colour as the points separated at an angle close to \(\pi\). Using again the relation between maximum colourings for \(\theta\) being minimum colourings for \(\pi-\theta\), jumping angle \(\theta=\pi\) in combination with smoothing of the delta function causes neighbouring states to anticorrelate with each other. This means the system converges to the ground state of an Ising model with negative coupling between neighbours.
### Notes on other methods
Noteworthy is that all of the reported solutions can be found using any type of initialisation. In the cogwheel regime, trivially an initialisation with a cogwheel solution with a close number of expected cogs works best. This guarantees that even with a quick cooling scheme simulated annealing does not get stuck in less successful solutions like the described double-cog cogwheel or the hemisphere colouring. The hemisphere and random initialisations work in the cogwheel regime, but these require a slow cooling procedure. Also interesting is that in general the greedy algorithm performs just as well as simulated annealing. In the transitions of the cogwheel regime the greedy algorithm is prone to getting stuck, especially when a hemisphere colouring is used. Using random or cogwheel initialisation with multiple runs, however, it produces results consistent with simulated annealing. In other regimes the greedy algorithm and simulated annealing have always been found to perform almost equally well. This is further elaborated on in the discussion.
## 5 Discussion and outlook
The numerical results in this essay have provided qualitative information on the optimal colourings of the grasshopper for the whole range of possible jumping angles \(0\leq\theta\leq\pi\). The results are remarkably in line with the planar results found in [1]. For angles between \(0.10\pi\) and \(0.41\pi\) cogwheel solutions seem to always be optimal, although in contrast to the planar problem the number of cogs is restricted to odd integers by the antipodality condition. For \(\theta\) between \(0.41\pi\) and \(0.55\pi\) critical colourings are found that can be associated to Ising model behaviour just above the critical point. For \(\theta>0.56\pi\) shapes with rings and cogs appear, and these turn into stripes as \(\theta\) increases.
### Resolution
The smallest angle cogwheel-resembling solution was found for \(\theta=\frac{2\pi}{37}\approx 0.05\pi\). For smaller \(\theta\) hemisphere solutions are found with irregular boundaries. This can be partly explained by the difference in success probability between a hemisphere solution and a cogwheel solution becoming progressively small with decreasing \(\theta\). The energy landscape is therefore very flat, which results in many small steps being needed to escape a local minimum, e.g. the hemisphere solution. Another factor is the resolution. This research used a set-up with 163842 points, which corresponds to a \(h\) of around 0.01. This means the correlation function is smeared out over a range of 0.04. For small angles this is not much smaller than the width of a cog, e.g \(\frac{\pi}{37}\approx 0.085\) for the 37 cog solution. This resolution flattens out the energy landscape even more, making it harder for the SA algorithm to find the minimum. The SA algorithm is evaluated in Section 5.3.
Achieving a higher resolution is possible by using a finer grid. A compelling idea for this would be using a nonuniform grid with higher resolution around the equator. Using a non-uniform grid would raise challenging questions on how the correlation between points in differently spaced regions is best discretised.
### Exploration optimisation function
As the greedy algorithm only allows monotonically increasing success probability, it is interesting that the greedy algorithm performs comparably to the SA approach. This seems to indicate that the underlying optimisation function with the spherical geometry might have an interesting structure.19 Understanding the optimisation function better could provide tools that solve the problem more efficiently, or lead to methods that guarantee a global optimum. For example, an interesting way forward might be finding an (approximate) expression for the shape of the cogs by assuming periodicity around the equator.
Footnote 19: Another explanation is that the SA process is malfunctioning and consequently not performing better than the greedy approach. This idea is discussed in Sec. 5.3
### Improvements SA process
The simulated annealing process of this essay was not perfect. For small angles \(\theta\), it converged to hemisphere solutions and in more difficult settings, as is the case in between cogwheel colourings, it required a lot of steps. Multiple improvements are possible. Firstly, it is worth considering which definition is used for nearest neighbour in the simulated annealing or greedy algorithm. In this essay, a neighbouring state is defined as any state reachable by flipping the colouring of a single antipodal pair of points. This definition is restricting, as it requires a lot of small steps to leave a local minimum. Other definitions might be utilised. An example would be to use _a priori_ knowledge of the solution, by also considering as neighbours the states reachable by specific simultaneous flips of a combination of correlated points.20
Footnote 20: An example of such a move: for jumping angle close to \(\frac{2\pi}{n_{o}}\), flipping \(n_{o}\) equidistant points on a circle around the z-axis. In this way periodic solutions are easier to reach.
Another way to make the SA process more efficient is by using continuous time Monte Carlo updates [22] when \(T\) gets sufficiently small. When \(T\) is small, there are only a very small number of points that have a significant probability of being accepted by the Metropolis-Hastings acceptance mechanism (see Sec. 3.4.2). Continuous time Monte Carlo picks a pair from the pairs that decrease the success probability less than a set threshold, and accepts the flip deterministically. This requires fewer steps in the last part of the cooling process.
Lastly it is possible to use simulated annealing together with parallel tempering [23], as also used in [1]. The combination of these methods reduces the probability the algorithm gets stuck in local optima.
### Extensions to the Grasshopper Problem
There are countless ways of generalising the grasshopper problem, each of which is interesting to explore in its own right. A first possibility is to drop the antipodality constraint. This research has performed an elementary investigation of the problem without the antipodality condition in the cogwheel regime; this generated colourings consistent with the antipodal findings.21 One could further investigate this and make comparisons to the antipodal problem.
Footnote 21: A trivial difference is, however, that without the antipodality condition cogwheel colourings are not constrained to odd numbers of cogs.
The problem can be extended to different spaces and different dimensions, for example platonic solids, higher-dimensional spherical surfaces \(\mathbb{S}^{n}\) or torus \(T^{n}\). Another option is to extend the problem to a different metric. One could for example explore the cube with an \(l^{1}\) norm. Note that in the discretised case every problem is reduced to a graph with nodes and edges as the cells and correlations, for which one is requested to select a subgraph consisting of exactly half the nodes such that the resulting subgraph has maximum weight (given by the correlations between all points of the subgraph). This means methods for solving the problem could be very similar, only requiring a different grid discretisation and/or different definition for the distance as used for the correlation function. After the correlation matrix between cells
is calculated, an identical simulated annealing or greedy algorithm can be used for finding the maximum weighted subgraph.
Another generalisation is changing the way the grasshopper jumps. Forces can be included, such that symmetry in correlation between points is broken. One could also generalise the 1 jump to \(N\) jumps, with the objective being either that the grasshopper lands on the grass in the \(N\)th jump, or also that it cannot jump off the lawn in the first \(N-1\) jumps. Additionally, the jumping distance can be made variable. For example, a probability density can be used for the grasshopper's jumping angle.
Finally, it is worth relating the problem back to the original quantum context. In this essay bounds are sought for the classical correlation function, in the case Alice and Bob use an opposite colouring: the correlation between their measurements for angle \(\theta=0\) is -1. A more general problem would be finding two independent colourings; one for Alice and one for Bob. Returning to the analogy of the grasshopper, this could be explained as Alice and Bob needing to seed exactly half of the sphere respectively with grass and flowers, allowing flowers and grass to exist in the same place. Given that the grasshopper lands on any part of the grass (Alice's colouring), the objective is to maximise the probability that after jumping an angle \(\theta\), it will land on a flowered patch (Bob's colouring). This essay has explored the specific case that a patch is either covered in both grass and flowers, or in neither. Without this assumption more general Bell inequalities can be explored.
## 6 Conclusion
This research has studied the Grasshopper Problem on the surface of the unit sphere. The problem was discretised, after which a simulated annealing process was used to search for the optimal solution. Results are consistent with the planar results of [1]. Figure 9 summarises the success probability as a function of \(\theta\) and corresponding colouring behaviour. Just like in the planar problem, for small \(\theta\) (\(0.10\pi\leq\theta\leq 0.41\pi\)) cogwheel solutions seem to be optimal. Along the same lines as the planar problem, the regime of cogwheel solutions is followed by a critical regime (\(0.41\pi\leq\theta\leq 0.55\pi\)), where unconnected coloured domains appear that decrease in size as \(\theta\) converges to \(\frac{\pi}{2}\). The same behaviour in reverse is observed when \(\theta\) increases further towards \(0.55\pi\). From \(\theta=0.55\pi\) the generated colourings resemble holed out cogwheels. For \(\theta>0.57\pi\) stripes appear. At first, these stripes display extrusions spaced at angle close to \(\theta\), just like in the cogwheel solution. With increasing \(\theta\), these extrusions flatten out and the number of stripes increases. Stripe width can be seen to scale with \(\pi-\theta\). For \(\theta\) close to \(\pi\) the stripes become thin and nonuniform in orientation.
For angles smaller than \(\frac{2\pi}{25}\) mostly hemisphere solutions with irregular boundaries have been found. Occasionally for small \(\theta\) boundaries do display cogs, but these instances are rare and the colourings irregular. This is speculated to be the result of a suboptimal simulated annealing implementation, combined with a finite resolution. Improving the simulated annealing process might resolve this problem. Promising possibilities for this are using _a priori_ knowledge while considering neighbouring states, using parallel tempering [23] or using continuous time Monte Carlo updates [22] for small temperatures. For studying set-ups with small angles \(\theta\), further research should consider increasing the resolution. An interesting possibility is using a non-uniform grid, although the effects of this on the correlation function have to be taken into account.
Analysis of the optimisation function in the context of the \(\mathbb{S}^{2}\) geometry is also recommended. In this research a simulated annealing method and greedy algorithm have yielded consistent results for different settings. Better understanding of the optimisation function could explain this behaviour, and offer new tools to tackle the grasshopper problem.
Multiple extensions to the problem have been considered. One could change the problem's space, drop the antipodality constraint or redefine the way the grasshopper jumps. In the context of quantum information further research could study the problem of finding two independent
colourings: one for each observer in an EPR-Bohm experiment. This way the Grasshopper Problem could shed light on more general Bell inequalities, inter alia helping to unlock the potential of quantum cryptography.
Figure 9: Summary success probability and colouring behaviour. Showing cogwheel regime (\(0.10\pi\leq\theta\leq 0.41\pi\)), critical regime (\(0.41\pi\leq\theta\leq 0.55\pi\)) and cogs and stripes regime (\(0.55\pi\leq\theta\leq\pi\)).
## Acknowledgements
This essay was originally written as part of Part III of the Mathematical Tripos at the University of Cambridge. I would like to warmly thank Adrian Kent, who set the essay and introduced me to this fascinating problem. His suggestions at the start of the project and feedback afterwards were very helpful, and crucial to the project's success.
|
2301.07818
|
Hierarchical Reinforcement Learning Based Traffic Steering in Multi-RAT
5G Deployments
|
In 5G non-standalone mode, an intelligent traffic steering mechanism can
vastly aid in ensuring smooth user experience by selecting the best radio
access technology (RAT) from a multi-RAT environment for a specific traffic
flow. In this paper, we propose a novel load-aware traffic steering algorithm
based on hierarchical reinforcement learning (HRL) while satisfying diverse QoS
requirements of different traffic types. HRL can significantly increase system
performance using a bi-level architecture having a meta-controller and a
controller. In our proposed method, the meta-controller provides an appropriate
threshold for load balancing, while the controller performs traffic admission
to an appropriate RAT in the lower level. Simulation results show that HRL
outperforms a Deep Q-Learning (DQN) and a threshold-based heuristic baseline
with 8.49%, 12.52% higher average system throughput and 27.74%, 39.13% lower
network delay, respectively.
|
Md Arafat Habib, Hao Zhou, Pedro Enrique Iturria-Rivera, Medhat Elsayed, Majid Bavand, Raimundas Gaigalas, Yigit Ozcan, Melike Erol-Kantarci
|
2023-01-18T23:05:58Z
|
http://arxiv.org/abs/2301.07818v1
|
# Hierarchical Reinforcement Learning Based Traffic Steering in Multi-RAT 5G Deployments
###### Abstract
In 5G non-standalone mode, an intelligent traffic steering mechanism can vastly aid in ensuring smooth user experience by selecting the best radio access technology (RAT) from a multi-RAT environment for a specific traffic flow. In this paper, we propose a novel load-aware traffic steering algorithm based on hierarchical reinforcement learning (HRL) while satisfying diverse QoS requirements of different traffic types. HRL can significantly increase system performance using a bi-level architecture having a meta-controller and a controller. In our proposed method, the meta-controller provides an appropriate threshold for load balancing, while the controller performs traffic admission to an appropriate RAT in the lower level. Simulation results show that HRL outperforms a Deep Q-Learning (DQN) and a threshold-based heuristic baseline with 8.49%, 12.52% higher average system throughput and 27.74%, 39.13% lower network delay, respectively.
Multi-RAT, traffic steering, hierarchical reinforcement learning
## I Introduction
Managing user traffic in non-standalone (NSA) fifth generation new radio (5G NR) is challenging since the traffic can be directed either to the 5G or to the long term evolution (LTE) network. In NSA, user equipments (UEs) access the multiple radio access technologies (multi-RAT) using dual connectivity (DC) [1, 2]. Each type of RAT has different abilities to provide service to UEs with diverse quality of service (QoS) requirements. However, if the traffic is always steered to the same base station with a certain RAT that can best serve the QoS requirements, this may result in an unbalanced load distribution. This will eventually cause high delay in the network, leading to packet drops and negatively impacting the average system throughput. If traffic with high load arrives and data packets are aggregated into a flow, they cannot be segregated again [3]. Therefore, if a packet is forwarded to a congested queue, it will suffer from a long waiting time until the queue is emptied. Considering all these issues, it becomes important to develop a load-aware, robust traffic steering scheme.
Throughput-hungry applications that emerged after 5G deployments have reached an unprecedented level [4]. These applications require stringent fulfillment of QoS demands along with flexible and intelligent network management entities. Furthermore, the architectural reformation introduced in the open radio access network (O-RAN) [5, 6, 7] can provide the RAN with openness and the required intelligence. The radio controller in an O-RAN architecture is divided into two parts, the near-real-time RAN intelligent controller (near-RT-RIC) and the non-real-time RAN intelligent controller (non-RT-RIC). The non-RT-RIC is at the top of the hierarchy and serves as a software platform for rApps designed for high-level RAN optimization. It has visibility into network information and provides artificial intelligence (AI)-enabled feeds and recommendations to the near-RT-RIC. The near-RT-RIC in the lower level enables control and optimization of RAN elements. The programmable and highly modular structure of future disaggregated RANs is quite suitable for developing advanced AI-based modules to perform network optimization via robust traffic steering schemes.
A machine learning (ML)-enabled traffic steering scheme can be a great tool to optimize network performance in a multi-RAT environment. Considering this fact, attempts have been made to design traffic steering schemes for 5G using ML, especially reinforcement learning (RL) [8]. RL algorithms can provide us with the ease of avoiding any dedicated optimization model since we can transform optimization problems into Markov decision processes (MDPs). Furthermore, compared to conventional RL algorithms, hierarchical reinforcement learning (HRL) can provide better exploration efficiency via a meta-controller and a controller instead of using a standalone agent [4]. In particular, the bi-level architecture of O-RAN, having near- and non-RT-RICs, makes it a suitable candidate to embed the meta-controller and controller in the O-RAN hierarchy as rApps and xApps, respectively. Therefore, different from the previous works, we propose an HRL-based traffic steering algorithm that is applicable to the O-RAN architecture.
In this paper, we intend to maintain QoS requirements of all the traffic types simultaneously by proposing an HRL-based traffic steering scheme that at the same time is able to perform threshold-based load balancing in a disaggregated RAN environment. The threshold is associated with the queue length of each RAT which is provided by the meta-controller and the controller in the lower level is responsible for RAT specific traffic steering. We compare the performance of the proposed method with a deep reinforcement learning (DRL)-based baseline namely deep Q-learning (DQN) and a threshold-based heuristic baseline. The proposed scheme gains as high as 8.49% and 12.52% improved average system throughput and 27.74% and 39.13% lower network delay compared to the DQN and threshold-based heuristic baseline
algorithm, respectively.
We organize the remaining parts of the paper as follows: Section II and III discuss the related works to our research conducted in this paper, and the system model along with problem formulation, respectively. The proposed HRL-based traffic steering algorithm is covered in Section IV. Performance comparison of the proposed method along with the baseline algorithms is presented in Section V. Finally, conclusions are presented in Section VI.
## II Related work
Network optimization has been conducted via traffic steering schemes in the literature since the emergence of 4G/LTE networks. 5G deployments can benefit greatly from an efficient traffic steering scheme since it can vastly help to deal with the increased number of users with DC, multiple traffic types, and the dense deployment of small cells. A threshold-based traffic steering scheme is proposed in [9] that performs network optimization based on channel condition, load level at each RAT, and service type. Dryjanski et al. propose a traffic steering use case for O-RAN with predefined policies in which xApps are designed for spectrum management, cell assignment and resource allocation [10].
In recent years, RL algorithms have been used several times in the literature to develop traffic steering schemes for 5G multi-RAT environments in O-RAN. Fetemeh et al. propose an intelligent traffic steering scheme for O-RAN to handle unknown traffic demand using a recurrent neural network [11]. Cao et al. develop a federated learning-based scheme for O-RAN, in which each UE acts as an agent to make network access decisions independently [12]. An O-RAN based RAT allocation environment that enables RL agents to train their DQN models for steering their traffic between RATs is proposed in [13]. In comparison to the existing literature, the contributions of our proposal lie in developing an automated traffic steering mechanism specific to each RAT that can maintain the QoS requirements of different traffic types. Furthermore, it can provide interruption-less network connectivity, ensuring a smooth user experience via threshold-based load balancing in 5G NSA mode using HRL.
## III System Model and Problem Formulation
### _System Model_
In this work, we consider a multi-RAT network where multiple users are connected with 5G and LTE RATs via DC. There are small cells having 5G NR base stations (BSs) that can serve applications requiring high throughput and low latency. The small cells are within the range of a macro-cell that is served by one LTE BS (eNB). There are in total \(\Phi\) flows in the network, and each UE in the network has a traffic flow \(\phi\) that can be steered either to the eNB or to a gNB based on the decision of the HRL agent. The considered wireless system for 5G NSA mode is presented in Fig. 1.
There are two control loops in the system. The first one is the non-RT control loop, which has a latency larger than 1 s. This is where policies are set, and RAN analytics are gathered. The second control loop operates on a time frame larger than 10 ms and smaller than 1 s. In this time frame, our traffic steering xApp operates and produces actions to perform flow admission to a RAT.
The total downlink bandwidth \(B\) (in MHz) is divided into \(X_{RB}\) resource blocks. Each resource block group (RBG) \(\psi\) is assigned some transmission power \(\rho_{\psi,b}\) by a BS \(b\). The link capacity between UE \(u\) and BS \(b\) can be formulated as below:
\[\xi_{u,b}=\sum_{\psi=1}^{\Psi}\omega_{\psi}\log_{2}\left(1+\frac{\rho_{\psi,b}\zeta_{\psi,u,b}g_{\psi,u,b}}{\omega_{\psi}X_{0}+\sum_{\mu\in B\setminus\{b\}}\rho_{\psi,\mu}\zeta_{\psi,u,\mu}g_{\psi,u,\mu}}\right), \tag{1}\]
where \(\omega_{\psi}\) is the bandwidth of the \(\psi\), \(\rho_{\psi,b}\) is the transmit power of the BS, \(b\) on \(\psi\), \(g_{\psi,u,b}\) is the channel co-efficient and \(\zeta_{\psi,u,b}\) is the RBG's allocation indicator of the link \((\psi,u,b)\). \(X_{0}\) is the additive white Gaussian noise single-sided power spectral density. \(\rho_{\psi,\mu}\) is the transmit power of the interfering BS, \(g_{\psi,u,\mu}\) is the channel co-efficient, and \(\zeta_{\psi,u,\mu}\) is the allocation indicator of link \((\psi,u,\mu)\). The link capacity should not be exceeded as traffic flows pass through a link in the system
\[\sum_{\phi\in\Phi}\delta_{\phi}v_{u,b}^{\phi}\leqslant\xi_{u,b}\quad\forall(u, b)\in L, \tag{2}\]
where capacity demand of a flow is represented using \(\delta_{\phi}\), \(v_{u,b}^{\phi}\) is a binary variable which is '1' given that the link \((u,b)\) has been used from \(u\) to BS \(b\). It is '0' otherwise. \(L\) is the set of links and \(\Phi\) is the set of all the traffic flows.
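As a small illustration of eq. (1), the per-link capacity is a sum of per-RBG Shannon capacities. The helper below is a sketch in which the interference term is assumed to be precomputed as the sum over the other BSs, and all argument names are ours:

```python
import numpy as np

def link_capacity(omega, rho_b, g_b, zeta_b, interference, N0):
    """Capacity of a UE-BS link summed over resource block groups, following eq. (1).

    omega        : bandwidth of each RBG [Hz]
    rho_b, g_b   : serving-BS transmit power and channel coefficient per RBG
    zeta_b       : 0/1 allocation indicators of the serving BS per RBG
    interference : received interference power per RBG (sum over the other BSs)
    N0           : single-sided noise power spectral density
    """
    sinr = (rho_b * zeta_b * g_b) / (omega * N0 + interference)
    return float(np.sum(omega * np.log2(1.0 + sinr)))
```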
Fig. 1: 5G NSA deployment with macro and small cells controlled via intelligent controllers.
The proposed system model considers delay as the combination of transmission and queuing delay which is as follows \(D_{k,b}=D_{k,b}^{T}+D_{k,b}^{Q}\), where \(D_{k,b}^{T}\) is the transmission delay experienced for a specific traffic type \(k\), and \(D_{k,b}^{Q}\) is the queuing delay that took place for a certain traffic type \(k\) at BS \(b\) for a user \(u\).
### _Problem Formulation and QoS Requirements_
To conduct traffic steering for different traffic types having variant QoS requirements for delay and throughput, we define two parameters. First one is the delay parameter which is calculated as the ratio of the defined QoS requirement for delay (\(D_{QoS}\)) and the actual delay (\(D_{k,b}\)) experienced in the system for a specific traffic type (\(k\)). It is formulated as follows:
\[\varpi_{k,b}^{D}=\frac{D_{QoS}}{D_{k,b}}. \tag{3}\]
Similarly, we get the throughput parameter as the ratio of the throughput achieved by the system (\(T_{k,b}\)) running our algorithm and the minimum throughput required (\(T_{QoS}\))
\[\varpi_{k,b}^{T}=\frac{T_{k,b}}{T_{QoS}}, \tag{4}\]
The goal of the proposed scheme is to improve the system performance in terms of delay and throughput. To represent such goal, a new variable is initiated. The variable combines the delay and throughput parameters that were presented in eq. (3) and (4). It is as follows:
\[P=c_{1}(\varpi_{k,b}^{D})+c_{2}(\varpi_{k,b}^{T})-H, \tag{5}\]
where \(c_{1}\) and \(c_{2}\) are the weight factors and \(H\) is the handover penalty since excessive handover in the system would affect the system throughput. The network optimization problem that we want to address in this work is associated with this variable \(P\), and is as follows:
\[\begin{split}&\max\sum_{u\in U}\sum_{\phi\in\Phi}\sum_{b\in B}P_{u,\phi,b},\\ &\text{s.t.}\quad\sum_{(u,b)\in L}\beta^{\phi_{k}}\geq\beta^{\phi}\quad\forall\phi\in\Phi,\\ &\qquad\;\sum_{(u,b)\in L}D(u,b)v_{u,b}^{\phi}\leq D^{\phi}\quad\forall\phi\in\Phi,\end{split} \tag{6}\]
where \(\beta^{\phi_{k}}\) is the required bitrate for a particular type of traffic \(k\), and \(\beta^{\phi}\) is the available bitrate. Also, \(D^{\phi}\) represents the latency demand of flow \(\phi\in\Phi\) and \(D(u,b)\) is the latency of link \((u,b)\).
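For concreteness, the per-flow quantity \(P\) of eq. (5) combines the QoS ratios of eqs. (3) and (4). The sketch below uses placeholder weights \(c_{1}\), \(c_{2}\) and handover penalty \(H\); the numerical values, and the assumption that \(H\) is applied only when a handover occurs, are illustrative choices.

```python
def flow_reward(delay, throughput, D_qos, T_qos, handover, c1=0.5, c2=0.5, H=0.1):
    """Combined delay/throughput reward of eq. (5) for a single traffic flow."""
    delay_param = D_qos / delay        # eq. (3): > 1 when the delay requirement is met
    tput_param = throughput / T_qos    # eq. (4): > 1 when the throughput requirement is met
    return c1 * delay_param + c2 * tput_param - (H if handover else 0.0)
```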
On the one hand, steering the user traffic to the RAT that can best serve the QoS demands of that specific traffic type can significantly increase network performance. However, for the long-term performance, it is necessary to consider the high load imposed on the BS when the traffic load increases vastly. Therefore, to maximize the total objective, we have to provide an intelligent mechanism that can perform load balancing to satisfy the desired performance. Considering that, we introduce the HRL algorithm that can perform threshold-based load balancing in a dynamic manner.
## IV Proposed HRL-based Traffic Steering Scheme
In this section, we describe the proposed HRL-based traffic steering scheme. First, terminologies and notations related to HRL are discussed in brief and MDPs are defined. Next, we present how Q-values are updated and goals are selected. Finally, the load-aware HRL-based traffic steering algorithm is presented.
### _Hierarchical Reinforcement Learning_
In typical RL, the problem is defined as an MDP \(<S,A,T,R>\), where \(S\) is the set of states, \(A\) is the set of actions, \(T\) is the transition probability (\(T:S\times A\times S\)), and \(R\) is the reward function. A standalone agent interacts with the environment to maximize the reward [14].
Compared with traditional RL, in HRL the agent consists of two controllers, a meta-controller and a controller [15]. The MDP for HRL is rewritten as \(<S,A,T,R,G>\). \(G\) in the tuple indicates a set of goals. Based on the current state \(s\in S\), the meta-controller produces high-level goals \(g\in G\) for the controller. Next, these goals are translated into high-level policies. The controller is responsible for choosing low-level actions \(a\in A\) based on the high-level policies and, in the process of doing so, receives an intrinsic reward (\(r_{in}\)). Finally, the meta-controller gets an extrinsic reward (\(r_{ex}\)) from the environment and provides the controller with a new goal, \(g^{\prime}\). HRL can provide more efficient learning because of the hierarchy introduced in the architecture. By dividing the task into sub-goals, HRL allows more efficient management of RAN functionalities.
In our proposed hierarchical implementation of intelligence, the HRL model takes decisions at two time scales. The meta-controller in the top-level module (which could be placed in the non-RT-RIC) takes in the state perceived by the agent from the network environment and picks a new load balancing threshold as a goal. On the other hand, the controller, which can be embedded in the near-RT-RIC, uses both the state and the chosen goal to select actions until the episode is terminated. The models are trained using stochastic gradient descent at different temporal scales to optimize the expected future intrinsic reward for the xApp-based controller and the extrinsic reward for the rApp-based meta-controller. To summarize the whole process, we present the schematic of the proposed method in Fig. 2.
To transform the problem formulated in eq. (6) into HRL notations, the following MDP is defined for the meta-controller and controller.
* **State:** There are three elements in the set of states, \(s_{con}=\{F_{t},SINR_{r},Q_{l}\}\). Here, \(F_{t}\) represents the traffic flow. The second element of the set of states is the SINR measurements to represent the link quality between a BS and UE: \(SINR_{r}\)=\(\{SINR_{LTE},SINR_{NR}\}\). As for the last element in state space, we use queue length of both LTE and 5G NR RATs to represent load level: \(Q_{l}\)=\(\{Q_{l(NR)},Q_{l(LTE)}\}\).
* **Action:** Flow admission to the different RATs in a multi-RAT environment is considered in the action space which
is defined as: \(\{A_{L},A_{NR}\}\). Here, flow admission to the LTE RAT is presented by \(A_{L}\) and 5G RAT by \(A_{NR}\).
* **Intrinsic reward:** The intrinsic reward function (\(r_{in}\)) for the controller is same as eq. (5).
The meta-controller is responsible for high level policies for the agent. MDP definition for the meta-controller is stated as follows:
* **State:** The states of the meta-controller consists of the traffic type, SINR measurements, and queue length of each type of RAT: \(s_{meta}=\{F_{t},SINR_{r},Q_{l}\}\).
* **Goal for the controller:** Thresholds associated with the queue length is considered as the goals for the controller. Therefore, \(G=\{g_{1},g_{2},...,g_{n}\}=\{Th_{1},Th_{2},...,Th_{n}\}\). Transmission is differed to another RAT for load balancing based on this threshold.
* **Extrinsic reward:** The meta-controller is responsible for the overall performance of the whole system. Therefore, we have set the extrinsic reward function for the meta-controller as the objective of the problem formulation presented in eq. (6). The following equation is the average of the intrinsic reward over \(n\) steps: \[r_{ex}=\frac{1}{n}\sum_{\tau=1}^{n}r_{in,\tau}\quad\forall u\in U,\forall b\in B, \tag{7}\]
### _Q-value Update and Selection of Goals_
In this section of the paper, we present how to update the Q-values of the meta-controller and the controller, along with the goal and action selection strategies. The Q-values of the meta-controller are updated by:
\[\begin{split} Q_{M}^{N}(s_{meta},g_{meta})=Q_{M}^{O}(s_{meta},g_{meta})+\alpha(r_{ex}+\\ \gamma\max_{g}Q_{meta}(s^{\prime}_{meta},g,\theta_{1})-Q_{M}^{O}(s_{meta},g_{meta},\theta^{\prime}_{1})),\end{split} \tag{8}\] where \(s^{\prime}_{meta}\) is the next state, \(\alpha\) is the learning rate, and \(\gamma\) is the discount factor. \(\theta_{1}\) and \(\theta^{\prime}_{1}\) are the weights associated with the main network and the target network, respectively. The new and old values are represented as \(Q_{M}^{N}\) and \(Q_{M}^{O}\). This means the accumulated reward is brought by the state-goal pair (\(s_{meta},g_{meta}\)). Next, we use the \(\epsilon\)-greedy policy for goal selection, which balances the exploration and exploitation of goals so that long-term rewards are achieved.
\[\pi(s_{meta})=\begin{cases}\text{random goal selection},&rand\leqslant \epsilon\\ arg\max_{g}Q(s_{meta},g),&rand>\epsilon,\end{cases} \tag{9}\]
where \(rand\) is a random number drawn uniformly between 0 and 1 and \(\epsilon\) is less than 1.
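A minimal sketch of this \(\epsilon\)-greedy goal selection (eq. (9)); the goal set of queue-length thresholds and the Q-value interface are illustrative assumptions, not part of the system description above:

```python
import random

import numpy as np

# Hypothetical goal set: queue-length thresholds (fraction of the queue occupied).
GOALS = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]

def select_goal(q_values: np.ndarray, epsilon: float = 0.1) -> int:
    """Epsilon-greedy goal selection as in eq. (9).

    q_values holds Q(s_meta, g) for every candidate goal g, shape (len(GOALS),).
    Returns the index of the selected goal/threshold.
    """
    if random.random() <= epsilon:
        return random.randrange(len(GOALS))   # explore: pick a random goal
    return int(np.argmax(q_values))           # exploit: pick the best goal so far
```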
We update the Q-values of the controller using:
\[\begin{split} Q_{C}^{N}(& s_{con},g_{meta},a_{con})=Q_{ S}^{O}(s_{con},g_{meta},a_{con})\\ +\alpha(r_{in}+\gamma\max_{a}Q_{con}(s^{\prime}_{con},g^{\prime}_{meta},a, \theta_{2})-\\ Q_{S}^{O}(s_{con},g_{meta},a_{con},\theta^{\prime}_{2}))\end{split}, \tag{10}\]
where \(s^{\prime}_{con}\) is the next state and \(g^{\prime}_{meta}\) is the next goal produced by the meta-controller. \(\theta_{2}\) and \(\theta^{\prime}_{2}\) are the weights associated with the main network and the target network, respectively. The old and new Q-values for the controller are represented by \(Q_{S}^{O}\) and \(Q_{S}^{N}\), respectively. As before, we use the \(\epsilon\)-greedy policy for the controller's action selection.
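To make the temporal-difference form of eqs. (8) and (10) explicit, the following is a minimal tabular sketch; the paper realizes both updates with DQNs (main and target networks), so the dictionaries and function names below are illustrative assumptions only:

```python
from collections import defaultdict

# Tabular stand-ins for the meta-controller and controller Q-functions.
Q_meta = defaultdict(float)   # keyed by (s_meta, g)
Q_con = defaultdict(float)    # keyed by (s_con, g, a)

def update_meta(s_meta, g, r_ex, s_meta_next, goals, alpha=0.01, gamma=0.9):
    # Eq. (8): move Q_M(s_meta, g) toward r_ex + gamma * max_g' Q_M(s'_meta, g').
    target = r_ex + gamma * max(Q_meta[(s_meta_next, g2)] for g2 in goals)
    Q_meta[(s_meta, g)] += alpha * (target - Q_meta[(s_meta, g)])

def update_controller(s_con, g, a, r_in, s_con_next, g_next, actions,
                      alpha=0.01, gamma=0.9):
    # Eq. (10): the same update, conditioned on the goal chosen by the meta-controller.
    target = r_in + gamma * max(Q_con[(s_con_next, g_next, a2)] for a2 in actions)
    Q_con[(s_con, g, a)] += alpha * (target - Q_con[(s_con, g, a)])
```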
### _Baseline Algorithms_
In this work, we use two baselines. The first is a DQN-based traffic steering scheme that uses a similar system model and a static load-balancing threshold [16].
The second baseline, used to show the performance improvement of the proposed HRL-based traffic steering scheme, is a threshold-based heuristic algorithm [9]. The algorithm utilizes a predefined threshold calculated from the load at each station, the channel condition, and the user service type. The threshold (\(Th_{t}\)) is the mean of these metrics, and a variable \(W\) is computed as a weighted sum of the same parameters. The traffic steering decision is taken by comparing \(W\) with \(Th_{t}\).
## V Performance Evaluation
### _Simulation setup_
We deploy a MATLAB-based simulation environment that includes one eNB and four gNBs, serving one macro cell and four small cells. There are 60 users in the simulation environment and three traffic types: video, gaming, and voice. Video traffic has the highest throughput requirement; to test how the proposed traffic steering algorithm performs under such a requirement, we set its proportion to 50%. Gaming traffic has the strictest delay requirement and makes up 30% of the traffic, while voice traffic accounts for the remaining 20%. QoS requirements of the different traffic types are defined based on 3GPP
Fig. 2: HRL integration with intelligent controllers at different timescales.
specifications and the specifications presented in [17]. For voice traffic, the packet size, \(T_{QoS}\), and \(D_{QoS}\) are set to 30 bytes, 0.1 Mbps, and 100 ms, respectively. The same parameters are set to 250 bytes, 10 Mbps, and 80 ms for video traffic, and to 120 bytes, 5 Mbps, and 40 ms for gaming traffic.
We consider a multi-RAT environment in 5G NSA mode where LTE and 5G NR BSs serve jointly, following the architecture described in [18]. The LTE and 5G NR RATs use carrier frequencies of 800 MHz and 3.5 GHz, transmission powers of 40 W and 20 W, and bandwidths of 10 MHz and 20 MHz, respectively.
### _Simulation results_
Performance of the proposed HRL-based algorithm is evaluated using three KPIs: packet drop rate, average system throughput, and network delay. The proposed HRL-based method outperforms the threshold-based heuristic and the DRL baseline with 44.57% and 24.1% decreases in the drop rate, respectively. This gain is achieved because of the load balancing performed with the queue-length threshold, combined with the traffic steering actions of the lower-level controller. Fig. 3 shows the impact of different thresholds on average system throughput. When the traffic load is 5 Mbps, the highest throughput is obtained at a threshold of 0.8 (80% of the data queue occupied). When the threshold is 1, the throughput drops sharply because packets are aggregated only when the queue is full and transmission is not possible until the associated queue is emptied. A similar effect is visible when the traffic load is 10 Mbps, except that the best threshold is lower (0.7) because of the higher packet arrival rate at increased load.
Fig. 4 presents a comparative analysis between the proposed HRL-based traffic steering scheme and the baseline algorithms in terms of system throughput. The proposed method outperforms the threshold-based heuristic and the DRL baseline by achieving 12.52% and 8.49% higher throughput on average, respectively. Since the DRL baseline is not tailored to handle dynamic changes in traffic load and perform load balancing accordingly, it suffers a higher packet drop rate, which reduces the overall system throughput.
Fig. 5 presents the performance comparison among the proposed HRL-based scheme and the baseline algorithms in terms of network delay. The proposed scheme obtains 27.74% and 39.13% decreases in network delay compared to the DRL and threshold-based heuristic baselines, respectively. This is because of more efficient traffic flow management via threshold-based load balancing at the BS level.
Fig. 6 presents how traffic is steered between RATs when the threshold is crossed. A high load is imposed when five UEs are simultaneously served by the same small-cell base station in the 2100th time slot. As a result, the fourth UE's data traffic is steered to a different RAT (2450th time slot), and the same happens in that time slot for the sixth UE, whose traffic is also steered to a different RAT.
## VI Conclusions
AI-enabled traffic steering approaches are highly effective in achieving high performance, especially when multiple RATs and traffic types are involved in dense deployments. In this paper, we have proposed a novel load-aware HRL algorithm that performs QoS-centric, RAT-specific traffic steering using two levels of controllers to satisfy the QoS demands of the different traffic types in the network. Optimal threshold
Fig. 4: Average system throughput versus traffic load.
Fig. 5: Network delay versus traffic load.
Fig. 3: Impact of different thresholds for 5Mbps and 10Mbps traffic load per user.
selection associated with the queue length of each BS, combined with the AI-enabled traffic steering mechanism, has led to increases in average system throughput of 8.49% (with respect to (wrt) the DRL-based baseline) and 12.52% (wrt the threshold-based baseline), and decreases in network delay of 27.74% (wrt the DRL-based baseline) and 39.13% (wrt the threshold-based baseline). In future studies, we plan to develop traffic steering schemes that handle more complex RAN scenarios.
## Acknowledgement
This work has been supported by MITACS and Ericsson Canada, and NSERC Canada Research Chairs and NSERC Collaborative Research and Training Experience Program (CREATE) under Grant 497981.
|
2303.11803
|
Fighting over-fitting with quantization for learning deep neural
networks on noisy labels
|
The rising performance of deep neural networks is often empirically
attributed to an increase in the available computational power, which allows
complex models to be trained upon large amounts of annotated data. However,
increased model complexity leads to costly deployment of modern neural
networks, while gathering such amounts of data requires huge costs to avoid
label noise. In this work, we study the ability of compression methods to
tackle both of these problems at once. We hypothesize that quantization-aware
training, by restricting the expressivity of neural networks, behaves as a
regularization. Thus, it may help fighting overfitting on noisy data while also
allowing for the compression of the model at inference. We first validate this
claim on a controlled test with manually introduced label noise. Furthermore,
we also test the proposed method on Facial Action Unit detection, where labels
are typically noisy due to the subtlety of the task. In all cases, our results
suggest that quantization significantly improves the results compared with
existing baselines, regularization as well as other compression methods.
|
Gauthier Tallec, Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
|
2023-03-21T12:36:58Z
|
http://arxiv.org/abs/2303.11803v1
|
# Fighting over-fitting with quantization for learning deep neural networks on noisy labels
###### Abstract
The rising performance of deep neural networks is often empirically attributed to an increase in the available computational power, which allows complex models to be trained upon large amounts of annotated data. However, increased model complexity leads to costly deployment of modern neural networks, while gathering such amounts of data requires huge costs to avoid label noise. In this work, we study the ability of compression methods to tackle both of these problems at once. We hypothesize that quantization-aware training, by restricting the expressivity of neural networks, behaves as a regularization. Thus, it may help fighting overfitting on noisy data while also allowing for the compression of the model at inference. We first validate this claim on a controlled test with manually introduced label noise. Furthermore, we also test the proposed method on Facial Action Unit detection, where labels are typically noisy due to the subtlety of the task. In all cases, our results suggest that quantization significantly improves the results compared with existing baselines, regularization as well as other compression methods.
Gauthier Tallec\({}^{1,*}\), Edouard Yvinec\({}^{1,2,*}\), Arnaud Dapogny\({}^{2}\), Kevin Bailly\({}^{1,2}\)+Sorbonne Universite\({}^{1}\), CNRS, ISIR, f-75005, 4 Place Jussieu 75005 Paris, France
Datakalab\({}^{2}\), 114 boulevard Malesherbes, 75017 Paris, France
Footnote †: stands for equal contribution. This work has been supported by the french National Association for Research and Technology (ANRT), the company Datakalab (CIFRE convention C2017396) and by the French National Agency (ANR) (FacIL, project ANR-17-CE33-0002). This work was granted access to the HPC resources of IDRIS under the allocation 2022-AD011013384 made by GENCI.
Quantization, Deep Learning, Computer Vision, Facial Expression Recognition, Noisy labels
## 1 Introduction
In recent years, a wide range of computer vision tasks have benefited from the public release of large-scale datasets, including but not limited to image classification [1], object detection [2], image segmentation [3] and face analysis [4]. These datasets, in combination with the growing availability of the resources required to train over-parametrized neural networks, have enabled computer vision solutions to achieve remarkable performance [5, 6, 7]. However, this trend is undermined by two major factors: difficulty of deployment [8, 9] and ill-annotated datasets [10, 11]. While the former can be tackled by using compression techniques [8, 12, 9], the latter is more challenging to address. The main reasons behind the difficulty of detecting and correcting wrong annotations are the sheer size of these sets of millions of examples and the subtlety of the tasks (_i.e._ low agreement among experts).
Due to their ability to approximate any continuous function [13], over-parametrized neural networks can overfit on such noisy datasets, which hinders their ability to generalize to properly annotated sets. This issue falls within the broader problem of overfitting. Regardless of the quality of the annotations, there exists a wide range of methods to avoid overfitting, the most notable ones being early stopping [14], weight decay [15], label smoothing [16] and dropout [17].
More generally, the techniques that limit the expressive power of neural networks are well known for their ability to reduce overfitting of over-parameterized networks. Such reduction of expressivity can be achieved with neural network compression methods. For instance, a simple manner to reduce the expressivity of neural networks is pruning, which consists in removing neurons and feature maps during or prior to training.
Figure 1: Training (blue) and evaluation (orange) loss between baseline (dashed) training and quantization aware (plain lines) training on BP4D. While the baseline network quickly overfits (as indicated by the eval error rising while the training error decreases) due to noisy labels, the quantized network noticeably avoids this pitfall.
In [18], pruning prevents overfitting in the case of classification, during training, by progressively discarding the least important features. Bramer [19] performed a similar study on classification trees with pruning at initialization. Another effective compression approach is quantization, which maps floating point operations to low-bit fixed point operations and as such reduces the expressivity of the model. Such techniques have been applied to enhance robustness to adversarial attacks [20], which leads to improved generalization and reduced overfitting. In [21], the authors apply quantization to improve suggestive annotations [22], which extracts a representative and small-sized balanced training dataset based on uncertainty metrics. This method reduces overfitting on small training samples.
To the best of our knowledge, the ability of such techniques to fight overfitting on noisy data has never been tested. Instead, prior works focused on the effects of compression on the learning procedure and the resulting models, which were assumed to be trained on perfectly labeled data. To complement the existing results, we hypothesize that _quantization-aware training, by restricting the expressivity of neural networks, is a very effective method for limiting overfitting on noisy annotated data_, as can be observed in Fig. 1. To test this hypothesis, we conduct a comparison with the most commonly used regularization techniques as well as pruning.
## 2 Fighting overfitting in deep neural networks
### Regularization Techniques
Regularization techniques are defined as any technique that helps improve the model generalization [23]. Usually, these methods focus on improving accuracy on a defined test set and do not change the computations required at inference. The most commonly used ones are early stopping [14], weight decay [15], dropout [17] and label smoothing [16]. For what follows, let's consider a neural network \(F\) with \(L\) layers \(f_{l}\) and weights \(W_{l}\).
**Early Stopping:** As we train \(F\), the accuracy on the training and validation data increases. When overfitting occurs (Fig. 1 a.), the accuracy on the validation and the test sets starts to decrease while it continues to increase on the training set. Early stopping consists in using a validation set in order to find when to stop training the model.
**Weight Decay:** Weight decay fights overfitting by introducing a prior on the scale of all the weight values in \(F\). Concretely, it consists in the addition of an L2 penalization term to the training loss \(\mathcal{L}_{w}=\alpha_{w}\sum_{l=1}^{L}\|W_{l}\|_{2}^{2}\) where \(1/\alpha_{w}\) is proportional to the scale imposed on the weights.
**Dropout:** Dropout curbs overfitting by randomly masking parts of \(F\) in order to learn to predict with a sub-network, avoiding co-adaptation of the weights. Formally, during training, for each example, each scalar weight values is set to \(0\) with probability \(p\). In test, all weights are multiplied by \(p\) to account for their frequency of presence in train.
**Label Smoothing:** Label smoothing reduces overfitting by preventing neural network over-confidence. To do so it modifies the ground truth label as follows :
\[\mathbf{y}_{\alpha_{s}}=(1-\alpha_{s})\mathbf{y}+\alpha_{s}\frac{1}{C}, \tag{1}\]
where \(\alpha_{s}\) controls the smoothing intensity and \(C\) is the number of classes in the classification problem. For multi-task binary classification (e.g. action unit (AU) detection), label smoothing is applied label-wise with \(C=2\) for presence and absence of a given label.
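A minimal sketch of eq. (1), assuming one-hot targets; the function name is an illustration, not part of the method description above:

```python
import torch

def smooth_labels(y_onehot: torch.Tensor, alpha_s: float = 0.1) -> torch.Tensor:
    """Eq. (1): y_alpha = (1 - alpha_s) * y + alpha_s / C, with C classes."""
    num_classes = y_onehot.shape[-1]
    return (1.0 - alpha_s) * y_onehot + alpha_s / num_classes

# For AU detection each label is smoothed independently with C = 2, so a
# present AU becomes 0.95 and an absent one 0.05 when alpha_s = 0.1.
```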
Note that none of the aforementioned methods changes the inference runtime of the network, hence they do not address the efficiency problem. For this reason, we propose to evaluate the ability of compression techniques to act as regularization.
### Compression techniques
**Pruning:** Let's assume that \(F\) is pre-trained. Then for each \(f_{l}\), we perform standard magnitude-based structured pruning [24] over the weight tensors \(W_{l}\) by removing the neurons with the lowest \(L^{1}\) norm. The method follows from the intuition that smaller weights induce smaller activations, which themselves contribute less to the decision making. Consequently, as the expressivity of the model is reduced, it is less likely to overfit.
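A minimal sketch of this structured magnitude pruning applied to a fully connected layer; the paper prunes feature maps of convolutional layers in the same spirit, and the pruning ratio here is only an illustrative default:

```python
import torch
import torch.nn as nn

def prune_linear(layer: nn.Linear, ratio: float = 0.75) -> None:
    """Zero out the output neurons of `layer` whose weight rows have the
    smallest L1 norm (structured magnitude pruning sketch)."""
    with torch.no_grad():
        norms = layer.weight.abs().sum(dim=1)     # L1 norm of each output neuron
        n_prune = int(ratio * norms.numel())
        idx = torch.argsort(norms)[:n_prune]      # neurons with the smallest norms
        layer.weight[idx] = 0.0
        if layer.bias is not None:
            layer.bias[idx] = 0.0
```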
**Quantization:** The standard quantization operator, in \(b\) bits, is defined by \(\text{quantized}(X)=\left\lfloor X\frac{2^{b-1}-1}{\lambda_{X}}\right\rceil\), where \(\lfloor\cdot\rceil\) is the rounding operation and \(\lambda_{X}\) is a scaling parameter specific to \(X\) which ensures that the support of \(X\) is correctly mapped to \([-(2^{b-1}-1);2^{b-1}-1]\). It is common to have scalar values for \(\lambda_{X}\) when quantizing activations (_i.e._ layer inputs) and vector values for weight tensors (per-channel quantization). The activation scales are estimated per-batch during training as \(\lambda_{X}=\max\{|X|\}\) and the inference value is updated using an exponential moving average. On the other hand, weight scales are always computed as the per-channel maximum of \(|W_{l}|\). When optimizing the weight values \(W\), the rounding operator introduces zero gradients almost everywhere, which is problematic for gradient-based optimization. To circumvent this limitation, straight-through estimation [25] is an efficient solution: the gradient operator associated with the quantization process is replaced by the identity function. In this method, the batch-normalization layers are removed from the network architecture.
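A minimal sketch of this fake-quantization with a straight-through estimator; it uses a single per-tensor scale, whereas the per-channel weight scales and EMA-tracked activation scales described above are omitted for brevity:

```python
import torch

def quantize_ste(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Fake-quantize `x` to signed `bits`-bit values with straight-through gradients."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8)            # lambda_X = max|X|
    x_int = torch.round(x / scale * qmax).clamp(-qmax, qmax)
    x_q = x_int * scale / qmax                       # de-quantized value used downstream
    # Straight-through estimator: the forward pass uses x_q, while the gradient
    # of the rounding step is replaced by the identity.
    return x + (x_q - x).detach()
```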
Consequently, we argue that, by limiting the representative power of \(F\), quantization and pruning act as regularization during training while also significantly improving speed at inference time. In our experiments, we show the ability of quantization to out-perform all the aforementioned regularization methods when applied on noisy training datasets.
## 3 Experiments
### Datasets
We test our hypothesis on two set-ups: first, single-task classification on Cifar10 [26], where we manually introduce noise on the labels; second, BP4D [27], a multi-task AU detection dataset where the annotations are expected to be slightly imperfect.
**Cifar10** is a classification dataset comprising 50,000 training images and 10,000 test images, all annotated in 10 classes. We use this dataset to simulate the robustness to bad training annotations by sampling \(s\%\) of the training examples and randomly re-annotating them, _i.e._ we know that \(s\%\) of the modified training set has a random label but discard that information during training. Note that the test set remains unchanged in all cases. To tackle this task, we train a ResNet 20 [28] for 200 epochs using the Adam [29] optimizer.
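A minimal sketch of this synthetic label corruption; the function name, seeding, and uniform re-annotation are assumptions of the sketch:

```python
import numpy as np

def corrupt_labels(labels: np.ndarray, s: float, num_classes: int = 10,
                   seed: int = 0) -> np.ndarray:
    """Randomly re-annotate a fraction `s` of the training labels.

    The selected examples receive a uniformly random class, so roughly
    s * (1 - 1/num_classes) of the training set ends up truly mislabeled.
    """
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    n_corrupt = int(s * len(labels))
    idx = rng.choice(len(labels), size=n_corrupt, replace=False)
    noisy[idx] = rng.integers(0, num_classes, size=n_corrupt)
    return noisy
```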
**BP4D** is a dataset for facial AU detection and comprises about \(140k\) images featuring \(41\) people. Each image is annotated with the presence of \(12\) AUs. For performance evaluation, we follow the related-work strategy from [30] and use the vanilla architecture from [7]. For stability, the evaluation performance is averaged over 5 runs.
### Implementation Details
Each of the aforementioned regularization and compression method requires hyper-parameters. For the vanilla regularization techniques, we use widely adopted hyper-parameters: weight decay \(w_{d}=0.01\), dropout proportion \(p=0.1\), and smoothing parameter \(\alpha_{s}=0.1\). On BP4D, we systematically apply early stopping as described in [7].
To achieve good performance with quantization on multi-task learning, we adapt the straight-through estimator by keeping the batch-normalization layers in order to learn the input scaling factors and consequently be robust to strong discrepancies between tasks. Formally, we keep the normalization process per task (per AU). This change was required to obtain stable results across several runs.
Pruning is parameterized by the proportion of neurons to remove, which we set to \(75\%\). Quantization is defined by the number of bits used for weight and activation quantization; e.g., for int4 weights and int8 activations, we get W4/A8. As is commonly done, we systematically quantize the first and last layers in W8/A8.
### Fighting overfitting from synthetic noisy labels
In Fig. 2, we compare the accuracy gain (top plot) and absolute accuracy (bottom plot) obtained from different regularization techniques as functions of the noise proportion \(s\). While early stopping and weight decay do not provide significant benefits over the baseline, we observe that quantization offers the best accuracy preservation for values of \(s\) below \(40\%\), which already corresponds to almost half of the training examples being ill-annotated. This result was obtained with W4/A4 quantization. This demonstrates the efficiency of quantization in tackling the overfitting (observed in Fig. 1) caused by poor annotation.
For higher noise proportions, we see dropout and pruning taking the upper hand, successively. Nevertheless, quantization remains competitive; furthermore, note that these methods involve parameter tuning (e.g. for dropout, the position of the dropout layer in the network as well as the dropout probability), while this parameter selection is a lot coarser with quantization. To elaborate on this, we report in Fig. 3 the relative accuracy gain for different regularization techniques and parameter setups. This highlights the robustness of quantization to hyper-parameter selection; the performance of dropout and pruning, conversely, is dependent on their hyper-parameter setting, which requires validation in practice. Consequently, quantization offers the highest accuracies in the most realistic experimental set-up (noise below \(30\%\)) while also offering robustness to parameter selection.
### Fighting overfitting on BP4D
Having shown in controlled experiments with synthetic label noise that quantization indeed acts as regularization, preventing overfitting on noisy labels, in what follows we demonstrate that on BP4D, a real noisy dataset [27] with a high
Figure 2: On Cifar10: (a) Accuracy gain with respect to the baseline model (no regularization applied) as a function of \(s\). (b) Accuracy as a function of the percentage \(s\) of training examples randomly re-annotated.
inter-annotator disagreement rate, the proposed approach improves the accuracy as compared with the baseline as well as with existing methods to fight overfitting.
In Table 1, we observe that weight decay and label smoothing offer marginal improvements to the average F1 score at the expense of stability across AUs. The best performing standard regularization technique, dropout, increases the average F1 score by 1.1 points over the baseline and 1.0 point over the second best regularization technique, weight decay. Still, quantization offers an extra 1.1 points over dropout, reduces the standard deviation across AUs and thus improves stability. Last but not least, quantization allows a significant speed-up at inference time (up to 55% runtime reduction [8]).
## 4 Conclusion
In this study, we investigated the ability of compression techniques to tackle both the challenges of preventing overfitting on noisy labels, and the difficulty to efficiently deploy trained neural networks. More specifically, in addition to existing methods geared towards preventing overfitting, such as early stopping, weight decay, label smoothing and dropout, we investigated a number of deep neural network compression techniques such as pruning and quantization. We conducted a thorough empirical validation: first on Cifar10, on which we synthetically added a proportion of erroneous labels, and, second on BP4D AU detection dataset, which is notoriously noisy due to the subtlety of the task at hand. The results show that deep neural network quantization leads to increased robustness to label noise on both simulated and real test cases, and allows superior performance as compared with other methods. Hence, quantization is an effective method for limiting overfitting on noisy data that also allows more efficient inference.
Future work involves further investigation on multi-task quantization. We observed that, despite the significant improvements, the inter-AU accuracy discrepancy in the quantized network remains high. One solution would be to design a quantization scheme that adapts the constraint for each task independently.
|
2302.10834
|
Weakly Supervised Temporal Convolutional Networks for Fine-grained
Surgical Activity Recognition
|
Automatic recognition of fine-grained surgical activities, called steps, is a
challenging but crucial task for intelligent intra-operative computer
assistance. The development of current vision-based activity recognition
methods relies heavily on a high volume of manually annotated data. This data
is difficult and time-consuming to generate and requires domain-specific
knowledge. In this work, we propose to use coarser and easier-to-annotate
activity labels, namely phases, as weak supervision to learn step recognition
with fewer step annotated videos. We introduce a step-phase dependency loss to
exploit the weak supervision signal. We then employ a Single-Stage Temporal
Convolutional Network (SS-TCN) with a ResNet-50 backbone, trained in an
end-to-end fashion from weakly annotated videos, for temporal activity
segmentation and recognition. We extensively evaluate and show the
effectiveness of the proposed method on a large video dataset consisting of 40
laparoscopic gastric bypass procedures and the public benchmark CATARACTS
containing 50 cataract surgeries.
|
Sanat Ramesh, Diego Dall'Alba, Cristians Gonzalez, Tong Yu, Pietro Mascagni, Didier Mutter, Jacques Marescaux, Paolo Fiorini, Nicolas Padoy
|
2023-02-21T17:26:49Z
|
http://arxiv.org/abs/2302.10834v2
|
# Weakly Supervised Temporal Convolutional Networks for Fine-grained Surgical Activity Recognition
###### Abstract
Automatic recognition of fine-grained surgical activities, called steps, is a challenging but crucial task for intelligent intra-operative computer assistance. The development of current vision-based activity recognition methods relies heavily on a high volume of manually annotated data. This data is difficult and time-consuming to generate and requires domain-specific knowledge. In this work, we propose to use coarser and easier-to-annotate activity labels, namely phases, as weak supervision to learn step recognition with fewer step annotated videos. We introduce a step-phase dependency loss to exploit the weak supervision signal. We then employ a Single-Stage Temporal Convolutional Network (SS-TCN) with a ResNet-50 backbone, trained in an end-to-end fashion from weakly annotated videos, for temporal activity segmentation and recognition. We extensively evaluate and show the effectiveness of the proposed method on a large video dataset consisting of 40 laparoscopic gastric bypass procedures and the public benchmark CATARACTS containing 50 cataract surgeries.
Endoscopic videos, Surgical step recognition, Temporal convolutional networks, Weak supervision, Gastric bypass procedures, Cataracts procedures.
## I Introduction
Research in developing advanced clinical decision support systems in computer-assisted interventions (CAI) and robot-assisted surgeries (RAS) for the demanding situations of a modern Operating Room (OR) [1, 2, 3] has seen significant progress in the last decade. One of the primary functions of these advanced systems is automatic surgical workflow analysis, i.e., reliable recognition of the current surgical activities. Surgical activity recognition could play a key role in assisting clinical decisions, report generation, and data annotation by providing valuable semantic information.
Depending on the level of granularity, a surgical procedure can be decomposed into activities, such as the whole procedure, phases, stages, steps, and actions [4, 5]. Surgical phases are defined as a set of fundamental surgical aims to accomplish in order to successfully complete the surgical procedure. Similarly, steps are defined as a set of surgical actions to perform in order to accomplish a surgical phase. These definitions help clinicians define an ontology for each procedure, e.g. [6, 7] define ontologies for cataract and gastric bypass procedures. Although the ontologies are well defined, automatically recognizing these activities from available endoscopic videos is a topic of high interest.
Phase recognition has received a lot of attention and is a very active area of research in the medical computer vision community [8, 9, 10, 11, 12]. Alongside phases, there has been substantial research focusing on fine-grained activities such as robotic gestures [13, 14, 15, 16, 17, 18, 19], action triplets [20], and instrument detection and tracking [21, 11, 22]. Recently, there has been a surge of research works focusing particularly on step recognition [6, 7, 23].
While steps define a surgical workflow at a more fine-grained level than phases, the time required to annotate a dataset with steps is significantly higher than with phase annotations. For example, in Laparoscopic Roux-en-Y gastric bypass (LRYGB) procedures, the workflow consists of 44 steps and 11 phases (Table II). Precisely defining and annotating all the steps requires considerably more expert time due to the number of steps and, more importantly, the lower inter-class variance between steps. Since recent works in surgical phase/step recognition employ deep learning models, they rely on the availability of large-scale annotated datasets. Curation of these annotated datasets is difficult and time-consuming as these tasks require domain-specific medical knowledge.
To address this issue, a few works [24, 25, 26, 27] have proposed methods based on semi-supervision. These approaches involve
either pre-training the model on proxy tasks or training on synthetic labels generated by a teacher model trained on a small subset for phase recognition. Unlike these works, inspired by [22] and [28], we address the annotation scarcity issue by proposing a weakly supervised learning approach utilizing relatively economical annotations.
The main contributions of our work are summarized as follows:
1. We propose a weakly supervised learning method for surgical workflow analysis to tackle the problem of fine-grained surgical activity (step) recognition. We exploit the hierarchical step-phase relationships and utilize easier-to-annotate weak phase annotations on videos with missing step annotations.
2. We introduce a novel dependency loss that enforces the weak supervision and encodes the step-phase hierarchical relationship as a matrix. Optimizing this loss encourages the model to learn possible step sequences and transitions from videos with only phase annotations.
3. We present an end-to-end model consisting of ResNet-50 and Single-Stage Temporal Convolutional Network (SS-TCN) to learn both visual and temporal cues jointly.
4. We extend the CATARACTS1 dataset (containing step annotations) with phase annotations. These annotations will be released upon acceptance of this manuscript. Footnote 1: [https://cataracts2020.grand-challenge.org/](https://cataracts2020.grand-challenge.org/)
5. We extensively evaluate our approach on two surgical video datasets, namely Bypass40 [7] and CATARACTS [29], demonstrating the effectiveness and generalizability of our method.
## II Related Work
### _Surgical Activity Recognition_
Research on developing deep learning methods for surgical phase recognition has seen significant progress with initial works of EndoNet [8] and DeepPhase [9] on cholecystectomy and cataract surgeries, respectively. EndoNet proposed a Convolutional Neural Network (CNN) followed by a hierarchical Hidden Markov Model (HMM) to perform both phase and tool detection. Similarly, DeepPhase introduced an architecture
Fig. 1: Sample images from Bypass40 and CATARACTS datasets. Each column of Bypass40 images present similar steps.
with ResNet [30] and Recurrent Neural Network (RNN), instead of HMMs, for temporal modeling, for both phase recognition and tool detection. EndoLSTM [31, 32] extended EndoNet by utilizing a Long Short-Term Memory (LSTM) for temporal refinement of spatial features. Similarly, SV-RCNet [10] trained a ResNet and LSTM model end-to-end and proposed a prior knowledge inference scheme for surgical phase recognition. MTRCNet-CL [11] presented a multi-task model to detect tool presence and perform phase recognition along with a novel correlation loss to capture the relationship between tool presence and phase identification. Recently, TeCNO [12] adapted the multi-stage Temporal Convolutional Network (MS-TCN) [33] architecture for online surgical phase prediction by implementing causal convolutions [34].
On the other hand, step recognition has seen a spark in research with the initial work of [23]. A Content-Based Video Retrieval (CBVR) system, for real-time step recognition, was proposed utilizing a novel pupil center and scale tracking method as pre-processing of motion features. In [6], the CBVR system along with surgical tool presence information was used as input to statistical models consisting of Bayesian Network and HMMs for multi-level online recognition of step and phase. Recently, MTMS-TCN [7] adapted TeCNO utilizing TCNs for multi-level online recognition of step and phase. In this work, we build upon the architectures of TeCNO and MTMS-TCN by utilizing a variant of MS-TCN in an end-to-end fashion for online step recognition.
### Weak Supervision
Weak supervision has seen a great interest in the medical computer vision community to tackle the need for high-volume annotated datasets that are difficult to generate. Some of the interesting applications of weak supervision are seen in surgical tool localization [22], tool segmentation [28], cancerous tissue segmentation [35], and detection of the region of interest in chest X-rays and mammograms [36]. To reduce the number of labeled videos, most of the recent research works in phase recognition have proposed approaches based on semi-supervised learning. These approaches follow a similar strategy of pre-training the models on different proxy tasks of frame-sorting [24], predicting the temporal distance between multiple frames [25], and predicting the remaining surgery duration [26]. The most closely related work to this paper in terms of objectives is [27], which proposed a teacher/student approach for phase recognition in scenarios of extreme manual annotation scarcity (\(\leq 25\%\) of the training set). The teacher model (trained on a small set) generated synthetic phase annotations for a large number of videos on which the student model was then trained.
Weakly supervised coarse-to-fine methods have received considerable interest in the computer vision community [37, 38, 39] for image classification. [37] proposed an image-based weakly supervised end-to-end model for object classification consisting of a CNN followed by two self-expressive layers. One self-expressive layer captures the global structures through coarse labels and the other captures the local structures for fine-grained classification. [38] tackled the problem of learning finer representations from coarser labels without any fine-grained labels. Their proposed method consists of CNN based trunk-target network that learns coarse representations from labels and finer representations with nearest-neighbor classifier objective. Recently, [39] tackled the problem of Coarse-to-Fine Few-Shot (C2FS) and proposed a novel 'angular normalization' module that effectively combines supervised and self-supervised contrastive pre-training for C2FS.
Although these previous works in the vision community propose weakly supervised learning methods exploiting hierarchical structures, the focus solely lies on object recognition in natural images containing a single object in each image. In this work, we focus on weakly supervised learning from videos instead of images. We aim to recognize fine-grained activity, as opposed to object, exploiting the temporal information available in videos. In particular, we target fine-grained surgical activity recognition on videos from endoscopic procedures on two different types of surgeries, i.e., gastric bypass and cataract.
## 3 Methodology
The overview of our proposed method is presented in Fig. 2. In this section, we first present our end-to-end Spatio-temporal (ResNet-50 + SS-TCN) model for the task of fine-grained activity, i.e., step, recognition. Then we introduce the phase-step dependency loss for weak supervision of step recognition using phase annotations.
### Spatio-temporal Model
Our weakly supervised step recognition network consists of a ResNet-50 model for visual feature extraction followed by an SS-TCN for modeling the recognition problem temporally. The complete model is trained in an end-to-end fashion. The overview of the model setup is depicted in Fig. 2.
For phase segmentation, ResNet-50 [40] has been successfully employed as the backbone in many previous works [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. In this work, we utilize the same architecture for visual feature extraction. We use a single-stage TCN (SS-TCN), a single-stage variant of MS-TCN, to learn the temporal coherence across video frames. The choice of SS-TCN was motivated by the work of [7], where MS-TCN did not provide a significant improvement over SS-TCN for either step or phase recognition. Following the design of MS-TCN, the SS-TCN contains neither pooling layers nor fully connected layers and is constructed with only temporal convolutional layers, specifically dilated residual layers performing dilated convolutions. With the aim of online activity segmentation, we perform at each layer causal convolutions [7, 12, 34] that depend only on the current frame and \(n\) previous frames.
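A minimal sketch of one such causal dilated residual layer; the channel width, kernel size, and layer composition are illustrative assumptions rather than the exact SS-TCN configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedResidualLayer(nn.Module):
    """A dilated residual layer whose convolutions only see current and past frames."""

    def __init__(self, channels: int = 64, dilation: int = 1, kernel_size: int = 3):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation       # left padding only => causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.out = nn.Conv1d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, channels, T)
        h = F.pad(x, (self.pad, 0))                   # no look-ahead into future frames
        h = F.relu(self.conv(h))
        return x + self.out(h)                        # residual connection
```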
The complete model takes an input video consisting of \(T\) frames \(x_{1:T}\). The ResNet-50 maps \(224\times 224\times 3\) RGB images to a feature space of size \(N_{f}=2048\). These frame-wise features are collected over time and are inputs to the TCN model that predicts \(\hat{y}_{1:T}^{s}\), where \(\hat{y}_{t}^{s}\) is the class label at timestamp \(t\), \(t\in[1,T]\). Since step recognition is a multi-class classification problem that exhibits an imbalance in the class distribution, softmax activation and class-weighted
cross-entropy loss are utilized. Additionally, the dependency loss used when step labels are not available also relies on softmax activation and weighted cross-entropy loss, utilizing phase labels instead. The class weights for both steps and phases are calculated using the median frequency balancing [41] on the training set. The total loss is given by:
\[\mathcal{L}_{total}=\delta_{step}\cdot\mathcal{L}_{step}+(1-\delta_{step})\cdot \mathcal{L}_{dep}, \tag{1}\]
where \(\mathcal{L}_{step}\) represents weighted cross-entropy loss for steps, \(\mathcal{L}_{dep}\) is the step-phase dependency loss (subsection III-B), and \(\delta_{step}\) is a binary variable that indicates if the video contains step labels.
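The class weights mentioned above follow median frequency balancing [41]; a minimal sketch of that computation (the handling of absent classes is an assumption of the sketch):

```python
import numpy as np

def median_frequency_weights(labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Median frequency balancing: w_c = median(freq) / freq_c, computed on the
    training labels (used here for both step and phase classes)."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    freqs = counts / counts.sum()
    freqs[freqs == 0] = np.nan                   # ignore classes absent from the split
    weights = np.nanmedian(freqs) / freqs
    return np.nan_to_num(weights, nan=0.0)       # absent classes receive weight 0
```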
### Weak Supervision: Step-Phase dependency loss
Steps and phases are two types of activities describing the surgical workflow that are defined at different levels of granularity and possess an inherent hierarchical relationship [4, 7]. Steps are defined at a higher level of detail compared to phases. This brings about lower inter-class variances between steps, compared to phases, making it a more complex task to clearly define and distinguish between them. The challenges can be seen in the sample images presented in Fig. 1. For instance, in the Bypass40 dataset, similar actions are performed across different steps belonging to different phases. Dissection is performed in at least 7 steps spread across 3 different phases. Similarly, Stapling is performed in 5 steps across 4 different phases. Designing and training a deep learning model to distinguish between these similar steps poses a great challenge. Even the state-of-the-art method, MTMS-TCN [7], trained on a fully annotated dataset achieves an accuracy of \(\sim\)76% with a precision of \(\sim\)56%, accentuating the difficulty of the problem. The class imbalance further creates a challenge for training deep learning models that require large datasets with plenty of samples for each class.
In the scenario presented in this paper, where annotations are scarce, the recognition difficulties increase drastically. To overcome some of the challenges, this work proposes a weakly supervised approach that utilizes labels of less granular activities, i.e., phases. Phase information alone could help the model in two ways. Firstly, phase information could help the model reduce errors related to recognizing similar looking steps, e.g., 'S6: horizontal stapling' and 'S18: gastrojejunal stapling', belonging to two different phases. Secondly, we can gather a smaller subset of probable steps that could occur in a given phase, eliminating the rest. For example, given the phase to be 'Phacoemulsification' of cataract surgery, only 5 out of 19 steps are likely to occur (Table I). Similarly, a phase such as 'P5: anastomosis test' in the Bypass40 dataset reduces the possible steps to 7 out of 44 (Table II). Here, the phase information provides cues to the model to learn to distinguish between steps belonging to the subset rather than the whole set. Thus we hypothesize that the additional available weak phase information could be very beneficial for step recognition in the low data regime.
We propose to represent the relationship as a step-phase mapping matrix \(M_{s\to p}\), where the elements \(m_{ij}\) of the matrix are binary indicator variables which are \(1\) if step \(s_{i}\) occurs in phase \(p_{j}\). The matrix encodes the weak information about which steps can occur in a particular phase and does not provide details of their occurrence, duration, and/or order. To enforce this weak link between steps and phases, the step
Fig. 2: Overview of our end-to-end Spatio-temporal model setup: ResNet50 + SS-TCN (Single-Stage Temporal Convolutional Networks). When step labels are available, the model is trained through the supervised pathway (red) and weakly supervised pathway (purple) utilizing phase labels. The model is trained end-to-end in a single learning stage.
predictions \(\hat{y}_{t}^{s}\) of our Spatio-temporal model (as described earlier) are linearly transformed by \(M_{s\to p}\) into the phase space. Then a weighted cross-entropy loss (\(\mathcal{L}_{CE}\)) captures the similarity between the phase labels (\(y_{t}^{p}\)) and the transformed predictions (\(M_{s\to p}\times\hat{y}_{t}^{s}\)) of the model. The dependency loss (\(\mathcal{L}_{dep}\)) is given by:
\[\mathcal{L}_{dep}=\mathcal{L}_{CE}(y_{t}^{p},M_{s\to p}\times\hat{y}_{t}^{s}). \tag{2}\]
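A minimal sketch of eq. (2), assuming the step probabilities are projected into phase space by a matrix product with \(M_{s\to p}\) before a weighted cross-entropy against the phase labels; tensor shapes and names are illustrative:

```python
import torch
import torch.nn.functional as F

def dependency_loss(step_logits, phase_labels, M_s2p, phase_weights=None):
    """Eq. (2): weak supervision for frames that carry phase but no step labels.

    step_logits: (T, n_steps) frame-wise step predictions.
    M_s2p: (n_steps, n_phases) binary step-to-phase mapping matrix.
    """
    step_probs = F.softmax(step_logits, dim=-1)
    phase_probs = step_probs @ M_s2p                  # project step mass onto phases
    log_phase = torch.log(phase_probs.clamp(min=1e-8))
    return F.nll_loss(log_phase, phase_labels, weight=phase_weights)
```

Through \(\delta_{step}\) in eq. (1), this term is only active on videos without step annotations, while fully annotated videos use the standard weighted cross-entropy on steps.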
## 4 Experimental Setup
In this section, we discuss the experimental setup of our method. First, we present the datasets used for evaluation. Next, we discuss the experimental study followed by the training setup and evaluation metrics.
### Datasets
#### 4.1.1 Bypass40
The _Bypass40_ dataset [7] consists of 40 videos of LRYGB procedures with resolution \(854\times 480\) or \(1920\times 1080\) pixels recorded at 25 fps. Each frame is manually assigned to one of the 11 phases and one of the 44 steps [7]. For example, steps such as _gastric opening, gastric tube placement, horizontal stapling_, and _vertical stapling_ occur in _gastric pouch creation_ phase. A detailed list of phases and steps along with their hierarchical relationship is presented in Table 2. For more information, we ask the readers to refer to [7]. We split the 40 videos into 24, 6, and 10 videos for training, validation, and test sets, respectively, and sub-sampled them at 1 frame-per-second (fps). This amounts to 150k, 40k, and 65k images in each set. The images are resized to ResNet-50's input dimension of \(224\times 224\), and the training dataset is augmented by applying horizontal flip, saturation, and rotation.
#### 4.1.2 Cataracts
The CATARACTS dataset, proposed in [29], contains 50 videos of cataract surgery. With the recent CATARACTS2020 challenge, the dataset has been released with step annotations. Similar to [6], we define a phase ontology for the available step labels. Cataract surgery consists of 5 phases and 19 steps, summarized in Table 1. The dataset is extended with phase labels that are automatically generated using the available step annotations and the ontology presented in Table 1. For each frame in a video, the phase label is obtained by a simple lookup of the step label in Table 1. The only constraint while generating phase labels arises for steps that can occur in several phases. In this case, the phase of the immediately preceding frame is assigned to the current frame. Since the only steps that occur in more than one phase are Idle, Incision, and Viscodilatation, and they do not occur at the beginning or at the end of a phase, it is always possible to identify the correct phase by checking the phase of the previous step. Since very few steps occur in multiple phases, the phase labels automatically generated by table lookup are accurate and do not require expert knowledge or verification from a clinical expert.
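A minimal sketch of this lookup; `step_to_phases` is a hypothetical dictionary encoding the ontology of Table 1, and the fallback branch is an added assumption (the text above notes that ambiguous steps never start or end a phase, so it is not reached in practice):

```python
def phases_from_steps(step_labels, step_to_phases):
    """Derive frame-wise phase labels from step labels via the step->phase ontology."""
    phase_labels, prev_phase = [], None
    for step in step_labels:
        candidates = step_to_phases[step]       # phases this step may belong to
        if len(candidates) == 1 or prev_phase not in candidates:
            prev_phase = candidates[0]          # unambiguous step (or fallback)
        # otherwise the step is ambiguous: keep the phase of the preceding frame
        phase_labels.append(prev_phase)
    return phase_labels
```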
We split the 50 videos (following the challenge2) into 25, 5, and 20 videos for training, validation, and test sets, respectively. Each set consists of 66k, 3.5k, and 11.8k frames extracted at 1 fps from the videos. The frames are resized from \(1920\times 1080\) to \(224\times 224\), and the training set is augmented with horizontal flip, saturation, and rotation.
Footnote 2: [https://www.synapse.org/#](https://www.synapse.org/#)!Synapse:syn21680292/wiki/601563
### Study
To demonstrate the effectiveness of our approach, we train and evaluate different configurations of the model. Given \(n\) videos, of which \(k\) are annotated with steps and the rest (\(n-k\)) are weakly annotated with phases, the Spatio-temporal model is trained in the proposed weakly supervised setting utilizing the dependency loss, presented as 'DEP'. To analyze the efficacy of 'DEP', we compare it against the Spatio-temporal model trained only on \(k\) videos in a fully-supervised approach for the task of step recognition, which we refer to as 'FSA'. Additionally, we add a state-of-the-art semi-supervised learning method proposed by Yu et al. [42] to our results. Yu et al. [42], proposed a teacher/student semi-supervised learning method where both the teacher and student models consisted of spatial and temporal components, CNN-biLSTM-CRF and CNN-LSTM respectively. As noted in Section 2.2, [42] is a closely related work in the literature to the work presented in this paper. Hence, we have implemented and adapted the method of Yu et al. [42] for the task of step recognition. We repeat all the experiments for different values of \(k\in\{3,6,12,18\}\).
Furthermore, to analyze the influence of the number of additional videos with phase labels on the model performance, we conduct experiments where we fix \(k\) videos with step annotations and vary the number of videos with phase annotations from \(0\) to \(n-k\) (i.e., 3, 6, 12, etc.).
### Training
The ResNet-50 model is initialized with weights pre-trained on ImageNet. The complete ResNet-50 + SS-TCN model is then trained end-to-end for the task of step recognition. Since SS-TCN models the temporal information in an online setup, features from all the past frames in the video need to be cached. To achieve this, a feature buffer is maintained to store features from the spatial model of the past frames. The feature buffer is reset at the end of the video. In all the experiments, the model is trained for \(50\) epochs with a learning rate of 1e-5, weight regularization of 5e-4, and a batch size of 64. The test results presented are from the best performing model on the validation set. The models were implemented in PyTorch and trained on an NVIDIA RTX 2080 Ti.
### Evaluation Metrics
To effectively analyze our models, we observe the accuracy (ACC), precision (PR), recall (RE), and F1 score (F1) metrics used in related publications [10, 11, 12]. Accuracy quantifies the total correct classification of activity in the whole video. PR, RE, and F1 are computed class-wise, defined as:
\[PR=\frac{|GT\cap P|}{|P|},\ RE=\frac{|GT\cap P|}{|GT|},\ F1=\frac{2}{\frac{1}{ PR}+\frac{1}{RE}}, \tag{3}\]
where GT and P represent the ground truth and prediction for one class, respectively. These values are averaged across all the classes to obtain PR, RE, and F1 for each video in the test set. All four metrics, computed per video, are averaged across all the videos in the test set. Furthermore, where applicable, standard deviations are also computed across all the videos in the test set.
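A minimal sketch of these per-video metrics; the guard against classes with no ground-truth or predicted frames is an assumption of the sketch:

```python
import numpy as np

def per_video_metrics(gt: np.ndarray, pred: np.ndarray, num_classes: int):
    """Accuracy plus class-wise precision, recall and F1 (eq. (3)) averaged over classes."""
    acc = float((gt == pred).mean())
    pr, re, f1 = [], [], []
    for c in range(num_classes):
        tp = np.sum((gt == c) & (pred == c))
        p = tp / max(np.sum(pred == c), 1)
        r = tp / max(np.sum(gt == c), 1)
        pr.append(p)
        re.append(r)
        f1.append(0.0 if p + r == 0 else 2 * p * r / (p + r))
    return acc, float(np.mean(pr)), float(np.mean(re)), float(np.mean(f1))
```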
## 5 Results and Discussions
### _Bypass40_
#### 5.1.1 Effect of weak supervision
To quantitatively evaluate our method, the results of step recognition on the test set are presented in Table 3. The table contains the results of our model with a varying number of videos in the training set labeled with steps (3, 6, 12, and 18) along with the rest of the training set containing phase annotations. The introduction of dependency loss 'DEP' for weak supervision significantly improves the performance over the model (FSA) trained only on the step labeled subset of the dataset. We notice a 10-13% improvement of the model trained with 'DEP' loss containing only 3 videos annotated with steps. Similarly, we see a 10-13% and 5-7% increase in performance in all the metrics of the 'DEP' model in experiments corresponding to 6 and 12 step annotated videos, respectively. Interestingly, our 'DEP' model, trained on a dataset with 50% of step and 50% of phase annotated videos, achieves performance close to the upper baseline 'FSA' model trained on the whole fully labeled dataset.
Moreover, the results of the semi-supervised method of Yu et al. [42] are also presented in Table 3 for different numbers of step annotated videos (3, 6, 12, and 18) used to train both the teacher and student models. The student model's performance increases by 3-8% over 'FSA' in all the metrics for 6 videos with step annotations. Furthermore, an increase of 6% and 2% is noticed in recall and F1-score over 'FSA' with 12 step annotated videos. However, the method falls short of our proposed 'DEP' method. We notice a 10-15%, 2-6%, and 1-6% increase in performance in all the metrics of the 'DEP' model over Yu et al. with 3, 6 and 12 step annotated videos, respectively. Although both methods use 100% of the training videos for the task of step recognition, Yu et al. aim at exploiting the knowledge learned by an offline teacher model to generate pseudo labels for additional videos without step annotations, while 'DEP' aims to use weak supervision through phase annotations. Hence, the method of Yu et al. is limited by the knowledge learned by the teacher model, which uses only \(k\) step annotated videos although it learns from both current and future frames. On the other hand, the superior performance of the 'DEP' model indicates that the additional cues present in phase annotated videos, although weak, are advantageous and
\begin{table}
\begin{tabular}{l l} \hline \hline Phases & **Steps** \\ \hline P1: preparation & S0: null step, S1: cavity exploration, S2: trocar placement, S3: retractor placement, S14: adhesiolysis, S22: gastric tube placement \\ P2: gastric pouch creation & S0: null step, S4: crux dissection, S5: his angle dissection, S6: horizontal stapling, S7: retrogastric dissection, S8: vertical stapling, S9: gastric remnant reinforcement, S10: gastric pouch reinforcement, S11: gastric opening, S22: gastric tube placement, S43: calibration \\ P3: omentum division & S0: null step, S12: omental lifting, S13: omental section, S14: adhesiolysis \\ P4: gastrojejunal anastomosis & S0: null step, S15: treitz angle identification, S16: biliary limb measurement, S17: jejunum opening, S18: gastrojejunal stapling, S19: gastrojejunal defect closing, S26: gastrojejunal anastomosis reinforcement, S30: alimentary limb measurement \\ P5: anastomosis test & S0: null step, S22: gastric tube placement, S23: clamping, S24: ink injection, S25: visual assessment, S26: gastrojejunal anastomosis reinforcement, S39: coagulation \\ P6: jejun separation & S0: null step, S20: mesenteric opening, S21: jejun section \\ P7: closure petersen space & S0: null step, S27: petersen space exposure, S28: petersen space closing \\ P8–P11 & \ldots \\ \hline \hline \end{tabular}
\end{table}
Table 2: Phases and steps of the LRYGB surgical workflow (Bypass40 dataset) and their hierarchical relationship.
that the proposed method effectively utilizes this information in the lower data settings.
#### 5.1.2 Effect of the amount of phase annotated videos
In Table 4, we present the results of our model with a varying number of phase annotated videos. Utilizing 6 videos containing step annotations, the addition of phase labeled videos as weak supervision improves all metrics: accuracy, F1, precision, and recall. With 6 videos annotated with phases, the model performance increases by 7-8% in all metrics over the baseline 'FSA' model. The addition of more videos does not affect the accuracy but further improves both precision and recall by 4%. This is due to our weakly-supervised method, which only provides supervision information if a step can occur in the given phase. This information helps to distinguish steps belonging to different phases, as opposed to steps belonging to the same phase. Therefore, the precision and recall of the model improve with more phase annotated videos, and no significant improvement in accuracy is seen. We see a similar trend when using 12 videos annotated with steps and increasing the number of videos annotated with phase labels. Thus, ultimately it is beneficial to train our method utilizing all additional videos in the dataset with phase annotations for weak supervision.
### _Cataracts_
#### 5.2.1 Effect of weak supervision
We quantitatively evaluate our method and present the results of step recognition in Table 5. The table contains the results of our model, on a similar set of experiments as with _Bypass40_, by varying the number of videos in the training set labeled with steps (3, 6, 12, and 18) along with the rest of the training set containing phase annotations. We see a similar trend as with bypass where the 'DEP' model outperforms 'FSA'. We notice a 13-22% improvement of the 'DEP' model considering only 3 step annotated videos. Furthermore, we see a 6-13% and 1-3% increase in performance in all the metrics of the 'DEP' model in experiments corresponding to 6 and 12 step annotated videos, respectively. We see that our method achieves a similar performance improvement on a relatively easier surgical workflow, such as cataracts, consistently surpassing the FSA in all labeled ratios. The semi-supervised method of Yu et al. achieves performance improvement of 16%, 8%, and 1.5% over 'FSA' in F1-score for experiments corresponding to 3, 6, and 12 videos, respectively. However, as seen earlier, it falls short of 'DEP' by 5%, 0.5%, and 0.5% in the F1-score for experiments corresponding to 3, 6, and 12 videos. Interestingly, Yu et al. achieves high recall on both datasets (Table 3 & 5). On CATARACTS, it even outperforms the 'DEP' model in recall in all the experiments but falls short significantly in precision. This could be credited to the student model, which learns from imperfect pseudo labels generated by the teacher model. Since our proposed 'DEP' model learns from true phase labels on additional videos, its performance increases in both precision and recall. This validates the applicability of our approach to different surgical workflows.
#### 5.2.2 Effect of the amount of phase annotated videos
We present the results of our experiments, with a varying number
\begin{table}
\begin{tabular}{c|c c|c c c c} \hline \hline & \multicolumn{2}{c|}{\# Videos} & & & \\ Model & Step & Phase & ACC & PR & RE & F1 \\ \hline FSA & 6 & - & 59.80 & 37.19 & 35.93 & 32.15 \\ DEP & 6 & 3 & 62.15 & 40.48 & 37.15 & 33.48 \\ DEP & 6 & 6 & 67.94 & 46.17 & 42.61 & 39.67 \\ DEP & 6 & 12 & 68.07 & 47.18 & 43.18 & 40.42 \\ DEP & 6 & 18 & 68.03 & 50.05 & 45.86 & 42.05 \\ \hline FSA & 12 & - & 68.26 & 47.57 & 44.74 & 41.30 \\ DEP & 12 & 3 & 72.79 & 50.10 & 48.39 & 45.06 \\ DEP & 12 & 6 & 72.43 & 53.02 & 51.20 & 47.26 \\ DEP & 12 & 12 & 73.43 & 53.40 & 51.19 & 48.34 \\ \hline \hline FSA & 24 & - & 76.12 & 54.23 & 50.94 & 48.17 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Bypass40: Effect of the number of phase annotated videos for step recognition using ’DEP’ loss for weak supervision. Accuracy (ACC), precision (PR), recall (RE), and F1-score (F1) (%) are reported for setups with 6, 12, and 24 videos fully annotated with steps.
\begin{table}
\begin{tabular}{c|c c|c c c c} \hline \hline & \multicolumn{2}{c|}{\# Videos} & & & & \\ Model & Step & Phase & ACC & PR & RE & F1 \\ \hline FSA & 3 (12\%) & - & \(45.02\pm 9.96\) & \(26.62\pm 5.32\) & \(21.87\pm 4.70\) & \(19.44\pm 5.31\) \\ Yu et al. [42] & 3 (12\%) & - & \(43.27\pm 11.8\) & \(23.63\pm 4.41\) & \(23.91\pm 5.71\) & \(19.77\pm 4.89\) \\ DEP & 3 (12\%) & 21 & **57.20 \(\pm\) 8.31** & **33.44 \(\pm\) 6.04** & **33.16 \(\pm\) 6.37** & **29.38 \(\pm\) 6.11** \\ \hline FSA & 6 (25\%) & - & \(59.80\pm 10.17\) & \(37.19\pm 8.52\) & \(35.93\pm 7.31\) & \(32.15\pm 8.03\) \\ Yu et al. [42] & 6 (25\%) & - & \(62.55\pm 10.09\) & \(40.63\pm 7.85\) & \(43.71\pm 8.35\) & \(37.68\pm 8.54\) \\ DEP & 6 (25\%) & 18 & **68.03 \(\pm\) 9.04** & **50.05 \(\pm\) 6.82** & **45.86 \(\pm\) 6.46** & **42.05 \(\pm\) 7.44** \\ \hline FSA & 12 (50\%) & - & \(68.26\pm 8.31\) & \(47.57\pm 7.84\) & \(44.74\pm 7.59\) & \(41.30\pm 8.44\) \\ Yu et al. [42] & 12 (50\%) & - & \(67.89\pm 11.04\) & \(46.26\pm 9.97\) & \(50.11\pm 8.20\) & \(43.41\pm 10.33\) \\ DEP & 12 (50\%) & 12 & **73.43 \(\pm\) 8.43** & **53.40 \(\pm\) 7.43** & **51.19 \(\pm\) 8.20** & **48.34 \(\pm\) 8.85** \\ \hline FSA & 18 (75\%) & - & \(72.82\pm 6.76\) & \(50.60\pm 7.90\) & \(48.98\pm 8.33\) & \(46.08\pm 8.61\) \\ Yu et al. [42] & 18 (75\%) & - & \(73.33\pm 10.15\) & **54.78 \(\pm\) 11.05** & **57.21 \(\pm\) 8.51** & **51.72 \(\pm\) 10.59** \\ DEP & 18 (75\%) & 6 & **73.88 \(\pm\) 8.11** & \(54.33\pm 6.38\) & \(51.79\pm 7.10\) & \(48.62\pm 7.49\) \\ \hline \hline FSA & 24 (100\%) & - & \(76.12\pm 7.39\) & \(54.23\pm 8.24\) & \(50.94\pm 7.53\) & \(48.17\pm 8.02\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Bypass40: Effect of weak supervision on varying amount of step labeled videos. Accuracy (ACC), precision (PR), recall (RE), and F1-score (F1) (%) are reported. ‘FSA’ denotes the model trained for step recognition without any phase annotations. ‘DEP’ denotes the dependency loss added for weak supervision using phase labels on the remaining videos.
of phase annotated videos, on CATARACTS in Table VI. We notice that utilizing 6 step annotated videos with additional phase labeled videos improves all the metrics by 6-13%. In particular, with 6 videos annotated with phases, we see a performance increase of 5% in accuracy and F1-score and 8% in recall of the 'DEP' model over the baseline 'FSA'. The addition of more videos provides a fractional improvement in accuracy but further improves both recall and F1-score by 1-4%. We see a similar trend when using 12 videos with step annotations reaffirming our hypothesis that it is beneficial to train our method utilizing all additional videos in the dataset with phase annotations for weak supervision.
### Weak supervision on step predictions
To illustrate the effectiveness of our method, we visualize the step predictions on the CATARACTS dataset, which contains fewer phases and steps, thereby enabling a simple and clearer graphical diagram. We compare the step predictions of our 'DEP' model against 'FSA' for the 2 best and 2 worst videos in CATARACTS in Fig. 3 for different labeled ratios (3, 6, and 12 videos with step annotations). Along with the step predictions, we present the errors in the phase predictions for both models. The phase prediction errors are computed by mapping the step predictions to phases using the step-phase mapping matrix and comparing them against the ground-truth phase labels. Fig. 3 clearly depicts the effectiveness of our method for different labeled ratios. By correcting for the phase labels through the dependency loss, our 'DEP' model is able to correct the corresponding step labels without explicit supervision for step recognition (e.g. S10, S15, S18). The top row of Fig. 2(a) shows this effect, where we see a marked improvement in the recognition of steps S18 (first video) and S10 (second video) by correcting for phase errors.
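To make the step-to-phase mapping and the resulting weak supervision signal concrete, the following is a minimal sketch: per-frame step probabilities are aggregated into phase probabilities through a binary step-phase matrix, and a dependency-style loss penalizes probability mass assigned to steps that cannot occur in the annotated phase. The step/phase counts, the step-to-phase assignment, and the exact form of the loss are hypothetical illustrations and not the implementation used in this work.

```python
import torch

# Hypothetical sizes: S steps, P phases, T frames (illustrative only).
S, P, T = 11, 5, 8
# Hypothetical step-to-phase assignment; M[s, p] = 1 if step s can occur in phase p.
step_to_phase = [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4]
M = torch.zeros(S, P)
M[torch.arange(S), torch.tensor(step_to_phase)] = 1.0

def phase_probs_from_steps(step_logits):
    """Aggregate per-frame step probabilities into phase probabilities via the mapping matrix."""
    step_probs = torch.softmax(step_logits, dim=-1)   # (T, S)
    return step_probs @ M                             # (T, P)

def dependency_loss(step_logits, phase_labels, eps=1e-8):
    """Penalize probability mass on steps that cannot occur in the annotated phase."""
    phase_probs = phase_probs_from_steps(step_logits)                  # (T, P)
    picked = phase_probs.gather(1, phase_labels.view(-1, 1)).clamp_min(eps)
    return -torch.log(picked).mean()

step_logits = torch.randn(T, S, requires_grad=True)   # stand-in for network outputs
phase_labels = torch.randint(0, P, (T,))              # frame-level phase annotations
loss = dependency_loss(step_logits, phase_labels)
loss.backward()
print(float(loss))
```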
### Limitations
In some cases, for example, S16 (Fig. 2(a), 2(b), 2(c)), correcting for phase errors does not improve step recognition. The step is misrecognized as another step that occurs in the same phase. This is an expected outcome due to the intrinsic limitations of our weakly supervised method using coarser phase labels. Given the phase to be 'P2: gastric pouch creation' (Table II), it is impossible for a model to differentiate between 'crura dissection' and 'his angle dissection' or between 'horizontal stapling' and 'vertical stapling'. As can be seen in Fig. 1, the steps are quite similar in appearance and perform similar actions on the same anatomy (i.e., stomach or small intestine). This makes it challenging for a model to learn even when all the annotations are available. Furthermore, the phase information is too weak and does not provide any cues to better distinguish between the steps because both are valid steps in the current phase. Another limitation of our method is that adding more videos with phase annotations is not always beneficial. This limitation also stems from the weakness of the phase signals. If the fully supervised 'FSA' model learns to separate steps belonging to different phases, i.e., it has no or few phase-step
\begin{table}
\begin{tabular}{c|c c|c c c} \hline \hline & \multicolumn{2}{c|}{\# Videos} & & & \\ Model & Step & Phase & ACC & PR & RE & F1 \\ \hline FSA & 3 (12\%) & - & \(48.47\pm 10.62\) & \(51.32\pm 11.91\) & \(37.44\pm 9.85\) & \(37.12\pm 10.15\) \\ Yu et al. [42] & 3 (12\%) & - & \(59.61\pm 10.67\) & \(56.02\pm 14.31\) & **61.82 \(\pm\) 14.45** & \(53.26\pm 13.61\) \\ DEP & 3 (12\%) & 22 & **66.78 \(\pm\) 12.21** & **64.29 \(\pm\) 12.50** & \(59.73\pm 11.93\) & **58.31 \(\pm\) 12.73** \\ \hline FSA & 6 (25\%) & - & \(69.51\pm 11.16\) & \(71.05\pm 14.13\) & \(56.70\pm 12.67\) & \(59.28\pm 13.50\) \\ Yu et al. [42] & 6 (25\%) & - & \(74.62\pm 8.22\) & \(67.71\pm 11.48\) & **75.93 \(\pm\) 12.48** & \(67.67\pm 12.46\) \\ DEP & 6 (25\%) & 19 & **75.28 \(\pm\) 11.50** & **71.84 \(\pm\) 14.30** & \(69.19\pm 12.72\) & **68.09 \(\pm\) 13.97** \\ \hline FSA & 12 (50\%) & - & \(78.02\pm 9.05\) & \(79.02\pm 13.20\) & \(69.55\pm 12.04\) & \(71.18\pm 13.04\) \\ Yu et al. [42] & 12 (50\%) & - & \(77.84\pm 12.55\) & \(71.48\pm 13.41\) & **79.92 \(\pm\) 15.28** & \(72.96\pm 14.46\) \\ DEP & 12 (50\%) & 13 & **79.94 \(\pm\) 9.17** & **80.52 \(\pm\) 12.93** & \(72.62\pm 11.91\) & **73.52 \(\pm\) 13.29** \\ \hline FSA & 18 (75\%) & - & \(82.5\pm 8.07\) & **82.58 \(\pm\) 11.91** & \(76.05\pm 11.62\) & \(77.39\pm 12.12\) \\ Yu et al. [42] & 18 (75\%) & - & \(78.59\pm 10.71\) & \(74.55\pm 14.17\) & **78.16 \(\pm\) 12.64** & \(73.55\pm 13.67\) \\ DEP & 18 (75\%) & 7 & **82.64 \(\pm\) 9.72** & \(82.20\pm 13.70\) & \(77.32\pm 12.70\) & **77.67 \(\pm\) 13.56** \\ \hline \hline FSA & 25 (100\%) & - & \(83.37\pm 9.50\) & \(85.29\pm 12.05\) & \(78.96\pm 11.93\) & \(80.09\pm 13.34\) \\ \hline \hline \end{tabular}
\end{table}
Table V: CATARACTS: Effect of weak supervision on varying amount of step labeled videos. Accuracy (ACC), precision (PR), recall (RE), and F1-score (F1) (%) are reported. ‘FSA’ denotes the model trained for step recognition without any phase annotations. ‘DEP’ denotes the dependency loss added for weak supervision using phase labels on the remaining videos.
\begin{table}
\begin{tabular}{c|c c|c c c c}
\hline \hline
 & \multicolumn{2}{c|}{\# Videos} & & & & \\
Model & Step & Phase & ACC & PR & RE & F1 \\
\hline
FSA & 6 & - & 69.51 & 71.05 & 56.70 & 59.28 \\
DEP & 6 & 3 & 71.34 & 67.84 & 62.27 & 62.01 \\
DEP & 6 & 6 & 74.30 & 71.70 & 64.18 & 64.96 \\
DEP & 6 & 12 & 73.57 & 70.88 & 65.68 & 66.03 \\
DEP & 6 & 19 & 75.28 & 71.84 & 69.19 & 68.09 \\
\hline
FSA & 12 & - & 78.02 & 79.02 & 69.55 & 71.18 \\
DEP & 12 & 3 & 77.60 & 78.26 & 68.60 & 69.87 \\
DEP & 12 & 6 & 80.11 & 81.60 & 72.46 & 73.98 \\
DEP & 12 & 13 & 79.94 & 80.52 & 72.62 & 73.52 \\
\hline \hline
FSA & 25 & - & 83.37 & 85.29 & 78.96 & 80.09 \\
\hline \hline
\end{tabular}
\end{table}
Table VI: CATARACTS: Effect of the number of phase annotated videos for step recognition using ‘DEP’ loss for weak supervision. Accuracy (ACC), precision (PR), recall (RE), and F1-score (F1) (%) are reported for setups with 6, 12, and 25 videos fully annotated with steps.
Figure 3: Step predictions on two best and two worst videos on the CATARACTS dataset for different labeled ratios. For each video, we visualize the step prediction of ground truth, DEP model predictions, DEP model phase prediction errors, FSA model predictions, and phase prediction errors of FSA model.
correspondence errors, then additional videos with phase labels add little value, since during training the model makes few phase-step correspondence errors from which feature learning could benefit. The remaining significant errors are then the inter-class confusions between steps belonging to the same phase. Learning good representations to reduce these errors without supervision is a challenging task that needs to be tackled in future works.
Meanwhile, the effect of utilizing more phase-annotated videos as weak supervision for improving the model performance on step recognition is presented in Tables 4 & 6. As observed in Sections 5-A2 & 6, it is beneficial to train the 'DEP' model utilizing all the additional phase-annotated videos in the dataset for weak supervision. We also observe that in the lower data setting (6 videos with step annotations) the model performance improves even when the number of phase-annotated videos is increased from 12 to 18 (19 for cataracts). However, our study does not provide insight into how many phase-annotated videos are truly required to achieve the best performance with our proposed 'DEP' model. This is another limitation of our study, irrespective of the complexity of the procedure, and it is imposed by the size of the available labeled datasets (24 in Bypass40 & 25 in CATARACTS). Understanding the full potential of the 'DEP' model would require extending these datasets, which is an important direction to be pursued in future studies.
## 6 Conclusion
In this paper, we introduce a weakly-supervised learning method for surgical step recognition utilizing less demanding phase annotations. To model the weak supervision between steps and phases, we introduce a step-phase dependency loss and train a ResNet-50 + SS-TCN model end-to-end. The proposed method is extensively evaluated on the _Bypass40_ dataset consisting of 40 LRYGB procedures and on the CATARACTS dataset containing 50 cataract surgeries. The proposed 'DEP' model significantly improves the step recognition metrics over the baseline 'FSA' model for all the amounts of step annotations available. We hope that this work will inspire and foster future research in weak supervision for surgical workflow analysis utilizing multi-level descriptions of the workflow.
**Ethical approval** The surgical videos were recorded and anonymized following the informed consent of patients in compliance with the local Institutional Review Board (IRB) requirements.
**Informed Consent** The patients consented to data recording.
|
2301.09835
|
Microwave heating as a universal method to transform confined molecules
into armchair graphene nanoribbons
|
Armchair graphene nanoribbons (AGNRs) with sub-nanometer width are potential
materials for fabrication of novel nanodevices thanks to their moderate direct
band gaps. AGNRs are usually synthesized by polymerizing precursor molecules on
substrate surface. However, it is time-consuming and not suitable for
large-scale production. AGNRs can also be grown by transforming precursor
molecules inside single-walled carbon nanotubes via furnace annealing, but the
obtained AGNRs are normally twisted. In this work, microwave heating is applied
for transforming precursor molecules into AGNRs. The fast heating process
allows synthesizing the AGNRs in seconds. Several different molecules were
successfully transformed into AGNRs, suggesting that it is a universal method.
More importantly, as demonstrated by Raman spectroscopy, aberration-corrected
high-resolution transmission electron microscopy and theoretical calculations,
less twisted AGNRs are synthesized by the microwave heating than the furnace
annealing. Our results reveal a route for rapid production of AGNRs in large
scale, which would benefit future applications in novel AGNRs-based
semiconductor devices.
|
Haoyuan Zhang, Yingzhi Chen, Kunpeng Tang, Ziheng Lin, Xuan Li, Hongwei Zhang, Yifan Zhang, Takeshi Saito, Chi Ho Wong, Chi Wah Leung, Chee Leung Mak, Yuan Hu, Weili Cui, Kecheng Cao, Lei Shi
|
2023-01-24T06:35:59Z
|
http://arxiv.org/abs/2301.09835v1
|
Microwave heating as a universal method to transform confined molecules into armchair graphene nanoribbons
###### Abstract
Armchair graphene nanoribbons (AGNRs) with sub-nanometer width are potential materials for fabrication of novel nanodevices thanks to their moderate direct band gaps. AGNRs are usually synthesized by polymerizing precursor molecules on substrate surface. However, it is time-consuming and not suitable for large-scale production. AGNRs can also be grown by transforming precursor molecules inside single-walled carbon nanotubes via furnace annealing, but the obtained AGNRs are normally twisted. In this work, microwave heating is applied for transforming precursor molecules into AGNRs. The fast heating process allows synthesizing the AGNRs in seconds. Several different molecules were successfully transformed into AGNRs, suggesting that it is a universal method. More importantly, as demonstrated by Raman spectroscopy, aberration-corrected high-resolution transmission electron microscopy and theoretical calculations, less twisted AGNRs are synthesized by the microwave heating than the furnace annealing. Our results reveal a route for rapid production of AGNRs in large scale, which would benefit future applications in novel AGNRs-based semiconductor devices.
armchair graphene nanoribbons, microwave heating, single-walled carbon nanotubes, Raman spectroscopy
## 1 Introduction
Graphene nanoribbons (GNRs) with a tunable band gap, high mobility, and high carrier capacity are attracting increasing attention[1-3]. In particular, armchair GNRs (AGNRs) possess band gaps of 0.1-2.5 eV tuned by the width, making them potential candidates for the fabrication of novel nanodevices including photodetectors, sensors, and transistors[4-6]. AGNRs can be classified into three groups by the number of dimer lines (n) across their widths: n = 3p, 3p + 1, or 3p + 2, where p is an integer. Importantly, n-AGNRs with n = 3p or 3p + 1 have comparatively large band gaps between 1.2 and 2.3 eV, while n-AGNRs with n = 3p + 2 usually have the smallest band gaps, on the order of 0.1 eV[7]. Synthesis of n = 3p or 3p + 1 AGNRs attracts attention due to their potential in semiconductor applications. In particular, n = 6 and 7 AGNRs, with widths of 0.62 and 0.74 nm and width-dependent energy gaps of 1.83 and 2.18 eV[8], respectively, are of high interest. Tailoring both the width and the edge of the AGNRs is key to such novel applications.
The GNRs can be prepared by etching graphene into strips or through unzipping carbon nanotubes (CNTs)[9-11]. However, such a top-down approach is not able to achieve GNRs with sub-nanometer in width and controllable edge. In contrast, bottom-up synthesis on surface allows for precise control of both the width and the edge structure of the GNRs by designing precursor molecules as monomers for polymerization[12-14]. However, to obtain specific GNRs, designing and synthesis of monomers are rather complicated and often require multiple steps with low yield. Therefore, on-surface synthesis is time consuming and not suitable for large-scale production.
Confined synthesis using single-walled carbon nanotubes (SWCNTs), whose hollow space serves as a nanoreactor, enables the synthesis of GNRs. Previously it was proved that polymerizing precursor molecules inside SWCNTs is an effective way to obtain GNRs[15, 16]. Our recent studies indicated that specific AGNRs, e.g., 6-/7-AGNRs, can be formed by filling and transforming the precursor molecules inside SWCNTs [8, 17-18], which became an alternative GNR fabrication method, besides on-surface synthesis, with precise control of the width and edge. We found that the diameter/chirality of the SWCNTs is the key to regulating the structure of the nanoribbons. Most importantly, no specific design is needed for the precursor molecules, since the formation mechanism of the GNRs is completely different from that of the on-surface synthesis. Specifically, in the confined synthesis the molecules were first decomposed into small organic segments, and then reassembled into GNRs upon heating. Thus, the synthesis temperature is usually higher than that of the on-surface synthesis.
Microwave as an energy source enables to heat up the materials instantly. Previously, microwave heating was used for the synthesis,
purification, and functionalization of SWCNTs[19-21]. In this work, microwave heating is utilized to transform confined precursor molecules into 6-/7-AGNRs, which shortens the transformation process, i.e., decomposition and recombination, from hours (furnace annealing) to seconds. More importantly, less twisted GNRs are formed by the fast microwave heating than by furnace heating, as demonstrated by Raman spectroscopy, aberration-corrected high-resolution transmission electron microscopy (ACHRTEM) and theoretical calculations. Microwave heating provides an effective and efficient way for large-scale production of GNRs with tailored width and edge, which would benefit future applications in novel GNRs-based semiconductor devices.
## 2 Experimental
### Synthesis of AGNRs
SWCNTs with an average diameter of around 1.3 nm were prepared by enhanced direct injection pyrolysis synthesis (eDIPS)[22]. The as-grown eDIPS SWCNTs were thermally treated in air at 400 \({}^{\circ}\)C for 30 min to remove the amorphous carbon around the catalyst, and then treated with HCl to dissolve the exposed catalysts[23]. SWCNT buckypaper with a thickness of 100 um was obtained after rinsing with distilled water and drying. The SWCNTs were opened by annealing in air at 500 \({}^{\circ}\)C for 30 min. To fill the precursor molecules into the SWCNTs, the opened SWCNTs were sealed with 4,4''-dibromo-\(p\)-terphenyl (DBTP) or ferrocene (FeCp\({}_{2}\)) or ruthenocene (RuCp\({}_{2}\)) in an ampoule under a dynamic vacuum of around 2x10\({}^{\circ}\) Pa, and heated to their optimal temperatures of 320, 350, or 250 \({}^{\circ}\)C for 3 days, respectively. After filling, the ampoule was placed in a microwave oven (Midea, 700 W, 2450 MHz) and irradiated for tens of seconds to transform the molecules into armchair graphene nanoribbons. The temperature of the sample increases with the irradiation time (Figure 1). For comparison, the molecule-filled samples were annealed in a tube furnace at different temperatures (650-1000 \({}^{\circ}\)C) and durations (1-6 hours) under a dynamic vacuum of around 4x10\({}^{\circ}\) Pa.
### Characterization
The samples were measured by Raman spectroscopy (TriVista 557, Princeton Instruments). The laser wavelengths of 633 and 561 nm were applied to excite the 6-AGNRs and 7-AGNRs, respectively[8]. The laser power was set below 1 mW to prevent the heating effect[17]. All the spectra were calibrated by Rayleigh scattering line and normalized to the intensity of the 2D-band.
The AGNR@SWCNTs samples were characterized by aberration-corrected HRTEM (Thermo Fisher) at the voltage of 60 kV. The exposure time was 1 s for each image.
### Simulation methods
The Raman spectra of the samples are computed by ab-initio calculations [24]. The infinite 6-AGNR refers to periodic cells. The finite 6-AGNR is obtained from a supercell construction. The lateral interaction between the 6-AGNRs is introduced by decreasing the AGNR-to-AGNR distance. The fully relaxed lattice parameters at the spin-restricted PBE level under Projector Augmented Wave pseudopotentials are obtained. The plane-wave energy cut-off is 300 eV[24]. The Raman spectra are computed by taking finite differences of the cell displacements per vibrational mode [24].
## 3 Results and Discussion
SWCNTs were proved to be effective nanoreactors for confined synthesis of novel 1D carbon materials, including carbon chains[23; 25], carbon nanotubes [26], and GNRs[8; 15; 16]. In particular, 5-/6-/7-AGNRs were successfully synthesized by transforming encapsulated molecules inside SWCNTs via high-temperature vacuum annealing in a furnace [8; 17; 18]. However, the transformation process usually takes hours and the obtained GNRs are mostly twisted and short. Instead of the furnace, microwave irradiation allows the materials to be heated rapidly. Therefore, we applied microwave heating to transform the encapsulated molecules into GNRs. To verify the validity of the method, DBTP molecules were used, since they can be polymerized into 6-AGNRs via on-surface synthesis at a relatively low temperature of around 460 \({}^{\circ}\)C[27; 28]. The DBTP molecules were first reacted into poly-\(p\)-phenylene, and then GNRs were formed by combining the parallel-arranged poly-\(p\)-phenylene chains. Therefore, a series of AGNRs with widths of n=6, 9, 12,... can be formed depending on the polymerization temperature[27; 28]. In our case, DBTP molecules were filled inside SWCNTs with an average diameter of 1.3 nm, thus only 6-AGNRs but not 9- and 12-AGNRs can be formed thanks to the confinement of the SWCNTs. Thus, DBTP@SWCNT is an ideal system for testing the confined synthesis.
The DBTP@SWCNTs were annealed in a tube furnace. As shown in Figure 2b, Raman spectra of the annealed sample were taken with a laser wavelength of 633 nm. Compared with the filled SWCNTs, several Raman modes appear after annealing. The radial breathing-like mode (RBLM) represents the vibrations along the width direction of the AGNRs. The Raman frequency of the RBLM mainly depends on the width of the AGNRs[29]. Here the RBLM is located at around 460 cm\({}^{-1}\), corresponding to the 6-AGNRs[8]. In addition, the Raman signal at 1244 cm\({}^{-1}\) is assigned to the C-H in-plane bending mode (CH\({}_{\text{up}}\)). The CH\({}_{\text{up}}\) intensity is comparable to that of the G-band, since the laser energy is close to the energy gap of the 6-AGNRs[8] and the mode is thus strongly resonantly enhanced. Besides, the strong signal suggests a high yield of the 6-AGNRs. The other two Raman modes at 1272 and 1358 cm\({}^{-1}\) are called defect-like modes (DLM)[8; 16], because their frequencies are close to that of the D-band of the SWCNTs. The G-band of the 6-AGNRs is not clearly resolved, because it is weaker than the CH\({}_{\text{up}}\) and overlaps with the intense G-band of the SWCNTs. The annealed sample was also measured with a laser wavelength of 561 nm, because this laser energy enables the resonant excitation of 7-AGNRs, which helps to examine the purity of the obtained AGNRs. The 7-AGNRs are only slightly wider than the 6-AGNRs, and thus also fit the hollow space of the SWCNTs with a diameter of 1.3 nm. However,
Figure 1: Illustrations of the transformation from precursor molecules to 6-/-AGNRs by confined synthesis.
as shown in Figure 2c, only weak Raman modes of 6-AGNRs can be seen and no signal belonging to the 7-AGNRs is observed, revealing that only 6-AGNRs were synthesized. The question is then: can microwave heating also polymerize the DBTP into 6-AGNRs?
The DBTP-filled sample from the same batch was annealed via microwave heating. As indicated by the Raman spectra excited with the same two lasers shown in Figure 2, all the new Raman modes of the microwave-annealed sample belong to the 6-AGNRs, which reveals that microwave heating is indeed effective for the polymerization of the DBTP molecules into 6-AGNRs. However, microwave irradiation can heat the sample to a temperature much higher than the optimal polymerization temperature of the DBTP, resulting in a lower yield of the 6-AGNRs. Indeed, the intensities of the RBLM, CH\({}_{\text{up}}\) and DLM of the microwave-annealed sample are weaker than those of the furnace-annealed sample. Furthermore, increasing the microwave annealing duration leads to a decreased yield of the 6-AGNRs and even damage to the GNRs as well as the SWCNTs (Figure S2). All the above results suggest that the growth mechanism (i.e., polymerization) of the 6-AGNRs from DBTP in confined synthesis is the same as the one in the on-surface synthesis, as illustrated in Figure 2a. How do other precursor molecules perform when treated by microwave heating?
Ferrocene, consisting of two cyclopentadienyl rings bound to a central iron atom, apparently cannot be polymerized into GNRs with hexatomic rings. However, GNRs were largely formed inside SWCNTs when the encapsulated ferrocene molecules were annealed in vacuum [16, 17], suggesting that the growth mechanism is different from polymerization. In this case, the ferrocene molecules were first decomposed and then combined into GNRs, hence the name "decomposition-recombination" strategy. The width of the synthesized GNRs depends on the diameter of the SWCNTs, but not on the structure and size of the precursor molecule. The confined space inside the SWCNTs facilitates a precise control of the width and edge of the GNRs, giving rise to specific GNRs being synthesized, e.g., 6-AGNRs or 7-AGNRs. As shown in Figures 2b and 3b, the CH\({}_{\text{up}}\) to G-band ratios of 6-AGNRs obtained from DBTP and ferrocene are similar, meaning that the yield of the 6-AGNRs transformed from ferrocene by furnace annealing is comparable to that of the 6-AGNRs polymerized from DBTP, which highlights the effectiveness of the "decomposition-recombination" strategy. Interestingly, when the ferrocene is heated by microwave, the yield of the 6-AGNRs (Figure 3b) is much higher than that of the 6-AGNRs polymerized from the DBTP by microwave heating (Figure 2b). This is reasonable, since the optimal transformation temperature (650 \({}^{\circ}\)C) of the ferrocene is higher than the polymerization temperature (400 \({}^{\circ}\)C) of the DBTP. When the ferrocene is switched to ruthenocene, a similar yield of 6-AGNRs can be observed, as shown in Figure 3d. Besides the 6-AGNRs, a small quantity of 7-AGNRs from both ferrocene and ruthenocene, with furnace annealing or microwave heating, was observed, as shown in Figures 3c and 3e, because slightly larger SWCNTs also exist in the SWCNT sample, which are suitable for synthesizing the 7-AGNRs.
Figure 3: (a) Schematic diagram of transforming FeCp\({}_{2}\) or RuCp\({}_{2}\) into 6-/7-AGNRs. Raman spectra of 6-AGNRs synthesized from (b) FeCp\({}_{2}\) or (d) RuCp\({}_{2}\) excited by a laser with wavelength of 633 nm. Raman spectra of 7-AGNRs synthesized from (c) FeCp\({}_{2}\) or (e) RuCp\({}_{2}\) excited by a laser with wavelength of 561 nm.
Figure 2: (a) Polymerization mechanism of DBTP into 6-AGNRs. Raman spectra of 6-AGNRs synthesized from DBTP precursor molecules excited by lasers with wavelengths of (b) 633 nm and (c) 561 nm.
To visually verify the formation of 6-/7-AGNRs, the microwave-annealed samples were characterized by ACHRTEM. As shown in Figure 4, two GNRs with widths of 0.61 and 0.72 nm inside SWCNTs are assigned to 6- and 7-AGNRs, respectively. Note that the shape of the GNRs continuously varied under the electron beam irradiation, because the GNRs freely rotated and translated inside the SWCNTs, which differentiates the edges of the GNRs from the walls of the CNTs. The GNRs remained intact under prolonged electron irradiation, suggesting a high stability of the confined GNRs. In general, the GNRs confined inside SWCNTs are mostly twisted[15-17]. In comparison, the GNRs synthesized on a substrate are flat[12-14]. As seen in Figures 4f-4i, the areas marked by dashed rectangles clearly show the twisted structure, which is consistent with previous results[15]. However, in our observations we found that most of the GNRs in the microwave-annealed sample are flat rather than twisted. In addition, the ratio of flat to twisted GNRs in the microwave-annealed sample is higher than that in the furnace-annealed sample. The fast heating in a short time may play a role in forming more flat GNRs. In order to evaluate the enrichment of the flat AGNRs, the RBLM should be examined in detail, since the RBLM is very sensitive to the structure of the AGNRs, whereas the CH\({}_{\text{up}}\) is more related to the edge of the AGNRs and the DLM reflects the internal structure of the AGNRs.
The RBLM frequency mainly depends on the width of the GNRs. The RBLM frequency of ideal AGNRs can be calculated by the equation
\[\omega_{RBLM}=a\,w^{-1/2}+b\]
where \(w\) is the width of the AGNR, and the empirical parameters are a=1667.9 cm\({}^{-1}\cdot\AA^{1/2}\), b=-210.2 cm\({}^{-1}\)[29]. However, since many of the 6-/7-AGNRs in the sample are not perfect, the RBLM frequency is in principle affected by other factors, e.g., the length of the AGNRs[30], the surrounding SWCNTs, and the strain induced by the twist. Indeed, careful observation reveals that the RBLM of the same AGNRs consists of multiple components (Figure 5); the component at lower frequency is consistent with the RBLM frequency of the flat AGNRs obtained from the on-surface synthesis [12, 30], while the component at higher frequency could be attributed to three factors: the interaction with the SWCNTs, the length of the AGNRs, and/or the twist-induced strain. The frequencies of the RBM of the SWCNTs encapsulated with AGNRs did not shift by more than 1 cm\({}^{-1}\) (Figure S3) compared to those of the empty SWCNTs, which is within the spectroscopic resolution, implying that the interaction can be excluded as the cause of the RBLM splitting of the AGNRs. Similarly, the shifts of the G-band are also small (Figure S4). In addition, theoretical calculations demonstrate that only a small up-shift of the RBLM is found when the interaction is considered, which cannot explain the splitting and the large up-shift of the RBLM. Thus, the interaction is not further considered in analyzing the results. As reported in previous work, short AGNRs show a length-dependent Raman signal at around 100 cm\({}^{-1}\), which was not observed in our samples. Our theoretical calculations indicate that the RBLM of a short 6-AGNR with a length of 1.3 nm splits into several components (Figure 6). All the components shift up or down compared to the RBLM of an infinite 6-AGNR, which is not consistent with our experimental observation. Therefore, we neglect the length effect in our analysis. Finally, the third factor should be considered, i.e., the distortion of the GNRs. The distortion of the twisted AGNRs can be certainly
Figure 4: (a)-(d) ACHRTEM image of selected 6-AGNR\(\theta\)SWCNT. (e) Contrast profile of a 6-AGNR\(\theta\)SWCNT. (f)-(l) ACHRTEM image of selected 7-AGNR\(\theta\)SWCNT. The twisted structure is marked by white dashed rectangles. (j) Contrast profile of a 7-AGNR\(\theta\)SWCNT.
Figure 5: RBLMs of the transformed 6-AGNRs (left panel) and 7- AGNRs (right panel) from (a, d) DBTP, (b, e) ferrocene, and (c, f) ruthenocene.
expected, as observed in the ACHRTEM images; this distortion induces a certain strain in the AGNRs, resulting in an up-shifted RBLM, similar to the cases of SWCNTs[31, 32], graphene[33, 34], and AGNRs[35, 36]. Indeed, when we consider 5% strain in the 6-AGNRs, the calculated RBLM splits into two components: one is at a slightly higher position than that of the 6-AGNRs without strain, and the other is located at a higher frequency. This is completely in line with our experimental observations, as shown in Figure 5.
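As a quick numerical check of the empirical width relation quoted above (a sketch assuming the 6- and 7-AGNR widths of 0.62 nm and 0.74 nm given in the introduction):

```python
# A quick numerical check of the empirical width relation quoted above,
# omega_RBLM = a * w**(-1/2) + b (w in Angstrom), using the widths of 0.62 nm (6-AGNR)
# and 0.74 nm (7-AGNR) quoted in the introduction.
a_coef, b_coef = 1667.9, -210.2   # cm^-1 * Angstrom^(1/2) and cm^-1

for label, width_nm in [("6-AGNR", 0.62), ("7-AGNR", 0.74)]:
    w_angstrom = 10.0 * width_nm
    omega_rblm = a_coef * w_angstrom ** -0.5 + b_coef
    print(f"{label}: RBLM ~ {omega_rblm:.0f} cm^-1")
# The 6-AGNR value comes out near 460 cm^-1, consistent with the experimental RBLM above.
```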
With the knowledge of twist-induced RBLM changes, we can analyze the RBLMs of all the samples in detail. As shown in Figure 5, the RBLM clearly consists of two components for most of the furnace-annealed and microwave-annealed samples. The component at a lower/higher frequency corresponds to flat/twisted AGNRs, which allows us to extract the ratio of flat GNRs in a sample by evaluating the area of the component at the lower frequency. The frequencies and ratios of the flat and twisted 6-/7-AGNRs are summarized in Table 1. Comparing the ratios of the flat AGNRs among the samples, we find that microwave annealing for seconds produces a higher ratio of flat 6-AGNRs than furnace annealing for hours. In particular, microwave annealing of DBTP@SWCNTs produces almost 100% flat 6-AGNRs, whereas the fewest flat 6-AGNRs are obtained from the RuCp\({}_{2}\)@SWCNTs because of the higher optimal temperature used. Considering the amount of the frequency shift in the experiment and the calculated results, we estimate that around 1-2% strain exists in the twisted 6-AGNRs.
In order to check how the furnace annealing transforms flat 6-AGNRs into twisted 6-AGNRs, temperature-dependent and time-dependent syntheses of 6-AGNRs from the DBTP@SWCNTs and RuCp\({}_{2}\)@SWCNTs were performed. As shown in Figure 7 and Figure S5, increasing the annealing temperature and duration leads to transformation of the flat 6-AGNRs into twisted 6-AGNRs, which suggests that the twisted 6-AGNRs are more stable inside the SWCNTs when compared to the flat 6-AGNRs. Furthermore, annealing temperature higher than 900 \({}^{\circ}\)C
## 4 Conclusion
In conclusion, we found that microwave heating, instead of furnace annealing, can be utilized not only to polymerize the DBTP molecules as monomers into 6-AGNRs but also to transform the ferrocene and ruthenocene precursor molecules into 6-AGNRs through the "decomposition-recombination" strategy. Even better, the microwave heating yields a higher ratio of flat 6-AGNRs in seconds than the furnace annealing does in hours, as demonstrated by ACHRTEM observations and Raman spectroscopy. Furthermore, microwave heating enables the macroscopic preparation of the 6-AGNRs, which would benefit future applications using the 6-AGNRs as semiconductors with moderate energy gaps.
## Acknowledgements
This work was supported by Guangzhou Basic and Applied Basic Research Foundation (202201011790), Guangdong Basic and Applied Basic Research Foundation (2019A1515011227), National Natural Science Foundation of China (51902353), Fundamental Research Funds for the Central Universities, Sun Yat-sen University (22lgqb03) and State Key Laboratory of Optoelectronic Materials and Technologies (OEMT-2022-ZRC-01). We thank the Department of Applied Physics at The Hong Kong Polytechnic University to provide the ab-initio supports. We also thank the Research Institute for Advanced Manufacturing at The Hong Kong Polytechnic University.
**Electronic Supplementary Material**: The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2304.05609
|
Surface Gravity of Dynamical Horizons: A Causal Perspective
|
We consider marginally trapped surfaces in a spherically symmetric spacetime
evolving due to the presence of a perfect fluid in D-dimensions and look at the
various definitions of the surface gravity for these marginally trapped
surfaces. We show that using Einstein equations it is possible to simplify and
obtain general formulae for the surface gravity in terms of invariant
quantities defined at these marginally trapped surfaces like area radius,
cosmological constant and principal values of the energy-momentum tensor
\r{ho}, p. We then correlate these expressions of surface gravity to the cases
of dynamical horizons and timelike tubes and find which proposals of surface
gravity are causally sensitive as these surfaces undergo causal transitions
from spacelike to timelike and vice versa.
|
Anamika Avinash Pathak, Konka Raviteja, Swastik Bhattacharya, Sashideep Gutti
|
2023-04-12T05:01:26Z
|
http://arxiv.org/abs/2304.05609v1
|
# Surface Gravity of Dynamical Horizons: A Causal Perspective
###### Abstract
We consider marginally trapped surfaces in a spherically symmetric spacetime evolving due to the presence of a perfect fluid in D-dimensions and look at the various definitions of the surface gravity for these marginally trapped surfaces. We show that using Einstein equations it is possible to simplify and obtain general formulae for the surface gravity in terms of invariant quantities defined at these marginally trapped surfaces like area radius, cosmological constant and principal values of the energy-momentum tensor \(\rho,p\). We then correlate these expressions of surface gravity to the cases of dynamical horizons and timelike tubes and find which proposals of surface gravity are causally sensitive as these surfaces undergo causal transitions from spacelike to timelike and vice versa.
## I Introduction
Black holes are among the most mysterious objects that exist in our universe. The formation of black holes, their evolution, and their mergers have been fields of intense study for many decades. One intriguing feature of black holes is the relation between gravity and thermodynamics. The event horizon of the black hole is found to possess entropy and temperature. The origin of the entropy of the black hole and its description in terms of microstates is yet to be properly understood. The connection between thermodynamics and black holes is a well-established area of physics. A relatively less understood phenomenon is the thermodynamics of a black hole that is in the process of evolution.
Black hole thermodynamics has been an area of intense study and analysis since the discovery of black hole spacetimes. The connection between the surface gravity of a stationary/static black hole event horizon and temperature is very well established. The relation between black hole entropy and its area has been firmly placed on a strong theoretical foundation due to the presence of Hawking radiation. Though the thermodynamics of non-evolving black holes is very well understood, the realistic scenario involving an evolving black hole is still at the initial stages of its formulation. The main reason is that the dynamical phenomena describing the formation and evolution of a black hole are extremely complicated; except for highly special situations, obtaining analytically solvable solutions in general relativity is difficult.
To capture the features of evolving horizons and trapped regions, Ashtekar et al. [1; 2] defined dynamical horizons. The dynamical horizon is a spacelike hypersurface foliated by marginally trapped regions. Using this definition, they prove an important result stating that the area of the dynamical horizon always increases. They also defined timelike membranes, where the evolving horizon is timelike. Hayward in his paper [3] refined the concept of trapping horizons based on a 2+2 decomposition framework, which introduced different types of horizons like the future outer trapped horizon (FOTH), future inner trapped horizon (FITH), past outer trapped horizon (POTH) and past inner trapped horizon (PITH). There are many works [4; 5; 6; 7] where solutions are found for dynamical horizons and timelike membranes, and where situations in which an evolving horizon makes a transition from a dynamical horizon to a timelike membrane are discussed. Bousso in [8] introduced the construction of past holographic screens, which are defined in terms of marginally trapped surfaces, and in works with Engelhardt [9; 10] they proved a new area law in general relativity where the area of holographic screens follows a monotonic evolution even though the causal nature of these screens changes during its dynamics.
The thermodynamics of evolving horizons is a work in progress, as the evolution of these horizons is a non-equilibrium phenomenon. An evolving horizon can be a dynamical horizon or a timelike tube depending on its causal nature, and dynamical horizons tend to increase in area. In general, the dynamical horizons are outer horizons (FOTH) while the timelike tubes are inner (FITH). Dynamical horizons are more generic, while timelike tubes occur in special circumstances like Friedmann-Robertson-Walker (FRW) spacetime. There are therefore fundamental differences between the nature of dynamical horizons and timelike tubes. It is reasonable to expect that some thermodynamic properties, too, would reflect the distinction between dynamical horizons and timelike membranes. There have been various definitions of the surface gravity for dynamically evolving marginally trapped regions. We think that a good formulation of surface gravity
is crucial when one wants to define non-equilibrium thermodynamic state variables for various astrophysically realistic cases of evolving horizons. A few of the well-known proposals for surface gravity are the Kodama-Hayward surface gravity [11], Hayward's trapping gravity [3], the surface gravity of Fodor et al. [12], and the Booth and Fairhurst surface gravity for the evolving horizon [13]; these are well described in [14]. The surface gravity expressions of the various proposals for the case of the general spherically symmetric metric are in Eddington-Finkelstein coordinates and are written in terms of the Misner-Sharp mass, the metric function, and their derivatives with respect to these coordinates. In the paper [15], they discuss a few of the proposals for surface gravity, obtain expressions for surface gravity using Painleve-Gullstrand coordinates, and highlight the differences between the proposals in a dynamical setting.
It is found in various solutions that the evolving horizon may transition from being spacelike to timelike, and vice versa [4; 5; 6; 7]. When one wants to study the thermodynamic aspects of evolving horizons, it is useful to understand the behavior of the various surface gravity proposals with respect to the nature of the causal transitions of the evolving horizons. The first paper that addresses the issue of surface gravity and the causal description of evolving horizons is [16]. The causal nature of an evolving horizon in Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetime is well known. For the case of FLRW, the paper [16] evaluates the Kodama-Hayward surface gravity and shows that the surface gravity is sensitive to the causal nature of the evolving horizon. We want to address this question in a more general context, since FLRW is a specific case restricted to cosmological-type solutions.
Our goal in this article is twofold. Firstly, we consider the definitions of various surface gravity proposals in \(D\) dimensions. We show that for the case of a \(D\) dimensional evolving marginally trapped surface, it is possible to simplify and obtain elementary formulae for the surface gravity proposals. We express these formulae simply in terms of the area radius \(R\), the cosmological constant \(\Lambda\), the dimension D and the principal values of the energy-momentum tensor. These formulae indicate that the surface gravity can be estimated using only local information at the evolving horizon and does not depend upon any non-local information or the global aspects of the solution. These formulae are obtained directly by simplifying the expressions using Einstein's equations and do not require the solutions of these Einstein's equations to define the surface gravity. We obtain these formulae for the proposals of surface gravity by Kodama-Hayward [11], Fodor et al. [12], Booth-Fairhurst [13] and also obtain expressions for the trapping gravity of Hayward [3]. Secondly, we want to find out which definitions of surface gravity are causally sensitive to the transitions between spacelike and timelike surfaces. Using the general formula for each proposal, we find the relation between the expression for the causal nature of the evolving horizon and the sign of the surface gravity. We find that the Kodama-Hayward surface gravity is positive for the case of dynamical horizons and negative if the evolving horizon is timelike.
## II Causal nature of evolving marginally trapped surface
In this section, we review some of the results describing the causal aspects of marginally trapped regions. The results can be found in [4; 5; 6; 16; 17]. Some of these results, generalized to a \(D\) dimensional scenario for the case of a spherically symmetric perfect fluid, are derived in [7]. In the above references, the criteria for the marginally trapped surface to be timelike/null/spacelike are derived. Interestingly, the causal nature of the marginally trapped surface can be obtained using the Einstein equations without explicitly solving for the metric.
We assume that the general metric for a (\(D=n+2\)) dimensional spherically symmetric spacetime is of the form
\[ds^{2}=-e^{\sigma(t,r)}dt^{2}+e^{\lambda(t,r)}dr^{2}+R^{2}(t,r)\ d\Omega_{n}^{2} \tag{1}\]
where \(d\Omega_{n}^{2}\) is the metric on an \(n\) dimensional sphere of unit radius with angular coordinates defined by \((\theta_{1},\theta_{2},...,\theta_{n})\). Here, \(t\) is the time coordinate, r is the comoving radial coordinate, and \(R(t,r)\) is the areal radius (we will also refer to this as "physical radius") of the n-dimensional sphere. The advantage of comoving coordinates is that the metric remains regular across the apparent horizon and becomes singular when the curvature singularity forms (this statement is generally true as long as the initial conditions are such that there are no shell crossing singularities). The matter we consider here is a perfect fluid whose energy-momentum tensor is
\[T_{\mu\nu}=(\rho(t,r)+p(t,r))u_{\mu}u_{\nu}+p(t,r)g_{\mu\nu} \tag{2}\]
and the four-velocity in comoving coordinates is
\[u^{\mu}=(e^{-\frac{\sigma}{2}},0,0,...,0) \tag{3}\]
with \(u^{\mu}u_{\mu}=-1\). The relevant Einstein equations and the relations obtained by the conservation of energy-momentum tensor are given in Appendix A. It is easily seen that the energy-momentum tensor of the perfect fluid is diagonal in this coordinate system. As shown in [7], the formula for the causal nature of the marginally trapped region can be expressed completely in terms of coordinate invariants and the principal values of the energy-momentum tensor (\(\rho\) and \(p\) in this context). To define the marginally trapped regions, for the assumed metric (1), we define the future outgoing radial null vector as
\[k^{a}=(e^{-\frac{\sigma}{2}},e^{-\frac{\lambda}{2}},0,0,\ldots,0) \tag{4}\]
and the future incoming radial null vector as
\[l^{a}=(e^{-\frac{\sigma}{2}},-e^{-\frac{\lambda}{2}},0,0,\ldots,0) \tag{5}\]
These null vectors are normalized as
\[g_{ab}k^{a}l^{b}=-2\]
Using these two null vectors, the induced metric on a codimension \(D-2\) hypersurface that is orthogonal to the two null vectors is given by,
\[h_{ab}=g_{ab}+\frac{1}{2}(k_{a}l_{b}+l_{a}k_{b})\]
so the expansion for the congruence of outgoing null rays is
\[\Theta_{k}=h^{ab}\nabla_{a}k_{b}=\frac{n}{R}\bigg{(}e^{-\frac{\sigma}{2}}\dot{R}+e^{-\frac{\lambda}{2}}R^{\prime}\bigg{)} \tag{6}\]
and for completeness, the expansion for the congruence of incoming null rays is
\[\Theta_{l}=h^{ab}\nabla_{a}l_{b}=\frac{n}{R}\bigg{(}e^{-\frac{\sigma}{2}}\dot{R}-e^{-\frac{\lambda}{2}}R^{\prime}\bigg{)} \tag{7}\]
The hypersurface given by the equation \(\Theta_{k}=c\) is a curve foliated by a marginally trapped region if we set the constant \(c=0\), so the curve is a marginally trapped tube. To obtain the causal nature of the curve, we find the norm of normal \(\beta_{k}\) to this curve evaluated at \(\Theta_{k}=0\) and is given by [7],
\[\beta_{k}=-\bigg{(}(\pounds_{k}\Theta_{k})(\pounds_{l}\Theta_{k})\bigg{)} \bigg{|}_{\Theta_{k}=0} \tag{8}\]
The Lie derivatives of \(\Theta_{k}\) with respect to the outgoing null vector \(k^{a}\) and the incoming null vector \(l^{a}\) are
\[\pounds_{k}\Theta_{k}=k^{a}\nabla_{a}\Theta_{k}=e^{-\frac{\sigma}{2}}\partial_{t}\Theta_{k}+e^{-\frac{\lambda}{2}}\partial_{r}\Theta_{k} \tag{9}\]
\[\pounds_{l}\Theta_{k}=l^{a}\nabla_{a}\Theta_{k}=e^{-\frac{\sigma}{2}}\partial_{t}\Theta_{k}-e^{-\frac{\lambda}{2}}\partial_{r}\Theta_{k} \tag{10}\]
These lie derivatives have to be evaluated at \(\Theta_{k}=0\)
\[\pounds_{k}\Theta_{k}\bigg{|}_{\Theta_{k}=0}=-\tilde{\kappa}(\rho+p) \tag{11}\]
and
\[\pounds_{l}\Theta_{k}\bigg{|}_{\Theta_{k}=0}=\tilde{\kappa}(\rho-p)+2\Lambda -\frac{n(n-1)}{R^{2}} \tag{12}\]
which gives us
\[\beta_{k}=\tilde{\kappa}(\rho+p)\bigg{(}\tilde{\kappa}(\rho-p)+2\Lambda-\frac {n(n-1)}{R^{2}}\bigg{)} \tag{13}\]
Instead of the normal, one can also obtain the causal nature of the curve \(\Theta_{k}=0\) using the ratio of lie derivatives. The ratio represents the causal nature of the tangent to the curves that are foliated by marginally trapped region, the proof of which is shown in [3; 18]. The ratio of the Lie derivatives evaluated at \(\Theta_{k}=0\) determines the causal nature of the marginally trapped tube, which is
\[\alpha_{k}=\frac{\pounds_{k}\Theta_{k}}{\pounds_{l}\Theta_{k}}\bigg{|}_{ \Theta_{k}=0}=\frac{-\tilde{\kappa}(\rho+p)}{\tilde{\kappa}(\rho-p)+2\Lambda -\frac{n(n-1)}{R^{2}}} \tag{14}\]
The causal nature is described by the sign of the norm of the normal or tangent. Marginally trapped curve is timelike if \(\beta_{k}>0\), is spacelike if \(\beta_{k}<0\) and is null if \(\beta_{k}=0\). We can see that \(\beta_{k}\) and \(\alpha_{k}\) are always of opposite signs. So the causal criteria are reversed for \(\alpha_{k}\).
We note the fact that the formula for \(\beta_{k}\) is completely local and does not need the solution of the Einstein equations. The formula is described in terms of geometric invariants and system parameters like area radius \(R\) of the marginally trapped surface, cosmological constant \(\Lambda\), dimension of the spacetime (\(n=D-2\)) and the principal values of the energy-momentum tensor \((\rho,p)\) at the location of the marginally trapped region. One can easily see that by adjusting the density and pressure, one can obtain transitions of the marginally trapped tube from timelike to spacelike and vice versa. We also note that the norm of the normal is a better tool to look at these causal transitions since \(\beta_{k}\) goes to zero and hence regular while \(\alpha_{k}\) goes to infinity and therefore is analytically cumbersome at the transition points.
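A minimal numerical sketch of this criterion, using Eq. (13) with illustrative values of the density, pressure and cosmological constant (and the gravitational coupling \(\tilde{\kappa}\) set to unity), reads:

```python
# A minimal numerical sketch of Eq. (13): the sign of beta_k classifies the causal nature
# of the marginally trapped tube. kappa_tilde is the D-dimensional gravitational coupling
# appearing in the Einstein equations; the numbers below are illustrative only.
def beta_k(rho, p, Lambda, R, n, kappa_tilde=1.0):
    return kappa_tilde * (rho + p) * (
        kappa_tilde * (rho - p) + 2.0 * Lambda - n * (n - 1) / R**2
    )

def causal_nature(rho, p, Lambda, R, n, kappa_tilde=1.0):
    b = beta_k(rho, p, Lambda, R, n, kappa_tilde)
    return "timelike" if b > 0 else ("spacelike" if b < 0 else "null")

# D = 4 (n = 2) examples in units with kappa_tilde = 1:
print(causal_nature(rho=0.05, p=0.05 / 3, Lambda=0.0, R=2.0, n=2))  # spacelike (dynamical horizon)
print(causal_nature(rho=1.0, p=-0.9, Lambda=0.0, R=2.0, n=2))       # timelike (timelike membrane)
```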
Now we will analytically describe these transitions in the FRW spacetime setting.
_FRW case_: The metric (1) can be brought to the standard FRW form,
\[ds^{2}=-dt^{2}+a^{2}(t)\bigg{(}\frac{dr^{2}}{1-kr^{2}}+r^{2}\ d\Omega_{n}^{2} \bigg{)} \tag{15}\]
which is the higher dimensional spherically symmetric metric whose source is a homogeneous perfect fluid. The function \(a(t)\) has the standard interpretation as the scale factor and \(k\) takes the values in \((1,0,-1)\). For the metric (15), the future incoming radial null vector is given by
\[l^{a}=(1,-\frac{\sqrt{1-kr^{2}}}{a(t)},0,0....,0) \tag{16}\]
and the future outgoing radial null vector is given by
\[k^{a}=(1,\frac{\sqrt{1-kr^{2}}}{a(t)},0,0....,0). \tag{17}\]
These are normalized to,
\[g_{ab}k^{a}l^{b}=-2.\]
The expansion scalar for the outgoing bundle of null rays is
\[\Theta_{k}=h^{ab}\nabla_{a}k_{b}=\frac{n}{a(t)r}\bigg{(}\dot{a}(t)r+\sqrt{1-kr^ {2}}\bigg{)} \tag{18}\]
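As a quick symbolic consistency check (a sketch assuming sympy is available), specializing the general expansion formula, Eq. (6), to the FRW identification \(R=a(t)r\), \(e^{\sigma}=1\), \(e^{\lambda}=a^{2}/(1-kr^{2})\) reproduces Eq. (18):

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)
k, n = sp.symbols('k n')
a = sp.Function('a')(t)

# FRW identification of the comoving metric functions appearing in Eq. (1):
R = a * r                                   # areal radius
e_sigma_half = sp.Integer(1)                # e^{sigma/2} = 1
e_lambda_half = a / sp.sqrt(1 - k * r**2)   # e^{lambda/2} = a / sqrt(1 - k r^2)

# General expansion of the outgoing null congruence, Eq. (6):
Theta_k = (n / R) * (sp.diff(R, t) / e_sigma_half + sp.diff(R, r) / e_lambda_half)
print(sp.simplify(Theta_k))   # n*(r*Derivative(a(t), t) + sqrt(1 - k*r**2))/(r*a(t))
```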
We know that the norm of the normal to the \(\Theta_{k}=0\) curve can be expressed as product of Lie derivatives (8). The lie derivative of \(\Theta_{k}\) with respect to the outgoing radial null vector is
\[\left.\mathcal{L}_{k}\Theta_{k}\right|_{\Theta_{k}=0}=-\tilde{\kappa}\rho(1+\omega) \tag{19}\]
and, that with respect to the ingoing radial null vector is
\[\left.\mathcal{L}_{l}\Theta_{k}\right|_{\Theta_{k}=0}=\frac{n}{2R^{2}}\left(3 -n-\omega(n+1)+\frac{2\Lambda R^{2}(1+\omega)}{n}\right) \tag{20}\]
The norm of the normal to the curves \(\Theta_{k}=0\) can be expressed as
\[\beta_{k}=\frac{n\tilde{\kappa}\rho(1+\omega)}{2R^{2}}\left(3-n-\omega(n+1)+ \frac{2\Lambda R^{2}(1+\omega)}{n}\right) \tag{21}\]
As described earlier, the causal nature of the marginally trapped tube can also be found using the ratio of lie derivatives, which represents the causal nature of the tangent of the marginally trapped tube. The ratio of the lie derivatives evaluated at \(\Theta_{k}=0\) and \(\Theta_{l}=0\) gives us
\[\alpha_{k}=\frac{-2R^{2}\tilde{\kappa}\rho(1+\omega)}{n\left(3-n-\omega(n+1)+ \frac{2\Lambda R^{2}(1+\omega)}{n}\right)} \tag{22}\]
Note that we have expressed the formula in terms of the physical radius \(R\) instead of \(a(t)r\). The formula for the cosmological case is even more elementary than the general case. If we assume \(1+\omega\) is positive, the signs of the expressions for \(\beta_{k}\) and \(\alpha_{k}\) are completely determined by the sign of the following expression,
\[\left(3-n-\omega(n+1)+\frac{2\Lambda R^{2}(1+\omega)}{n}\right) \tag{23}\]
We note that the formula does not contain any dynamical variables of the model and is expressed completely in terms of spacetime dimension (\(n\)), equation of the state parameter (\(\omega\)), cosmological constant (\(\Lambda\)), and physical radius (\(R\)) in this sense it is a geometrical result.
If we consider the case where the cosmological constant is zero, we see from the above expression that as the marginally trapped region evolves, there is no change of causal nature. It is uniformly timelike, spacelike, or null. At a value of \(\omega\) called \(\omega_{critical}\), given below, the evolving marginally trapped region is uniformly null. If \(\omega<\omega_{critical}\), it is timelike, and if \(\omega>\omega_{critical}\), the marginally trapped tube is spacelike.
\[\omega_{critical}=-\frac{(n-3)}{(n+1)} \tag{24}\]
The case where the equation of state coincides with \(\omega_{critical}\) is very special. The evolving marginally trapped tube is uniformly null. These are called the null evolving horizons that do not fit in Hayward's classification criteria as shown in [7].
If we consider the case \(\Lambda\neq 0\), there exists a critical radius \(R_{critical}\) at which the marginally trapped tube is null (\(\beta=0=\alpha^{-1}\)), and it also marks a transition of these curves from the spacelike region to the timelike region or vice versa.
\[R_{critical}^{2}=\frac{n(n-3)+n\omega(n+1)}{2\Lambda(1+\omega)}\]
Whenever the marginally trapped tube crosses this critical radius, it makes a transition in terms of its causal nature.
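The two critical quantities above can be evaluated directly; a small sketch with illustrative parameter values (assuming \(\Lambda>0\) and a positive radicand) is:

```python
# A small sketch evaluating the critical equation of state and the critical radius above.
from math import sqrt

def omega_critical(n):
    return -(n - 3) / (n + 1)

def R_critical(n, omega, Lambda):
    # Assumes parameter values for which the radicand is positive.
    return sqrt((n * (n - 3) + n * omega * (n + 1)) / (2.0 * Lambda * (1.0 + omega)))

print(omega_critical(2))                          # D = 4: omega_critical = 1/3
print(R_critical(n=3, omega=1.0, Lambda=0.1))     # illustrative D = 5 example with Lambda > 0
```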
## III Surface gravity of marginally trapped surfaces
In this section, we study the various proposals that define surface gravity in a dynamical setting. For almost all the proposals, we obtain elementary formulae where the surface gravity is expressed in terms of invariants and local information like the area radius, the cosmological constant, the number of dimensions, and the principal values of the energy-momentum tensor. These simplified formulae are intuitively appealing and have not been reported before in the literature. With the help of these formulae, it is easy to compare the behavior of the various proposals for the surface gravity of marginally trapped regions with their causal behavior. We note the useful result that the surface gravity of the \(D\) dimensional Schwarzschild black hole is \(\kappa=(D-3)/(2R)=(n-1)/(2R)\). This helps in fixing the arbitrary constant that is due to the freedom of normalization of the null rays.
### Kodama-Hayward Surface Gravity
The paper [16] has worked on the Kodama-Hayward surface gravity for the case of marginally trapped surfaces in FRW spacetimes in \(D=4\) dimensions. The paper finds that Kodama-Hayward's definition of surface gravity is sensitive to the causal description of the evolving marginally trapped surface. It is shown in the paper that, for the FRW case with a perfect fluid obeying the equation of state \(p=\omega\rho\), the marginally trapped surface is timelike for \(\omega<1/3\). The Kodama-Hayward surface gravity is shown in [16] to be negative in this range. For \(\omega>1/3\), the marginally trapped surface is spacelike, and the Kodama-Hayward surface gravity is shown to be positive. The expressions for the Kodama-Hayward surface gravity for the dynamical scenario using Painleve-Gullstrand coordinates were obtained in [15] in terms of derivatives of the Schwarzschild mass. We now obtain a formula for a spherically symmetric scenario in \(D\) dimensions for a perfect fluid. The
Kodama vector for the spherically symmetric spacetime generalized to \(D\) dimensions is defined as,
\[K^{\mu}=\frac{1}{\sqrt{-h}}\epsilon^{\mu\nu}\partial_{\nu}R \tag{25}\]
where \(R\) is the areal radius and \(h\) is the determinant of the two-dimensional metric \(h_{ab}\) induced on the hypersurface orthogonal to the \((D-2)\)-dimensional sphere.
\[\begin{split}\kappa_{K-H}=\frac{C_{1}}{\sqrt{-h}}\epsilon^{\alpha }_{\mu}\partial_{\alpha}K^{\mu}=\\ \frac{C_{1}}{\sqrt{-h}}\frac{\partial}{\partial x^{\mu}}\bigg{(} \sqrt{-h}h^{\mu\nu}\frac{\partial}{\partial x^{\nu}}R\bigg{)}\end{split} \tag{26}\]
The constant \(C_{1}\) is dependent on the normalization of the Kodama vector and is fixed indirectly by matching the value of the surface gravity for the known static case in D dimensions. For the metric (1), using \(\sqrt{-h}=\mathrm{e}^{\sigma/2}\)\(\mathrm{e}^{\lambda/2}\) and evaluating the above expression, we obtain,
\[\kappa_{K-H}=C_{1}\bigg{(}-e^{-\sigma}(\ddot{R}+\dot{R}(\frac{\dot{\lambda}-\dot{\sigma}}{2}))+e^{-\lambda}(R^{\prime\prime}+R^{\prime}\frac{(\sigma^{\prime}-\lambda^{\prime})}{2})\bigg{)} \tag{27}\]
where \(\dot{R}=\frac{\partial R}{\partial t}\) and \(R^{\prime}=\frac{\partial R}{\partial r}\). We show that this complicated expression can, surprisingly, be reduced to a simple form using the Einstein equations on the marginally trapped region.
For the apparent horizon, the outgoing null vector is given by (4). The condition for the outgoing null ray to be marginally trapped is obtained by setting \(\Theta_{k}\) to zero.
\[\Theta_{k}=\frac{n}{R}\bigg{(}e^{-(\frac{\sigma}{2})}\dot{R}+e^{-(\frac{ \lambda}{2})}R^{\prime}\bigg{)}=0 \tag{28}\]
This gives
\[\dot{R}e^{-\sigma/2}=-R^{\prime}e^{-\lambda/2} \tag{29}\]
Using the above relation, we can write
\[-e^{-\sigma}\frac{\dot{R}\dot{\lambda}}{2}=e^{-\frac{(\sigma+\lambda)}{2}} \frac{R^{\prime}\dot{\lambda}}{2} \tag{30}\]
and
\[e^{-\lambda}\frac{R^{\prime}\sigma^{\prime}}{2}=-e^{-\frac{(\sigma+\lambda)}{2 }}\frac{\dot{R}\sigma^{\prime}}{2} \tag{31}\]
We now simplify equation (27) using the Einstein equations given in the Appendix. We compute \(G_{00}-G_{11}\) from equations (67) and (68) to obtain expression (70), and then simplify (70) using the results (29), (30), and (31). Expression (27) then reduces to the equation below.
\[\kappa_{K-H}=C_{1}\bigg{(}\frac{n-1}{R}-\frac{R}{n}(\tilde{\kappa}(\rho-p)+2 \Lambda)\bigg{)} \tag{32}\]
Comparing the result for zero density, pressure, and cosmological constant with the surface gravity of \(D\)-dimensional Schwarzschild spacetime, we get \(C_{1}=1/2\). We observe that the Kodama-Hayward surface gravity is completely determined by the local information available at the marginally trapped region. We note that if the apparent horizon is isolated, then the curve \(\Theta_{k}=0\) is null; setting \(\rho=0,p=0\) in the above equation shows that the surface gravity in \(D\) dimensions is proportional to \(1/R\), and we recover the surface gravity of the static black hole horizon when the formula is adapted to the non-evolving scenario. We also recover the surface gravity of the de Sitter (cosmological) horizon: in a pure de Sitter spacetime it is \(-\sqrt{2\Lambda}/\sqrt{n(n+1)}\). For the FRW case, using reference [19], we obtain the following formula for a perfect fluid with the equation of state given below,
\[p=\omega\rho. \tag{33}\]
\[\kappa_{K-H}=\frac{1}{4}\bigg{(}\frac{n-3+\omega(n+1)}{R}-\frac{2\Lambda R}{n} (1+\omega)\bigg{)} \tag{34}\]
We note that this formula is derived by simplifying the expressions for the cosmological case. Setting \(\omega=0\) gives us the formula for dust. This formula is valid for FRW spacetime (\(\rho\neq 0\)).
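As an illustrative aside (not part of the original derivation), the short Python sketch below evaluates the formulae (32) and (34); the name `kt` stands for \(\tilde{\kappa}\), and the numerical values are made up. The vacuum check reproduces the static Schwarzschild value \((n-1)/2R\) used above to fix \(C_{1}\).

```python
def kappa_KH_general(n, R, rho, p, Lam, kt):
    """Eq. (32) with C_1 = 1/2; kt stands for kappa-tilde = 8*pi*G_n."""
    return 0.5 * ((n - 1) / R - (R / n) * (kt * (rho - p) + 2.0 * Lam))

def kappa_KH_FRW(n, R, omega, Lam):
    """Eq. (34): perfect fluid p = omega*rho on the FRW marginally trapped tube."""
    return 0.25 * ((n - 3 + omega * (n + 1)) / R - 2.0 * Lam * R * (1.0 + omega) / n)

# Vacuum check: rho = p = Lambda = 0 reproduces the static D-dimensional
# Schwarzschild value (n - 1)/(2R).
n, R = 2, 3.0
assert abs(kappa_KH_general(n, R, 0.0, 0.0, 0.0, 1.0) - (n - 1) / (2 * R)) < 1e-12

# FRW examples in D = 4 (n = 2): radiation (omega = 1/3) gives zero surface gravity,
# dust a negative value, a stiff fluid (omega = 1) a positive one.
for omega in (1.0 / 3.0, 0.0, 1.0):
    print(omega, kappa_KH_FRW(2, 1.0, omega, 0.0))
```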
#### iii.2.1 Causal correlation of Kodama-Hayward surface gravity
In the previous section, we observed the possibility of transitions of an evolving horizon from spacelike to timelike and vice versa. We now examine the behavior of the surface gravity as the marginally trapped region changes from timelike to spacelike. To find this relation, we divide the surface gravity in (32) by the norm in (13). We get,
\[\frac{\kappa_{K-H}}{\beta_{k}}=\frac{\bigg{(}\frac{n-1}{R}-\frac{R}{n}(2\tilde {\kappa}(\rho-p)+2\Lambda)\bigg{)}}{\tilde{\kappa}(\rho+p)\bigg{(}\tilde{\kappa }(\rho-p)+2\Lambda-\frac{n(n-1)}{R^{2}}\bigg{)}} \tag{35}\]
The ratio simplifies to \(-R/(2n\tilde{\kappa}(\rho+p))\). If we assume the energy condition \(\rho+p>0\), the ratio is negative definite. This implies that the surface gravity is always negatively correlated with the norm of the normal to the marginally trapped region: for a dynamical horizon, where the marginally trapped curve is spacelike, the Kodama-Hayward surface gravity is positive, while for timelike tubes, where the marginally trapped curve is timelike, the surface gravity is negative.
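The quoted simplification can be checked symbolically. In the sketch below (ours, not from the paper), \(\kappa_{K-H}\) is taken from Eq. (32) with \(C_{1}=1/2\), and \(\beta_{k}\) is assumed to have the form read off from the denominator of Eq. (35); Eq. (13) itself is not reproduced in this excerpt, so the overall normalization of \(\beta_{k}\) is an assumption, but it does not affect the sign argument.

```python
import sympy as sp

n, R, rho, p, Lam, kt = sp.symbols('n R rho p Lambda kappa_tilde', positive=True)

# Eq. (32) with C_1 = 1/2
kappa_KH = sp.Rational(1, 2) * ((n - 1) / R - (R / n) * (kt * (rho - p) + 2 * Lam))
# Assumed form of beta_k, read off from the denominator of Eq. (35)
beta_k = kt * (rho + p) * (kt * (rho - p) + 2 * Lam - n * (n - 1) / R**2)

# The difference vanishes, so kappa_KH/beta_k = -R/(2 n kappa_tilde (rho + p)),
# which is negative whenever rho + p > 0.
print(sp.simplify(kappa_KH / beta_k + R / (2 * n * kt * (rho + p))))   # -> 0
```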
In the FRW case, there is an interesting class of solutions for which the evolving marginally trapped region is null. These are degenerate cases that escape the classification of
evolving marginally trapped regions into outer and inner horizons [7]. For four dimensions, this corresponds to the equation of state \(p=\rho/3\), and for a general dimension \(D=n+2\), to \(p=-\rho(n-3)/(n+1)\). The surface gravity corresponding to these cases is \(\kappa_{K-H}=0\).
Also, in the presence of a cosmological horizon, the evolving marginally trapped region transitions from spacelike to timelike when the curve passes through the critical radius given in Sec. II. The surface gravity therefore also transitions from a positive value to a negative value, passing through zero. The nature of these transitions needs to be explored further and is left for future consideration.
### Hayward's trapping gravity
Another useful quantity, defined by Hayward [3], is called the trapping gravity. It is defined below.
\[\kappa_{H}=\frac{1}{2}\sqrt{-l^{\alpha}\Theta_{k;\alpha}}=\frac{1}{2}\sqrt{- \pounds_{l}\Theta_{k}} \tag{36}\]
Using equation (12) (from [19]), we now obtain a formula for the trapping gravity,
\[\kappa_{H}=\frac{1}{2}\sqrt{\frac{n(n-1)}{R^{2}}-2\Lambda-\tilde{\kappa}(\rho-p)} \tag{37}\]
for a general spherically symmetric perfect fluid scenario.
For the FRW case with a perfect fluid obeying the equation of state \(p=\omega\rho\), we use (20) to obtain the formula for the trapping gravity,
\[\kappa_{H}=\frac{1}{2}\sqrt{\frac{\bigg{(}n-3+\omega(n+1)\bigg{)}n}{2R^{2}}- \Lambda(1+\omega)} \tag{38}\]
As both expressions for the trapping gravity show, this quantity is defined only for dynamical horizons and not for timelike tubes, because of the square root: assuming the null energy condition holds, the expression under the square root is positive only when the marginally trapped region is a spacelike curve and negative when it is a timelike curve. Thus the expression under the square root is causally sensitive in the same sense as the Kodama-Hayward surface gravity, but the trapping gravity itself becomes imaginary for timelike tubes.
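As a quick illustration (ours, with arbitrary parameter values), the sketch below evaluates Eq. (38) with a complex square root, so the timelike-tube case shows up as a purely imaginary value.

```python
import cmath

def kappa_H_FRW(n, R, omega, Lam=0.0):
    # Eq. (38); the argument of the square root changes sign with the causal character
    arg = (n - 3 + omega * (n + 1)) * n / (2.0 * R**2) - Lam * (1.0 + omega)
    return 0.5 * cmath.sqrt(arg)   # complex sqrt exposes the timelike-tube case

print(kappa_H_FRW(n=2, R=1.0, omega=1.0))   # stiff fluid (spacelike tube): real
print(kappa_H_FRW(n=2, R=1.0, omega=0.0))   # dust (timelike tube): purely imaginary
```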
### Fodor's method
We now consider the definition of surface gravity due to Fodor et al. [12] and again aim to obtain a closed formula. For a general perfect fluid the expression turns out to be analytically intractable, so we instead derive a formula for a marginally trapped region evolving in dust; setting the pressure to zero, the expression simplifies to a closed form. We then use the resulting expression to examine the causal correlation. We recall the definition of Fodor's surface gravity,
\[\kappa_{F}=-l^{\beta}k^{\alpha}k_{\beta;\alpha} \tag{39}\]
with the same definitions of \(k\) and \(l\) as given in the earlier sections. The normalization prescribed in Fodor et al. [12], is given by
\[l^{\alpha}k_{\alpha}=-1\]
However, we work with a different normalization and fix it at the end by matching the surface gravity, which is defined up to a multiplicative constant, with the static black hole case. For the metric defined in (1), we get the surface gravity to be,
\[\kappa_{F}=\frac{C_{2}}{2\sqrt{2}}\bigg{(}e^{-\lambda/2}\sigma^{\prime}+e^{- \sigma/2}\dot{\lambda}\bigg{)} \tag{40}\]
where \(C_{2}\) is fixed by comparison with the known static scenario. For the case of dust, the metric (1) becomes that of a \(D\)-dimensional Lemaitre-Tolman-Bondi (LTB) model. Comparing with the \(D\)-dimensional LTB model [6], we get the metric coefficient \(\sigma=0\) and,
\[e^{\lambda}=R^{\prime 2} \tag{41}\]
We simplify the expression (40) using the above relation. We get,
\[\kappa_{F}=\frac{C_{2}}{2\sqrt{2}}\dot{\lambda} \tag{42}\]
Now, from expression (41) we have
\[\dot{\lambda}=\frac{2\dot{R}^{\prime}}{R^{\prime}}\]
Using the standard relation for the LTB models [6]
\[\dot{R}^{2}=\frac{F(r)}{R^{n-1}}+\frac{2\Lambda R^{2}}{n(n+1)} \tag{43}\]
where \(F(r)\) is a function only of the comoving radius \(r\) and has the interpretation of the Misner-Sharp mass: it is the total mass within a shell with comoving label \(r\). For marginally trapped and anti-trapped surfaces, it is shown in [6] that \(\dot{R}^{2}=1\) holds, where \(\dot{R}\) is the derivative of the area radius with respect to comoving time. For the marginally trapped case, we choose \(\dot{R}=-1\). We also have from reference [6],
\[\frac{F^{\prime}}{2R^{n}R^{\prime}}=\frac{\tilde{\kappa}\rho}{n} \tag{44}\]
Now, differentiating (43) with respect to \(r\), we arrive at the expression
\[2\dot{R}\dot{R}^{\prime}=\frac{F^{\prime}}{R^{(n-1)}}+\frac{4\Lambda RR^{ \prime}}{n(n+1)}-(n-1)\frac{FR^{\prime}}{R^{n}} \tag{45}\]
Dividing both sides by \(R^{\prime}\), we arrive at,
\[\frac{\dot{R}^{\prime}}{R^{\prime}}=R\bigg{(}\frac{-\tilde{\kappa}\rho}{n}-\frac{ 2\Lambda}{n(n+1)}+\frac{(n-1)F}{2R^{(n+1)}}\bigg{)} \tag{46}\]
For the marginally trapped region, we set \(\Theta_{k}=0\). This implies [6],
\[\frac{F}{R^{n+1}}=\frac{1}{R^{2}}-\frac{2\Lambda}{n(n+1)} \tag{47}\]
Using the above expression, equation (46) gives
\[\frac{\dot{R}^{\prime}}{R^{\prime}}=\frac{n-1}{2R}-R\bigg{(}\frac{\tilde{ \kappa}\rho}{n}+\frac{\Lambda}{n}\bigg{)} \tag{48}\]
Using this, we arrive at the expression for surface gravity to be,
\[\kappa_{F}=C_{2}\bigg{(}\frac{n-1}{2R}-\frac{R}{n}(\tilde{\kappa}\rho+\Lambda)\bigg{)} \tag{49}\]
Comparing with the \(D\)-dimensional Schwarzschild case, we get \(C_{2}=1\). We see that equation (49) matches the result in [12] if we set the dimension \(D=4\) (\(n=2\)) and put the cosmological constant to zero. In this sense, the result obtained is a generalization of the result in [12] to \(D\) dimensions with a cosmological constant term.
#### iv.3.1 Causal correlation of \(\kappa_{F}\)
To correlate with the causal nature of the marginally trapped surface, we compare the expression for the surface gravity with the formula for the causal nature. Since the formula obtained in the Fodor et al. case holds only for dust, we adapt the formula by setting the pressure to zero.
\[\frac{\kappa_{F}}{\beta_{k}}=\frac{\bigg{(}\frac{n-1}{2R}-\frac{R}{n}(\tilde{ \kappa}\rho+\Lambda)\bigg{)}}{\tilde{\kappa}\rho\bigg{(}\tilde{\kappa}\rho+2 \Lambda-\frac{n(n-1)}{R^{2}}\bigg{)}} \tag{50}\]
We can rearrange the denominator to arrive at
\[\frac{\kappa_{F}}{\beta_{k}}=\frac{-R\bigg{(}\frac{n-1}{2R}-\frac{R}{n}( \tilde{\kappa}\rho+\Lambda)\bigg{)}}{2n\tilde{\kappa}\rho\bigg{(}\frac{n-1}{ 2R}-\frac{R}{n}(\frac{\tilde{\kappa}\rho}{2}+\Lambda)\bigg{)}} \tag{51}\]
Again assuming that \(\rho+p>0\), we observe that the factor of 2 in the coefficient of the density term \(\rho\) prevents \(\kappa_{F}\) from being causally correlated: the ratio \(\frac{\kappa_{F}}{\beta_{k}}\) is not negative definite, and hence the relation with the causal nature of the marginally trapped region breaks down. However, \(\kappa_{K-H}\) and \(\kappa_{F}\) agree when the energy density is zero, and they approximately agree when the energy density is small or when the area radius of the marginally trapped region is small.
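To make the factor-of-2 mismatch concrete, the sketch below (ours, with illustrative values only) compares the dust-case Fodor formula (49) with the Kodama-Hayward formula (32) at \(p=0\); the two coincide for \(\rho=0\) and differ by \(R\tilde{\kappa}\rho/(2n)\) otherwise.

```python
def kappa_KH_dust(n, R, rho, Lam, kt):
    return 0.5 * ((n - 1) / R - (R / n) * (kt * rho + 2.0 * Lam))   # Eq. (32), p = 0, C_1 = 1/2

def kappa_F_dust(n, R, rho, Lam, kt):
    return (n - 1) / (2.0 * R) - (R / n) * (kt * rho + Lam)         # Eq. (49), C_2 = 1

n, Lam, kt = 2, 0.0, 1.0
for rho in (0.0, 0.01, 1.0):
    for R in (0.1, 1.0):
        # The difference is exactly R*kt*rho/(2*n): zero for rho = 0, small for small rho or R
        print(rho, R, kappa_KH_dust(n, R, rho, Lam, kt), kappa_F_dust(n, R, rho, Lam, kt))
```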
### Booth and Fairhurst method
The dynamical surface gravity is defined in [13]. This definition is tailored to the case of a slowly evolving horizon; the criteria for this are described in [13]. Following [13], we define the tangent to the marginally trapped curve to be \(V^{\alpha}=k^{\alpha}-\alpha_{k}l^{\alpha}\) and the normal to be \(\tau_{\alpha}=k_{\alpha}+\alpha_{k}l_{\alpha}\). The norms of these vectors, evaluated with the normalization \(k^{\alpha}l_{\alpha}=-2\), are given by \(\alpha_{k}\) (14) and \(\beta_{k}\) (13), respectively. We define the surface gravity to be,
\[\kappa_{BF}=-l^{\alpha}k^{\beta}k_{\alpha;\beta}-\alpha_{k}k^{\alpha}l^{\beta}l_{\alpha;\beta} \tag{52}\]
To evaluate the above expression, we have
\[l^{\beta}k^{\alpha}k_{\beta;\alpha}=-C_{3}\bigg{(}e^{-\lambda/2}\sigma^{\prime }+e^{-\sigma/2}\dot{\lambda}\bigg{)}\]
\[k^{\alpha}l^{\beta}l_{\alpha;\beta}=C_{3}\bigg{(}e^{-\lambda/2}\sigma^{\prime }-e^{-\sigma/2}\dot{\lambda}\bigg{)}\]
After setting the metric coefficient \(\sigma=0\), we get
\[l^{\beta}k^{\alpha}k_{\beta;\alpha}=-C_{3}\dot{\lambda} \tag{53}\]
and
\[k^{\alpha}l^{\beta}l_{\alpha;\beta}=-C_{3}\dot{\lambda} \tag{54}\]
Hence equation (52) gives us
\[\kappa_{BF}=C_{3}\bigg{(}1+\alpha_{k}\bigg{)}\dot{\lambda} \tag{55}\]
Just as in the previous section, we evaluate the surface gravity for the analytically tractable case of dust. We set the pressure to zero. We then obtain the equation for the scalar \(\alpha_{k}\) to be
\[\alpha_{k}=\frac{-\tilde{\kappa}\rho}{\tilde{\kappa}\rho+2\Lambda-\frac{n(n-1 )}{R^{2}}} \tag{56}\]
and \(\dot{\lambda}\) is evaluated in Fodor's method. This yields
\[\kappa_{BF}=\frac{C_{3}\bigg{(}\frac{n-1}{2R}-\frac{R}{n}(\tilde{\kappa}\rho+ \Lambda)\bigg{)}\bigg{(}2\Lambda-\frac{n(n-1)}{R^{2}}\bigg{)}}{\tilde{\kappa} \rho+2\Lambda-\frac{n(n-1)}{R^{2}}} \tag{57}\]
We see again that the expression matches the static \(D\)-dimensional Schwarzschild surface gravity when we set the density \(\rho=0\) and \(\Lambda=0\); this comparison sets the value \(C_{3}=1\). The result also matches the \(D\)-dimensional Schwarzschild-de Sitter surface gravity.
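As a quick numerical cross-check (ours, with arbitrary parameter values), the sketch below evaluates Eq. (57) with \(C_{3}=1\) and verifies that \(\rho=0\) reproduces the Schwarzschild-de Sitter value \((n-1)/2R-R\Lambda/n\).

```python
def kappa_BF(n, R, rho, Lam, kt):
    # Eq. (57) with C_3 = 1; kt stands for kappa-tilde
    num = ((n - 1) / (2.0 * R) - (R / n) * (kt * rho + Lam)) * (2.0 * Lam - n * (n - 1) / R**2)
    den = kt * rho + 2.0 * Lam - n * (n - 1) / R**2
    return num / den

# rho = 0 gives (n-1)/(2R) - R*Lambda/n, the Schwarzschild-de Sitter value,
# consistent with the normalisation C_3 = 1 chosen in the text.
n, R, Lam = 2, 1.5, 0.1
assert abs(kappa_BF(n, R, 0.0, Lam, 1.0) - ((n - 1) / (2 * R) - R * Lam / n)) < 1e-12
```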
#### iv.3.1 Causal correlation of the Booth-Fairhurst surface gravity
An inspection of (57) indicates that the denominator of the expression has the same sign as \(\beta_{k}\), but the numerator does not have a definite sign. This makes \(\kappa_{BF}\) not causally correlated. We note, however, that this definition of surface gravity was originally intended for the slowly evolving horizon scenario [13].
## IV Conclusions
In this article, we consider marginally trapped surfaces, which can be spacelike, timelike, or null depending on the local information at the horizon, i.e., the density \(\rho\), the pressure \(p\), the area radius \(R\), and the cosmological constant \(\Lambda\). We analyze the various proposals for the definition of surface gravity when the horizon is evolving and derive simple formulae for them. The Kodama-Hayward surface gravity for the general case is given in (32); for the FLRW case, we obtain the formula (34). We showed that both formulae are sensitive to the causal nature: the surface gravity is positive for dynamical horizons and negative for timelike tubes, and it transitions smoothly whenever the evolving horizon makes a transition from timelike to spacelike or vice versa. The formula obtained for the Kodama-Hayward surface gravity holds for any spherically symmetric situation, with or without a cosmological constant, for any density and pressure of the fluid. This definition of surface gravity therefore seems better suited to studying thermodynamic aspects of evolving horizons, since it is sensitive to the causal nature of the horizon. It also holds promise for a future analysis of the relation between the nature of the causal transitions and non-equilibrium thermodynamic state variables that could be defined on the horizons.
We obtained the formula for Hayward's trapping gravity in (37) and, for the FLRW case, in (38). From these expressions it is clear that this quantity is defined only for dynamical horizons and becomes imaginary for timelike tubes. We obtained the formula for the Fodor et al. surface gravity in (49); it holds for zero pressure and includes the cosmological constant. The case with pressure is analytically intractable, and a closed-form expression was not possible. Nevertheless, we could obtain the causal correlation for the Fodor et al. case and find that it does not match the causal description: the parameter space where Fodor's surface gravity changes sign differs from that where the evolving horizon changes causal character. We similarly obtained the formula for Booth and Fairhurst's proposal in (57). Again the expression was obtainable in closed form only for zero pressure. Just as in Fodor's case, the Booth-Fairhurst proposal does not correlate with the causal description of the evolving horizon, in the sense of changing the sign of the surface gravity when one goes from spacelike to timelike evolving horizons. These findings are therefore a crucial first step towards defining thermodynamic variables on evolving horizons.
## V Appendix
The nonzero components of the energy-momentum tensor are listed below
\[T_{00}=\rho e^{\sigma}\] \[T_{11}=pe^{\lambda}\] \[T_{22}=pR^{2}\] \[T_{(l+1\ l+1)}=\sin^{2}\theta_{(l-1)}T_{(ll)}\]
where \(l\) takes values from 2 to n. From the Bianchi identities
\[T^{\mu\nu}_{\ \ ;\nu}=0 \tag{58}\]
we get the following relations
\[\dot{\rho}+\frac{(\rho+p)}{2}\bigg{(}\frac{2n\dot{R}}{R}+\dot{\lambda}\bigg{)}=0 \tag{59}\]
\[p^{\prime}+\frac{\sigma^{\prime}}{2}(p+\rho)=0 \tag{60}\]
where the dot represents a derivative with time coordinate and the prime represents a derivative with the comoving radial coordinate.
For a nonzero cosmological constant (\(\Lambda\neq 0\)), the Einstein equations are
\[G_{\mu\nu}+\Lambda g_{\mu\nu}=\tilde{\kappa}T_{\mu\nu} \tag{61}\]
where \(\tilde{\kappa}\) is a constant related to the gravitational constant \(G_{n}\) by \(\tilde{\kappa}=8\pi G_{n}\). With these conditions, we evaluate the left-hand-side components of the Einstein equations (\(G_{\mu\nu}+\Lambda g_{\mu\nu}\)), which are summarized below
\[G_{00}+\Lambda g_{00}=\frac{e^{-\lambda}}{R^{2}}\bigg{[}\frac{n( n-1)}{2}(e^{\lambda+\sigma}+e^{\lambda}\dot{R}^{2}-e^{\sigma}R^{\prime 2})\] \[+\frac{n}{2}R(-2R^{\prime\prime}e^{\sigma}+e^{\sigma}R^{\prime} \lambda^{\prime}+e^{\lambda}\dot{R}\dot{\lambda})-\Lambda e^{\lambda+\sigma}R ^{2}\bigg{]} \tag{62}\]
\[G_{01}+\Lambda g_{01}=\frac{n}{2}\frac{(R^{{}^{\prime}}\dot{\lambda}-2\dot{R}^ {\prime}+\sigma^{\prime}\dot{R})}{R} \tag{63}\]
\[G_{11}+\Lambda g_{11}=\frac{e^{-\sigma}}{R^{2}}\bigg{[}-\frac{n(n-1)}{2}(e^{\lambda+\sigma}+e^{\lambda}\dot{R}^{2}-e^{\sigma}R^{\prime 2})\] \[+\frac{n}{2}R(e^{\sigma}R^{\prime}\sigma^{\prime}+e^{\lambda}(\dot{R}\dot{\sigma}-2\ddot{R}))+\Lambda e^{\lambda+\sigma}R^{2}\bigg{]} \tag{64}\]
\[G_{22}+\Lambda g_{22}=\frac{e^{-(\lambda+\sigma)}}{4}\bigg{[}2(n-1)(n-2)(e^{\sigma}R^{\prime 2}-e^{\lambda+\sigma}\] \[-e^{\lambda}\dot{R}^{2})-2(n-1)R(e^{\sigma}R^{\prime}(\lambda^{\prime}-\sigma^{\prime})-2e^{\sigma}R^{\prime\prime}\] \[+e^{\lambda}(\dot{R}(\dot{\lambda}-\dot{\sigma})+2\ddot{R}))+R^{2}(4e^{\lambda+\sigma}\Lambda-e^{\lambda}(2\ddot{\lambda}+\dot{\lambda}^{2}\] \[-\dot{\lambda}\dot{\sigma})+e^{\lambda}(2\sigma^{\prime\prime}+{\sigma^{\prime}}^{2}-\lambda^{\prime}\sigma^{\prime}))\bigg{]}\]
The other nonzero relations are given by
\[G_{(j+1\ j+1)}=\sin^{2}\theta_{(j-1)}G_{(jj)} \tag{65}\]
where \(j\) takes values from 2 to \(n\). So from the \(G_{01}=0\) Einstein equation we get
\[R^{\prime}\dot{\lambda}-2\dot{R}^{\prime}+\sigma^{\prime}\dot{R}=0 \tag{66}\]
from the \(G_{00}=\tilde{\kappa}\rho e^{\sigma}\) equation we have
\[\frac{n(n-1)}{2}(e^{\lambda+\sigma}+e^{\lambda}\dot{R}^{2}-e^{ \sigma}R^{\prime 2}) \tag{67}\] \[+\frac{n}{2}R(-2R^{\prime\prime}e^{\sigma}+e^{\sigma}R^{\prime} \lambda^{\prime}+e^{\lambda}\dot{R}\dot{\lambda})=(\tilde{\kappa}\rho+\Lambda)R ^{2}e^{\lambda+\sigma}\]
and \(G_{11}=\tilde{\kappa}pe^{\lambda}\) equation we get
\[-\frac{n(n-1)}{2}(e^{\lambda+\sigma}+e^{\lambda}\dot{R}^{2}-e^{\sigma}R^{\prime 2}) \tag{68}\] \[+\frac{n}{2}R(e^{\sigma}R^{\prime}\sigma^{\prime}+e^{\lambda}(\dot{R}\dot{\sigma}-2\ddot{R}))=(\tilde{\kappa}p-\Lambda)R^{2}e^{\lambda+\sigma}\]
The following two expressions will be useful in the subsequent calculations done in the paper. The sum (67) + (68) gives,
\[\frac{n}{2}\bigg{(}e^{\sigma}R^{\prime}(\sigma^{\prime}+\lambda^{\prime})+e^{\lambda}\dot{R}(\dot{\sigma}+\dot{\lambda})-2(e^{\lambda}\ddot{R}+e^{\sigma}R^{\prime\prime})\bigg{)}\] \[=\tilde{\kappa}(\rho+p)e^{(\lambda+\sigma)}R \tag{69}\]
similarly the difference (67) - (68) gives us
\[n(n-1)(e^{\lambda+\sigma}+e^{\lambda}\dot{R}^{2}-e^{\sigma}R^{\prime 2}) \tag{70}\] \[+\frac{nR}{2}(e^{\sigma}(R^{\prime}\lambda^{\prime}-2R^{\prime\prime}-R^{\prime}\sigma^{\prime})-e^{\lambda}(\dot{R}\dot{\sigma}-2\ddot{R}-\dot{R}\dot{\lambda}))\] \[=(\tilde{\kappa}(\rho-p)+2\Lambda)R^{2}e^{\lambda+\sigma}\]
|
2304.12375
|
The pointwise limit of metric integral operators approximating
set-valued functions
|
For set-valued functions (SVFs, multifunctions), mapping a compact interval
$[a,b]$ into the space of compact non-empty subsets of ${\mathbb R}^d$, we
study approximation based on the metric approach that includes metric linear
combinations, metric selections and weighted metric integrals. In our earlier
papers we considered convergence of metric Fourier approximations and metric
adaptations of some classical integral approximating operators for SVFs of
bounded variation with compact graphs. While the pointwise limit of a sequence
of these approximants at a point of continuity $x$ of the set-valued function
$F$ is $F(x)$, the limit set at a jump point was earlier described in terms of
the metric selections of the multifunction. Here we show that, under certain
assumptions on $F$, the limit set at $x$ equals the metric average of the left
and the right limits of $F$ at $x$, thus extending the case of real-valued
functions.
|
Elena E. Berdysheva, Nira Dyn, Elza Farkhi, Alona Mokhov
|
2023-04-24T18:12:28Z
|
http://arxiv.org/abs/2304.12375v1
|
# The pointwise limit of metric integral operators approximating set-valued functions
Elena E. Berdysheva, Nira Dyn, Elza Farkhi, Alona Mokhov
University of Cape Town, South Africa; Tel-Aviv University, School of Mathematical Sciences; Afeka, Tel-Aviv Academic College of Engineering
**Abstract.** For set-valued functions (SVFs, multifunctions), mapping a compact interval \([a,b]\) into the space of compact non-empty subsets of \(\mathbb{R}^{d}\), we study approximation based on the metric approach that includes metric linear combinations, metric selections and weighted metric integrals. In our earlier papers we considered convergence of metric Fourier approximations and metric adaptations of some classical integral approximating operators for SVFs of bounded variation with compact graphs. While the pointwise limit of a sequence of these approximants at a point of continuity \(x\) of the set-valued function \(F\) is \(F(x)\), the limit set at a jump point was earlier described in terms of the metric selections of the multifunction. Here we show that, under certain assumptions on \(F\), the limit set at \(x\) equals the metric average of the left and the right limits of \(F\) at \(x\), thus extending the case of real-valued functions.
**Key words:** Set-valued functions, functions of bounded variation, metric integral, metric approximation, integral operators, metric Fourier approximation, positive linear operators
**Mathematics Subject Classification 2020:** 26E25, 28B20, 41A35, 41A36, 42A20, 26A45
## 1 Introduction
In a series of works, we developed a metric approach to approximation of set-valued functions of bounded variation. We consider set-valued functions (SVFs, multifunctions) that map a compact interval \([a,b]\) into the space of compact non-empty subsets of \(\mathbb{R}^{d}\). Such functions find applications in different fields such as control theory, optimization, dynamical systems, mathematical economics, and, more recently, geometric modeling. For a general analysis of set-valued functions we refer to [3], and to [11] for the analysis of mappings of bounded variation.
Most of the earlier results on approximation of set-valued functions study methods for multifunctions with convex values; see, for example, [5, 6, 10, 12, 20, 21, 22, 26]. Standard tools to work with set-valued functions are the Minkowski linear combinations, the support function and the Aumann integral. Approximation methods based on these tools work well for set-valued functions with convex values, but fail to approximate functions with general, not necessarily convex values, due to the convexification phenomenon, observed first in [26] and extended in [14].
A breakthrough idea for approximating SVFs with general, not necessarily convex images is due to Artstein [2], who constructed piecewise linear approximants by connecting pairs of points that we term "metric pairs". In a series of works [13, 15, 16, 17, 19], the last three authors of this paper develop the metric approach to approximation of set-valued functions based on so-called metric chains (extending the notion of metric pairs), metric linear combinations and metric selections, and apply this approach to adapt many classical sample-based approximation operators to SVFs. In [18] they introduce the notion of the metric integral of bounded set-valued functions, which for SVFs of bounded variation can be represented by the collection of integrals of all the metric selections. The metric integral is extended to the weighted metric integral in [8]. The metric approach is applied by the authors in [7, 8, 9] to construct metric adaptations of a number of well-known approximation operators.
In [8] we prove pointwise convergence of metric trigonometric Fourier approximants. In [9] we study a metric adaptation of general approximating integral operators, in particular the Bernstein-Durrmeyer and the Kantorovich operators. In [8, 9] we show that sequences of metric integral operators converge pointwisely to the approximated multifunction \(F\) at points of continuity of \(F\). For \(x\) a point of discontinuity, we show pointwise convergence to a set \(A_{F}(x)\) described in terms of metric selections of \(F\). The latter description seems to us to
be quite unsatisfying, and one wishes to obtain a representation of \(A_{F}(x)\) in terms of \(F\). We achieve this goal in this paper, showing that under some assumptions on \(F\), the set \(A_{F}(x)\) is the metric average of the left and the right limits of \(F\) at \(x\), in full accordance with the case of real-valued functions.
The paper is organized as follows. In Section 2 we recall background information and basic concepts, whereas Subsection 2.1 also contains some new results on the behavior of metric pairs. In Section 3 we discuss relationships between the value of a set-valued function of bounded variation with a compact graph, and its one-sided limits. We also give a full proof of Proposition 7.2 from [8] whose proof in [8] was incomplete. Section 4 is devoted to the main result of the paper on the structure of the limit set \(A_{F}(x)\). Proofs of technical statements are given in the Appendix.
## 2 Preliminaries
In this section we review some notation and basic notions related to sets and set-valued functions as well as notions of regularity of functions in metric spaces.
### On sets and set-valued functions
All sets considered from now on are sets in \(\mathbb{R}^{d}\). We denote by \(\mathrm{K}(\mathbb{R}^{d})\) the collection of all compact non-empty subsets of \(\mathbb{R}^{d}\). The metric in \(\mathbb{R}^{d}\) is of the form \(\rho(u,v)=|u-v|\), where \(|\cdot|\) is any fixed norm on \(\mathbb{R}^{d}\). Recall that \(\mathbb{R}^{d}\) endowed with this metric is a complete metric space and that all norms on \(\mathbb{R}^{d}\) are equivalent.
To measure the distance between two non-empty sets \(A,B\in\mathrm{K}(\mathbb{R}^{d})\), we use the Hausdorff metric based on \(\rho\)
\[\mathrm{haus}(A,B)_{\rho}=\max\left\{\sup_{a\in A}\mathrm{dist}(a,B)_{\rho}, \,\sup_{b\in B}\mathrm{dist}(b,A)_{\rho}\right\},\]
where the distance from a point \(c\) to a set \(D\) is \(\mathrm{dist}(c,D)_{\rho}=\inf_{d\in D}\rho(c,d)\). It is well known that \(\mathrm{K}(\mathbb{R}^{d})\) endowed with the Hausdorff metric is a complete metric space [23, 25]. In the following, we keep the metric in \(\mathbb{R}^{d}\) fixed, and omit the notation \(\rho\) as a subscript.
We denote by \(|A|=\mathrm{haus}(A,\{0\})\) the "norm" of the set \(A\in\mathrm{K}(\mathbb{R}^{d})\). The set of projections of \(a\in\mathbb{R}^{d}\) on a set \(B\in\mathrm{K}(\mathbb{R}^{d})\) is \(\Pi_{B}(a)=\{b\in B\ :\ |a-b|=\mathrm{dist}(a,B)\}\), and the set of metric pairs of two sets \(A,B\in\mathrm{K}(\mathbb{R}^{d})\) is
\[\Pi\big{(}A,B\big{)}=\{(a,b)\in A\times B\ :\ a\in\Pi_{A}(b)\,\,\,\mathrm{or} \,\,b\in\Pi_{B}(a)\}.\]
Using metric pairs, we can rewrite
\[\mathrm{haus}(A,B)=\max\{|a-b|\ :\ (a,b)\in\Pi\big{(}A,B\big{)}\}. \tag{1}\]
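As an illustrative aside (not part of the paper), the following Python sketch computes projection sets, metric pairs and the Hausdorff distance for small finite subsets of \(\mathbb{R}^{2}\) and checks identity (1); the example sets and the brute-force implementation are ours.

```python
import numpy as np

def projections(a, B):
    """Points of B nearest to a, i.e. the projection set Pi_B(a)."""
    d = np.linalg.norm(B - a, axis=1)
    return B[np.isclose(d, d.min())]

def metric_pairs(A, B):
    """All metric pairs (a, b) with a in Pi_A(b) or b in Pi_B(a)."""
    pairs = set()
    for a in A:
        pairs |= {(tuple(a), tuple(b)) for b in projections(a, B)}
    for b in B:
        pairs |= {(tuple(a), tuple(b)) for a in projections(b, A)}
    return pairs

def hausdorff(A, B):
    one_sided = lambda X, Y: max(np.linalg.norm(Y - x, axis=1).min() for x in X)
    return max(one_sided(A, B), one_sided(B, A))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 2.0], [3.0, 0.0]])
pairs = metric_pairs(A, B)
# Identity (1): haus(A, B) equals the largest distance attained over the metric pairs
largest = max(np.linalg.norm(np.array(a) - np.array(b)) for a, b in pairs)
assert np.isclose(hausdorff(A, B), largest)
```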
Now we introduce different notions of limits of sequences of sets. We say that a sequence of sets \(\{A_{n}\}_{n=1}^{\infty}\) converges to a set \(A\) in the Hausdorff metric if \(\lim_{n\to\infty}\mathrm{haus}(A_{n},A)=0\).
The upper Kuratowski limit of a sequence of sets \(\{A_{n}\}_{n=1}^{\infty}\) is the set of all limit points of converging subsequences \(\{a_{n_{k}}\}_{k=1}^{\infty}\), where \(a_{n_{k}}\in A_{n_{k}}\), \(k\in\mathbb{N}\), namely
\[\limsup_{n\to\infty}A_{n}=\left\{a\ :\ \exists\,\{n_{k}\}_{k=1}^{\infty},\,n_{k+1 }>n_{k},\,k\in\mathbb{N},\ \exists\,a_{n_{k}}\in A_{n_{k}}\ \text{such that}\ \lim_{k\to\infty}a_{n_{k}}=a\right\}.\]
The lower Kuratowski limit of \(\{A_{n}\}_{n=1}^{\infty}\) is the set of all limit points of converging sequences \(\{a_{n}\}_{n=1}^{\infty}\), where \(a_{n}\in A_{n}\), namely,
\[\liminf_{n\to\infty}A_{n}=\left\{a\ :\ \exists\,a_{n}\in A_{n}\ \text{such that}\ \lim_{n\to\infty}a_{n}=a\right\}.\]
For a set-valued function \(F:[a,b]\to\mathrm{K}(\mathbb{R}^{d})\) the upper Kuratowski limit at \(\widetilde{x}\in[a,b]\) is
\[\limsup_{x\to\widetilde{x}}F(x)=\left\{y\ :\ \exists\,\{x_{k}\}_{k=1}^{\infty} \subset[a,b]\ \text{with}\ x_{k}\to\widetilde{x}\,\ \exists\,\{y_{k}\}_{k=1}^{\infty}\ \text{with}\ y_{k}\in F(x_{k}),k\in\mathbb{N},\ \text{and}\ y_{k}\to y\right\}.\]
The lower Kuratowski limit of \(F\) at \(\widetilde{x}\in[a,b]\) is
\[\liminf_{x\to\widetilde{x}}F(x)=\left\{y\ :\ \forall\,\{x_{k}\}_{k=1}^{\infty} \subset[a,b]\ \text{with}\ x_{k}\to\widetilde{x}\,\ \exists\,\{y_{k}\}_{k=1}^{\infty}\ \text{with}\ y_{k}\in F(x_{k}),k\in\mathbb{N},\ \text{and}\ y_{k}\to y\right\}.\]
A set \(A\) is a Kuratowski limit of \(F(x)\) as \(x\to\widetilde{x}\) if
\[A=\liminf_{x\to\widetilde{x}}F(x)=\limsup_{x\to\widetilde{x}}F(x).\]
The same relations hold also for sequences of sets. It is known that convergence in the Hausdorff metric and in the sense of Kuratowski are equivalent, if the underlying metric space is compact (see, e.g., [1, Section 4.4]). In the following the notion of a limit is understood in the sense of Hausdorff/Kuratowski.
Next, we discuss some properties of metric pairs.
**Lemma 2.1**.: _Let \(A,\dot{A},B\in\mathrm{K}(\mathbb{R}^{d})\), \(\dot{A}\subset A\). If \((a,b)\in\Pi\big{(}A,B\big{)}\) and \(a\in\dot{A}\subset A\), then \((a,b)\in\Pi\big{(}\dot{A},B\big{)}\)._
The proof is straightforward.
**Lemma 2.2**.: _Let \(\lim\limits_{n\to\infty}A_{n}=A\), \(\lim\limits_{n\to\infty}B_{n}=B\), \(A,B,A_{n},B_{n}\in\mathrm{K}(\mathbb{R}^{d})\), \((a_{n},b_{n})\in\Pi\big{(}A_{n},B_{n}\big{)}\), \(n\in\mathbb{N}\), and let \(\lim_{n\to\infty}a_{n}=a\), \(\lim_{n\to\infty}b_{n}=b\). Then \((a,b)\in\Pi\big{(}A,B\big{)}\)._
Proof.: Suppose, without loss of generality, that \(b_{n}\in\Pi_{B_{n}}(a_{n})\) for infinitely many \(n\in\mathbb{N}\). We prove that \(b\in\Pi_{B}(a)\) by contradiction. Assume that \(|b-a|>\mathrm{dist}(a,B)\), and take \(0<\varepsilon<|b-a|-\mathrm{dist}(a,B)\). There exists \(b^{*}\in B\) such that \(|a-b^{*}|=\mathrm{dist}(a,B)<|a-b|-\varepsilon\). Since \(\lim_{n\to\infty}B_{n}=B\), for \(n\) large enough there exists \(b_{n}^{*}\in\Pi_{B_{n}}(b^{*})\) such that \(|b^{*}-b_{n}^{*}|<\varepsilon/5\). Moreover, for \(n\) large enough we have \(|a_{n}-a|<\varepsilon/5\), \(|b_{n}-b|<\varepsilon/5\). For such \(n\) we have
\[|a_{n}-b_{n}^{*}| \leq|a_{n}-a|+|a-b^{*}|+|b^{*}-b_{n}^{*}|<\frac{\varepsilon}{5}+ |a-b|-\varepsilon+\frac{\varepsilon}{5}\] \[\leq\frac{-3\varepsilon}{5}+|a-a_{n}|+|a_{n}-b_{n}|+|b_{n}-b|< \frac{-3\varepsilon}{5}+\frac{\varepsilon}{5}+|a_{n}-b_{n}|+\frac{ \varepsilon}{5}=|a_{n}-b_{n}|-\frac{\varepsilon}{5}.\]
We obtain that \(b_{n}^{*}\) is closer to \(a_{n}\) than \(b_{n}\), which is in contradiction to the fact that \(b_{n}\in\Pi_{B_{n}}(a_{n})\).
**Corollary 2.3**.: _Let \(\lim\limits_{n\to\infty}A_{n}=A\), \(\lim\limits_{n\to\infty}B_{n}=B\), \(A,B,A_{n},B_{n}\in\mathrm{K}(\mathbb{R}^{d})\), \(n\in\mathbb{N}\). Then_
\[\limsup\limits_{n\to\infty}\Pi\big{(}A_{n},B_{n}\big{)}\subseteq\Pi\big{(}A,B \big{)}.\]
Proof.: Let \((a,b)\in\limsup\limits_{n\to\infty}\Pi\big{(}A_{n},B_{n}\big{)}\). By the definition of \(\limsup\) there is a strictly increasing sequence \(\{n_{k}\}_{k=1}^{\infty}\subseteq\mathbb{N}\) such that \((a_{n_{k}},b_{n_{k}})\in\Pi\big{(}A_{n_{k}},B_{n_{k}}\big{)}\) and \(\lim\limits_{k\to\infty}(a_{n_{k}},b_{n_{k}})=(a,b)\). From Lemma 2.2 it follows that \((a,b)\in\Pi\big{(}A,B\big{)}\).
We recall the notions of a metric chain and of a metric linear combination [18].
**Definition 2.4**.: _[_18_]_ _Given a finite sequence of sets \(A_{0},\ldots,A_{n}\in\mathrm{K}(\mathbb{R}^{d})\), \(n\geq 1\), a metric chain of \(A_{0},\ldots,A_{n}\) is an \((n+1)\)-tuple \((a_{0},\ldots,a_{n})\) such that \((a_{i},a_{i+1})\in\Pi\big{(}A_{i},A_{i+1}\big{)}\), \(i=0,1,\ldots,n-1\). The collection of all metric chains of \(A_{0},\ldots,A_{n}\) is denoted by_
\[\mathrm{CH}(A_{0},\ldots,A_{n})=\left\{(a_{0},\ldots,a_{n})\ :\ (a_{i},a_{i+1})\in\Pi \big{(}A_{i},A_{i+1}\big{)},\ i=0,1,\ldots,n-1\right\}.\]
_The metric linear combination of the sets \(A_{0},\ldots,A_{n}\in\mathrm{K}(\mathbb{R}^{d})\), \(n\geq 1\), is_
\[\bigoplus\limits_{i=0}^{n}\lambda_{i}A_{i}=\left\{\sum\limits_{i=0}^{n} \lambda_{i}a_{i}\ :\ (a_{0},\ldots,a_{n})\in\mathrm{CH}(A_{0},\ldots,A_{n})\right\},\quad\lambda_{0}, \ldots,\lambda_{n}\in\mathbb{R}.\]
_In case \(n=1\) we write \(\lambda_{0}A_{0}\oplus\lambda_{1}A_{1}\)._
**Remark 2.5**.: _For any \(j\in\mathbb{N}\), \(0\leq j\leq n\) and for any \(a\in A_{j}\) there exists a metric chain \((a_{0},\ldots,a_{n})\in\mathrm{CH}(A_{0},\ldots,A_{n})\) such that \(a_{j}=a\). For a possible construction see [15], Figure 3.2._
Note that the metric linear combination depends on the order of the sets, in contrast to the Minkowski linear combination of sets which is defined by
\[\sum\limits_{i=0}^{n}\lambda_{i}A_{i}=\left\{\sum\limits_{i=0}^{n}\lambda_{i}a_{ i}\ :\ a_{i}\in A_{i}\right\},\quad n\geq 1.\]
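A tiny one-dimensional example (ours, with made-up sets) makes the difference concrete: for finite subsets of \(\mathbb{R}\), the metric linear combination of Definition 2.4 uses only metric pairs and here turns out to be a strict subset of the Minkowski combination.

```python
def metric_pairs_1d(A, B):
    """Brute-force metric pairs of finite subsets of the real line."""
    proj = lambda x, S: {s for s in S if abs(s - x) == min(abs(t - x) for t in S)}
    return {(a, b) for a in A for b in proj(a, B)} | {(a, b) for b in B for a in proj(b, A)}

A0, A1 = {0.0, 1.0}, {0.0, 2.0}
metric_comb = {0.5 * a + 0.5 * b for (a, b) in metric_pairs_1d(A0, A1)}
minkowski   = {0.5 * a + 0.5 * b for a in A0 for b in A1}
print(sorted(metric_comb))   # [0.0, 0.5, 1.5]
print(sorted(minkowski))     # [0.0, 0.5, 1.0, 1.5]
```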
### Notions of regularity of functions with values in a metric space
In this section we discuss functions defined on a fixed compact interval \([a,b]\subset\mathbb{R}\) with values in a complete metric space \((X,\rho)\), where \(X\) is either \(\mathbb{R}^{d}\) or \(\mathrm{K}(\mathbb{R}^{d})\).
We recall the notion of the variation of \(f:[a,b]\to X\). Let \(\chi=\{x_{0},\ldots,x_{n}\}\), \(a=x_{0}<x_{1}<\cdots<x_{n}=b\), be a partition of the interval \([a,b]\) with the norm
\[|\chi|=\max\limits_{0\leq i\leq n-1}(x_{i+1}-x_{i}).\]
The variation of \(f\) on the partition \(\chi\) is defined as \(V(f,\chi)=\sum_{i=1}^{n}\rho(f(x_{i}),f(x_{i-1}))\,.\) The total variation of \(f\) on \([a,b]\) is
\[V_{a}^{b}(f)=\sup_{\chi}V(f,\chi),\]
where the supremum is taken over all partitions of \([a,b]\).
A function \(f\) is said to be of bounded variation on \([a,b]\) if \(V_{a}^{b}(f)<\infty\). We call functions of bounded variation BV functions and write \(f\in\mathrm{BV}[a,b]\).
For \(f\in\mathrm{BV}[a,b]\) the function \(v_{f}:[a,b]\to\mathbb{R},\;v_{f}(x)=V_{a}^{x}(f)\) is called the variation function of \(f\). Note that
\[V_{z}^{x}(f)=v_{f}(x)-v_{f}(z)\quad\text{for}\quad a\leq z<x\leq b,\]
and that \(v_{f}\) is monotone non-decreasing.
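As an illustrative aside (ours), the sketch below computes \(V(F,\chi)\) for a made-up piecewise-constant multifunction with one jump, with the Hausdorff metric playing the role of \(\rho\); as the partition is refined, the value stabilizes at the total variation.

```python
import numpy as np

def haus_1d(A, B):
    """Hausdorff distance between finite subsets of the real line."""
    one_sided = lambda X, Y: max(min(abs(x - y) for y in Y) for x in X)
    return max(one_sided(A, B), one_sided(B, A))

def F(x):
    # Made-up multifunction with a single jump at x = 0.5
    return [0.0, 1.0] if x < 0.5 else [0.0, 2.0]

def variation_on_partition(F, chi):
    return sum(haus_1d(F(chi[i]), F(chi[i - 1])) for i in range(1, len(chi)))

for k in (2, 4, 8, 16):
    chi = np.linspace(0.0, 1.0, k + 1)
    print(k, variation_on_partition(F, chi))   # stabilises at haus({0,1},{0,2}) = 1
```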
We recall the notion of the local modulus of continuity [24], which is central to the approximation of functions at continuity points. For \(f:[a,b]\to X\) the local modulus of continuity at \(x^{*}\in[a,b]\) is
\[\omega\big{(}f,x^{*},\delta\big{)}=\sup\left\{\,\rho(f(x_{1}),f(x_{2})):\;x_{ 1},x_{2}\in[x^{*}-\delta/2,x^{*}+\delta/2]\cap[a,b]\,\right\},\quad\delta>0.\]
The left and the right local moduli of continuity of \(f\) at \(x^{*}\in[a,b]\) are defined respectively by
\[\omega^{-}\big{(}f,x^{*},\delta\big{)}=\sup\left\{\rho(f(x),f(x^{*}))\;:\;x \in[x^{*}-\delta,x^{*}]\cap[a,b]\right\},\quad\delta>0,\]
and
\[\omega^{+}(f,x^{*},\delta)=\sup\left\{\rho(f(x),f(x^{*}))\;:\;x\in[x^{*},x^{*} +\delta]\cap[a,b]\right\},\quad\delta>0.\]
It follows from the definition of the variation that
**Result 2.6**.: _A function \(f:[a,b]\to X\), \(f\in\mathrm{BV}[a,b]\) is left continuous at \(x^{*}\in(a,b]\) if and only if \(v_{f}\) is left continuous at \(x^{*}\). The function \(f\) is right continuous at \(x^{*}\in[a,b)\) if and only if \(v_{f}\) is right continuous at \(x^{*}\)._
A function \(f:[a,b]\to X\) of bounded variation with values in a complete metric space \((X,\rho)\) is not necessarily continuous, but has right and left limits at any point \(x\)[11]. We denote the one-sided limits by
\[f(x+)=\lim_{t\to x,\,t>x}f(t),\quad f(x-)=\lim_{t\to x,\,t<x}f(t).\]
From now on we write \(\lim_{t\to x+},\;\lim_{t\to x-}\) instead of \(\lim_{t\to x,\,t>x},\;\lim_{t\to x,\,t<x}\), respectively.
In [8], we introduced the notion of the left and right local quasi-moduli. For a function \(f:[a,b]\to X\) of bounded variation, the left local quasi-modulus at point \(x^{*}\) is
\[\varpi^{-}\big{(}f,x^{*},\delta\big{)}=\sup\big{\{}\rho(f(x^{*}-),f(x))\;:\;x \in[x^{*}-\delta,x^{*})\cap[a,b]\big{\}},\quad\delta>0\;,\;x^{*}\in(a,b].\]
Similarly, the right local quasi-modulus is
\[\varpi^{+}\big{(}f,x^{*},\delta\big{)}=\sup\left\{\rho(f(x^{*}+),f(x))\;:\;x \in(x^{*},x^{*}+\delta]\cap[a,b]\right\},\quad\delta>0\;,\;x^{*}\in[a,b).\]
Clearly, for \(f\in\mathrm{BV}[a,b]\) the local quasi-moduli satisfy
\[\lim_{\delta\to 0^{+}}\varpi^{-}\big{(}f,x^{*},\delta\big{)}=0,\quad x^{*}\in(a, b],\quad\text{and}\quad\lim_{\delta\to 0^{+}}\varpi^{+}\big{(}f,x^{*},\delta\big{)}=0, \quad x^{*}\in[a,b).\]
**Result 2.7**.: _[_9_, Lemma 2.5]_ _Let \(f:[a,b]\to X\), \(f\in\mathrm{BV}[a,b]\), then for any \(x^{*}\in(a,b]\) or \([a,b)\), respectively, and \(\delta>0\) we have_
\[\varpi^{-}\big{(}f,x^{*},\delta\big{)}\leq\varpi^{-}\big{(}v_{f},x^{*},\delta \big{)},\quad\varpi^{+}\big{(}f,x^{*},\delta\big{)}\leq\varpi^{+}\big{(}v_{f},x^{*},\delta\big{)}.\]
### Metric selections of set-valued functions
We consider set-valued functions (SVFs, multifunctions) mapping a compact interval \([a,b]\subset\mathbb{R}\) to \(\mathrm{K}(\mathbb{R}^{d})\). The graph of a multifunction \(F\) is the set of points in \(\mathbb{R}^{d+1}\)
\[\mathrm{Graph}(F)=\{(x,y)\;:\;y\in F(x),\;x\in[a,b]\}\,.\]
It is easy to see that if \(F\in\mathrm{BV}[a,b]\) then \(\mathrm{Graph}(F)\) is a bounded set and
\[\|F\|_{\infty}=\sup_{x\in[a,b]}|F(x)|<\infty.\]
We denote the class of SVFs of bounded variation with compact graphs by \(\mathcal{F}[a,b]\).
For a set-valued function \(F:[a,b]\to\mathrm{K}(\mathbb{R}^{d})\), a single-valued function \(s:[a,b]\to\mathbb{R}^{d}\) such that \(s(x)\in F(x)\) for all \(x\in[a,b]\) is called a selection of \(F\).
The notions of chain functions and metric selections are among central notions in our work. Given a multifunction \(F:[a,b]\to\mathrm{K}(\mathbb{R}^{d})\), a partition \(\chi=\{x_{0},\ldots,x_{n}\}\subset[a,b]\), \(a=x_{0}<\cdots<x_{n}=b\), and a corresponding metric chain \(\phi=(y_{0},\ldots,y_{n})\in\mathrm{CH}\left(F(x_{0}),\ldots,F(x_{n})\right)\) (see Definition 2.4), the **chain function** based on \(\chi\) and \(\phi\) is
\[c_{\chi,\phi}(x)=\left\{\begin{array}{ll}y_{i},&x\in[x_{i},x_{i+1}),\quad i =0,\ldots,n-1,\\ y_{n},&x=x_{n}.\end{array}\right. \tag{2}\]
A selection \(s\) of \(F\) is called a **metric selection**, if there is a sequence of chain functions \(\{c_{\chi_{k},\phi_{k}}\}_{k\in\mathbb{N}}\) of \(F\) with \(\lim_{k\to\infty}|\chi_{k}|=0\) such that
\[s(x)=\lim_{k\to\infty}c_{\chi_{k},\phi_{k}}(x)\quad\text{pointwisely on }[a,b].\]
We denote the set of all metric selections of \(F\) by \(\mathcal{S}(F)\).
Note that the definitions of chain functions and metric selections imply that a metric selection \(s\) of a multifunction \(F\) is constant in any open interval where the graph of \(s\) stays in the interior of \(\mathrm{Graph}(F)\).
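The following sketch (ours, not taken from the paper) builds chain functions of a simple made-up multifunction by projecting the previous value onto the next set of a uniform partition; since projections yield metric pairs, the resulting tuples are valid metric chains, and refining the partition and passing to pointwise limits is how metric selections arise.

```python
import numpy as np

def F(x):
    # Made-up multifunction with a jump at x = 0.5
    return [0.0] if x < 0.5 else [-1.0, 1.0]

def chain_values(partition, y0):
    """Values (y_0, ..., y_n) of one chain function (2): each y_{i+1} is a
    projection of y_i on F(x_{i+1}), hence (y_i, y_{i+1}) is a metric pair."""
    ys = [y0]
    for x in partition[1:]:
        ys.append(min(F(x), key=lambda y: abs(y - ys[-1])))
    return np.array(ys)

for k in (4, 8, 16):
    chi = np.linspace(0.0, 1.0, k + 1)
    print(k, chain_values(chi, y0=0.0))   # constant 0 before the jump, then -1
```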
Below we quote several relevant results from [18] and [8].
**Result 2.8**.: _[_18_, Theorem 3.6]_
_Let \(s\) be a metric selection of \(F\in\mathcal{F}[a,b]\). Then \(V_{a}^{b}(s)\leq V_{a}^{b}(F)\) and \(\|s\|_{\infty}\leq\|F\|_{\infty}\)._
**Result 2.9**.: _[_18_, Corollary 3.7]_ _Let \(F\in\mathcal{F}[a,b]\). Through any point \(\alpha\in\mathrm{Graph}(F)\) there exists a metric selection which we denote by \(s_{\alpha}\). Moreover, \(F\) has a representation by metric selections, namely_
\[F(x)=\{s_{\alpha}(x)\ :\ \alpha\in\mathrm{Graph}(F)\},\quad x\in[a,b].\]
The last result implies
**Corollary 2.10**.: _For \(F\in\mathcal{F}[a,b]\),_
\[F(x)=\{s(x)\ :\ s\in\mathcal{S}(F)\},\quad x\in[a,b].\]
**Result 2.11**.: _[_8_, Theorem 4.9]_
_Let \(F\in\mathcal{F}[a,b]\), \(s\) be a metric selection of \(F\) and \(x^{*}\in[a,b]\). Then_
\[\omega\big{(}s,x^{*},\delta\big{)}\leq\omega\big{(}v_{F},x^{*},2\delta\big{)}, \quad\delta>0.\]
_In particular, if \(F\) is continuous at \(x^{*}\), then \(s\) is continuous at \(x^{*}\)._
## 3 The one-sided limits of SVFs and their metric selections
In this section we study relationships between the value \(F(x)\) of \(F\in\mathcal{F}[a,b]\) and its one-sided limits at \(x\in(a,b)\). We recall and refine some relevant results from Section 7 of our recent paper [8].
**Result 3.1**.: _[_8_, Proposition 7.1]_
_Let \(F\in\mathcal{F}[a,b]\) and \(x\in(a,b)\), then_
\[F(x-)\cup F(x+)\subseteq F(x).\]
The next result is stated in [8], however, the proof in [8] is incomplete. Here we provide a constructive method of proof that is applicable, with obvious minor changes, to each of the two claims.
**Proposition 3.2**.: _[_8_, Proposition 7.2]_
_For \(F\in\mathcal{F}[a,b]\)_
_(i)_ \(F(x-)=\{s(x-)\ :\ s\in\mathcal{S}(F)\},\quad x\in(a,b]\)_,_
_(ii)_ \(F(x+)=\{s(x+)\ :\ s\in\mathcal{S}(F)\},\quad x\in[a,b).\)__
Proof.: (i) Fix \(x^{*}\in(a,b]\). The inclusion \(\{s(x^{*}-):\ s\in\mathcal{S}(F)\}\subseteq F(x^{*}-)\) follows from the fact that \(F(x^{*}-)\) is the Kuratowski upper limit \(\limsup_{x\to x^{*}-}F(x)=\limsup_{x\to x^{*}-}\{s(x):\ s\in\mathcal{S}(F)\}\).
Now we prove the reverse inclusion, i.e. that \(F(x^{*}-)\subseteq\{s(x^{*}-)\ :\ s\in\mathcal{S}(F)\}\). Let \(y^{*}_{-}\in F(x^{*}-)\); in view of Result 3.1, \(y^{*}_{-}\in F(x^{*})\). We have to show that there exists \(s\in\mathcal{S}(F)\) such that \(y^{*}_{-}=s(x^{*}-)\). We construct a
sequence of chain functions converging to the required \(s\) pointwisely. Let \(\{\chi_{n}\}_{n\in\mathbb{N}}\) be a sequence of partitions of \([a,b]\) such that
\[x^{*}\in\chi_{n},\,\forall n\in\mathbb{N}\quad\text{and}\quad\lim_{n\to\infty}| \chi_{n}|=0,\]
and let \(\xi_{n}^{-},\,\xi_{n}^{+}\) be the closest points to \(x^{*}\) in the partition \(\chi_{n}\) from the left and from the right respectively. Let \(\{c_{n}\}_{n\in\mathbb{N}}\) be a sequence of chain functions of \(F\) based on \(\{\chi_{n}\}_{n\in\mathbb{N}}\), satisfying for all \(n\in\mathbb{N}\):
\[c_{n}(x^{*})=y_{-}^{*},\quad c_{n}(\xi_{n}^{-})\in\Pi_{F(\xi_{n}^{-})}(y_{-}^{ *}),\quad c_{n}(\xi_{n}^{+})\in\Pi_{F(\xi_{n}^{+})}(y_{-}^{*}).\]
For a construction of chain functions with these properties see Definition 2.4 and (2). By Helly's Selection Principle there exists a subsequence of \(\{c_{n}\}_{n\in\mathbb{N}}\) converging to a metric selection \(s\) pointwisely for each \(x\in[a,b]\). By construction we have \(\lim_{n\to\infty}c_{n}(x^{*})=y_{-}^{*}=s(x^{*})\).
Define a multifunction \(\widetilde{F}^{-}\in\mathcal{F}[a,b]\) by
\[\widetilde{F}^{-}(t)=\left\{\begin{array}{ll}F(t),&t\neq x^{*},\\ F(x^{*}-),&t=x^{*}.\end{array}\right.\]
Clearly, \(\widetilde{F}^{-}\) is left continuous at \(x^{*}\) and therefore \(v_{\widetilde{F}^{-}}\) is left continuous at \(x^{*}\) as well (see Result 2.6). By construction \(s\in\mathcal{S}(F)\) and also \(s\in\mathcal{S}(\widetilde{F}^{-})\), since \(y_{-}^{*}\in F(x^{*})\) and \(y_{-}^{*}\in\widetilde{F}^{-}(x^{*})\), and the above chain functions of \(F\) are also chain functions of \(\widetilde{F}^{-}\).
To prove that \(y_{-}^{*}=s(x^{*}-)\), we estimate \(|s(x^{*}-\delta)-y_{-}^{*}|\), for \(0<\delta<x^{*}-a\),
\[|s(x^{*}-\delta)-y_{-}^{*}|=|s(x^{*}-\delta)-s(x^{*})|\leq V_{x^{*}-\delta}^{ x^{*}}(s)\leq V_{x^{*}-\delta}^{x^{*}}(\widetilde{F}^{-})\leq\omega^{-} \big{(}v_{\widetilde{F}^{-}},x^{*},\delta\big{)},\]
where the second inequality follows from Result 2.8. Since \(\omega^{-}\big{(}v_{\widetilde{F}^{-}},x^{*},\delta\big{)}\) tends to zero as \(\delta\to 0^{+}\), we get
\[s(x^{*}-)=\lim_{\delta\to 0^{+}}s(x^{*}-\delta)=y_{-}^{*}=s(x^{*})\in F(x^{*}-).\]
In particular \(s\) is left continuous at \(x^{*}\).
The proof of (ii) is similar with obvious minor changes such as replacing \(y_{-}^{*}\in F(x^{*}-)\) by \(y_{+}^{*}\in F(x^{*}+)\) and \(\widetilde{F}^{-}\) by \(\widetilde{F}^{+}\). In this case the constructed metric selection is right continuous at \(x^{*}\).
A direct conclusion from Proposition 3.2 is
**Corollary 3.3**.: _In the notation and assumptions of Proposition 3.2_
* _If_ \(y^{*}\in F(x^{*}-)\cap F(x^{*}+)\)_, then there exists a metric selection_ \(s\) _satisfying_ \(s(x^{*})=y^{*}\) _which is continuous at_ \(x^{*}\)_._
* _If_ \(y^{*}\in F(x^{*}-)\setminus F(x^{*}+)\)_, then there exists a metric selection_ \(s\) _satisfying_ \(s(x^{*})=y^{*}\) _which is left continuous at_ \(x^{*}\)_._
* _If_ \(y^{*}\in F(x^{*}+)\setminus F(x^{*}-)\)_, then there exists a metric selection_ \(s\) _satisfying_ \(s(x^{*})=y^{*}\) _which is right continuous at_ \(x^{*}\)_._
**Remark 3.4**.: _While for \(F\) which is left continuous at \(x^{*}\)**all** its metric selections are left continuous at \(x^{*}\) as well (see Theorem 4.7 in [8]), in the proof of (ii) in Proposition 3.2 we obtain that for \(F\) which is right continuous at \(x^{*}\)**there exists** a metric selection which is right continuous at \(x^{*}\)._
**Lemma 3.5**.: _Let \(F\in\mathcal{F}[a,b]\). If \((x^{*},y^{*})\) is an interior point of \(\operatorname{Graph}(F)\), then **any** metric selection satisfying \(s(x^{*})=y^{*}\) is continuous at \(x^{*}\) and is constant in a small neighborhood of \(x^{*}\)._
Proof.: Since \((x^{*},y^{*})\) is an interior point of \(\operatorname{Graph}(F)\) there exists a small open neighborhood of this point, \(I_{x^{*}}\times I_{y^{*}}\), in the interior of \(\operatorname{Graph}(F)\). Thus \(y^{*}\in I_{y^{*}}\subset F(x)\) for all \(x\in I_{x^{*}}\) and therefore for any \(y_{1},y_{2}\in I_{y^{*}}\), \((y_{1},y_{2})\in\Pi\big{(}F(x_{1}),F(x_{2})\big{)}\) for some \(x_{1},x_{2}\in I_{x^{*}}\) if and only if \(y_{1}=y_{2}\). Hence any chain function, with graph passing through \(I_{x^{*}}\times I_{y^{*}}\), is constant there. In particular, any metric selection \(s\) satisfying \(s(x^{*})=y^{*}\), being the pointwise limit of chain functions, is also constant in \(I_{x^{*}}\).
**Remark 3.6**.: _If \(y^{*}\in F(x^{*}-)\cap F(x^{*}+)\), but \((x^{*},y^{*})\) is not an interior point of \(\operatorname{Graph}(F)\), then a metric selection \(s\) satisfying \(s(x^{*})=y^{*}\) is not necessarily continuous at \(x^{*}\)._
## 4 The limit set \(A_{F}(x)\)
In our recent research, the following set surfaced, for a function \(F\in\mathcal{F}[a,b]\),
\[A_{F}(x)=\left\{\frac{1}{2}\left(s(x-)+s(x+)\right)\ :\ s\in\mathcal{S}(F) \right\},\ x\in(a,b).\]
This set appeared as the limit set of sequences of metric Fourier approximations [8], and of metric integral approximation operators [9].
The set \(A_{F}(x)\) extends the well known limit, \(\frac{1}{2}\left(f(x-)+f(x+)\right)\), of many integral approximation operators and of Fourier approximations from real-valued functions to set-valued ones.
In [8] we conjectured that under certain assumptions on \(F\) one has
\[A_{F}(x)=\frac{1}{2}F(x-)\oplus\frac{1}{2}F(x+) \tag{3}\]
with \(\oplus\) as in Definition 2.4. Notice that (3) does not hold for all \(F\in\mathcal{F}[a,b]\). See the examples below, taken from [8, Section 7].
**Remark 4.1**.: _For \(x\in(a,b)\) a point of continuity of \(F\), we have_
\[A_{F}(x)=F(x)=\frac{1}{2}F(x-)\oplus\frac{1}{2}F(x+).\]
_Indeed, by Result 2.11 all metric selections of \(F\) are continuous at \(x\), therefore_
\[A_{F}(x)=\left\{s(x):\ s\in\mathcal{S}(F)\right\},\ x\in(a,b),\]
_and then, by Corollary 2.10, \(A_{F}(x)=F(x)\)._
_Moreover, since \(F(x-)=F(x+)=F(x)\) we get \(\frac{1}{2}F(x-)\oplus\frac{1}{2}F(x+)=F(x)\)._
Now we formulate two properties of \(F\in\mathcal{F}[a,b]\) that are sufficient to guarantee (3) also at discontinuity points.
**Property 1** (Minimality of \(F\))
We say that \(F\in\mathcal{F}[a,b]\) has Property 1 at \(\xi\in(a,b)\) if
\[F(\xi)=F(\xi-)\cup F(\xi+).\]
**Remark 4.2**.:
_(i) The name "Minimality of \(F\)" reflects Result 3.1. (ii) Property 1 holds at continuity points of \(F\in\mathcal{F}[a,b]\)._
**Property 2**
We say that \(F\in\mathcal{F}[a,b]\) satisfies Property 2 at \(\xi\in(a,b)\), if for each pair \((y^{-},y^{+})\in\Pi\big{(}F(\xi-),F(\xi+)\big{)}\) there exist four sequences \(\{\xi_{n}^{-}\}_{n\in\mathbb{N}}\), \(\{\xi_{n}^{+}\}_{n\in\mathbb{N}}\), \(\{y_{n}^{-}\}_{n\in\mathbb{N}}\), \(\{y_{n}^{+}\}_{n\in\mathbb{N}}\), such that
\[(y_{n}^{-},y_{n}^{+})\in\Pi\big{(}F(\xi_{n}^{-}),F(\xi_{n}^{+})\big{)},\ n\in \mathbb{N},\]
where \(\xi_{n}^{-}<\xi<\xi_{n}^{+}\), \(\lim_{n\to\infty}\xi_{n}^{-}=\xi=\lim_{n\to\infty}\xi_{n}^{+}\), \(\lim_{n\to\infty}y_{n}^{-}=y^{-}\), \(\lim_{n\to\infty}y_{n}^{+}=y^{+}\).
**Remark 4.3**.: _Property 2 can be written equivalently as_
\[\Pi\big{(}F(\xi-),F(\xi+)\big{)}\subseteq\limsup_{(x,z)\to(\xi-,\xi+)}\Pi \big{(}F(x),F(z)\big{)}.\]
_(For the definition of \(\limsup\) see Section 2.1.) Recall that for \(F\in\mathcal{F}[a,b]\) the inverse inclusion_
\[\limsup_{(x,z)\to(\xi-,\xi+)}\Pi\big{(}F(x),F(z)\big{)}\subseteq\Pi\big{(}F( \xi-),F(\xi+)\big{)}\]
_follows from Corollary 2.3. Thus Property 2 implies the equality_
\[\limsup_{(x,z)\to(\xi-,\xi+)}\Pi\big{(}F(x),F(z)\big{)}=\Pi\big{(}F(\xi-),F( \xi+)\big{)}.\]
**Lemma 4.4**.: _If \(\xi\in(a,b)\) is a point of continuity of \(F\in\mathcal{F}[a,b]\), then \(F\) has Property 2 at \(\xi\)._
Proof.: Since \(F(\xi)=F(\xi-)=F(\xi+)\), any metric pair in \(\Pi\big{(}F(\xi-),F(\xi+)\big{)}\) is of the form \((y,y)\), with \(y\in F(\xi)\). For a fixed \((y,y)\in\Pi\big{(}F(\xi-),F(\xi+)\big{)}\), take arbitrary sequences \(\{\xi_{n}^{-}\}_{n\in\mathbb{N}}\), \(\{\xi_{n}^{+}\}_{n\in\mathbb{N}}\) satisfying \(\xi_{n}^{-}<\xi<\xi_{n}^{+}\), \(\lim_{n\to\infty}\xi_{n}^{-}=\xi=\lim_{n\to\infty}\xi_{n}^{+}\). Let \(y_{n}^{-}\in\Pi_{F(\xi_{n}^{-})}(y)\), \(y_{n}^{+}\in\Pi_{F(\xi_{n}^{+})}(y_{n}^{-})\), \(n\in\mathbb{N}\). By construction \((y_{n}^{-},y_{n}^{+})\in\Pi\big{(}F(\xi_{n}^{-}),F(\xi_{n}^{+})\big{)}\). Since \((y_{n}^{-},y)\in\Pi\big{(}F(\xi_{n}^{-}),F(\xi)\big{)}\), by (1) and the continuity of \(F\) at \(\xi\) we obtain \(|y_{n}^{-}-y|\leq\operatorname{haus}(F(\xi_{n}^{-}),F(\xi))\to 0\) as \(n\to\infty\).
Using the triangle inequality and (1), we have
\[|y_{n}^{+}-y|\leq|y_{n}^{+}-y_{n}^{-}|+|y_{n}^{-}-y|\leq\operatorname{haus}(F (\xi_{n}^{+}),F(\xi_{n}^{-}))+\operatorname{haus}(F(\xi_{n}^{-}),F(\xi)).\]
By the triangle inequality for \(\operatorname{haus}(F(\xi_{n}^{+}),F(\xi_{n}^{-}))\) and the continuity of \(F\) at \(\xi\) we obtain
\[|y_{n}^{+}-y|\leq\operatorname{haus}(F(\xi_{n}^{+}),F(\xi))+2\operatorname{haus }(F(\xi),F(\xi_{n}^{-}))\to 0\quad\text{as}\quad n\to\infty,\]
implying that \(\lim_{n\to\infty}y_{n}^{+}=y\).
The next two examples, taken from [8, Section 7], show that Property 1 and Property 2 are not necessarily satisfied by all functions in \(\mathcal{F}[a,b]\).
The first example ([8, Example 7.3]) presents a multifunction \(F:[a,b]\to\operatorname{K}(\mathbb{R}^{2})\), \(F\in\mathcal{F}[a,b]\) that does not satisfy Property 1 at a given \(\xi\in(a,b)\). Consider
\[F(t)=\begin{cases}B(-2,2),&t\in[a,\xi),\\ B(-2,2)\cup\{(0,0)\}\cup B(2,2),&t=\xi,\\ B(2,2),&t\in(\xi,b],\end{cases}\]
where \(B(x_{1},x_{2})\) denotes the closed unit disc with center at \((x_{1},x_{2})\). For its metric selection
\[s(t)=\begin{cases}(-2+\frac{\sqrt{2}}{2},2-\frac{\sqrt{2}}{2}),&t\in(a,\xi), \\ (0,0),&t=\xi,\\ (2-\frac{\sqrt{2}}{2},2-\frac{\sqrt{2}}{2}),&t\in(\xi,b].\end{cases}\]
it is shown that \((s(\xi-),s(\xi+))\not\in\Pi\big{(}F(\xi-),F(\xi+)\big{)}\), and that \(\frac{1}{2}(s(\xi-)+s(\xi+))=(0,2-\frac{\sqrt{2}}{2})\in A_{F}(\xi)\), but does not belong to \(\frac{1}{2}F(\xi-)\oplus\frac{1}{2}F(\xi+)\). Thus \(A_{F}(\xi)\not\subseteq\frac{1}{2}F(\xi-)\oplus\frac{1}{2}F(\xi+)\). Clearly, \(F\) satisfies Property 2 at \(\xi\) since \(F\) is constant on each of the intervals \([a,\xi)\) and \((\xi,b]\): for any pair \((y^{-},y^{+})\in\Pi\big{(}F(\xi-),F(\xi+)\big{)}\) and an arbitrary suitable choice of \(\{\xi_{n}^{-}\}_{n\in\mathbb{N}}\), \(\{\xi_{n}^{+}\}_{n\in\mathbb{N}}\) we can take constant sequences \(y_{n}^{-}=y^{-}\), \(y_{n}^{+}=y^{+}\), \(n\in\mathbb{N}\).
It is easy to modify \(F\) to a multifunction \(\widetilde{F}\) satisfying both Property 1 and Property 2 at \(\xi\),
\[\widetilde{F}(t)=\begin{cases}B(-2,2),&t\in[a,\xi),\\ B(-2,2)\cup B(2,2),&t=\xi,\\ B(2,2),&t\in(\xi,b].\end{cases}\]
An example of a multifunction \(G:[a,b]\to\operatorname{K}(\mathbb{R})\), \(G\in\mathcal{F}[a,b]\) that satisfies Property 1 but does not satisfy Property 2 at a given \(\xi\in(a,b)\) can be found in [8, Example 7.4]:
\[G(t)=\begin{cases}\{-\frac{1}{4},0,\frac{1}{4}\}\,,&t\in[a,\xi),\\ \{-1,-\frac{1}{4},0,\frac{1}{4},1\}\,,&t=\xi,\\ \{-1+t-\xi,1+t-\xi\}\,,&t\in(\xi,b].\end{cases}\]
It is easy to see that \((0,1)\in\Pi\big{(}G(\xi-),G(\xi+)\big{)}\), so that \(\frac{1}{2}\in\frac{1}{2}G(\xi-)\oplus\frac{1}{2}G(\xi+)\). We showed that there is no metric selection \(s\) of \(G\) such that \(\frac{1}{2}=\frac{1}{2}(s(\xi-)+s(\xi+))\). Thus, for this function \(A_{G}(\xi)\not\supseteq\frac{1}{2}G(\xi-)\oplus\frac{1}{2}G(\xi+)\). A similar argument as in [8] can be used to show that there are no sequences \(\{\xi_{n}^{-}\}_{n\in\mathbb{N}}\), \(\{\xi_{n}^{+}\}_{n\in\mathbb{N}}\) with \(\xi_{n}^{-}<\xi<\xi_{n}^{+}\), \(\xi_{n}^{\pm}\to\xi\), \(\{y_{n}^{-}\}_{n\in\mathbb{N}}\) with \(y_{n}^{-}\in G(\xi_{n}^{-})\), \(y_{n}^{-}\to 0\), \(\{y_{n}^{+}\}_{n\in\mathbb{N}}\) with \(y_{n}^{+}\in G(\xi_{n}^{+})\), \(y_{n}^{+}\to 1\), such that \((y_{n}^{-},y_{n}^{+})\in\Pi\big{(}G(\xi_{n}^{-}),G(\xi_{n}^{+})\big{)}\), \(n\in\mathbb{N}\).
A modified function
\[\widetilde{G}(t)=\begin{cases}\{-\frac{1}{4},0,\frac{1}{4}\}\,,&t\in[a,\xi),\\ \{-1,-\frac{1}{4},0,\frac{1}{4},1\}\,,&t=\xi,\\ \{-1,1\}\,,&t\in(\xi,b],\end{cases}\]
satisfies both Properties 1 and 2 at \(\xi\).
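As a small numerical check (ours) of the statements about the example \(G\) above, the sketch below computes the metric pairs of \(G(\xi-)=\{-\frac{1}{4},0,\frac{1}{4}\}\) and \(G(\xi+)=\{-1,1\}\) and the set \(\frac{1}{2}G(\xi-)\oplus\frac{1}{2}G(\xi+)\), confirming that \(\frac{1}{2}\) belongs to it via the metric pair \((0,1)\); the brute-force helper is ours and works only for finite subsets of \(\mathbb{R}\).

```python
def metric_pairs_1d(A, B):
    """Brute-force metric pairs of finite subsets of the real line (Definition of Pi(A,B))."""
    proj = lambda x, S: {s for s in S if abs(s - x) == min(abs(t - x) for t in S)}
    return {(a, b) for a in A for b in proj(a, B)} | {(a, b) for b in B for a in proj(b, A)}

G_minus, G_plus = {-0.25, 0.0, 0.25}, {-1.0, 1.0}
pairs = metric_pairs_1d(G_minus, G_plus)
metric_average = {0.5 * (a + b) for (a, b) in pairs}
print(sorted(pairs))            # includes the pair (0.0, 1.0)
print(sorted(metric_average))   # includes 0.5
assert (0.0, 1.0) in pairs and 0.5 in metric_average
```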
Now we state the main result of the paper.
**Theorem 4.5**.: _Let \(F\in\mathcal{F}[a,b]\) satisfy Property 1 and Property 2 at \(x\in(a,b)\). Then_
\[A_{F}(x)=\frac{1}{2}F(x-)\oplus\frac{1}{2}F(x+).\]
By Remark 4.1 the statement is trivial if \(x\) is a point of continuity of \(F\). If \(x\) is a point of discontinuity, it is a direct consequence of the following two propositions.
**Proposition 4.6**.: _Let \(F\in\mathcal{F}[a,b]\) satisfy Property 2 at a point of discontinuity \(\xi\in(a,b)\). Then_
\[\frac{1}{2}F(\xi-)\oplus\frac{1}{2}F(\xi+)\subseteq A_{F}(\xi).\]
**Proposition 4.7**.: _Let \(F\in\mathcal{F}[a,b]\) satisfy Property 1 and Property 2 at a point of discontinuity \(\xi\in(a,b)\). Then_
\[A_{F}(\xi)\subseteq\frac{1}{2}F(\xi-)\oplus\frac{1}{2}F(\xi+).\]
The proofs of these propositions are postponed to the Appendix.
|
2305.15733
|
Black hole scalarizations induced by parity violations
|
It is well-known that parity symmetry is broken in the weak interaction but
conserved for Einstein's general relativity and Maxwell's electromagnetic
theory. Nevertheless, parity symmetry could also be violated in the
gravitational/electromagnetic sectors if a fundamental scalar field couples to
the parity-violating gravitational/electromagnetic curvature terms. Such
parity-violating terms, which flip signs under reversed spatial directions, can
inevitably lead to a negative effective mass squared for the scalar field
perturbations near nonspherically symmetric black holes and thus are expected
to trigger tachyonic instability. As illustrative examples, we show that the
scalar field coupled to gravitational/electromagnetic Chern-Simons terms near a
Kerr-Newmann spacetime can develop tachyonic instabilities, leading to
equilibrium scalar field configurations in certain parameter regions of black
holes. This instability, which is an indication of the black hole scalarization
process, can occur in a broad class of nonspherically symmetric black holes and
parity-violating theories.
|
Hao-Jie Lin, Tao Zhu, Shao-Jun Zhang, Anzhong Wang
|
2023-05-25T05:34:11Z
|
http://arxiv.org/abs/2305.15733v2
|
# Black hole scalarizations induced by parity violations
###### Abstract
It is well-known that parity symmetry is broken in the weak interaction but conserved for Einstein's general relativity and Maxwell's electromagnetic theory. Nevertheless, parity symmetry could also be violated in the gravitational/electromagnetic sectors if a fundamental scalar field couples to the parity-violating gravitational/electromagnetic curvature terms. Such parity-violating terms, which flip signs under reversed spatial directions, can inevitably lead to a negative effective mass squared for the scalar field perturbations near nonspherically symmetric black holes and thus are expected to trigger tachyonic instability. As illustrative examples, we show that the scalar field coupled to gravitational/electromagnetic Chern-Simons terms near a Kerr-Newmann spacetime can develop tachyonic instabilities, leading to equilibrium scalar field configurations in certain parameter regions of black holes. This instability, which is an indication of the black hole scalarization process, can occur in a broad class of nonspherically symmetric black holes and parity-violating theories.
## I Introduction
In recent years, the availability of detection data on black holes has increased significantly [1; 2; 3; 4; 5], making the study of black holes a topic of growing interest in the scientific community. It is widely accepted that, as an indispensable prediction of the theory of general relativity (GR), black holes can only be described in terms of their mass, electric charge, and angular momentum. Any other charge is not expected to exist according to the _no-hair theorem_[6; 7; 8]. Remarkably, most of the current observations, including the detection of gravitational waves [9], black hole images [3; 5], and the stars orbiting the supermassive black hole in our Galactic Center [10], are in good agreement with hairless black holes. Future precise observations, such as gravitational wave detection and black hole images, can provide more accurate and detailed information about the nature of the black hole spacetime in the regime of strong gravity. Importantly, these precise observations can also provide a significant way to probe or constrain possible extra _hairs_ in black holes.
One type of such extra hairs, a fundamental scalar field, could survive around black hole spacetime in many classes of extended scalar-tensor theories through a tachyonic instability known as spontaneous scalarization. Hence, a growth of scalar hair or spontaneous scalarization can render the new fundamental scalar field detectable near black holes [11]. Several mechanisms that account for the spontaneous scalarization around black holes have been proposed. In particular, matter-induced spontaneous scalarization was proposed for compact neutron stars in scalar-tensor theories [12; 13]. With nontrivial couplings of the scalar field to the spacetime curvature or electromagnetic field, the curvature-induced and charge-induced spontaneous scalarizations are also proven to be possible in the Einstein-Gauss-Bonnet gravity [14; 15; 16; 17; 18; 19] and Einstein-scalar-Maxwell theories [20; 21; 22; 23], respectively. More recently, it has been shown that the spontaneous scalarization induced by the spin of a black hole can also occur in various modified theories [24; 25; 26; 27; 28; 29; 30]. In addition, the scalarization of black holes realized through the nonlinear accretion of scalar fields has also been discovered and has attracted a lot of attention recently [31; 32; 33; 34; 35].
A key ingredient to trigger the spontaneous scalarization around a black hole is a threshold in both the coupling function, describing the interaction between the scalar field and gravitational/electromagnetic fields, and the black hole parameters, beyond which a tachyonic instability is induced by a sufficiently large and negative effective mass squared of the scalar field [36]. Such a negative effective mass squared sensitively depends on the sign of the coupling functions in both the curvature/charge-induced scalarization [16; 20; 21; 22] and the value of the spin of the black hole in the spin-induced scalarization [22; 23; 24; 25; 26; 27], for example.
In this paper, we present a study on a mechanism for the spontaneous scalarization of nonspherically symmetric black holes, _the parity-violation-induced black hole scalarization_. It is worth mentioning that the spontaneous scalarization of black holes in a specific parity-violating theory--i.e., the Chern-Simons modified gravity--has been extensively studied previously [37; 38; 39; 40; 41; 42; 43]. In such theories, the coupling of the scalar field to the parity-violating gravitational/electromagnetic terms can inevitably produce a negative effective mass squared for the scalar field and thus can trigger a tachyonic instability when this mass squared is sufficiently negative.
To illustrate this mechanism, we analyze the behavior of the scalar field which couples to two specific parity-violating terms: the gravitational and electromagnetic Chern-Simons terms, in a fixed Kerr-Newmann background. Due to the inherent complexity and nonlinearity of scalarization dynamics in rotating black hole backgrounds, our study only focuses on the scalarization dynamics within the framework of the "decoupling limit." In this way, we numerically evolve the nonlinear scalar field equation on the fixed Kerr-Newmann geometry, disregarding the backreaction of the scalar field on the background spacetime [39]. This simplification allows us to gain valuable insights into the scalarization phenomenon while bypassing the significant challenges associated with capturing its complete generality and nonlinearity. Our result reveals that the scalar field in a Kerr-Newmann spacetime can develop tachyonic instabilities, eventually leading to the formation of an equilibrium scalar field configuration.
We additionally check that the effective mass squared of the scalar field can be inevitably negative in a broad class of nonspherically symmetric black holes and parity-violating theories, which are expected to lead to tachyonic instabilities in specific regions of the spacetime. Our results suggest that the phenomenon of black hole scalarization potentially occurs for a broad class of nonspherically symmetric black holes in numerous parity-violating theories.
## II The model
Let us describe our model by starting with the following scalar field action on a nontrivial black hole spacetime,
\[S_{\phi}=-\int d^{4}x\sqrt{-g}\left(\frac{1}{2}\nabla_{\mu}\phi \nabla^{\mu}\phi+f(\phi)I(\psi;g_{\mu\nu})\right), \tag{1}\]
where \(\phi\) is the scalar field, \(I(\psi;g_{\mu\nu})\) represents the source term depending on \(g_{\mu\nu}\) and matter fields \(\psi\), and \(f(\phi)\) is the coupling function determining the coupling strength between the scalar field \(\phi\) and the spacetime metric \(g_{\mu\nu}\) and other matter fields \(\psi\). By varying the above action with respect to the scalar field one obtains the equation of motion of the scalar field on the black hole spacetime,
\[\Box\phi-f^{\prime}(\phi)I=0, \tag{2}\]
where \(f^{\prime}(\phi)=df(\phi)/d\phi\). This equation allows for the existence of scalar-free solutions which are also solutions of GR with conditions \(\phi=0\) and \(f^{\prime}(0)=0\)[44; 45]. And spontaneous scalarization occurs if the scalar-free solution is unstable against scalar perturbations \(\delta\phi\). To study how the scalar-free background is affected by the small scalar field perturbation \(\delta\phi\), it is convenient to consider the linearized scalar field equation, which is
\[(\Box-\mu_{\text{eff}}^{2})\delta\phi=0, \tag{3}\]
where \(\mu_{\text{eff}}^{2}=f^{\prime\prime}(0)I(\psi;g_{\mu\nu})\) is the effective mass squared of the scalar field. Once \(\mu_{\text{eff}}^{2}\) becomes sufficiently negative, tachyonic instability occurs, causing the scalar-free spacetime to be unstable under \(\delta\phi\) in a certain region of the parameter space, indicating an onset of the black hole scalarization process [36].
One well-studied model of the spontaneous scalarization is realized in the framework of the Einstein-scalar-Gauss-Bonnet gravity with \(I(\psi;g_{\mu\nu})=R_{\text{GB}}^{2}\), where \(R_{\text{GB}}^{2}\) denotes the Gauss-Bonnet scalar. For a Schwarzschild black hole, one has \(R_{\text{GB}}^{2}=48M^{2}/r^{6}\), with \(M\) being the mass of the black hole. It is evident that the effective mass squared \(\mu_{\text{eff}}^{2}\) is only allowed to be negative for \(f^{\prime\prime}(0)<0\), with which a tachyonic instability occurs when \(|f^{\prime\prime}(0)|\) is sufficiently large [27]. Similar curvature-induced scalarization occurs in Reissner-Nordström (RN) and Kerr black holes as well with a negative \(f^{\prime\prime}(0)\)[19; 27; 28; 15; 29]. For a Kerr black hole, \(\mu_{\text{eff}}^{2}\) can become negative even for positive \(f^{\prime\prime}(0)\) with a high spin and thus can trigger the spin-induced scalarization [27; 28; 29]. The black hole scalarization with other models--for example, \(I(\psi,g_{\mu\nu})=F_{\mu\nu}F^{\mu\nu}\)--has also been extensively studied in the literature [20; 21; 22; 23; 24].
In this paper, we consider two choices of \(I(\psi,g_{\mu\nu})\):
\[I(\psi,g_{\mu\nu})=R\tilde{R}\ \ \text{and}\ \ I(\psi,g_{\mu\nu})=F\tilde{F}, \tag{4}\]
representing two parity-violating interactions between the scalar field and the gravitational/electromagnetic field, where \(R\tilde{R}=\frac{1}{2}\epsilon^{\mu\nu\lambda\gamma}R^{\eta}_{\ \epsilon\lambda\gamma}R^{\xi}_{\ \eta\mu\nu}\) and \(F\tilde{F}=\frac{1}{2}\epsilon^{\mu\nu\lambda\gamma}F_{\mu\nu}F_{\lambda\gamma}\) with \(\epsilon^{\mu\nu\lambda\gamma}\) being the totally antisymmetric Levi-Civita tensor. Intriguingly, such couplings break the parity symmetry in the gravitational/electromagnetic sectors, and thus \(I(\psi;g_{\mu\nu})\rightarrow-I(\psi;g_{\mu\nu})\) under parity transformations. When \(I(\psi;g_{\mu\nu})\) does not vanish for a nontrivial black hole background, \(I(\psi;g_{\mu\nu})\) has to be negative in a certain region of the spacetime, which indicates that a negative effective mass squared for the scalar field inevitably exists. When \(\mu_{\text{eff}}^{2}=f^{\prime\prime}(0)I(\psi;g_{\mu\nu})\) is sufficiently large and negative, exceeding a threshold, the tachyonic instability of the scalar field occurs which triggers the process of the spontaneous scalarization.
## III Scalarization process in Kerr-Newmann black holes
To illustrate the mechanisms of the parity-violation-induced scalarization process, let us consider the scalarization of the Kerr-Newmann black hole as a concrete example, in which both of the parity-violating terms \(R\tilde{R}\) and \(F\tilde{F}\) do not vanish.
In the Boyer-Lindquist coordinates \(\{t,r,\theta,\varphi\}\), the Kerr-Newmann metric with charge \(Q\), mass \(M\), and spin
\(a\), is given by
\[ds^{2} \equiv -\frac{\Delta-a^{2}\sin^{2}\theta}{\Sigma^{2}}dt^{2}-\frac{2a\sin^{2 }\theta\left(r^{2}+a^{2}-\Delta\right)}{\Sigma^{2}}dtd\varphi \tag{5}\] \[+\frac{\left[\left(r^{2}+a^{2}\right)^{2}-\Delta a^{2}\sin^{2} \theta\right]\sin^{2}\theta}{\Sigma^{2}}d\varphi^{2}\] \[+\frac{\Sigma^{2}}{\Delta}dr^{2}+\Sigma^{2}d\theta^{2},\]
where
\[\Delta = r^{2}-2Mr+a^{2}+Q^{2}, \tag{6}\] \[\Sigma^{2} = r^{2}+a^{2}\cos^{2}\theta,\] (7) \[a = \frac{J}{M}. \tag{8}\]
The corresponding vector potential is
\[A_{\mu}dx^{\mu}=-\frac{Qr}{\Sigma^{2}}\left(dt-a\sin^{2}\theta d\varphi\right). \tag{9}\]
Therefore, the explicit forms of the parity-violating terms \(R\tilde{R}\) and \(F\tilde{F}\) in a Kerr-Newmann black hole are given, respectively, by
\[R\tilde{R} = \frac{96a\cos\theta\left[\left(3Mr-2Q^{2}\right)r-Ma^{2}\cos^{2}\theta\right]}{\left(r^{2}+a^{2}\cos^{2}\theta\right)^{6}}\times\left[\left(Mr-Q^{2}\right)r^{2}-\left(3Mr-Q^{2}\right)a^{2}\cos^{2}\theta\right], \tag{10}\] \[F\tilde{F} = \frac{8aQ^{2}r\cos\theta\left(r^{2}-a^{2}\cos^{2}\theta\right)}{\left(r^{2}+a^{2}\cos^{2}\theta\right)^{4}}. \tag{11}\]
Intriguingly, both parity-violating terms contain an overall factor \(\cos\theta\), so that each of them flips sign under the following parity transformation:
\[\theta\rightarrow\pi-\theta, \tag{12}\] \[R\tilde{R}\rightarrow-R\tilde{R},\] (13) \[F\tilde{F}\rightarrow-F\tilde{F}, \tag{14}\]
which implies that, regardless of the specific form of \(f(\phi)\), the effective mass squared \(\mu_{\rm eff}^{2}\) always attains negative values somewhere in the interval \(\theta\in[0,\pi]\). Therefore, this nonminimal coupling between the parity-violating terms and the scalar field will inevitably lead to a tachyonic instability as long as \(|f^{\prime\prime}(0)|\) is sufficiently large, resulting in the onset of the spontaneous scalarization process of Kerr-Newman black holes at the linear level.
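This antisymmetry can be checked directly from Eqs. (10) and (11). The snippet below is a minimal numerical sketch (borrowing the parameter values \(a=0.5\), \(Q=0.8\) used in Fig. 1; the sample point \((r,\theta)\) is an arbitrary illustrative choice).

```python
import numpy as np

M, a, Q = 1.0, 0.5, 0.8   # parameters as in Fig. 1

def RtR(r, th):
    """Gravitational Chern-Simons invariant of the Kerr-Newmann metric, Eq. (10)."""
    c = np.cos(th)
    return (96*a*c*((3*M*r - 2*Q**2)*r - M*a**2*c**2)
            * ((M*r - Q**2)*r**2 - (3*M*r - Q**2)*a**2*c**2)
            / (r**2 + a**2*c**2)**6)

def FtF(r, th):
    """Electromagnetic Chern-Simons invariant, Eq. (11)."""
    c = np.cos(th)
    return 8*a*Q**2*r*c*(r**2 - a**2*c**2) / (r**2 + a**2*c**2)**4

r, th = 4.0, 0.3
for f in (RtR, FtF):
    assert np.isclose(f(r, np.pi - th), -f(r, th))   # odd under theta -> pi - theta
    assert f(r, th) * f(r, np.pi - th) < 0           # opposite signs on the two hemispheres
```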
To further validate this assertion, we consider the specific choice, \(f(\phi)=\frac{\alpha}{2\beta}(1-e^{-\beta\phi^{2}})\)[46; 39], where \(\alpha\) and \(\beta\) are two constants. When \(\beta\to 0\), the coupling function \(f(\phi)\) reduces to the quadratic form \(\frac{1}{2}\alpha\phi^{2}\), which is sufficient for studying spontaneous scalarization resulting from linear tachyonic instability. As the instability progresses and the scalar field grows, the significance of nonlinear terms increases and eventually quenches the instability. The final state is an equilibrium scalar field configuration.
To solve Eq. (2) for the time evolution of the scalar field, we utilize a hyperboloidal foliation method described in Ref. [47].
First, we transform the Boyer-Lindquist coordinates into the ingoing Kerr-Schild coordinates \(\{\tilde{t},r,\theta,\tilde{\varphi}\}\):
\[d\tilde{t} = dt+\frac{2Mr-Q^{2}}{\Delta}dr, \tag{15}\] \[d\tilde{\varphi} = d\varphi+\frac{a}{\Delta}dr. \tag{16}\]
Considering the axisymmetry of the Kerr-Newmann spacetime, the scalar perturbation can be decomposed as
\[\phi(\tilde{t},r,\theta,\tilde{\varphi})=\sum_{m}\frac{\Psi_{m}(\tilde{t},r, \theta)}{r}e^{im\tilde{\varphi}}, \tag{17}\]
where \(m\) is the azimuthal mode number and \(\Psi_{m}\) is a new field variable.1 Substituting the above expressions into Eq. (2), we obtain
\[A^{\tilde{t}\tilde{t}}\partial_{\tilde{t}}^{2}\Psi_{m}+A^{\tilde{t}r}\partial_{\tilde{t}}\partial_{r}\Psi_{m}+A^{rr}\partial_{r}^{2}\Psi_{m}+A^{\theta\theta}\partial_{\theta}^{2}\Psi_{m}+B^{\tilde{t}}\partial_{\tilde{t}}\Psi_{m}+B^{r}\partial_{r}\Psi_{m}+B^{\theta}\partial_{\theta}\Psi_{m}+C\Psi_{m}=0, \tag{18}\]
Footnote 1: The introduction of a conformal compactification, as described in Eqs. (20) and (21), necessitates the rescaling of the scalar field variable, such as \(\phi=r^{-1}\Psi\). This rescaling is crucial to mitigate the singularity of the physical metric at the future null infinity \(\mathscr{I}^{+}\), which is also present in the wave equation [49; 50].
where
\[A^{\tilde{t}\tilde{t}}=2Mr-Q^{2}+\Sigma^{2},\] \[A^{\tilde{t}r}=-4Mr+2Q^{2},\] \[A^{rr}=-\Delta,\] \[A^{\theta\theta}=-1,\] \[B^{\tilde{t}}=2M-\frac{2Q^{2}}{r},\] \[B^{r}=\frac{2\left(a^{2}+Q^{2}-Mr\right)}{r}-2ima, \tag{19}\] \[B^{\theta}=-\cot\theta,\] \[C=-\frac{2\left(a^{2}+Q^{2}-Mr\right)}{r^{2}}+\frac{2ima}{r}+\frac{m^{2}}{\sin^{2}\theta}\] \[\qquad+\Sigma^{2}\alpha e^{-\beta(\sum_{m}\Psi_{m}e^{im\tilde{\varphi}}/r)^{2}}I.\]
Each azimuthal mode \(m\neq 0\) has harmonic dependence on \(\tilde{\varphi}\), which causes Eq. (18) to be nonlinear. To eliminate the interference from \(\tilde{\varphi}\), we only consider the axisymmetric mode with \(m=0\) to study the time evolution of the scalar field perturbation. This particular choice has an
impact on the excited quasinormal modes in stable scenarios and the characteristic times involved, but it does not alter the overall outcome of the scalarization process [48].
The second step is to define the compactified horizon-penetrating, hyperboloidal coordinates (HH coordinates) \(\{\tau,\rho,\theta,\tilde{\varphi}\}\). Specifically, we replace the ingoing Kerr-Schild coordinates \(\tilde{t}\) and \(r\) with
\[\tilde{t} = \tau+h(\rho), \tag{20}\] \[r = \frac{\rho}{\Omega(\rho)}, \tag{21}\]
where
\[h(\rho) = \frac{\rho}{\Omega}-\rho-4M\ln\Omega, \tag{22}\] \[\Omega(\rho) = 1-\frac{\rho}{S}. \tag{23}\]
By these coordinate transformations, the future null infinity \(\mathscr{I}^{+}\) is compactified at \(\rho=S\), and the event horizon of the Kerr-Newman black hole \(r_{+}=M+\sqrt{M^{2}-a^{2}-Q^{2}}\) is located at
\[\rho_{+}=\frac{S^{2}\left(M+\sqrt{M^{2}-a^{2}-Q^{2}}\right)+S \left(a^{2}+Q^{2}\right)}{a^{2}+2MS+Q^{2}+S^{2}}. \tag{24}\]
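As a quick consistency check of Eq. (24) (a sketch only; \(S=10\) is an arbitrary choice of the compactification parameter, and the black-hole parameters are those of Fig. 1), one can verify numerically that \(\rho_{+}\) maps back to \(r_{+}\) under the transformation (21):

```python
import numpy as np

def horizon_rho(M=1.0, a=0.5, Q=0.8, S=10.0):
    """Compactified location of the Kerr-Newmann event horizon, Eq. (24)."""
    r_plus = M + np.sqrt(M**2 - a**2 - Q**2)                      # Boyer-Lindquist r_+
    rho_plus = (S**2*r_plus + S*(a**2 + Q**2)) / (a**2 + 2*M*S + Q**2 + S**2)
    # consistency check: r = rho/Omega(rho) of Eq. (21) should map rho_+ back to r_+
    assert np.isclose(rho_plus / (1 - rho_plus/S), r_plus)
    return rho_plus

print(horizon_rho())   # approximately 1.175 for these parameters
```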
Next, we define a boost function as
\[H(\rho)=\frac{dh(\rho)}{dr}, \tag{25}\]
and then the partial derivatives for \(\tilde{t}\) and \(r\) can be rewritten as
\[\partial_{\tilde{t}}=\partial_{\tau},\ \partial_{r}=-H\partial_{\tau}+ \frac{d\rho}{dr}\partial_{\rho}. \tag{26}\]
Substituting the above expressions into Eq. (18), we obtain
\[A^{\tau\tau}\partial_{\tau}^{2}\Psi_{m}+A^{\tau\rho}\partial_{\tau}\partial_{\rho}\Psi_{m}+A^{\rho\rho}\partial_{\rho}^{2}\Psi_{m}+A^{\theta\theta}\partial_{\theta}^{2}\Psi_{m}+B^{\tau}\partial_{\tau}\Psi_{m}+B^{\rho}\partial_{\rho}\Psi_{m}+B^{\theta}\partial_{\theta}\Psi_{m}+C\Psi_{m}=0, \tag{27}\]
where
\[A^{\tau\tau} =A^{\tilde{t}\tilde{t}}-HA^{\tilde{t}r}+H^{2}A^{rr},\] \[A^{\rho\rho} =\left(\frac{d\rho}{dr}\right)^{2}A^{rr},\] \[A^{\tau\rho} =\frac{d\rho}{dr}\left(A^{\tilde{t}r}-2HA^{rr}\right), \tag{28}\] \[B^{\tau} =B^{\tilde{t}}-HB^{r}-\frac{dH}{dr}A^{rr},\] \[B^{\rho} =\frac{d\rho}{dr}\left[B^{r}+\frac{d}{d\rho}\left(\frac{d\rho}{dr}\right)A^{rr}\right].\]
By introducing a new auxiliary variable \(\Pi_{m}=\partial_{\tau}\Psi_{m}\), Eq. (27) is recast into a first-order form of coupled partial differential equations:
\[\partial_{\tau}\Psi_{m} = \Pi_{m}, \tag{29}\] \[\partial_{\tau}\Pi_{m} = -\frac{1}{A^{\tau\tau}}\left(A^{\tau\rho}\partial_{\rho}\Pi_{m}+A^{\rho\rho}\partial_{\rho}^{2}\Psi_{m}+A^{\theta\theta}\partial_{\theta}^{2}\Psi_{m}\right.\] \[\left.+B^{\tau}\Pi_{m}+B^{\rho}\partial_{\rho}\Psi_{m}+B^{\theta}\partial_{\theta}\Psi_{m}+C\Psi_{m}\right). \tag{30}\]
In terms of numerical implementations, we employ a fourth-order finite difference method for spatial grid discretizations, while the time evolution is accomplished using a fourth-order Runge-Kutta integrator. The HH coordinates automatically satisfy the ingoing/outgoing boundary condition at the horizon/infinity, eliminating the need to handle the complex outer boundary problem that can impact accuracy. However, at the angular poles (\(\theta=0\) and \(\pi\)), we impose physical boundary conditions: \(\Psi_{m}|_{\theta=0,\pi}=0\) for \(m\neq 0\), and \(\left.\partial_{\theta}\Psi_{m}|_{\theta=0,\pi}=0\right.\) for \(m=0\)[51]. For the initial data, we consider a Gaussian distribution localized outside the horizon at \(\rho=\rho_{c}\). Specifically, we have
\[\Psi_{lm}(\tau=0,\rho,\theta)\sim\mathbf{Y}_{lm}(\theta)e^{-\frac {(\rho-\rho_{c})^{2}}{2\sigma}} \tag{31}\] \[\Pi_{m}(\tau=0,\rho,\theta)=0 \tag{32}\]
where \(\mathbf{Y}_{lm}\) represents the \(\theta\)-dependent part of the spherical harmonic function, and \(\sigma\) represents the width of the Gaussian distribution.
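The numerical scheme just described can be illustrated on a stripped-down example. The sketch below is not the Kerr-Newmann solver itself; it evolves a one-dimensional toy system \(\partial_{\tau}\Psi=\Pi\), \(\partial_{\tau}\Pi=\partial_{\rho}^{2}\Psi\) with the same ingredients: fourth-order finite differences in space, a fourth-order Runge-Kutta integrator in time, and Gaussian initial data of the form (31)-(32) with \(\rho_{c}=6\) and \(\sigma=0.2\).

```python
import numpy as np

def rhs(state, dx):
    """d/dtau of (Psi, Pi) for the toy system Psi_tau = Pi, Pi_tau = Psi_rhorho;
    fourth-order centered differences in the interior, edge points simply held fixed."""
    psi, pi = state
    d2 = np.zeros_like(psi)
    d2[2:-2] = (-psi[:-4] + 16*psi[1:-3] - 30*psi[2:-2] + 16*psi[3:-1] - psi[4:]) / (12*dx**2)
    return np.array([pi, d2])

def rk4_step(state, dt, dx):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(state, dx)
    k2 = rhs(state + 0.5*dt*k1, dx)
    k3 = rhs(state + 0.5*dt*k2, dx)
    k4 = rhs(state + dt*k3, dx)
    return state + dt*(k1 + 2*k2 + 2*k3 + k4)/6

# Gaussian initial data, cf. Eqs. (31)-(32)
rho = np.linspace(0.0, 10.0, 401)
dx = rho[1] - rho[0]
state = np.array([np.exp(-(rho - 6.0)**2 / (2*0.2)), np.zeros_like(rho)])
for _ in range(2000):
    state = rk4_step(state, dt=0.2*dx, dx=dx)
```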
It is worth mentioning that the absence of spherical symmetry in rotating black holes leads to a phenomenon known as mode mixing. This phenomenon has been extensively studied in Kerr and Kerr-Newmann black holes in different theories of gravity, including GR [52; 53; 54], Chern-Simons modified gravity [40; 41], Gauss-Bonnet gravity [27; 18; 29], and Einstein-Maxwell-scalar gravity [24], which indicates that a pure initial \(l\) multipole will induce the presence of other \(l^{\prime}\) multipoles with the same \(m\) as it evolves. The evolution of different \(m\) modes is decoupled in the presence of axisymmetry. Additionally, the reflection symmetry results in the decoupling of even \(l\) modes from odd \(l\) modes. However, the evolution of a specific mode \((l,m)\) is coupled to that of all the modes \((l+2k,m)\) with \(k\) being an integer [27]. Notably, the \(l=|m|\) mode plays a prominent role in the later stages of the evolution of the scalar field, as indicated in Ref. [29]. Among the dominant modes with different values of \(m\), the mode with \(m=0\) exhibits the shortest growth times, holding the greatest relevance to the instabilities, and has the largest parameter region of the instability of the Kerr/Kerr-Newmann black holes [27; 18; 29; 40]. Moreover, the equations governing the evolution of the scalar field perturbation with the chosen azimuthal mode, \(m=0\), are linear in nature. This will significantly simplify the analysis as we avoid the complexities associated with solving nonlinear partial differential equations. Therefore, we solely focus on the perturbations with \(l=m=0\) in the subsequent discussion.
Additionally, we adopt the convention of setting \(M=1\) to express all quantities in units of \(M\).
Our results, as presented below, demonstrate that the occurrence of instability at the linear level is contingent upon the values of spin \(a\), charge \(Q\), and \(\alpha\). Figure 1 illustrates the time evolution of the scalar field for a fixed set of parameters. It is evident that the scalar field decays over time when \(\alpha\) is below the scalarization threshold. However, when \(\alpha\) surpasses this threshold, the scalar field experiences exponential growth. As anticipated, such growth is quenched by nonlinearity of the scalar field, leading to an equilibrium scalar field configuration, i.e., the black hole scalarizes. Conversely, in the case of a decaying scalar field perturbation, the effect of nonlinearity is largely insignificant. Figure 2 depicts the parameter space in which spontaneous scalarization arises due to the linear tachyonic instability, where the extreme Kerr-Newmann black hole provides an upper limit for the spin parameter for a fixed charge \(Q\) (dashed lines). In the case of zero spin, the Kerr-Newmann black hole degenerates to a RN black hole with no parity-violating terms, leading to a decaying scalar field perturbation. For \(Q=0\), \(F\bar{F}\) vanishes, and only \(R\bar{R}\) remains, which has been extensively investigated in Refs. [39; 40]. Additionally, as \(Q\) increases, the parameter space for the linear stability of the scalar field decreases.
## IV Extensions to other nonspherically symmetric black holes
The scalarizations of the Kerr-Newmann black hole, with \(I(\psi;g_{\mu\nu})\) being chosen to be the gravitational/electromagnetic Chern-Simons term, represent two illustrative examples of the scalarized mechanism induced by the parity violation. One remarkable property of such a mechanism is that it can exist in a broad class of black hole backgrounds, provided that the gravitational/electromagnetic Chern-Simons term does not vanish. Due to the parity violations, \(I(\psi;g_{\mu\nu})\) simply vanishes in the spherical backgrounds and could be nontrivial only for nonspherically symmetric black holes.
We now show that the parity-violating gravitational/electromagnetic curvature invariants \(R\bar{R}\) and \(F\bar{F}\) are nonzero for several specific nonspherically symmetric black holes. We consider five different nonspherically symmetric black hole solutions, including the Kerr, Kerr-Newmann, RN-Melvin [55; 56], Kerr-Melvin [57], and Magnetized Kerr-Newmann black holes [55; 56; 57; 58]. As shown in Table 1, \(R\bar{R}\) is nonzero for all the five black holes, while \(F\bar{F}\) is nonzero for four spacetimes, all except the Kerr black hole. Once \(R\bar{R}\) or \(F\bar{F}\) is nonzero, it must give rise to a negative effective mass squared in a certain region of spacetime for the scalar perturbation no matter what the sign of \(f^{\prime\prime}(0)\) is. This is shown in Figure 3, in which we plot \(R\bar{R}\) and \(F\bar{F}\) for all the nonspherically symmetric black holes considered in this paper. It is evident that all the nonzero \(R\bar{R}\) and \(F\bar{F}\) attain negative values in part of the region \(\theta\in[0,\pi]\) outside the horizon of the nonspherically symmetric black holes. When the resulting mass squared is sufficiently negative, the scalar field will develop a tachyonic instability, resulting in the spontaneous scalarization of the nonspherically symmetric black holes. Here we would like to mention that the spontaneous scalarization of the Kerr and RN-Melvin black holes with \(I=R\bar{R}\) has been previously investigated in Refs. [38; 39; 40; 41; 42; 43].
Figure 1: Time evolutions of the scalar field perturbation due to \(F\bar{F}\) and \(R\bar{R}\) for \(a=0.5\), \(Q=0.8\) with various values of \(\alpha\) and \(\beta\). Here we fix \(\rho_{c}=6\) and \(\sigma=0.2\). Observers are assumed to be located at \(\rho=6\) and \(\theta=\pi/4\).
## V Extensions to other parity-violating theories
The models studied in this paper can be easily extended to other types of parity-violating theories--for example, the chiral-scalar tensor theory which extends the Chern-Simons gravity by including parity-violating interactions between the higher derivatives of the scalar field and the spacetime curvatures [59]. Similarly to the Chern-Simons gravity, the chiral-scalar-tensor theory can naturally trigger the tachyonic instability of the scalar field in non-spherical backgrounds and can exhibit more rich phenomena of black hole scalarizations.
In the framework of the teleparallel gravity, the GR equivalent teleparallel gravity can be modified by adding a parity-violating interaction between the scalar field and the Nieh-Yan term in the gravitational action, i.e., \(I(\psi;g_{\mu\nu})=\mathcal{T}_{A\mu\nu}\tilde{\mathcal{T}}^{A\mu\nu}\) with \(\mathcal{T}_{A\mu\nu}\) being the torsion tensor two-form and \(\tilde{\mathcal{T}}^{A\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\lambda\gamma}\mathcal{T}^{A}{}_{\lambda\gamma}\) being the dual torsion tensor two-form [60; 61]. Similarly, the parity-violating symmetric teleparallel gravity can also be constructed by including an interaction between the scalar field and the nonmetric tensor \(Q_{\mu\nu\lambda}\)--i.e., \(I(\psi;g_{\mu\nu})=\epsilon^{\mu\nu\lambda\gamma}Q^{\xi}_{\mu\nu}Q_{\lambda\gamma\xi}\)[62]. More parity-violating gravities can be found in Refs. [63; 64; 65] and references therein.
In all these theories, \(f^{\prime\prime}(0)I(\psi;g_{\mu\nu})\) can inevitably become negative in a certain region of the nontrivial black hole backgrounds and thus can naturally trigger tachyonic instabilities. Therefore, it is interesting to explore such parity-violation-induced black hole scalarization and the corresponding scalarized black hole solutions in such a broad class of theories. We would like to address this issue in detail in our future works.
## VI Summary and discussion
The results presented here exhibit a possible black hole scalarization process induced by the couplings between a scalar field and the parity-violating spacetime/electromagnetic curvature terms, potentially extending the family of known black hole scalarization mechanisms in the literature to a broader range of modified theories of gravity.
Figure 3: The parity-violating terms of nonspherical black holes as functions of \(\theta\), where \(B\) represents the magnetic field strength, \(Q\) represents the charges of the KN and RNM black holes, and \(q\) and \(p\) represent the electric and magnetic charges of the MKN black holes, respectively. We simply choose \(r=4\). For other values of \(r\), the results are similar.
Figure 2: The parameter spaces of \(a\) and \(\alpha\) for \(F\tilde{F}\) and \(R\tilde{R}\) at different \(Q\) values (0.8, 0.5, 0.25, and 0.1) in the linear level. The solid lines represent the thresholds of the dominant \(m=0\) mode for which scalarization occurs. The dashed lines represent the upper limits for the spin parameter for the fixed charges \(Q\) (extreme Kerr-Newmann black holes). The filled areas indicate the stable regions, while the areas between the dashed and solid lines of the same color in both panels indicate instability.
For illustrative purposes, we only consider the dynamics in the "decoupling limit" by numerically evolving the nonlinear scalar field equation on the fixed Kerr-Newmann geometry. While a full scalarized black hole induced by parity violation is still lacking, it is crucial to understand the full scalarization dynamics by constructing the full scalarized black hole with parity-violating interactions, which remains a challenging problem [39]. The main issue in this problem is that one has to solve the dynamical equation that may contain higher time derivatives in some specific parity-violating theories. For example, in the Chern-Simons modified gravity [66], the dynamical equation contains the third-order time derivative which results in the presence of ghost modes that generically render the theory pathological. In this sense, the parity-violating theories that contain high-order derivative terms can only be treated as effective field theories. This is also the reason why we only consider the dynamics in the "decoupling limit." Recently, several parity-violating theories which are expected to be healthy have been proposed--for example, the Nieh-Yan modified teleparallel gravity [60; 61] and symmetric teleparallel gravity with parity violations [62]. These particular parity-violating theories do not contain high-order derivative terms and do not have any ghost degree of freedom. In addition, a ghost-free parity-violating scalar-tensor theory, as an extension of the Chern-Simons modified gravity, has also been proposed [59]. It is interesting to explore the full dynamics of the scalar field and seek any possible full scalarized black hole solutions in these parity-violating theories. We expect to come back to this issue in our future works.
## Acknowledgements
This work is supported by the National Key Research and Development Program of China under Grant No. 2020YFC2201503, the Zhejiang Provincial Natural Science Foundation of China under Grants No. LR21A050001 and No. LY20A050002, and the National Natural Science Foundation of China under Grants No. 12275238, No. 11975203, and No. 12075207, and the US NSF Grant, No. PHY2308845.
|
2307.03728
|
On the representation theory of cyclic and dihedral quandles
|
Quandle representations are homomorphisms from a quandle to the group of
invertible matrices on some vector space taken with the conjugation operation.
We study certain families of quandle representations. More specifically, we
introduce the notion of regular representation for quandles, investigating in
detail the regular representations of dihedral quandles and \emph{completely
classifying} them. Then, we study representations of cyclic quandles, giving
some necessary conditions for irreducibility and providing a complete
classification under some restrictions. Moreover, we provide various
counterexamples to constructions that hold for group representations, and show
to what extent such theory has the same properties of the representation theory
of finite groups. In particular, we show that Maschke's theorem does not hold
for quandle representations.
|
Mohamed Elhamdadi, Prasad Senesi, Emanuele Zappala
|
2023-07-07T17:30:33Z
|
http://arxiv.org/abs/2307.03728v1
|
# On the representation theory of dihedral and cyclic quandles
###### Abstract.
Quandle representations are homomorphisms from a quandle to the group of invertible matrices on some vector space taken with the conjugation operation. We study certain families of quandle representations. More specifically, we introduce the notion of regular representation for quandles, investigating in detail the regular representations of dihedral quandles and _completely classifying_ them. Then, we study representations of cyclic quandles, giving some necessary conditions for irreducibility and providing a complete classification under some restrictions. Moreover, we provide various counterexamples to constructions that hold for group representations, and show to what extent such theory has the same properties of the representation theory of finite groups. In particular, we show that Maschke's theorem does not hold for quandle representations.
###### Contents
* 1 Introduction
* 2 Review of racks and quandles
* 3 Quandles of cyclic type: a presentation and classification
* 3.1 A complete classification of quandles of cyclic type
* 4 Representation theory
* 4.1 The regular representation
* 5 The regular representations of dihedral quandles
* 6 Reducible and irreducible representations of cyclic quandles
* 7 Examples of non-decomposability
* 8 Appendix
* 8.1 Constant 2-dimensional representations of a cyclic quandle
* 8.2 An explicit construction
## 1. Introduction
Quandles are algebraic objects whose defining axioms algebraically capture the Reidemeister moves from knot theory. Since they have been introduced independently by Joyce and Matveev in [9, 12], they have found several applications in low-dimensional topology. The _fundamental quandle_ of Joyce and Matveev, in fact, is a complete invariant of knots and links (up to mirror image and orientation reversal) and it is therefore a very powerful tool in distinguishing links. However, the fact that it completely classifies links is substantially indicative of the fact that it simply translates the classification complexity of knot theory into another equivalent complexity in algebraic terms. In fact, more practically, the fundamental quandle is derived from a presentation where generators and relations are obtained from a diagram of the knot following a procedure very similar to the Wirtinger presentation of the knot group. Comparing two presentations of two quandles is a significantly hard problem. Consequently, in order to apply these algebraic tools,
be extrapolated from the representation category of the fundamental quandle of a link. We will not indulge on the topological applications of this theory in this article, and defer such work to a subsequent article, while we will focus on the algebraic properties of some classes of representations, and unveil some important examples that further motivate this study.
This article is organized as follows: Section 2 reviews the basics of racks and quandles including the definition of cyclic quandles used later in the article. In Section 3 we provide a presentation of the Alexander quandles of cyclic type in order to facilitate the representation theory explored later in this paper. Although it is not the focus of this work, our results in this section provide unexpected and previously unknown results: a presentation of an arbitrary cyclic quandle via generators and relations, and the complete classification of all cyclic quandles (Theorem 3.7). Section 4 gives the basic definitions of representation theory for quandles. In Section 5 we introduce the regular representation of dihedral quandles, and completely classify them by providing their explicit decomposition into irreducible subrepresentations (Theorem 5.2). In Section 6, we study representations of cyclic quandles, examining in detail the case of dimension 2 representations, and describe a class of representations for which a complete classification is given (Theorem 6.4). In Section 7 we provide various counterexamples regarding the decomposability of quandle representations, and show that Maschke's theorem does not hold for quandles. Some computational results are deferred to the Appendix.
## 2. Review of racks and quandles
**Definition 2.1**.: [5, 9, 12] A _rack_ is a set \(X\) provided with a binary operation
\[\begin{array}{ccccc}\triangleright:&X\times X&\longrightarrow&X\\ &(x,y)&\longmapsto&x\triangleright y\end{array}\]
such that
1. for all \(x,y\in X\), there is a unique \(z\in X\) such that \(y=z\triangleright x\);
2. (_right distributivity_) for all \(x,y,z\in X\), we have \((x\triangleright y)\triangleright z=(x\triangleright z)\triangleright(y\triangleright z)\).
Observe that property (i) also reads that for any fixed element \(x\in X\), the map \(R_{x}:X\ni y\longmapsto y\triangleright x\in X\) is a bijection. Also, notice that the distributivity condition is equivalent to the relation \(R_{x}(y\triangleright z)=R_{x}(y)\triangleright R_{x}(z)\) for all \(y,z\in X\).
Unless otherwise stated, we will always assume our racks (or quandles) to be finite racks (or quandles).
**Definition 2.2**.: A _quandle_ is a rack such that \(x\triangleright x=x,\forall x\in X\).
_Example 2.3_.: The set \(\mathbb{Z}_{n}=\{1,2,\ldots,n\}\), with quandle operation \(x\triangleright y=2y-x\) (mod \(n\)) is a quandle, called _dihedral_ quandle.
_Example 2.4_.: Any group \(G\) with the operation \(x\triangleright y=yxy^{-1}\) is a quandle, called conjugation quandle and denoted by Conj(G).
_Example 2.5_.: Any group \(G\) with the operation \(x\triangleright y=yx^{-1}y\) is a quandle called the Core quandle of \(G\) and denoted Core(G).
_Example 2.6_.: Let \(G\) be a group and \(f\in\operatorname{Aut}(G)\), then one can define a quandle structure on \(G\) by \(x\triangleright y=f(xy^{-1})y\). It is called a _generalized Alexander quandle_. If \(G\) is abelian, the operation becomes \(x\triangleright y=f(x)+(Id-f)(y)\), where \(Id\) stands for the identity map. This quandle is called an _Alexander quandle_.
In particular, let \(q\) be a power of a prime \(p\), and let \(\mathbb{F}_{q}\) be the field with \(q\) elements. For an element \(\alpha\in\mathbb{F}_{q}\), we will denote by \((\mathbb{F}_{q},\alpha)\) the Alexander quandle with \(x\triangleright y=\alpha x+(1-\alpha)y\) for all \(x,y\in\mathbb{F}_{q}\). These quandles will be closely studied later in this paper.
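As a concrete illustration (a brute-force sketch, restricted to prime \(q\) so that \(\mathbb{F}_{q}\cong\mathbb{Z}_{q}\)), the Alexander quandle operation \(x\triangleright y=\alpha x+(1-\alpha)y\) and the defining axioms of Definitions 2.1 and 2.2 can be checked directly in Python:

```python
def alexander_op(q, alpha):
    """Quandle operation x > y = alpha*x + (1 - alpha)*y on Z_q (prime q, so F_q = Z_q)."""
    return lambda x, y: (alpha*x + (1 - alpha)*y) % q

q, alpha = 5, 2                      # alpha = 2 is a primitive element mod 5
op = alexander_op(q, alpha)
X = range(q)
assert all(op(x, x) == x for x in X)                                    # idempotency
assert all(len({op(z, x) for z in X}) == q for x in X)                  # each R_x is a bijection
assert all(op(op(x, y), z) == op(op(x, z), op(y, z))                    # right self-distributivity
           for x in X for y in X for z in X)
```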
For \(x\in X\), we denote by \(R_{x}\) the quandle automorphism of \(X\) given by
\[R_{x}(y)=y\triangleright x.\]
As each \(R_{x}\) permutes the elements of \(X\), we can consider the subgroup \(\left\langle R_{x}\right\rangle_{x\in X}\) in \(S_{X}\) (the symmetric group on \(X\)) generated by the \(R_{x}\), which we denote by \(\operatorname{Inn}(X)\) (the group of inner automorphisms of \(X\)).
**Definition 2.7**.: [10] A finite quandle \(X\) of cardinality \(n>2\) is said to be of cyclic type if for all \(i\in X\), the right multiplication \(R_{i}\) is a cycle of length \((n-1)\).
The phrase 'quandle of cyclic type' is prevalent in the literature. In this manuscript, we also use the phrase 'cyclic quandle' to mean the same thing.
Here are few examples of _cyclic_ quandles.
* For \(n=3\), the dihedral quandle \(\mathbb{Z}_{n}\) is the only cyclic quandle.
* For \(n=4\), the Alexander quandle \(\mathbb{Z}_{2}[t]/(t^{2}+t+1)\) is the only cyclic quandle. It is isomorphic to the quandle \(X=\{1,2,3,4\}\) with \(R_{1}=(234)\), \(R_{2}=(143)\), \(R_{3}=(124)\) and \(R_{4}=(132)\).
* For \(n=5\), there are exactly two cyclic quandles which are the Alexander quandles \(\mathbb{Z}_{5}[t]/(t-3)\) and \(\mathbb{Z}_{5}[t]/(t-2)\).
Quandles of cyclic type are (isomorphic to) certain Alexander quandles \((\mathbb{F}_{q},\alpha)\). This correspondence is given in [14]:
**Theorem 2.8** ([14]).: _A quandle \(X\) is of cyclic type if and only if \(X\) is isomorphic to an Alexander quandle \((\mathbb{F}_{q},\alpha)\), where \(\alpha\) is a primitive element in \(\mathbb{F}_{q}\)._
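Definition 2.7 and Theorem 2.8 can be tested by brute force for small examples. The sketch below (prime \(q\) only, with the field realized as \(\mathbb{Z}_{q}\)) checks whether every right translation of \((\mathbb{F}_{q},\alpha)\) is a single \((q-1)\)-cycle; for \(q=5\) it recovers exactly the two cyclic quandles \(\alpha=2,3\) listed above.

```python
def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a list with i -> perm[i]."""
    seen, lengths = set(), []
    for i in range(len(perm)):
        if i not in seen:
            j, n = i, 0
            while j not in seen:
                seen.add(j)
                j = perm[j]
                n += 1
            lengths.append(n)
    return sorted(lengths)

def is_cyclic_type(q, alpha):
    """Check Definition 2.7 for (F_q, alpha), q prime: every R_x(y) = alpha*y + (1-alpha)*x
    must fix x and permute the remaining q-1 elements in a single (q-1)-cycle."""
    return all(cycle_lengths([(alpha*y + (1 - alpha)*x) % q for y in range(q)]) == [1, q - 1]
               for x in range(q))

print(is_cyclic_type(5, 2), is_cyclic_type(5, 3), is_cyclic_type(5, 4))   # True True False
```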
## 3. Quandles of cyclic type: a presentation and classification
To facilitate the representation theory explored later in this paper, we provide in this section a presentation of the Alexander quandles of cyclic type \((\mathbb{F}_{q},\alpha)\).
Let \(p\) be a prime, \(q=p^{s}\) for some \(s\geq 1\), and let \(\alpha\) be a primitive element in the finite field \(\mathbb{F}_{q}\) of order \(q\). For any nonzero element \(r\in\mathbb{F}_{q}\), we define \(\log_{\alpha}r\) to be the unique integer \(0\leq\log_{\alpha}r\leq q-2\) such that
\[\alpha^{\log_{\alpha}r}=r.\]
For elements \(x_{1},x_{2},\ldots,x_{s}\) in a quandle \(X\), we will denote by \(x_{1}x_{2}\cdots x_{s}\) the left-associated element \((\cdots((x_{1}\triangleright x_{2})\triangleright x_{3})\cdots\triangleright x_{s})\). If \(S\) is the set of elements \(\{x_{1},x_{2},\ldots,x_{s}\}\), we will say that an element \(u\in X\) can be _left-associated in \(S\)_ if we can write \(u=x_{q_{1}}x_{q_{2}}\cdots x_{q_{t}}\), for \(x_{q_{i}}\in S\).
Let \(F_{q,\alpha}\) be the quandle with presentation
\[F_{q,\alpha}=\left\langle x,y\ \mid\ ab^{q-1}=a,\ \ ab^{k}=ba^{\log_{\alpha}(1-\alpha^{k})}\right\rangle_{a\neq b\in\{x,y\},\ 1\leq k\leq q-2}.\]
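The defining relations of \(F_{q,\alpha}\) can be verified numerically in the concrete model \((\mathbb{F}_{q},\alpha)\), identifying the generators \(x,y\) with \(1,0\) as in the proof of Theorem 3.4 below. The following is a brute-force sketch for the prime case \(q=7\), \(\alpha=3\):

```python
q, alpha = 7, 3                                  # 3 is a primitive element mod 7
op = lambda u, v: (alpha*u + (1 - alpha)*v) % q  # the operation of (F_q, alpha)
x, y = 1, 0                                      # generators, as in the proof of Theorem 3.4

def word(a, b, k):
    """The left-associated element a b^k = ((a > b) > b) ... > b, with k factors of b."""
    for _ in range(k):
        a = op(a, b)
    return a

def log_alpha(r):
    """The unique 0 <= m <= q-2 with alpha^m = r in F_q."""
    return next(m for m in range(q - 1) if pow(alpha, m, q) == r)

for a, b in [(x, y), (y, x)]:
    assert word(a, b, q - 1) == a                                          # a b^{q-1} = a
    for k in range(1, q - 1):
        assert word(a, b, k) == word(b, a, log_alpha((1 - pow(alpha, k, q)) % q))
        # a b^k = b a^{log_alpha(1 - alpha^k)}
```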
**Lemma 3.1**.: _Let \(u,v\) be nonzero elements in \(\mathbb{F}_{q}\), \(n\in\mathbb{Z}_{+}\), and \(s\in\mathbb{Z}_{\geq 0}\)._
1. _For any_ \(z\in F_{q,\alpha}\)_, we have_ \(zx^{n}=zx^{n\bmod(q-1)}\)_, and similarly for_ \(zy^{n}\)_._
2. _For any nonzero_ \(u\in\mathbb{F}_{q}\)_,_ \[-\log_{\alpha}(u)\equiv\log_{\alpha}(u^{-1})\bmod(q-1).\]
3. _For any nonzero_ \(u,v\in\mathbb{F}_{q}\)_,_ \[\log_{\alpha}uv\equiv\log_{\alpha}u+\log_{\alpha}v\bmod(q-1).\]
4. _For any_ \(b,u\in\mathbb{F}_{q}\)_,_ \(u\neq 0\)_,_ \[b^{\log_{\alpha}u}b^{s}=b^{\log_{\alpha}(u\alpha^{s})}.\]
5. _For any_ \(u\in\mathbb{F}_{q}\setminus\{0,1\}\)_,_ \[xy^{\log_{\alpha}(u)}=yx^{\log_{\alpha}(1-u)}.\]
Proof.: Parts (2) - (4) are standard properties of \(\log_{\alpha}\) over a finite field, and (5) follows since any element \(u\in\mathbb{F}_{q}\) can be written as \(\alpha^{k}\). Only (1) requires proof. For this argument we need a well-defined notion of the length of an element in \(F_{q,\alpha}\), given as follows.
We can define the set \(\mathcal{A}\) of admissible words in the symbols \(x,y,\triangleright\), ( and ) inductively: we have \(x,y\in\mathcal{A}\), and for \(u,v\in\mathcal{A}\), \((u)\triangleright(v)\in\mathcal{A}\). Furthermore we write \((x)=x\) and \((y)=y\) for any occurrence of \((x)\) or \((y)\) in a word. Elements of \(F_{q,\alpha}\) consist of equivalence classes of admissible words modulo the defining relations. For an element \(u\in F_{q,\alpha}\), we define its length \(\ell(u)\) to be the smallest integer \(n\) for which there exists a sequence \(\left\{s_{i}\right\}_{i=1}^{n}\), \(s_{i}\in\{x,y\}\), such that \(u\) has a representative written with \(\left\{s_{i}\right\}_{i=1}^{n}\). For example, if \(q=5\) and \(u=y((xy^{4})x)\in F_{q,\alpha}\), the length of \(u\) is \(2\), since \(y((xy^{4})x)=y(xx)=yx\).
The argument for (1) is brief. For any \(u\in F_{q,\alpha}\), a short inductive argument on \(k\) shows that \(uxy^{k}=(uy^{k})(xy^{k})\) for any positive integer \(k\). Therefore \((ux)y^{q-1}=(uy^{q-1})(xy^{q-1})\), and another inductive argument on the length of \(u\) gives us the result in (1).
We note several consequences of \(\log_{\alpha}\) and the given relations.
**Lemma 3.2**.: _Any element of \(F_{q,\alpha}\) can be left-associated in \(\{x,y\}\), the set containing the generators._
Proof.: Any quandle \(X\) with operation \(\triangleright:X\times X\to X\) also comes equipped with a second operation \(\triangleright^{-1}:X\times X\to X\), the _right inverse_ of \(\triangleright\), satisfying \((a\triangleright b)\triangleright^{-1}b=a\) for all \(a,b\in X\). If \(X\) is generated by elements \(\left\{x_{i}\right\}_{i=1}^{n}\), then any element of \(X\) can be left-associated in the generators \(\{x_{i}\}\) if we allow both operations \(\triangleright\) and \(\triangleright^{-1}\)[15]; i.e., for any \(a\in X\) we can write
\[a=((\ldots(x_{i_{1}}\triangleright^{\pm 1}x_{i_{2}})\triangleright^{\pm 1}x_{i_{3}}) \ldots)\triangleright^{\pm 1}x_{i_{k}}\]
for appropriate choices of \(\triangleright\), \(\triangleright^{-1}\) and generators \(x_{i_{j}}\).
Furthermore, for the quandle \(X=F_{q,\alpha}\), the defining relations and Lemma 3.1 give us \(a\triangleright^{-1}b=ab^{q-2}\) for any \(a,b\in F_{q,\alpha}\). The conclusion follows.
**Lemma 3.3**.: _Let \(p\) be a prime, \(q=p^{n}\), and \(\alpha\) a primitive element in the field \(\mathbb{F}_{q}\). For nonnegative integers \(r\) and \(s\) with \(r+s>0\), set \(\mu(r,s)=\alpha^{r+1}-\alpha^{s+1}+\alpha^{s}\). In the quandle \(F_{q,\alpha}\), we have the relations_
\[(xy^{r})(xy^{s})=\begin{cases}y,&\mu(r,s)=0\\ xy^{\log_{\alpha}(\mu(r,s))},&\mu(r,s)\neq 0.\end{cases}\]
Proof.: Assume \(\mu(r,s)=0\). Then \(0=\alpha^{r+1}-\alpha^{s+1}+\alpha^{s}\), so \(\alpha-\alpha^{r-s+1}=1\). This also gives us \(r-s\not\equiv 0\mod(q-1)\) (else \(\alpha-\alpha^{r-s+1}=\alpha-\alpha=0\)) and so \(xy^{r-s}=yx^{\log_{\alpha}(1-\alpha^{r-s})}\).
A brief inductive argument shows that \((xy^{r})(xy^{s})=(xy^{r-s}x)y^{s}\). Let \(m\) be any non-negative integer such that \(m(q-1)+r>s\). Then since \(xy^{q-1}=x\), we can write
\[(xy^{r})(xy^{s}) = (xy^{m(q-1)+r})(xy^{s})\] \[= (xy^{m(q-1)+r-s})(x)y^{s}\] \[= yx^{\log_{\alpha}\left((1-\alpha^{m(q-1)+r-s})\alpha\right)}y^{s}\] \[= yx^{\log_{\alpha}\left((1-\alpha^{r-s})\alpha\right)}y^{s}\] \[= yx^{\log_{\alpha}(\alpha-\alpha^{r-s+1})}y^{s}\] \[= yx^{\log_{\alpha}(1)}y^{s}\] \[= yx^{0}y^{s}=y^{s+1}=y.\]
Now suppose \(\mu(r,s)\neq 0\). We prove this case by induction on the sum \(r+s\). We first verify the base case \(r+s=1\) in two cases: \(r=1\), \(s=0\), and \(r=0\), \(s=1\). First suppose \(r=1\) and \(s=0\). Then
\[(xy^{r})(xy^{s})=(xy)x=(yx^{\log_{\alpha}(1-\alpha)})x=yx^{\log_{\alpha}(\alpha-\alpha^{2})}=xy^{\log_{\alpha}(1-\alpha+\alpha^{2})}=xy^{\log_{\alpha}(\mu(1,0))},\]
where we have used the assumption \(\mu(1,0)\neq 0\) when writing \(yx^{\log_{\alpha}(\alpha-\alpha^{2})}=xy^{\log_{\alpha}(1-\alpha+\alpha^{2})}\). For \(r=0\), \(s=1\), we have
\[(xy^{r})(xy^{s}) = (x)(xy)\] \[= (xy^{q-1})(xy)\] \[= ((xy^{q-2})(x))y\] \[= (yx^{\log_{\alpha}\left((1-\alpha^{q-2})\alpha\right)})y\] \[= (yx^{\log_{\alpha}(\alpha-1)})y\] \[= xy^{\log_{\alpha}\left((1-(\alpha-1))\alpha\right)}\] \[= xy^{\log_{\alpha}(\alpha-\alpha^{2}+\alpha)}=xy^{\log_{\alpha}(\mu(0,1))},\]
where we have used the assumption \(\mu(0,1)\neq 0\) when writing \(yx^{\log_{\alpha}(\alpha-1)}=xy^{\log_{\alpha}(1-(\alpha-1))}\).
For the inductive step, we fix \(r,s\) with \(r+s>1\), and assume the result is true for all indices \(\tilde{r},\tilde{s}\) with \(\tilde{r}+\tilde{s}<r+s\). Then since \(\mu(r-1,s-1)\alpha=\mu(r,s)\), we have
\[(xy^{r})(xy^{s}) = (xy^{r-1})(xy^{s-1})y\] \[= xy^{\log_{\alpha}(\mu(r-1,s-1)\alpha)}\] \[= xy^{\log_{\alpha}(\mu(r,s))}.\]
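The formula of Lemma 3.3 can also be checked numerically in the concrete model \((\mathbb{F}_{q},\alpha)\), with the generators realized as \(0\) and \(1\) (an identification justified by Theorem 3.4 below). A minimal Python sketch, over the prime field \(\mathbb{F}_{7}\) with \(\alpha=3\) assumed primitive:

```python
p, alpha = 7, 3                         # assumption: 3 is a primitive element of F_7
op = lambda a, b: (alpha * a + (1 - alpha) * b) % p    # Alexander quandle operation

def act(a, b, k):                       # a b^k : act on a by b, k times
    for _ in range(k):
        a = op(a, b)
    return a

dlog = {pow(alpha, k, p): k for k in range(p - 1)}     # discrete log base alpha
mu = lambda r, s: (pow(alpha, r + 1, p) - pow(alpha, s + 1, p) + pow(alpha, s, p)) % p

x, y = 0, 1                             # realization of the generators (cf. Theorem 3.4)
for r in range(p - 1):
    for s in range(p - 1):
        if r + s == 0:
            continue
        lhs = op(act(x, y, r), act(x, y, s))           # (x y^r)(x y^s)
        rhs = y if mu(r, s) == 0 else act(x, y, dlog[mu(r, s)])
        assert lhs == rhs
print("Lemma 3.3 verified in (F_7, 3)")
```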
**Theorem 3.4**.: _Let \(p\) be a prime, \(q=p^{n}\), and \(\alpha\) a primitive element in \(\mathbb{F}_{q}\). Then the Alexander quandle \((\mathbb{F}_{q},\alpha)\) is isomorphic to \(F_{q,\alpha}\)._
Proof.: In \((\mathbb{F}_{q},\alpha)\), the formulas \(01^{k}=1-\alpha^{k}\) and \(10^{k}=\alpha^{k}\) (for any nonnegative integer \(k\)) can be easily verified by the definition of the quandle operation in \((\mathbb{F}_{q},\alpha)\) and an inductive argument in \(k\). In particular, this shows that \((\mathbb{F}_{q},\alpha)\) is generated by \(0\) and \(1\). Furthermore, as a consequence of these formulas we have
\[01^{q-1}=1-\alpha^{q-1}=1-1=0,\] \[10^{q-1}=\alpha^{q-1}=1,\]
and for any integer \(1\leq k\leq q-2\),
\[10^{\log_{\alpha}(1-\alpha^{k})}=\alpha^{\log_{\alpha}(1-\alpha^{ k})}=1-\alpha^{k}=01^{k},\] \[01^{\log_{\alpha}(1-\alpha^{k})}=1-\alpha^{\log_{\alpha}(1- \alpha^{k})}=1-(1-\alpha^{k})=\alpha^{k}=10^{k}.\]
This shows that \((\mathbb{F}_{q},\alpha)\) is generated by two elements (\(0\) and \(1\)) which satisfy the defining relations of \(F_{q,\alpha}\), from which it follows that there exists a quandle epimorphism \(F_{q,\alpha}\twoheadrightarrow(\mathbb{F}_{q},\alpha)\). Now the result is proven once we prove \(|F_{q,\alpha}|\leq|(\mathbb{F}_{q},\alpha)|\).
Consider the following set of elements in \(F_{q,\alpha}\):
\[S=\left\{x,y,xy^{r}\right\}_{1\leq r\leq q-2}.\]
We have \(|S|\leq q\). Now we will show that any other element in \(F_{q,\alpha}\) is in \(S\) as well. By Lemma 3.2, any element in \(F_{q,\alpha}\) can be left-associated in \(\{x,y\}\). We proceed by induction on the number \(k\) of \(x\) and \(y\) used to write such a left-associated element. For \(k=1\) and \(2\), the statement is true using the relations in \(F_{q,\alpha}\). Now assume the statement is true for
elements of length \(\leq k\), left-associated in \(\{x,y\}\), where \(k\geq 2\). Let \(u\) be an element of length \(k+1\), left-associated in \(\{x,y\}\):
\[u=r_{1}r_{2}\cdots r_{k}r_{k+1},\ \ \ r_{i}\in\{x,y\}\,.\]
By induction, the element \(r_{1}r_{2}\cdots r_{k}\) is equal to some element in \(S\). So the proof is complete once we show that \(S\) is closed under 'right multiplication' by \(x\) or \(y\). This is easily seen for \(y\). For right multiplication by \(x\), the only nontrivial observation is
\[xy^{r}x = (xy^{r})x\] \[= (yx^{\log_{\alpha}(1-\alpha^{r})})x\] \[= yx^{\log_{\alpha}(1-\alpha^{r})+1}\] \[= yx^{(\log_{\alpha}(1-\alpha^{r})+1)\mathrm{mod}(q-1)}\in S.\]
Therefore \(|F_{q,\alpha}|\leq q\), and the isomorphism is proven.
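The generating formulas and defining relations used in this proof are easy to confirm by machine in a small case. A minimal Python sketch, over the prime field \(\mathbb{F}_{7}\) with \(\alpha=3\) assumed primitive and the generators \(x,y\) realized as \(0,1\):

```python
p, alpha = 7, 3                  # assumption: 3 is a primitive element of F_7
op = lambda a, b: (alpha * a + (1 - alpha) * b) % p     # a * b in (F_7, 3)

def act(a, b, k):                # a b^k
    for _ in range(k):
        a = op(a, b)
    return a

dlog = {pow(alpha, k, p): k for k in range(p - 1)}
x, y = 0, 1                      # the generators 0 and 1 from the proof

assert act(x, y, p - 1) == x and act(y, x, p - 1) == y          # x y^{q-1} = x, y x^{q-1} = y
for k in range(1, p - 1):
    ak = pow(alpha, k, p)
    assert act(x, y, k) == (1 - ak) % p                         # 0 1^k = 1 - alpha^k
    assert act(y, x, k) == ak                                   # 1 0^k = alpha^k
    assert act(x, y, k) == act(y, x, dlog[(1 - ak) % p])        # x y^k = y x^{log(1-alpha^k)}
print("generators 0, 1 satisfy the defining relations of F_{7,3}")
```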
### A complete classification of quandles of cyclic type
The presentation of cyclic quandles given in Theorem 3.4 is used in subsequent sections to prove some representation-theoretic results. Although the remainder of this section is not needed for the representation theory developed later in the paper, Theorem 3.4 also provides a previously unknown result: the complete classification of quandles of cyclic type. If \(\alpha\) and \(\beta\) are distinct primitive elements in the field \(\mathbb{F}_{q}\), there was previously no known way to determine whether \((\mathbb{F}_{q},\alpha)\) and \((\mathbb{F}_{q},\beta)\) are isomorphic quandles. In Theorem 3.7 below, we obtain a necessary and sufficient condition for this isomorphism, and also provide a simple formula counting the nonisomorphic cyclic quandles of a given order.
**Lemma 3.5**.: _Let \(m,n,k\), and \(r\) be nonnegative integers, \(1\leq r<q-1\), and \(u,v\in F_{q,\alpha}\). Then_
1. \((xy^{m})(xy^{n})^{k}=xy^{\log_{\alpha}(\alpha^{m+k}-\alpha^{n+k}+\alpha^{n})}\)_,_
2. \(uv^{r}=vu^{\log_{\alpha}(1-\alpha^{r})}\)_,_
3. _If_ \(u\neq v\)_, then_ \(uv^{k}=uv^{\ell}\) _if and only if_ \(k\equiv\ell\) _mod_ \((q-1)\)_._
Proof.: We prove (1) by induction on \(k\). The base case \(k=1\) is given in Lemma 3.3. For the inductive step, we have
\[(xy^{m})(xy^{n})^{k+1} = (xy^{m})(xy^{n})^{k}(xy^{n})\] \[= xy^{\log_{\alpha}(\alpha^{m+k}-\alpha^{n+k}+\alpha^{n})}(xy^{n})\] \[= xy^{\log_{\alpha}\mu(\log_{\alpha}(\alpha^{m+k}-\alpha^{n+k}+ \alpha^{n}),n)}\] \[= xy^{\log_{\alpha}((\alpha^{m+k}-\alpha^{n+k}+\alpha^{n})\alpha- \alpha^{n+1}+\alpha^{n})}\] \[= xy^{\log_{\alpha}(\alpha^{m+k+1}-\alpha^{n+k+1}+\alpha^{n})}.\]
Part (2) is proven by induction on \(r\). We begin by representing \(u\) and \(v\) as
\[u=xy^{m},\ \ v=xy^{n}\]
for some \(0\leq m,n\leq q-2\). For the base case \(r=1\), Lemma 3.5, item (1), gives us
\[vu^{\log_{\alpha}(1-\alpha)} = (xy^{n})(xy^{m})^{\log_{\alpha}(1-\alpha)}\] \[= xy^{\log_{\alpha}(\alpha^{n+\log_{\alpha}(1-\alpha)}-\alpha^{m+ \log_{\alpha}(1-\alpha)}+\alpha^{m})}\] \[= xy^{\log_{\alpha}(\alpha^{n}(1-\alpha)-\alpha^{m}(1-\alpha)+ \alpha^{m})}\] \[= xy^{\log_{\alpha}(\alpha^{m+1}+\alpha^{n}-\alpha^{n+1})}\] \[= xy^{\log_{\alpha}(\mu(m,n))}=(xy^{m})(xy^{n})=uv.\]
For the inductive step, we have
\[uv^{r+1} = uv^{r}v\] \[= vu^{\log_{\alpha}(1-\alpha^{r})}v\] \[= (xy^{n})(xy^{m})^{\log_{\alpha}(1-\alpha^{r})}xy^{n}\] \[= xy^{\log_{\alpha}(\alpha^{n+\log_{\alpha}(1-\alpha^{r})}-\alpha^{ m+\log_{\alpha}(1-\alpha^{r})}+\alpha^{m})}xy^{n}\] \[= xy^{\log_{\alpha}((1-\alpha^{r})\alpha^{n}-(1-\alpha^{r})\alpha^{ m}+\alpha^{m})}xy^{n}\] \[= xy^{\log_{\alpha}(\mu(\log_{\alpha}((1-\alpha^{r})\alpha^{n}-(1- \alpha^{r})\alpha^{m}+\alpha^{m}),n))}\] (Lemma 3.3) \[= xy^{\log_{\alpha}[((1-\alpha^{r})\alpha^{n}-(1-\alpha^{r})\alpha^{ m}+\alpha^{m})\alpha+\alpha^{n}-\alpha^{n+1}]}\] \[= xy^{\log_{\alpha}(\alpha^{r+m+1}-\alpha^{r+n+1}+\alpha^{n})}\] \[= xy^{\log_{\alpha}(\alpha^{n}(1-\alpha^{r+1})-\alpha^{m}(1- \alpha^{r+1})+\alpha^{m})}\] \[= xy^{\log_{\alpha}(\alpha^{n+\log_{\alpha}(1-\alpha^{r+1})}- \alpha^{m+\log_{\alpha}(1-\alpha^{r+1})}+\alpha^{m})}\] \[= (xy^{n})(xy^{m})^{\log_{\alpha}(1-\alpha^{r+1})}\qquad\text{ (part (1))}\] \[= vu^{\log_{\alpha}(1-\alpha^{r+1})}.\]
We first prove Part (3) explicitly for \(a,b\) in \((\mathbb{F}_{q},\alpha)\) (in place of \(u,v\) in \(F_{q,\alpha}\)). Induction on \(k\) shows that \(ab^{k}=\alpha^{k}a+(1-\alpha^{k})b\). Suppose \(ab^{k}=ab^{\ell}\). If \(a=0\), then this assumption gives us
\[(1-\alpha^{k})b=0b^{k}=0b^{\ell}=(1-\alpha^{\ell})b,\]
and if \(b=0\) it gives us \(\alpha^{k}a=\alpha^{\ell}a\). Either way we obtain \(\alpha^{k}=\alpha^{\ell}\), hence \(k\equiv\ell\mod(q-1)\).
If both \(a,b\neq 0\), we argue by contradiction, assuming \(k\not\equiv\ell\mod(q-1)\). Then \(\alpha^{k}\neq\alpha^{\ell}\). The assumption \(ab^{k}=ab^{\ell}\) gives us
\[\alpha^{k}a+(1-\alpha^{k})b=\alpha^{\ell}a+(1-\alpha^{\ell})b,\quad\text{ so }\quad(\alpha^{k}-\alpha^{\ell})a=(\alpha^{k}-\alpha^{\ell})b,\]
and since \(\alpha^{k}-\alpha^{\ell}\neq 0\), we may cancel this factor to obtain \(a=b\), which contradicts \(a\neq b\). This proves the result for \(a,b\) in \((\mathbb{F}_{q},\alpha)\). Finally we use the isomorphism \(F_{q,\alpha}\cong(\mathbb{F}_{q},\alpha)\) to obtain the same result in \(F_{q,\alpha}\).
Suppose \(\alpha\) and \(\beta\) are primitive elements in a finite field \(\mathbb{F}_{q}\), where \(q=p^{n}\) for \(p\) prime. If \(\alpha\) can be written as a 'prime power' of \(\beta\), i.e., \(\log_{\beta}\alpha=p^{s}\) for some integer \(s\), then \(\alpha=\beta^{p^{s}}\), and \(\beta=\beta^{p^{n}}=(\beta^{p^{s}})^{p^{n-s}}=\alpha^{p^{n-s}}\), so that \(\beta\) can also be written as a 'prime power' of \(\alpha\). Hence this condition is an equivalence relation on the set of primitive elements of \(\mathbb{F}_{q}\). We will say that two primitive elements are _prime-power equivalent_ if this is the case.
**Lemma 3.6**.: _Let \(q=p^{n}\), p prime, and let \(\alpha\), \(\beta\) be two primitive elements in \(\mathbb{F}_{q}\). Then \(\alpha\) and \(\beta\) are prime-power equivalent if and only if_
\[\log_{\alpha}(1-\alpha^{k})\equiv\log_{\beta}(1-\beta^{k}),\ \ 0<k<q-1. \tag{1}\]
Proof.: Set \(N=\log_{\beta}\alpha\). Then (1) is equivalent to
\[\big{(}1-\beta^{k}\big{)}^{N}=1-\alpha^{k},\ \ 0<k<q-1.\]
Note that \(N\) must be odd, for if \(N\) were even, then the choice \(k=\log_{\beta}2\) would give us
\[1=(-1)^{N}=\big{(}1-\beta^{k}\big{)}^{N}=1-\alpha^{k}=1-2^{N},\]
hence \(2^{N}=0\) - but no such \(N\) exists. Also note that \(1\leq N<q-1\), because \(\beta^{q-1}=1\).
Now suppose \(\alpha\) and \(\beta\) are prime power equivalent, so that \(N\) is a power of \(p\): \(N=p^{s}\) for some positive integer \(s\). Then we have
\[\big{(}1-\beta^{k}\big{)}^{N}=\big{(}1-\beta^{k}\big{)}^{p^{s}}=1^{p^{s}}-(\beta ^{k})^{p^{s}}=1-(\beta^{p^{s}})^{k}=1-\alpha^{k}\]
for all \(k\), and (1) is established.
For the opposite direction, assume \(N=p^{s}v\), where \(s\geq 0\) and \(v\not\equiv 0\mod p\), and suppose that the system (1) holds. Let \(G(x)=(1-x)^{N}+x^{N}-1\in\mathbb{F}_{q}[x]\). Let \(r\) be any nonzero element of \(\mathbb{F}_{q}\), and set \(k=\log_{\beta}r\). Then using (1) we have
\[(1-r)^{N}=\big{(}1-\beta^{\log_{\beta}r}\big{)}^{N}=1-\alpha^{\log_{\beta}r}=1 -r^{N},\]
hence every element of \(\mathbb{F}_{q}^{*}\) is a root of \(G(x)\). Now we claim \(G(x)\neq 0\). Expanding the first term \((1-x)^{N}\) gives us
\[(1-x)^{N}=(1-x)^{p^{s}v}=(1-x^{p^{s}})^{v}=1-x^{N}-vx^{p^{s}}+\sum_{t=2}^{v-1}\binom{v}{t}(-x^{p^{s}})^{t},\]
hence \(G(x)=-vx^{p^{s}}+(\text{higher-order terms})\). In particular, since \(v\not\equiv 0\mod p\), \(G(x)\neq 0\). Therefore \(G(x)\) is a nontrivial polynomial with \(1\leq\deg(G)<q-1\), and so it has fewer than \(q-1\) roots - contradicting the fact that \(G(r)=0\) for all \(r\in\mathbb{F}_{q}^{*}\). This contradiction shows that the system (1) cannot hold, and the reverse direction is proven.
**Theorem 3.7**.: _Let \(q=p^{n}\), \(p\) prime, and let \(\alpha\), \(\beta\) be two primitive elements in \(\mathbb{F}_{q}\)._
1. _The quandles_ \(F_{q,\alpha}\) _and_ \(F_{q,\beta}\) _are isomorphic if and only if_ \(\alpha\) _and_ \(\beta\) _are prime-power equivalent._
2. _There are a total of_ \(\frac{\phi(p^{n}-1)}{n}\) _isomorphism classes of cyclic quandles of order_ \(p^{n}\)_, where_ \(\phi\) _is Euler's totient function._
Proof.: Assume \(\phi:F_{q,\alpha}\to F_{q,\beta}\) is a quandle isomorphism, and let \(u\) and \(v\) be the preimages in \(F_{q,\alpha}\) of the generators \(x\) and \(y\) in \(F_{q,\beta}\). By Lemma 3.5 item (2), for any \(1\leq k<q-1\) we have \(uv^{k}=vu^{\log_{\alpha}(1-\alpha^{k})}\). Applying \(\phi\) to this relation gives us \(xy^{k}=yx^{\log_{\alpha}(1-\alpha^{k})}\). But since \(x,y\) are the generators of \(F_{q,\beta}\), we have \(xy^{k}=yx^{\log_{\beta}(1-\beta^{k})}\), so
\[yx^{\log_{\alpha}(1-\alpha^{k})}=xy^{k}=yx^{\log_{\beta}(1-\beta^{k})}.\]
Therefore by Lemma 3.5 item (3), we have \(\log_{\alpha}(1-\alpha^{k})\equiv\log_{\beta}(1-\beta^{k})\) for all \(1\leq k<q-1\), and then Lemma 3.6 gives us our result.
If we now assume that \(\alpha\) and \(\beta\) are prime-power equivalent, then we have
\[\log_{\alpha}(1-\alpha^{k})\equiv\log_{\beta}(1-\beta^{k})\]
for all \(1\leq k<q-1\). If \(F_{q,\alpha}\) is generated by elements \(x,y\), and \(F_{q,\beta}\) is generated by elements \(u,v\), define a map \(\phi:F_{q,\alpha}\to F_{q,\beta}\) by \(\phi(x)=u\), \(\phi(y)=v\). Lemma 3.6 ensures that the defining relations of \(F_{q,\alpha}\) are preserved by \(\phi\), so \(\phi\) is a well-defined quandle homomorphism, and one can check that it is one-to-one and onto.
Item (2) is an immediate corollary: given a primitive element \(\alpha\) in \(\mathbb{F}_{q}\), any prime power \(\alpha^{p^{s}}\) is also primitive, but \(\alpha^{p^{n}}=\alpha\). So the corresponding prime-power equivalence class is \(\Big{\{}\alpha,\alpha^{p},\dots,\alpha^{p^{n-1}}\Big{\}}\), with \(n\) elements.
_Example 3.8_.: Let \(p=5\). We construct a field of order \(5^{3}\) as a quotient field of the ring \(\mathbb{Z}_{5}[x]\). The polynomial \(m(x)=x^{3}+x+1\) is irreducible, hence \(\mathbb{F}_{5^{3}}\cong\mathbb{Z}_{5}[x]/(x^{3}+x+1)\). Consider the (left cosets of the) following elements:
\[\alpha:=2x^{2}+x+2,\quad\beta:=3x^{2}+2x+1,\quad\gamma:=3x^{2}+4x+3\]
These elements are all primitive, since \(|\alpha|=|\beta|=|\gamma|=124\). Moreover, \(\alpha^{5^{2}}=\beta\), hence \(\beta^{5}=\alpha\). So \(\alpha\) and \(\beta\) are prime-power equivalent primitive elements. However, \(\alpha^{5^{s}}\neq\gamma\) for any \(s\). So by Theorem 3.7, we can conclude:
\[F_{125,\alpha}\cong F_{125,\beta}\not\cong F_{125,\gamma}.\]
There are a total of \(\phi(124)=60\) primitive elements in \(\mathbb{F}_{125}\), and the order of each prime-power equivalence class is \(3\) (e.g., \(\{\alpha,\alpha^{5},\alpha^{25}\}\) is one of these equivalence classes). So there are a total of \(60/3=20\) equivalence classes, from which we can conclude: up to isomorphism, there are \(20\) (pairwise nonisomorphic) cyclic quandles of order \(125\).
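The claims in this example can be verified directly by computer. A minimal Python sketch, representing elements of \(\mathbb{F}_{125}\) as coefficient triples and reducing by \(x^{3}=-x-1\) (the expected outputs are the numbers asserted in the example itself):

```python
from math import gcd

P = 5                                   # F_125 = Z_5[x]/(x^3 + x + 1); elements are (c0, c1, c2)

def mul(a, b):
    """Multiply two field elements, reducing by x^3 = -x - 1."""
    prod = [0] * 5
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % P
    for d in (4, 3):                    # x^d = -x^{d-2} - x^{d-3}
        c, prod[d] = prod[d], 0
        prod[d - 2] = (prod[d - 2] - c) % P
        prod[d - 3] = (prod[d - 3] - c) % P
    return tuple(prod[:3])

def power(a, n):
    r = (1, 0, 0)
    for _ in range(n):
        r = mul(r, a)
    return r

alpha = (2, 1, 2)                       # 2x^2 + x + 2   (constant term first)
beta  = (1, 2, 3)                       # 3x^2 + 2x + 1
gamma = (3, 4, 3)                       # 3x^2 + 4x + 3

print("order of alpha:", next(k for k in range(1, 125) if power(alpha, k) == (1, 0, 0)))
print("alpha^25 == beta:", power(alpha, 25) == beta)
print("gamma a 5-power of alpha:", any(power(alpha, 5 ** s) == gamma for s in range(3)))

# primitive elements are alpha^k with gcd(k, 124) = 1; prime-power equivalence is k ~ 5k mod 124
classes = {frozenset((k * 5 ** s) % 124 for s in range(3))
           for k in range(1, 124) if gcd(k, 124) == 1}
print("cyclic quandles of order 125 up to isomorphism:", len(classes))
```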
## 4. Representation theory
A _representation_ of a finite quandle \((X,*)\) on a finite dimensional complex vector space \(V\) is a quandle homomorphism \(\rho:X\to GL(V)\), where the automorphism group \(GL(V)\) of \(V\) is considered as a quandle with conjugation (see [6]). In other words, for all \(x,y\in X\), we have \(\rho(x*y)=\rho(y)\rho(x)\rho(y)^{-1}.\) To simplify the notation, we will denote \(\rho(x)\) by just \(\rho_{x}\). Let \(V\) and \(W\) be two representations of a quandle \(X\). A map from the representation \(V\) to \(W\) is a linear map \(\phi:V\to W\) which intertwines the two actions, i.e., \(\phi(\rho_{x}(v))=\rho_{x}(\phi(v))\) for all \(x\in X\) and \(v\in V\), where \(\rho_{x}\) denotes the action of \(x\) on \(V\) on the left-hand side and on \(W\) on the right-hand side.
If furthermore \(\phi\) is an isomorphism then we say that the two representations are _equivalent_. If \(\phi:V\hookrightarrow W\) is the inclusion of the linear subspace \(V\) into \(W\), then we say that \(V\) is a _subrepresentation_ of \(W\).
A representation \(V\) of a quandle \(X\) is called _irreducible_ if its only subrepresentations are \(\{0\}\) and \(V\), and _completely reducible_ if it can be written as a direct sum of irreducible subrepresentations. A representation is called _indecomposable_ if it cannot be written as a direct sum of nontrivial subrepresentations. Thus, clearly every irreducible representation is indecomposable.
### The regular representation
For any finite quandle \(X\), we denote by \(\mathbb{C}X\) the \(\mathbb{C}\)-vector space of \(\mathbb{C}\)-valued functions on \(X\); equivalently it is the \(\mathbb{C}\)-vector space generated by basis vectors \(\{e_{x}\}_{x\in X}\), whose elements are formal sums \(f=\sum_{x\in X}a_{x}e_{x}\), \(a_{x}\in\mathbb{C}\). The _regular representation_ of \(X\) is
\[\lambda:X\to\operatorname{Conj}(GL(\mathbb{C}X)),\]
where \(\lambda_{t}(f)(x):=f(R_{t}^{-1}(x))\). This action of \(X\) is equivalent to the right \(X\)-action on \(\mathbb{C}X\) given by the linear extension of \(e_{x}\cdot t=e_{R_{t}(x)}=e_{x*t}\). We note that a subspace \(W\) of \(\mathbb{C}X\) is a quandle subrepresentation of \(\mathbb{C}X\) if and only if \(W\) is a subrepresentation of \(\mathbb{C}X\) as a group representation of \(\operatorname{Inn}(X)\).
When \(|X|>2\), the regular representation always has two non-trivial subrepresentations, one of which is always irreducible. This irreducible subrepresentation is the one-dimensional subspace \(\mathbb{C}\mathbf{1}\), where \(\mathbf{1}\) is the constant function \(\mathbf{1}(x)=1\), for all \(x\in X\). Its vector space complement, \((\mathbb{C}\mathbf{1})^{\perp}\), is also a subrepresentation, since \((\mathbb{C}\mathbf{1})^{\perp}=\left\{\sum_{x\in X}a_{x}e_{x}|\ \sum a_{x}=0\right\}\), and action by any \(t\in X\) only permutes the coefficients \(a_{x}\). So \(\mathbb{C}X\) decomposes into a direct sum of subrepresentations
\[\mathbb{C}X=\mathbb{C}\mathbf{1}\oplus(\mathbb{C}\mathbf{1})^{\perp}.\]
For \(x,y\in X\) we denote \(v_{xy}=e_{x}-e_{y}\). If we enumerate \(X=\{x_{1},\ldots,x_{n}\}\), then a basis for \((\mathbb{C}\mathbf{1})^{\perp}\) is \(\left\{v_{x_{1}x_{2}},v_{x_{2}x_{3}},\ldots,v_{x_{n-1}x_{n}}\right\}\).
**Proposition 4.1**.: _The regular representation of a quandle \(X\) is completely reducible._
Proof.: A subspace \(W\) of \(\mathbb{C}X\) is a quandle subrepresentation of \(\mathbb{C}X\) if and only if \(W\) is a subrepresentation of \(\mathbb{C}X\) as a group representation of \(\operatorname{Inn}(X)\). Then Maschke's Theorem gives us the result.
## 5. The regular representations of dihedral quandles
In this section, we obtain an explicit decomposition of the regular representation of a dihedral quandle. We denote by \((\mathbb{Z}_{n},a)\) the linear quandle \(\mathbb{Z}_{n}\) with operation \(x\star y=ax+(1-a)y\), for any \(a\) invertible in \(\mathbb{Z}_{n}\). We denote by \(\operatorname{Inn}(\mathbb{Z}_{n})\) the group of inner automorphisms of the dihedral quandle \(\mathbb{Z}_{n}\); this is the subgroup of \(\operatorname{Aut}(\mathbb{Z}_{n})\) generated by the right multiplication operators \(R_{i}\).
**Theorem 5.1**.: _[_4_]_ _If \(n\) is even, \(\operatorname{Inn}(\mathbb{Z}_{n})=D_{n/2}\), the dihedral group of order \(n\). If \(n\) is odd, \(\operatorname{Inn}(\mathbb{Z}_{n})=D_{n}\)._
We describe the (isomorphism classes of) finite-dimensional irreducible group representations of \(D_{n}\) here, to facilitate statement of results below. The dihedral group \(D_{n}\) has a presentation given by
\[D_{n}=\left\langle\alpha,\beta\ \mid\ |\alpha|=|\beta|=2,\ |\alpha\beta|=n \right\rangle.\]
We fix generators \(\alpha,\beta\) for this presentation. The classification of the finite-dimensional irreducible group representations of \(D_{n}\) falls into two cases: \(n\) even and \(n\) odd. In either case, let \(\omega_{r}=e^{2\pi i/r}\). If \(\lambda,\mu\in\mathbb{C}\), we will denote by \(\mathbb{C}(\lambda,\mu)\) the one-dimensional group representation of \(D_{n}\) on which \(\alpha\) and \(\beta\) act by scalar multiplication by \(\lambda\) and \(\mu\), respectively. And for \(s\in\mathbb{Z}\), we will denote by \(W(\omega_{r}^{s})\) the two-dimensional group representation of \(D_{n}\) for which the matrix representations of \(\alpha\) and \(\beta\) are
\[\alpha\mapsto\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\quad\beta\mapsto\begin{bmatrix}0&\omega_{r}^{s}\\ \omega_{r}^{-s}&0\end{bmatrix}.\]
With this notation fixed, we can give the classification of the finite-dimensional irreducible group representations of \(D_{n}\). All such representations are either \(1\)- or \(2\)- dimensional, and are described here:
\begin{tabular}{|c|c|c|} \hline \(D_{n}\) & \(n=2k\) even & \(n=2k+1\) odd \\ \hline \(1\)-dimensional & \(\mathbb{C}(1,1)\), \(\mathbb{C}(1,-1)\), & \\ group representations & \(\mathbb{C}(-1,1)\), \(\mathbb{C}(-1,-1)\) & \(\mathbb{C}(1,1)\), \(\mathbb{C}(-1,-1)\) \\ \hline \(2\)-dimensional & \(W(\omega_{n}^{s})\), & \(W(\omega_{n}^{s})\), \\ group representations & \(1\leq s\leq k-1\) & \(1\leq s\leq k\) \\ \hline \end{tabular} We will denote by \(\Gamma(D_{n})\) the set of (isomorphism classes of) finite-dimensional irreducible group representations of \(D_{n}\). For example, \(D_{8}\) has \(4\) representations of dimension \(1\) and \(3\) irreducible representations of dimension \(2\), and they are
\[\Gamma(D_{8})=\left\{\mathbb{C}(\pm 1,\pm 1),\ \ \mathbb{C}(\pm 1,\mp 1),\ \ W( \omega_{8}),\ \ W(\omega_{8}^{2}),\ \ W(\omega_{8}^{3})\right\},\]
while \(D_{9}\) has \(2\) representations of dimension \(1\) and \(4\) irreducible representations of dimension \(2\), and they are
\[\Gamma(D_{9})=\left\{\mathbb{C}(\pm 1,\pm 1),\ \ W(\omega_{9}),\ \ W(\omega_{9}^{2 }),\ \ W(\omega_{9}^{3}),\ \ W(\omega_{9}^{4})\right\}.\]
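As a quick sanity check on this classification, the squared dimensions of the listed irreducibles must add up to \(|D_{n}|=2n\). A short Python sketch (the helper `irrep_dims` is ours, introduced only for illustration):

```python
def irrep_dims(n):
    """Dimensions of the irreducible representations of D_n (order 2n), per the table above."""
    if n % 2 == 0:
        return [1, 1, 1, 1] + [2] * (n // 2 - 1)
    return [1, 1] + [2] * ((n - 1) // 2)

for n in (8, 9, 12):
    d = irrep_dims(n)
    assert sum(x * x for x in d) == 2 * n
    print(f"D_{n}: {d.count(1)} one-dimensional, {d.count(2)} two-dimensional irreducibles")
```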
Now we consider the regular representation \(\mathbb{C}\mathbb{Z}_{n}\) of \(\mathbb{Z}_{n}\) and the subrepresentation \((\mathbb{C}\mathbf{1})^{\perp}\), in cases \(n\) even or \(n\) odd. In either case, \((\mathbb{C}\mathbf{1})^{\perp}\) is spanned by \(\left\{v_{ij}\right\}_{1\leq i,j\leq n}\).
If \(n=2k\) is even, there are two orbits of \(\mathbb{Z}_{n}\) under the quandle action: the even orbit \(\left\{2,4,\ldots,2k\right\}\), and the odd orbit \(\left\{1,3,\ldots,2k-1\right\}\). Let
\[\Phi_{n,0}=\operatorname{sp}\left\{v_{ij}\right\}_{i\neq j\text{\ even}}, \quad\Delta_{n,0}=\left\{v_{24},v_{46},\ldots,v_{2k-2\,2k}\right\},\]
and
\[\Phi_{n,1}=\operatorname{sp}\left\{v_{ij}\right\}_{i\neq j\text{\ odd}}, \quad\Delta_{n,1}=\left\{v_{13},v_{35},\ldots,v_{2k-3\,2k-1}\right\}.\]
We call \(\Phi_{n,0}\) and \(\Phi_{n,1}\) the even and odd subspaces, respectively, of \((\mathbb{C}\mathbf{1})^{\perp}\). For \(i=0,1\), \(\Delta_{n,i}\) is a basis for \(\Phi_{n,i}\).
If \(n\) is odd, we set
\[\Phi_{n}=\operatorname{sp}\left\{v_{ij}\right\}_{i\neq j},\quad\Delta_{n}= \left\{v_{12},v_{23},\ldots,v_{n-1\,n}\right\},\]
and \(\Delta_{n}\) is a basis for \(\Phi_{n}\).
**Theorem 5.2**.: _If \(n\) is even, then the decomposition of \(\mathbb{C}\mathbb{Z}_{n}\) into irreducible subrepresentations of \(\mathbb{Z}_{n}\) is, for \(n=4k\),_
\[\mathbb{C}\mathbb{Z}_{n}\cong\mathbb{C}(1,1)^{\oplus 2}\oplus\mathbb{C}(\pm 1, \mp 1)\bigoplus_{s=1}^{k-1}W(\omega_{2k}^{s})^{\oplus 2}\]
_And for \(n=4k+2\),_
\[\mathbb{C}\mathbb{Z}_{n}\cong\mathbb{C}(1,1)^{\oplus 2}\bigoplus_{s=1}^{k}W( \omega_{2k+1}^{s})^{\oplus 2}.\]
_If \(n\) is odd, \(n=2r+1\), then the decomposition of \(\mathbb{C}\mathbb{Z}_{n}\) into irreducible subrepresentations of \(\mathbb{Z}_{n}\) is_
\[\mathbb{C}\mathbb{Z}_{n}\cong\mathbb{C}(1,1)\bigoplus_{s=1}^{r}W(\omega_{n}^{ 2s}).\]
Proof.: For arbitrary \(n\), the regular representation decomposes as \(\mathbb{C}\mathbb{Z}_{n}=\mathbb{C}\mathbf{1}\oplus(\mathbb{C}\mathbf{1})^{\perp}\), with \(\mathbb{C}\mathbf{1}\cong\mathbb{C}(1,1)\). For \(n\) even, let \(\hat{\mathbf{1}}=\sum_{i=1}^{n}(-1)^{i}e_{i}\in(\mathbb{C}\mathbf{1})^{\perp}\); the parity of \(i\) is preserved by the quandle action, so \(\mathbb{C}\hat{\mathbf{1}}\cong\mathbb{C}(1,1)\) as well. We proceed in the cases \(n\) even or \(n\) odd.
**Case 1:**\(n=2r\). In this case, each \(\Phi_{n,i}\) (for \(i=0,1\)) is a subrepresentation of \(\mathbb{C}\mathbb{Z}_{n}\) of dimension \(r-1\), and \((\mathbb{C}\mathbf{1})^{\perp}\) decomposes as
\[(\mathbb{C}\mathbf{1})^{\perp}=\mathbb{C}\hat{\mathbf{1}}\oplus\Phi_{n,0} \oplus\Phi_{n,1}.\]
Set \([a_{1},\ldots,a_{r-1}]_{0}=\sum_{i=1}^{r-1}a_{i}v_{2i,2i+2}\) (the coordinate vector in the basis \(\Delta_{n,0}\)). The right multiplication operators \(R_{1}\) and \(R_{2}\) together generate all of \(\operatorname{Inn}(\mathbb{Z}_{n})\cong D_{r}\), and with respect to the basis \(\Delta_{n,0}\), the matrix representations of these operators on the regular representation are
\[\left[R_{1}\right]_{\Delta_{n,0}}=\left[\begin{array}{ccccc}0&0&\cdots&0&-1 \\ 0&0&\cdots&-1&0\\ \vdots&\vdots&\iddots&\vdots&\vdots\\ 0&-1&\cdots&0&0\\ -1&0&\cdots&0&0\\ \end{array}\right],\quad\left[R_{2}\right]_{\Delta_{n,0}}=\left[\begin{array}[] {ccccc}1&0&\cdots&0&0\\ 1&0&\cdots&0&-1\\ \vdots&\vdots&\iddots&\vdots&\vdots\\ 1&0&\cdots&0&0\\ 1&-1&\cdots&0&0\\ \end{array}\right]. \tag{2}\]
For example, for \(n=12\), we have
\[R_{1}\left([a_{1},a_{2},a_{3},a_{4},a_{5}]_{0}\right)=\left[-a_{5},-a_{4},-a_{3}, -a_{2},-a_{1}\right]_{0},\]
\[R_{2}\left([a_{1},a_{2},a_{3},a_{4},a_{5}]_{0}\right)=\left[a_{1},a_{1}-a_{5},a_ {1}-a_{4},a_{1}-a_{3},a_{1}-a_{2}\right]_{0}.\]
Let \(\omega_{r}=e^{2\pi i/r}\), and for \(1\leq s\leq r-1\), let
\[\mathbf{u}_{s}:=\left[1-\omega_{r}^{s},1-(\omega_{r}^{s})^{2},\cdots,1-(\omega_ {r}^{s})^{r-1}\right]_{0}=\sum_{i=1}^{r-1}\left(1-(\omega_{r}^{s})^{i}\right)v _{2i,2i+2},\ \ \text{and}\ \ \mathbf{v}_{s}:=R_{1}(\mathbf{u}_{s}).\]
Then \(\mathbf{v}_{s}=\sum_{i=1}^{r-1}\left((\omega_{r}^{s})^{r-i}-1\right)v_{2i,2i+2}\), and
\[R_{2}(\mathbf{u}_{s}) = \sum_{i=1}^{r-1}\left((\omega_{r}^{s})^{r+1-i}-\omega_{r}^{s} \right)v_{2i,2i+2}\] \[= \omega_{r}^{s}\sum_{i=1}^{r-1}\left((\omega_{r}^{s})^{r-i}-1 \right)v_{2i,2i+2}\] \[= \omega_{r}^{s}\mathbf{v}_{s},\]
and since all \(R_{i}\) are involutions, we also have \(R_{2}(\mathbf{v}_{s})=(\omega_{r}^{-1})^{s}\mathbf{u}_{s}\). Let \(W_{s,0}\) be the subspace of \(\Phi_{n,0}\) spanned by \(\mathbf{u}_{s}\) and \(\mathbf{v}_{s}\). The above identities show that each \(W_{s,0}\) is a subrepresentation of \(\Phi_{n,0}\): if \(\mathbf{u}_{s}\) and \(\mathbf{v}_{s}\) are linearly dependent, then \(\mathbf{u}_{s}\) is an eigenvector for \(R_{1}\) and \(R_{2}\), hence \(W_{s}\) is a \(1\)-dimensional subrepresentation. And if \(\mathbf{u}_{s}\) and \(\mathbf{v}_{s}\) are linearly independent, the matrix representations of \(R_{1}\) and \(R_{2}\) with respect to the basis \(\{\mathbf{u}_{s},\mathbf{v}_{s}\}\) are
\[\left[R_{1}\right]_{\{\mathbf{u}_{s},\mathbf{v}_{s}\}}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\ \ \ \left[R_{2}\right]_{\{\mathbf{u}_{s},\mathbf{v}_{s}\}}= \begin{bmatrix}0&\omega_{r}^{s}\\ \omega_{r}^{-s}&0\end{bmatrix},\]
which gives us \(W_{s,0}\cong W(\omega_{r}^{s})\) (as group representations of \(D_{r}\)). From the identity \((\omega_{r}^{r-s})^{\ell}=(\omega_{r}^{s})^{-\ell}\) (for any integers \(\ell,s\)), we obtain \(\mathbf{u}_{s}=-\mathbf{v}_{r-s}\), for \(1\leq s\leq r-1\). Therefore \(W_{s,0}=W_{r-s,0}\), \(1\leq s\leq r-1\).
Next we will show that the set \(\{\mathbf{u}_{s}\}_{1\leq s\leq r-1}\) is a basis for \(\Phi_{n,0}\). Let
\[A=\begin{bmatrix}-\mathbf{u}_{1}-\\ -\mathbf{u}_{2}-\\ \vdots\\ -\mathbf{u}_{r-1}-\end{bmatrix}=\begin{bmatrix}1-\omega_{r}&1-\omega_{r}^{2}&\cdots&1-\omega_{r}^{r-1}\\ 1-\omega_{r}^{2}&1-(\omega_{r}^{2})^{2}&\cdots&1-(\omega_{r}^{2})^{r-1}\\ \vdots&\vdots&&\vdots\\ 1-\omega_{r}^{r-1}&1-(\omega_{r}^{r-1})^{2}&\cdots&1-(\omega_{r}^{r-1})^{r-1}\end{bmatrix}.\]
Then we can write
\[A=-\begin{bmatrix}2&1&\cdots&1&1\\ 1&2&\cdots&1&1\\ \vdots&\vdots&&\vdots&\vdots\\ 1&1&\cdots&2&1\\ 1&1&\cdots&1&2\end{bmatrix}\begin{bmatrix}\omega_{r}&\omega_{r}^{2}&\cdots&\omega_{r}^{r-1}\\ \omega_{r}^{2}&(\omega_{r}^{2})^{2}&\cdots&(\omega_{r}^{r-1})^{2}\\ \vdots&\vdots&&\vdots\\ \omega_{r}^{r-1}&(\omega_{r}^{2})^{r-1}&\cdots&(\omega_{r}^{r-1})^{r-1}\end{bmatrix},\]
where each matrix is \(r-1\) square. The first matrix in this factorization is easily row-reduced to the identity, while the second matrix is the submatrix obtained from the Vandermonde matrix of the \(r\) roots of unity by removing the first row and column of \(1\)'s. Since this Vandermonde matrix is invertible, it follows that \(A\) is invertible, hence the set \(\{\mathbf{u}_{s}\}_{1\leq s\leq r-1}\) is linearly independent. As a corollary, we obtain the following: For \(r\) even, \(W_{\ell,0}\cap W_{s,0}=\{0\}\) for \(1\leq\ell\neq s\leq r/2\), and \(\dim(W_{s,0})=\begin{cases}2,&1\leq s<r/2\\ 1,&s=r/2\end{cases}.\) For \(r\) odd,
\(W_{\ell,0}\cap W_{s,0}=\{0\}\) for \(1\leq\ell\neq s\leq(r-1)/2\), and \(\dim(W_{s,0})=2\) for all \(s\).
Hence we obtain a decomposition of \(\Phi_{n,0}\) into irreducible subrepresentations:
\[\Phi_{n,0}\cong\begin{cases}W_{r/2,0}\bigoplus_{s=1}^{r/2-1}W(\omega_{r}^{s}), &r\text{ even}\\ \\ \bigoplus_{s=1}^{(r-1)/2}W(\omega_{r}^{s}),&r\text{ odd}\end{cases}\]
The decomposition of \(\Phi_{n,1}\) is identical - the only change being that \(R_{1}\), \(R_{2}\) are replaced by \(R_{n/2}\), \(R_{1}\). The matrix representations of these transformations, with respect to the basis \(\Delta_{n,1}\), are given by
\[\left[R_{n/2}\right]_{\Delta_{n,1}}=\begin{bmatrix}0&0&\cdots&0&-1\\ 0&0&\cdots&-1&0\\ \vdots&\vdots&\iddots&\vdots&\vdots\\ 0&-1&\cdots&0&0\\ -1&0&\cdots&0&0\end{bmatrix},\quad\left[R_{1}\right]_{\Delta_{n,1}}=\begin{bmatrix}1&0&\cdots&0&0\\ 1&0&\cdots&0&-1\\ \vdots&\vdots&\iddots&\vdots&\vdots\\ 1&0&\cdots&0&0\\ 1&-1&\cdots&0&0\end{bmatrix}. \tag{3}\]
The remainder of the argument is the same, _mutatis mutandis_, and shows
\[\Phi_{n,1}\cong\begin{cases}W_{r/2,1}\bigoplus_{s=1}^{r/2-1}W(\omega_{r}^{s}),&r\text{ even}\\ \\ \bigoplus_{s=1}^{(r-1)/2}W(\omega_{r}^{s}),&r\text{ odd}\end{cases}\]
For \(r\) even, the one-dimensional subrepresentations of \(\mathbb{C}\mathbb{Z}_{2r}\) are:
\[\mathbb{C}\mathbf{1},\quad\mathbb{C}\left(\sum_{i=1}^{n}(-1)^{i}e_{i}\right),\quad W_{r/2,0},\quad\text{ and }\quad W_{r/2,1},\]
and the two-dimensional irreducible representations are \(W_{s,0}\) and \(W_{s,1}\), \(1\leq s\leq r/2-1\).
For \(r\) odd, the one-dimensional subrepresentations of \(\mathbb{C}\mathbb{Z}_{2r}\) are:
\[\mathbb{C}\mathbf{1}\quad\text{and}\quad\mathbb{C}\left(\sum_{i=1}^{n}(-1)^{i}e_{i}\right),\]
and the two-dimensional irreducible representations are \(W_{s,0}\) and \(W_{s,1}\), \(1\leq s\leq(r-1)\,/\,2\).
Next assume \(n\) is odd: \(n=2r+1\). In this case, \(\operatorname{Inn}(\mathbb{Z}_{n})\cong D_{n}\). With respect to the basis \(\Delta_{n}=\{v_{12},v_{23},\ldots,v_{n-1,n}\}\), the elements \(R_{1}\), \(R_{2}\in\operatorname{Inn}(Z_{n})\) have matrix representation
\[\left[R_{1}\right]_{\Delta_{n}}=\begin{bmatrix}0&\cdots&0&-1&1\\ 0&\cdots&-1&0&1\\ \vdots&\cdots&\vdots&\vdots&\vdots\\ -1&0&\cdots&0&1\\ 0&0&\cdots&0&1\end{bmatrix},\quad\left[R_{2}\right]_{\Delta_{n}}=\begin{bmatrix} 1&0&\cdots&0&0\\ 1&0&\cdots&0&-1\\ \vdots&\vdots&\cdots&\vdots&\vdots\\ 1&0&\cdots&0&0\\ 1&-1&\cdots&0&0\end{bmatrix}.\]
Let \(\omega=e^{2\pi i/n}\), and for \(1\leq s\leq n-1\), let
\[\mathbf{u}_{s}=\left[1-\omega^{s},1-(\omega^{s})^{2},\cdots,1-(\omega^{s})^{n -1}\right],\quad\text{and}\quad\mathbf{v}_{s}=R_{1}(\mathbf{u}_{s}).\]
We can then show that
\[\mathbf{v}_{s}=\omega^{-s}\left(\omega^{-s}-1,(\omega^{-s})^{2}-1,\ldots,(\omega^ {-s})^{n-1}-1\right).\]
Let \(W_{s}\) be the subspace of \(\Phi_{n}\) spanned by \(\mathbf{u}_{s}\) and \(\mathbf{v}_{s}\). We can show that \(\mathbf{u}_{s}=\mathbf{v}_{n-s}\), hence \(W_{s}=W_{n-s}\). Furthermore, we have
\[R_{2}(\mathbf{u}_{s}) = R_{2}(1-\omega^{s},1-(\omega^{s})^{2},\ldots,1-(\omega^{s})^{n-1})\] \[= (1-\omega^{s},(\omega^{s})^{n-1}-\omega^{s},(\omega^{s})^{n-2}- \omega^{s},\ldots,(\omega^{s})^{2}-\omega^{s})\] \[= \omega^{2s}\omega^{-s}(\omega^{-s}-1,(\omega^{-s})^{2}-1,\ldots,( \omega^{-s})^{n-1}-1)\] \[= \omega^{2s}\mathbf{v}_{s}.\]
Therefore, if \(\mathbf{u}_{s}\) and \(\mathbf{v}_{s}\) are linearly independent, the matrix representations of \(R_{1}\) and \(R_{2}\) with respect to the basis \(\{\mathbf{u}_{s},\mathbf{v}_{s}\}\) of \(W_{s}\) are
\[\left[R_{1}\right]_{\{\mathbf{u}_{s},\mathbf{v}_{s}\}}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\quad\left[R_{2}\right]_{\{\mathbf{u}_{s},\mathbf{v}_{s}\}}= \begin{bmatrix}0&\omega^{2s}\\ \omega^{-2s}&0\end{bmatrix},\]
which gives us \(W_{s}\cong W(\omega^{2s})\) (as group representations of \(D_{n}\)). We can show linear independence of \(\mathbf{u}_{s}\) and \(\mathbf{v}_{s}\) for \(1\leq s\leq r\), which gives us the desired decomposition into irreducible representations
\[\mathbb{C}\mathbb{Z}_{n}\cong\mathbb{C}(1,1)\bigoplus_{s=1}^{r}W(\omega_{n}^{2 s}).\]
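The key relations \(R_{2}(\mathbf{u}_{s})=\omega_{r}^{s}\mathbf{v}_{s}\) and \(R_{2}(\mathbf{v}_{s})=\omega_{r}^{-s}\mathbf{u}_{s}\) used in the even case can be checked numerically directly from the quandle action. A minimal sketch (Python, assuming numpy is available), for \(n=12\):

```python
import numpy as np

n = 12                                   # dihedral quandle Z_n, elements labelled 1..n
r = n // 2

def e(i):                                # standard basis vector for the label i (mod n)
    v = np.zeros(n, dtype=complex)
    v[(i - 1) % n] = 1
    return v

def R(t):                                # right multiplication operator: e_i -> e_{2t - i}
    M = np.zeros((n, n), dtype=complex)
    for i in range(1, n + 1):
        M[:, i - 1] = e(2 * t - i)
    return M

Delta = np.column_stack([e(2 * i) - e(2 * i + 2) for i in range(1, r)])   # basis of Phi_{n,0}
omega = np.exp(2j * np.pi / r)

for s in range(1, r):
    u = Delta @ np.array([1 - omega ** (s * i) for i in range(1, r)])     # u_s
    v = R(1) @ u                                                          # v_s = R_1(u_s)
    assert np.allclose(R(2) @ u, omega ** s * v)                          # R_2(u_s) = w^s v_s
    assert np.allclose(R(2) @ v, omega ** (-s) * u)                       # R_2(v_s) = w^{-s} u_s
print("eigen-relations verified for n =", n)
```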
_Example 5.3_.: We list the irreducible subrepresentations of \(\mathbb{C}\mathbb{Z}_{10}\), \(\mathbb{C}\mathbb{Z}_{11}\), and \(\mathbb{C}\mathbb{Z}_{12}\):
\begin{tabular}{c c c c} subrep of \(\mathbb{Z}_{10}\) & irrep of \(D_{5}\) & generated by & dimension \\ \hline \(\mathbb{C}\mathbf{1}\) & \(\cong\) & \(\mathbb{C}(1,1)\) & \(\mathbf{1}\) & \(1\) \\ \(\mathbb{C}\hat{\mathbf{1}}\) & \(\cong\) & \(\mathbb{C}(1,1)\) & \(\hat{\mathbf{1}}\) & \(1\) \\ \(W_{1,0}\) & \(\cong\) & \(W(\omega_{5})\) & \(\sum_{i=1}^{4}\left(1-\omega_{5}^{i}\right)v_{2i,2i+2}\) & \(2\) \\ \(W_{2,0}\) & \(\cong\) & \(W(\omega_{5}^{2})\) & \(\sum_{i=1}^{4}\left(1-\omega_{5}^{2i}\right)v_{2i,2i+2}\) & \(2\) \\ \(W_{1,1}\) & \(\cong\) & \(W(\omega_{5})\) & \(\sum_{i=1}^{4}\left(1-\omega_{5}^{i}\right)v_{2i-1,2i+1}\) & \(2\) \\ \(W_{2,1}\) & \(\cong\) & \(W(\omega_{5}^{2})\) & \(\sum_{i=1}^{4}\left(1-\omega_{5}^{2i}\right)v_{2i-1,2i+1}\) & \(2\) \\ \end{tabular}
\begin{tabular}{c c c c} subrep of \(\mathbb{Z}_{11}\) & irrep of \(D_{11}\) & generated by & dimension \\ \hline \(\mathbb{C}\mathbf{1}\) & \(\cong\) & \(\mathbb{C}(1,1)\) & \(\mathbf{1}\) & \(1\) \\ \(W_{1}\) & \(\cong\) & \(W(\omega_{11}^{2})\) & \(\sum_{i=1}^{8}\left(1-\omega_{11}^{2i}\right)e_{i}\) & \(2\) \\ \(W_{2}\) & \(\cong\) & \(W(\omega_{11}^{4})\) & \(\sum_{i=1}^{8}\left(1-\omega_{11}^{4i}\right)e_{i}\) & \(2\) \\ \(W_{3}\) & \(\cong\) & \(W(\omega_{11}^{6})\) & \(\sum_{i=1}^{8}\left(1-\omega_{11}^{6i}\right)e_{i}\) & \(2\) \\ \(W_{4}\) & \(\cong\) & \(W(\omega_{11}^{8})\) & \(\sum_{i=1}^{8}\left(1-\omega_{11}^{8i}\right)e_{i}\) & \(2\) \\ \(W_{5}\) & \(\cong\) & \(W(\omega_{11}^{10})\) & \(\sum_{i=1}^{8}\left(1-\omega_{11}^{10i}\right)e_{i}\) & \(2\) \\ \end{tabular}
## 6. Reducible and irreducible representations of cyclic quandles
We consider in this section representations of cyclic quandles of arbitrary cardinality; throughout, \(X\) denotes a cyclic quandle of prime-power order \(q\), with generators \(x\) and \(y\) as above. We start by studying \(2\)-dimensional representations in detail.
**Theorem 6.1**.: _Let \(\phi:X\longrightarrow\operatorname{Conj}(\operatorname{Aut}\mathbb{C}^{2})\) be a non-constant representation of a cyclic quandle \(X\) of order \(n\geq 3\), and write \(A=\phi(x)\), \(B=\phi(y)\) for the images of the generators. Then \(\phi\) is irreducible, or \(A^{q-1}\) and \(B^{q-1}\) are multiples of the identity._
Proof.: Let us assume, by way of contradiction, that \(\phi\) is reducible, and that it is non-constant such that \(A^{q-1}\) is not a multiple of the identity. Let \(A\) and \(B\) be different \(2\times 2\) matrices with \(\phi(x)=A\) and \(\phi(y)=B\), and let \(A\) be in Jordan canonical form. Since the quandle \(X\) is connected, \(B=CAC^{-1}\) for some invertible matrix \(C\). We distinguish two cases, depending on the Jordan canonical form of \(A\) consisting of a single block, or two distinct blocks (i.e. \(A\) diagonalizable). Suppose first that \(A=\begin{bmatrix}\lambda&1\\ 0&\lambda\end{bmatrix}\). Further, we set \(C=\begin{bmatrix}a&b\\ c&d\end{bmatrix}\), where \(det(C)\neq 0\). Since \(\phi\) is reducible, \(A\) and \(B\) have common eigenvectors. Using Shemesh's Criterion ([13]), this is equivalent to saying that \(\ker[A,B]\neq 0\). Using \(B=CAC^{-1}\), for the commutator of \(A\) and \(B\) we find
\[[A,B]=det(C)\begin{bmatrix}c(a\lambda-c+d\lambda)&-a\lambda+ac-cd+d^{2}\lambda \\ 0&-c(a\lambda-c+d\lambda)\end{bmatrix}.\]
Since \(\ker([A,B])\neq 0\), we have that \(c(a\lambda-c+d\lambda)=0\). From this condition, we obtain two subcases, namely when \(c=0\) and when \(-a\lambda+c-d\lambda=0\). Let us assume that \(c=0\) first. We argue that it must also hold \(-a\lambda+ac-cd+d^{2}\lambda=0\). In fact, suppose this is not the case. Then we can rewrite (up to some normalizing constant) \(C=\begin{bmatrix}t&\mu\\ 0&1\end{bmatrix}\) for some \(t\) and \(\mu\). Then we obtain \(B=\begin{bmatrix}\lambda&t\\ 0&\lambda\end{bmatrix}\), which implies \([A,B]=0\), against the fact that we assumed that one of the entries of \([A,B]\) was nonzero. In the case \(c=0\), then, from \(-a\lambda+ac-cd+d^{2}\lambda=0\) we obtain \(d=\pm a\). Therefore, up to rescaling \(C\) by a constant (which does not affect the conjugation of \(A\) by \(C\)) we can assume \(C=\begin{bmatrix}1&\mu\\ 0&1\end{bmatrix}\). Therefore it follows that \(CAC^{-1}=A\), which implies \(A=B\) against our initial assumptions. Let us now consider \(-a\lambda+c-d\lambda=0\). In this case we have \(c=a\lambda+d\lambda\), from which we have
\(C=\begin{bmatrix}a&b\\ (a+d)\lambda&d\end{bmatrix}\). Then, we have
\[CAC^{-1}=\begin{bmatrix}\lambda[ad-(a+d)(a+b\lambda)]&a^{2}\\ -\lambda^{2}(a+d)^{2}&\lambda[a(a+2d)-b\lambda(a+d)]\end{bmatrix}.\]
From the relation \(A^{q-1}BA^{-q+1}=B\) we obtain, using the fact that \(B=CAC^{-1}\), the equality \(A^{q-1}CAC^{-1}A^{-q+1}=CAC^{-1}\). Moreover, using \(A=\begin{bmatrix}1&1/\lambda\\ 0&1\end{bmatrix}\) we find that the LHS of this equation gives (through an induction argument in \(q\))
\[\begin{array}{l}A^{q-1}CAC^{-1}A^{-q+1}\\ =&\begin{bmatrix}ad-(q-1)(a+d)^{2}-(a+d)(a+b\lambda)&a^{2}/\lambda+\frac{(q-1) \left[(a(a+d)-b\lambda(a+b)\right]}{-\lambda(a+d)^{2}}&\\ -\lambda(a+d)^{2}&a(a+(q-1)d)-b\lambda(a+d)+(q-1)(a+d)^{2}\end{bmatrix},\end{array}\]
while for the RHS one has
\[CAC^{-1}=\begin{bmatrix}ad-(a+d)(a+b\lambda)&a^{2}/\lambda\\ -\lambda(a+d)^{2}&a(a+(q-1)d)-b\lambda(a+d)+(q-1)(a+d)^{2}\end{bmatrix}.\]
Comparing the top-left entries of the two matrices we obtain \((a+d)^{2}=0\), which in turn implies \(a=-d\). But since we started from the assumption that \(c=\lambda(a+d)\), we find that \(c=0\), and we can apply the same argument as for the previous case to show that \(A=B\), against the assumption that the representation \(\phi\) is not constant. This contradiction completes the case where \(A\) consists of a single Jordan block (i.e. it is not diagonalizable).
Let us now suppose that \(A=\begin{bmatrix}\lambda&0\\ 0&\mu\end{bmatrix}\), where clearly \(\mu\neq\lambda\), or we would immediately have a contradiction. As before, using \(B=CAC^{-1}\) for some generic invertible \(C\), we obtain for the commutator
\[[A,B]=\begin{bmatrix}0&b(a\mu(\lambda-\mu)+d\lambda^{2}-d\mu\lambda)\\ c(-a\lambda\mu+a\mu^{2}+d\lambda(\lambda-\mu))&0\end{bmatrix}.\]
Once again, using the hypothesis of reducibility of \(\phi\) and Shemesh's Criterion, it follows that \(\ker[A,B]\neq 0\). As a consequence, this forces \(b(a\mu(\lambda-\mu)+d\lambda^{2}-d\mu\lambda)=0\), or \(c(-a\lambda\mu+a\mu^{2}+d\lambda(\lambda-\mu))=0\). If either \(b=0\) or \(c=0\), then we can proceed similarly to the previoius case to see that we obtain a contradiction. We can then assume that \(a\mu(\lambda-\mu)+d\lambda^{2}-d\mu\lambda=0\) or \(-a\lambda\mu+a\mu^{2}+d\lambda(\lambda-\mu)=0\). Either way we obtain \(d\lambda=a\mu\). Therefore, we can write \(C=\begin{bmatrix}a&b\\ c&a\mu/\lambda\end{bmatrix}\). From the relation \(CAC^{-1}=A^{q-1}CAC^{-1}A^{-q+1}\), equating the terms corresponding to entries \(1,2\) in the matrices, we find that \(\frac{ab\lambda(\lambda-\mu)(\lambda^{q-1}-\mu^{q-1})}{\mu^{q-1}(a^{2}\mu-bc \lambda)}=0\). This implies (since we had \(b\neq 0\) and \(\lambda\neq\mu\)) that \(a=0\) (and hence \(d=0\) as well), or \(\lambda^{q-1}-\mu^{q-1}=0\). If \(a=0\), then the matrix \(C\) simplifies to an anti-diagonal matrix, and a direct computation using the relations again shows that this is not possible. It must therefore hold that \(\lambda^{q-1}-\mu^{q-1}=0\), from which it follows that \(A^{q-1}\) is a multiple of the identity. The representation \(\phi\) must therefore be irreducible.
We now consider \(2\)-dimensional irreducible representations of cyclic quandles of arbitrary order \(n\geq 3\).
**Proposition 6.2**.: _Let \(X\) be a cyclic quandle of order \(n\geq 3\), and let \(\phi:X\longrightarrow\operatorname{Conj}(\operatorname{Aut}\mathbb{C}^{2})\) be a representation of \(X\) that does not map the generators to multiples of \((q-1)\)-roots of the identity. Then \(\phi\) is reducible._
Proof.: Let us assume that \(\phi\) is irreducible. We denote by \(\phi(x)=A\) and \(\phi(y)=B\) the images of the generators of \(X\). Let \(e\) denote an eigenvector of \(A\) with eigenvalue \(\lambda\). Then, from one of the defining relations, we have that \(A^{q-1}BA^{-q+1}e=Be\), which implies
\[\lambda^{-q+1}A^{q-1}Be=Be,\]
which means that \(Be\) is an eigenvector of \(A^{q-1}\) with eigenvalue \(\lambda^{q-1}\). Also, \(e\) is an eigenvector of \(A^{q-1}\) with eigenvalue \(\lambda^{q-1}\), because \(e\) is an eigenvector of \(A\) with eigenvalue \(\lambda\). Since \(\phi\) is irreducible, \(e\) and \(Be\) are not proportional, and therefore \(\{e,Be\}\) is an eigenbasis for \(A^{q-1}\) with the same eigenvalue. In other words, \(A^{q-1}\) is multiplication by the constant \(\lambda^{q-1}\), which is not possible by hypothesis. Therefore \(\phi\) must be reducible.
**Corollary 6.3**.: _Let \(X\) be a cyclic quandle of order \(n\geq 3\) and let \(\phi\) be a representation of \(X\). Then \(\phi\) is a constant map \(X\longrightarrow\operatorname{Conj}(\operatorname{Aut}\mathbb{C}^{2})\), or \(\phi\) maps the generators (up to a constant) to \((q-1)^{\mathrm{th}}\)-roots of the identity._
Proof.: If \(\phi\) is not constant and it does not map the generators to roots of the identity, then it is irreducible by Theorem 6.1. But this is not possible by Proposition 6.2.
For any cyclic quandle \(F(q,\alpha)\), it is presently unknown to the authors whether non-constant representations \(\phi:F(q,\alpha)\rightarrow\operatorname{Conj}(\operatorname{Aut}(V))\) exist. However, there is a condition on the minimal polynomial of \(\phi(0)^{q-1}\) which guarantees that such a representation is constant. We describe this result in the remainder of this section.
We will denote by \(J(\lambda,s)\) the \(s\times s\) Jordan block matrix
\[J(\lambda,s)=\begin{bmatrix}\lambda&1&\cdots&0\\ 0&\lambda&\ddots&0\\ \vdots&\ddots&\ddots&1\\ 0&0&\cdots&\lambda\end{bmatrix}.\]
Let \(A\) be an \(n\times n\) matrix in Jordan Normal form, with \(r\) Jordan blocks and corresponding eigenvalues \(\lambda_{1},\ldots,\lambda_{r}\). We will write \(A=J(\lambda_{1},s_{1})\oplus J(\lambda_{2},s_{2})\oplus\cdots\oplus J(\lambda_ {r},s_{r})\) (where the \(i^{th}\) Jordan block has size \(s_{i}\)). The \(k^{th}\) powers of these values \(\lambda_{1}^{k},\ldots,\lambda_{r}^{k}\) are all pairwise distinct if and only if the minimal polynomial of \(A^{k}\) has maximal degree \(n\). For this reason, we will say that \(A\) is \(k^{th}\)_-power maximal_ if this condition on its Jordan block values (\(\lambda_{i}^{k}\neq\lambda_{j}^{k}\) for \(i\neq j\)) is satisfied. For example, consider the following matrices:
\[A=\begin{bmatrix}1&1&0&0\\ 0&1&0&0\\ 0&0&i&1\\ 0&0&0&i\end{bmatrix}=J(1,2)\oplus J(i,2),\ \ \ B=\begin{bmatrix}1&1&0&0\\ 0&1&0&0\\ 0&0&2i&1\\ 0&0&0&2i\end{bmatrix}=J(1,2)\oplus J(2i,2).\]
Then \(A\) is \(k^{th}\)-power maximal for all \(k\) with \(k\not\equiv 0\mod 4\), while \(B\) is \(k^{th}\)-power maximal for all \(k\).
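The definition is easy to test in code. A small Python sketch (the helper `kth_power_maximal` is ours, introduced only for illustration) confirming the two examples above:

```python
def kth_power_maximal(eigvals, k, tol=1e-9):
    """True if the k-th powers of the given Jordan-block eigenvalues are pairwise distinct."""
    powers = [z ** k for z in eigvals]
    return all(abs(powers[i] - powers[j]) > tol
               for i in range(len(powers)) for j in range(i + 1, len(powers)))

# For A = J(1,2) + J(i,2) and B = J(1,2) + J(2i,2) only the block eigenvalues matter.
print([k for k in range(1, 9) if not kth_power_maximal([1, 1j], k)])   # [4, 8]: fails iff 4 | k
print([k for k in range(1, 9) if not kth_power_maximal([1, 2j], k)])   # []: maximal for every k
```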
If \(X=\mathbb{F}(q,\alpha)\) is a quandle of cyclic type, we will say that a representation \(\phi:X\rightarrow\operatorname{Conj}(\operatorname{Aut}(V))\) is _maximal_ if \(\phi(0)\) is \((q-1)^{th}\)-power maximal.
If \(M\) is any invertible \(n\times n\) matrix, the constant map \(\phi_{M}:\mathbb{F}(q,\alpha)\rightarrow\operatorname{Aut}(\mathbb{C}^{n})\) sending every element to \(M\) (in particular \(\phi_{M}(0)=\phi_{M}(1)=M\)) is an \(n\)-dimensional representation of \(\mathbb{F}(q,\alpha)\); for brevity we denote this representation by \(U_{M}\) (isomorphic to \(\mathbb{C}^{n}\) as a vector space). In particular, \(U_{J(\lambda,k)}\) with \(\lambda\neq 0\) yields an indecomposable - yet reducible (for \(k>1\)) - representation of \(\mathbb{F}(q,\alpha)\), and if a matrix \(A\) is in Jordan normal form \(A=J(\lambda_{1},s_{1})\oplus J(\lambda_{2},s_{2})\oplus\cdots\oplus J(\lambda_{r},s_{r})\), then we have an
isomorphism of quandle representations
\[U_{A}\cong U_{J(\lambda_{1},s_{1})}\oplus U_{J(\lambda_{2},s_{2})}\oplus\cdots \oplus U_{J(\lambda_{r},s_{r})}.\]
**Theorem 6.4**.: _Let \(X=\mathbb{F}(q,\alpha)\) be a quandle of cyclic type, and let \(\psi:X\rightarrow\mathrm{Conj}(\mathrm{Aut}(V))\) be a maximal quandle representation of \(X\) of dimension \(>1\). Then \(\psi\) is a constant map. If \(\rho\) is the characteristic polynomial of \(\psi(0)\) and \(n_{z}\) the multiplicity of \(z\) as a root of \(\rho\), then we have an isomorphism of quandle representations_
\[V\cong\bigoplus_{\rho(z)=0}U_{J(z,n_{z})}.\]
_In particular, there are no non-trivial irreducible maximal representations of \(X\)._
Proof.: Let \(J=\psi(0)\), and \(M=\psi(1)\). Assume without loss of generality that \(J\) is in Jordan canonical form. Then \(J\), \(M\) is a pair of matrices that satisfies
\[J^{q-1}MJ^{1-q}=M,\ \ M^{q-1}JM^{1-q}=J,\]
\[M^{k}JM^{-k}=J^{\log_{\alpha}(1-\alpha^{k})}MJ^{-\log_{\alpha}(1-\alpha^{k})}, \ \ \ 1\leq k\leq q-2.\]
It follows that \(J\) and \(M\) are similar, hence they have the same Jordan canonical form. By Proposition 8.5, we must have \(J=M\). Therefore \(\psi\) is the constant map sending every element of \(X\) to \(J\), hence \(V\cong U_{J}\cong\bigoplus_{\rho(z)=0}U_{J(z,n_{z})}\).
## 7. Examples of non-decomposability
We first give an example of a quandle homomorphism \(q:S_{3}\to S_{3}\), where \(S_{3}=\langle r,\theta\mid r^{3}=1=\theta^{2},\theta r\theta=r^{2}\rangle\) is considered with the conjugation operation, that is not a group homomorphism. Thus, we want to define a map satisfying
\[q(yxy^{-1})=q(y)q(x)q(y)^{-1},\]
for all \(x,y\in S_{3}\), that is not a group homomorphism. Choose any nontrivial element \(R\neq 1\). Then define a map \(q\) by mapping \(1,\theta,\theta r,\theta r^{2}\) to \(1\) and mapping \(r,r^{2}\) to \(R\). It is straightforward to see that \(q\) is a quandle homomorphism. However, since \(q(\theta\ r)=1\) and \(q(\theta)q(r)=R\) it follows that \(q\) is _not_ a group homomorphism.
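This map is easy to verify by machine. A minimal Python sketch, with \(S_{3}\) realized as permutations of \(\{0,1,2\}\) and \(R=r\) chosen for concreteness:

```python
from itertools import permutations

compose = lambda f, g: tuple(f[g[i]] for i in range(3))     # (f o g)(i) = f(g(i))

def inverse(f):
    inv = [0, 0, 0]
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

e, r, th = (0, 1, 2), (1, 2, 0), (1, 0, 2)                  # identity, 3-cycle, a transposition
S3 = list(permutations(range(3)))
R = r                                                       # any nontrivial element works

q = lambda x: R if x in (r, compose(r, r)) else e           # rotations -> R, all else -> 1

# q is a quandle homomorphism: q(y x y^{-1}) = q(y) q(x) q(y)^{-1}
assert all(q(compose(compose(y, x), inverse(y)))
           == compose(compose(q(y), q(x)), inverse(q(y)))
           for x in S3 for y in S3)

# ...but q is not a group homomorphism:
print(q(compose(th, r)) == compose(q(th), q(r)))            # False
```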
A quandle representation induces a quandle representation on the inner group of the quandle, but the latter does not necessarily define a group representation. All this suggests that representing a quandle might be quite different from representing a group, even in the most common case in which the quandle under consideration is a conjugation quandle. In fact this suggestion turns out to be correct. We show by an explicit example that, in contrast to the case of groups, complementary invariant subspaces may not exist for some representations of quandles. As a consequence, Maschke's Theorem does not hold for quandles.
_Example 7.1_.: Fix a positive integer \(n\) greater than or equal to \(2\) and consider the dihedral quandle \(\mathbb{Z}_{2n}=\{0,2,\cdots,2n-2\}\sqcup\{1,3,\cdots,2n-1\}\) as a union of its two orbits. Consider the map \(\mathbb{Z}_{2n}\to GL(2,\mathbb{C})\) sending the orbit \(\{0,2,\cdots,2n-2\}\) to the identity matrix and the orbit \(\{1,3,\cdots,2n-1\}\) to the matrix \(\begin{bmatrix}1&1\\ 0&1\end{bmatrix}.\) It is clear that \(W=\mathbb{C}e_{1}\) is invariant, where \(e_{1}=\begin{bmatrix}1\\ 0\end{bmatrix}\). A simple check shows that \(W\) does not have a complementary invariant subspace inside \(\mathbb{C}^{2}\). More generally, we can define a quandle representation of \(\mathbb{Z}_{2n}\) into \(GL(m,\mathbb{C})\)
by mapping the orbit \(\{0,2,\cdots,2n-2\}\) to the identity matrix and the orbit \(\{1,3,\cdots,2n-1\}\) to any invertible matrix \(B\). Assume that
\[1+\sum_{i=1}^{k}\mu_{G}(\lambda_{i})=\sum_{i=1}^{k}\mu_{A}(\lambda_{i}),\]
where \(\lambda_{i}\) are the eigenvalues of \(B\) and \(\mu_{G}(\lambda_{i})\) and \(\mu_{A}(\lambda_{i})\) are respectively the geometric and algebraic multiplicities of \(\lambda_{i}\). In this case the representation is not completely reducible. In general, the same procedure will work for non-connected quandles.
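The first assignment of Example 7.1 can be checked by machine. A quick sketch (Python, assuming numpy is available), here for \(\mathbb{Z}_{6}\):

```python
import numpy as np

m = 6                                              # the dihedral quandle Z_6
B = np.array([[1.0, 1.0], [0.0, 1.0]])
I = np.eye(2)
rho = lambda x: I if x % 2 == 0 else B             # even orbit -> identity, odd orbit -> B
op = lambda x, y: (2 * y - x) % m                  # dihedral quandle operation x * y = 2y - x

assert all(np.allclose(rho(op(x, y)), rho(y) @ rho(x) @ np.linalg.inv(rho(y)))
           for x in range(m) for y in range(m))
print("rho is a quandle representation of Z_6")
# C e_1 is invariant, but B is a single Jordan block, so its only eigenvectors are multiples
# of e_1; hence C e_1 admits no invariant complement and rho is not completely reducible.
```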
The representation theory of quandles, therefore, appears to be drastically different from the representation theory of groups. Even the simplest quandle, the trivial quandle on one element, admits representations that are not completely reducible: consider the representation \(\{1\}\longrightarrow GL_{2}(\mathbb{R})\) defined by \(1\mapsto\begin{bmatrix}1&1\\ 0&1\end{bmatrix}\). As above, it is easy to verify that this representation is not completely reducible. The same procedure shows that the trivial quandle of any cardinality admits representations that are not completely reducible and, more generally, that any quandle admits a representation that is not completely reducible.
_Example 7.2_.: Let \(X=\sqcup_{i}X_{i}\) be a quandle partitioned into its orbits. Then map the whole orbit \(X_{1}\) to the (usual) matrix \(\begin{bmatrix}1&1\\ 0&1\end{bmatrix}\), and any other orbit to the identity matrix. This representation is not completely reducible.
## 8. Appendix
In this Appendix we state and prove some technical results. We also consider some explicit constructions of representations. We show that reducible representations of quandles need not be completely reducible, and we also give some computations for the dihedral quandle of order \(6\) which exhibit its decomposition into irreducible subrepresentations.
**Lemma 8.1**.: _For any prime power \(q\) and \(\alpha\) a primitive root in \(\mathbb{F}_{q}\), the system of equations_
\[1-x^{k}=x^{\log_{\alpha}(1-\alpha^{k})},\ \ 1\leq k\leq q-2\]
_has no solutions._
Proof.: We will prove a more general statement: _Let \(M>1\), and suppose \(\phi\) is an order-2 permutation of \(\{1,2,\ldots,M\}\) with a fixed point \(N\). Let \(S\) be the set of equations_
\[S=\left\{x^{k}+x^{\phi(k)}-1=0,\ \ \ 1\leq k\leq M\right\}.\]
_Then \(S\) has no simultaneous solutions (in \(\mathbb{C}\))._
The proof of this statement follows. Since \(N\) is a fixed point, \(S\) contains the equation \(2x^{N}-1=0\). Furthermore, if we add all equations in \(S\), we get \(\sum_{i=1}^{M}x^{i}=M/2\). Adding \(1\) to both sides of this equation gives us
\[\frac{1-x^{M+1}}{1-x}=\sum_{i=0}^{M}x^{i}=\frac{M+2}{2}.\]
So any solution of \(S\) must also be a solution of the system
\[2x^{N}-1=0, \tag{4}\] \[1-x^{M+1}=\frac{M+2}{2}(1-x). \tag{5}\]
The polynomial \(f(x)=x^{N}-2\) is irreducible over \(\mathbb{Q}\) by Eisenstein's criterion, hence its reciprocal \(\tilde{f}(x):=2x^{N}-1\) appearing in Equation 4 is irreducible over \(\mathbb{Q}\) as well.
Using the division algorithm, we write
\[M+1=bN+r,\ \ \ 0\leq r<N.\]
Now suppose \(x_{0}\) is a solution for \(S\). Then it must also satisfy Equations (4) and (5). Therefore \(x_{0}^{N}=1/2\), \(x_{0}^{M+1}=x_{0}^{bN+r}=\left(x_{0}^{N}\right)^{b}x_{0}^{r}=(1/2)^{b}x_{0}^{r}\), and so
\[1-\left(\frac{1}{2}\right)^{b}x_{0}^{r}=1-x_{0}^{M+1}=\frac{M+2}{2}(1-x_{0}).\]
Therefore \(x_{0}\) is a root of a polynomial in \(\mathbb{Q}[x]\) of degree \(r\), hence the minimal polynomial of \(x_{0}\) (over \(\mathbb{Q}\)) must have degree \(\leq r\). On the other hand, since \(\tilde{f}(x)=2x^{N}-1\) is irreducible over \(\mathbb{Q}\) and \(2x_{0}^{N}-1=0\), the minimal polynomial of \(x_{0}\) has degree \(N>r\). This contradiction shows that \(S\) can have no solutions, and the statement at the beginning of the proof is demonstrated.
To conclude the proof of the Lemma, note that the map \(k\mapsto\log_{\alpha}(1-\alpha^{k})\) is an order-2 bijection of the set \(\{1,\ldots,q-2\}\), with one fixed point (this fixed point is \(N=-\log_{\alpha}(2)\ \mathrm{mod}\ (q-1)\)).
\[\left\{x^{k}+x^{\log_{\alpha}(1-\alpha^{k})}-1=0,\ \ \ 1\leq k\leq q-2\right\}\]
has no solutions.
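The structural fact used at the end of this proof (that \(k\mapsto\log_{\alpha}(1-\alpha^{k})\) is an order-\(2\) bijection of \(\{1,\ldots,q-2\}\) with the single fixed point \(-\log_{\alpha}2\)) is easy to confirm numerically. A minimal Python sketch for the prime field \(\mathbb{F}_{11}\), with \(\alpha=2\) assumed primitive:

```python
p, alpha = 11, 2                                            # assumption: 2 is primitive mod 11
dlog = {pow(alpha, k, p): k for k in range(p - 1)}          # discrete log base alpha
f = {k: dlog[(1 - pow(alpha, k, p)) % p] for k in range(1, p - 1)}

assert sorted(f.values()) == list(range(1, p - 1))          # a bijection of {1, ..., q-2}
assert all(f[f[k]] == k for k in f)                         # of order 2
print("fixed points:", [k for k in f if f[k] == k], "=", [(-dlog[2]) % (p - 1)])
```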
In general, if a square matrix \(A\) is upper triangular, a \(k^{th}\) root of \(A\) (\(X\) such that \(X^{k}=A\)) is not necessarily upper triangular. But a sufficient condition for this to hold is given here:
**Lemma 8.2**.: _Let \(A\) be an \(n\times n\) square matrix and \(k\) a positive integer such that \(A\) is \(k^{th}\)-power maximal. If \(A^{k}\) is upper triangular, then \(A\) is also upper triangular._
Proof.: Suppose \(A\) satisfies the given conditions, and let the diagonal entries of \(A^{k}\) be \(a_{1},\ldots,a_{n}\). For \(1\leq j\leq n\), let
\[T_{j}=\prod_{i=1}^{j}\left(A^{k}-a_{i}I\right),\]
and let \(W_{j}=\mathrm{span}(e_{1},\ldots,e_{j})\) (where \(e_{i}\) is the \(i^{th}\) standard basis vector).
Upper triangularity of \(A^{k}\) means that all subspaces \(W_{j}\) are \(A^{k}\)-stable. Also note that any solution \(X\) of \(X^{k}=A^{k}\) will commute with \(A^{k}\), and therefore the kernel of any polynomial in \(A^{k}\) will be \(X\)-stable. If we can show that each \(W_{j}\) is \(A\)-stable, it will follow that \(A\) is upper triangular. We will do this by showing that each \(W_{j}\) is the kernel of the polynomial \(T_{j}(\lambda)=\prod_{i=1}^{j}(\lambda-a_{i})\), evaluated at \(\lambda=A^{k}\).
An inductive argument shows that upper triangularity of \(A^{k}\) guarantees containment \(W_{j}\subseteq\ker(T_{j})\), for all \(j\). To show equality, we will argue that all subspaces \(\ker T_{j}\) must be distinct. This is sufficient for the following reason: suppose two kernels \(\ker(T_{r})\) and \(\ker(T_{s})\) coincide, where \(r<s\). Then
\[\mathbb{C}^{n}=\ker(T_{n})=\begin{cases}\ker\left(\prod_{i\leq r}\left(A^{k}- a_{i}I\right)\prod_{i>s}\left(A^{k}-a_{i}I\right)\right),&s<n,\\ \ker\left(\prod_{i\leq r}\left(A^{k}-a_{i}I\right)\right),&s=n;\end{cases}\]
hence the minimal polynomial of \(A^{k}\) has degree strictly less than \(n\). But this contradicts our assumption that \(A\) is \(k^{th}\)-power maximal. Therefore all kernels \(\ker(T_{j})\) must be
distinct, and we must have proper containments
\[\{\mathbf{0}\}\subsetneq\ker(T_{1})\subsetneq\ker(T_{2})\subsetneq\cdots \subsetneq\ker(T_{n})=\mathbb{C}^{n}.\]
This guarantees that \(\dim(\ker(T_{j}))=j\), and since \(W_{j}\subseteq\ker(T_{j})\), we obtain the desired equalities \(W_{j}=\ker(T_{j})\). Since each \(W_{j}\) is the kernel of a polynomial in \(A^{k}\), it follows that each \(W_{j}\) is \(A\)-stable, hence \(A\) is upper-triangular.
Let \(J\) be the matrix in Jordan canonical form
\[J=J(\lambda_{1},s_{1})\oplus J(\lambda_{2},s_{2})\oplus\cdots\oplus J(\lambda_ {k},s_{k}).\]
Also let \(M\) be the upper triangular matrix with the same diagonal entries as \(J\), and entries \(a_{i,j},b_{i}\) on the superdiagonal given as follows:
\[M=\begin{bmatrix}\lambda_{1}&a_{1,1}&&&&\\ 0&\lambda_{1}&\ddots&&&&\\ &&\ddots&a_{1,n_{1}-1}&&\\ 0&0&\cdots&\lambda_{1}&\mathbf{b_{1}}&&\\ \vdots&\vdots&&0&\lambda_{2}&a_{2,1}&&\\ &&&&0&\lambda_{2}&\ddots&&\\ &&&&&&\ddots&a_{2,n_{2}-1}&&\\ &&&&0&0&\cdots&\lambda_{2}&\mathbf{b_{2}}&&\\ &&&&\vdots&\vdots&&0&\lambda_{3}&a_{3,1}&&\\ &&&&&&&&0&\lambda_{3}&\ddots&&\\ &&&&&&&&&&\ddots&a_{3,n_{3}-1}&\\ &&&&&&&&0&0&\cdots&\lambda_{3}&\mathbf{b_{3}}\\ &&&&\vdots&\vdots&&&\ddots\end{bmatrix}\]
**Lemma 8.3**.: _Let \(J\) and \(M\) be square matrices as given above. If \(J\) and \(M\) satisfy the conditions_
\[M^{k}JM^{-k}=J^{\log_{\alpha}(1-\alpha^{k})}MJ^{-\log_{\alpha}(1-\alpha^{k})},\quad 1\leq k\leq q-2,\]
_then all superdiagonal \(a_{i,j}=1\), \(1\leq j\leq n_{i}-1\), and all \(b_{i}=0\), \(1\leq i\leq k-1\)._
Proof.: Let \(k\) be any positive integer. First we show the product \(M^{k}JM^{-k}\) is upper triangular, with the same diagonal entries as \(M\) and superdiagonal entries given as follows:
\[(M^{k}JM^{-k})_{s,s+1}=\begin{cases}1,&s\neq n_{j}\\ b_{j}\left(1-\lambda_{j}^{k}/\lambda_{j+1}^{k}\right),&s=n_{j}\end{cases}\]
\[M^{k}JM^{-k}=\begin{bmatrix}\lambda_{1}&1&&&&&\\ 0&\lambda_{1}&\ddots&&&&&\\ &&\ddots&1&&&&&\\ 0&0&\cdots&\lambda_{1}&b_{1}(1-\frac{\lambda_{1}^{k}}{\lambda_{2}^{k}})&&&&&\\ \vdots&\vdots&&&\lambda_{2}&1&&\\ &&&0&\lambda_{2}&\ddots&&\\ &&&&&&\ddots&1&\\ &&&0&0&\cdots&\lambda_{2}&b_{2}(1-\frac{\lambda_{2}^{k}}{\lambda_{3}^{k}})&&\\ &&&\vdots&&&\lambda_{3}&1&&\\ &&&&&&&0&\lambda_{3}&\ddots&\\ &&&&&&&\ddots&1&\\ &&&&0&0&\cdots&\lambda_{3}&\\ &&&&&&&\ddots\end{bmatrix}\]
Furthermore, the product \(J^{k}MJ^{-k}\) is also upper triangular, with the same diagonal entries as \(J\) and superdiagonal entries given as follows:
\[(J^{k}MJ^{-k})_{s,s+1}=\begin{cases}M_{s,s+1},&s\neq n_{j}\\ b_{j}\left(\lambda_{j}^{k}/\lambda_{j+1}^{k}\right),&s=n_{j}\end{cases}\]
\[J^{k}MJ^{-k}=\begin{bmatrix}\lambda_{1}&a_{1,1}&&&&&\\ 0&\lambda_{1}&\ddots&&&&&\\ &&\ddots&a_{1,n_{1}-1}&&\\ 0&0&\cdots&\lambda_{1}&b_{1}(\lambda_{1}^{k}/\lambda_{2}^{k})&&\\ \vdots&\vdots&&&\lambda_{2}&a_{2,1}&&\\ &&&0&\lambda_{2}&\ddots&&\\ &&&&&&\ddots&a_{2,n_{2}-1}\\ &&&&&0&0&\cdots&\lambda_{2}&b_{2}(\lambda_{2}^{k}/\lambda_{3}^{k}))\\ &&&&&\vdots&\vdots&&\ddots\end{bmatrix}\]
For the first assertion: the base case \((k=1)\) can be verified by direct computation. For the inductive step, we calculate \(M(M^{k}JM^{-k})M^{-1}\). First we find \(M(M^{k}JM^{-k})\) on the diagonal and superdiagonal entries. The diagonal entries are \(\lambda_{1}^{2},\ldots,\lambda_{k}^{2}\) (with the same multiplicities that occur in \(J\) and \(M\)). The superdiagonal entries are, in order:
\[\lambda_{1}(1+a_{1,1}),\ldots,\lambda_{1}(1+a_{1,n_{1}-1}),b_{1}\left(\lambda_ {1}+\lambda_{2}-\frac{\lambda_{1}^{k+1}}{\lambda_{2}^{k}}\right),\]
\[\lambda_{2}(1+a_{2,1}),\ldots,\lambda_{2}(1+a_{2,n_{2}-1}),b_{2}\left(\lambda_ {2}+\lambda_{3}-\frac{\lambda_{2}^{k+1}}{\lambda_{3}^{k}}\right),\]
The inverse of \(M\) has the following diagonal and superdiagonal entries:
\[M^{-1}=\begin{bmatrix}\lambda_{1}^{-1}&-a_{1,1}/\lambda_{1}^{2}\\ 0&\lambda_{1}^{-1}&\ddots\\ &&\ddots&-a_{1,n_{1}-1}/\lambda_{1}^{2}\\ 0&0&\cdots&\lambda_{1}^{-1}&-b_{1}/\lambda_{1}\lambda_{2}\\ \vdots&\vdots&&\lambda_{2}^{-1}&-a_{2,1}/\lambda_{2}^{2}\\ &&&&&0&\lambda_{2}^{-1}&\ddots\\ &&&&&&&\ddots&-a_{2,n_{2}-1}/\lambda_{2}^{2}\\ &&&&0&0&\cdots&\lambda_{2}^{-1}\\ &&\vdots&\vdots&&\ddots\end{bmatrix}\]
Multiplying \(M(M^{k}JM^{-k})\) with \(M^{-1}\) yields the desired expressions on the diagonal and superdiagonal entries. The proof for the second product \(J^{k}MJ^{-k}\) uses a similar inductive argument.
When we impose the conditions
\[M^{k}JM^{-k}=J^{\log_{\alpha}(1-\alpha^{k})}MJ^{-\log_{\alpha}(1-\alpha^{k})}, \quad 1\leq k\leq q-2,\]
we obtain the following relations on the superdiagonal entries of \(M\):
\[a_{ij}=1,\text{ for all superdiagonal }a_{ij},\]
and
\[b_{i}\left(1-(\lambda_{i}/\lambda_{i+1})^{k}\right)=b_{i}\left(\lambda_{i}/ \lambda_{i+1}\right)^{\log_{\alpha}(1-\alpha^{k})},\ \ 1\leq i\leq s-1,\ \ 1\leq k\leq q-2.\]
If some \(b_{i}\neq 0\), then the ratio \(r=\lambda_{i}/\lambda_{i+1}\) must satisfy all polynomials
\[1-r^{k}=r^{\log_{\alpha}(1-\alpha^{k})},\ \ 1\leq k\leq q-2.\]
Lemma 8.1 prohibits any such solution, whence we conclude that all \(b_{i}=0\), and the Lemma is proven.
**Lemma 8.4**.: _Let \(J=J(\lambda_{1},s_{1})\oplus J(\lambda_{2},s_{2})\oplus\cdots\oplus J(\lambda_{r},s_{r})\). Suppose \(M\) is an upper triangular matrix with_
\[M_{i,i}=J_{i,i},\quad M_{i,i+1}=J_{i,i+1},\]
_so that \(M\) and \(J\) are equal on the diagonal and superdiagonal, and assume \(M^{k}JM^{-k}=J^{\log_{\alpha}(1-\alpha^{k})}MJ^{-\log_{\alpha}(1-\alpha^{k})}\) for all \(1\leq k\leq q-2\). Then \(M=J\)._
Proof.: We begin with the assertion that all elements along the 2nd superdiagonal \(M_{i,i+2}\), \(1\leq i\leq n-2\), must be \(0\). To show this, we first verify by induction on \(k\) the equality
\[(M^{k}JM^{-k})_{i,i+2}=M_{i,i+2}\left(1-M_{i,i}^{k}/M_{i+2,i+2}^{k}\right),\ \ 1\leq i\leq n-2.\]
Another inductive argument on \(k\) shows that
\[(J^{k}MJ^{-k})_{i,i+2}=M_{i,i+2}\cdot M_{i,i}^{k}/M_{i+2,i+2}^{k},\ \ 1\leq i \leq n-2.\]
The assumption \(M^{k}JM^{-k}=J^{\log_{\alpha}(1-\alpha^{k})}MJ^{-\log_{\alpha}(1-\alpha^{k})}\) for all \(1\leq k\leq q-2\) then gives us
\[M_{i,i+2}\left(1-(M_{i,i}/M_{i+2,i+2})^{k}\right)=M_{i,i+2}\cdot(M_{i,i}/M_{i +2,i+2})^{\log_{\alpha}(1-\alpha^{k})}\,,1\leq k\leq q-2.\]
Let \(r=M_{i,i}/M_{i+2,i+2}\). If \(M_{i,i+2}\neq 0\), this equation requires a nontrivial solution (in \(r\)) to the system
\[1-r^{k}=r^{\log_{\alpha}(1-\alpha^{k})},\ \ 1\leq k\leq q-2,\]
which is prohibited by Lemma 8.1. Therefore we must have \(M_{i,i+2}=0\).
An induction argument will now show that the entries of \(M\) are \(0\) everywhere above the first superdiagonal. For some \(r\geq 2\), we assume that the \(j^{th}\) superdiagonal entries \(M_{i,i+j}\), \(1\leq i\leq n-j\), are all \(0\) for \(2\leq j\leq r\). Another induction on \(k\) shows that
\[M_{i,i+r+1}\left(1-(M_{i,i}/M_{i+r+1,i+r+1})^{k}\right)\]
\[=M_{i,i+r+1}\cdot(M_{i,i}/M_{i+r+1,i+r+1})^{\log_{\alpha}(1-\alpha^{k})}\,,1 \leq k\leq q-2,\]
and Lemma 8.1 guarantees again that we must have \(M_{i,i+r+1}=0\) for all \(1\leq i\leq n-r-1\). Therefore the \((r+1)^{th}\) superdiagonal entries are all \(0\). Since \(M\) and \(J\) are equal on their diagonals and superdiagonals, we have \(M=J\).
**Proposition 8.5**.: _Let \(q\) be a prime power and \(\alpha\) a primitive root in \(\mathbb{F}_{q}\). Let \(J\) be an invertible \(n\times n\) matrix, and suppose \(J\) is \((q-1)^{th}\)-power maximal. If \(M\) is an invertible \(n\times n\) matrix satisfying_
\[\left[J,M^{q-1}\right]=\left[M,J^{q-1}\right]=0,\]
\[M^{k}JM^{-k}=J^{\log_{\alpha}(1-\alpha^{k})}MJ^{-\log_{\alpha}(1-\alpha^{k})}, \quad 1\leq k\leq q-2,\]
_then \(J=M\)._
Proof.: Without loss of generality we assume \(J\) is in Jordan Normal form, hence upper triangular. Then \(J^{q-1}\) is also upper triangular. Using the given defining relations, we then have
\[J^{q-1}=\left(M^{k}JM^{-k}\right)^{q-1}=\left(J^{\log_{\alpha}(1-\alpha^{k})} MJ^{-\log_{\alpha}(1-\alpha^{k})}\right)^{q-1}=M^{q-1}.\]
Therefore \(M^{q-1}\) is upper-triangular. Since \(M\) is similar to \(J\), it also satisfies the condition that the characteristic polynomial of \(M^{q-1}\) coincides with its minimal polynomial, hence \(M\) is also \((q-1)^{th}\)-power maximal, and so by Lemma 8.2, \(M\) is also upper triangular. Now Lemma 8.4 gives us \(J=M\).
### Constant 2-dimensional representations of a cyclic quandle
Let \(X=(\mathbb{Z}_{q},\alpha)\) (so \(q\) is a prime power and \(\alpha\) is a primitive root of unity). Let \(V=\mathbb{C}^{2}\). Fix \(a,b\in\mathbb{C}^{*}\), and let \(\phi:X\to\operatorname{Conj}(\operatorname{Aut}(V))\) defined by the constant map
\[\phi:x\mapsto A=\begin{bmatrix}a&0\\ 0&b\end{bmatrix},\quad x\in X.\]
Then \(\phi\) is a quandle morphism, because \(\phi(0)\) and \(\phi(1)\) both trivially satisfy the defining relations of \(X\) (since \([\phi(0),\phi(1)]=0\)). Clearly \(\mathbb{C}\begin{bmatrix}1\\ 0\end{bmatrix}\) and \(\mathbb{C}\begin{bmatrix}0\\ 1\end{bmatrix}\) are \(1\)-dimensional subrepresentations of \(V\). So this is a completely reducible \(2\)-dimensional representation of \(X\).
Next, let \(\psi:X\to\operatorname{Conj}(\operatorname{Aut}\mathbb{C}^{2})\) defined by the constant map
\[\psi:x\mapsto A=\begin{bmatrix}\omega&1\\ 0&\omega\end{bmatrix},\quad x\in X.\]
Then \(\psi\) is a quandle morphism, because \(\psi(0)\) and \(\psi(1)\) both trivially satisfy the defining relations of \(X\) (since \([\psi(0),\psi(1)]=0\)). One can verify that \(\mathbb{C}\begin{bmatrix}1\\ 0\end{bmatrix}\) is a \(1\)-dimensional subrepresentation of \(\mathbb{C}^{2}\), while its vector space complement \(\mathbb{C}\begin{bmatrix}0\\ 1\end{bmatrix}\) is _not_ a subrepresentation. So this is a reducible - but not completely reducible - representation of \(X\).
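A quick numerical check of these two examples can be run with a few lines of linear algebra. This is only an illustrative sketch; the concrete values of \(a\), \(b\), \(\omega\) below are placeholders and play no role in the argument.

```python
import numpy as np

# Placeholder values for a, b, omega (any nonzero scalars behave the same way).
a, b, omega = 2.0, 3.0, 1.0

A_diag = np.array([[a, 0.0], [0.0, b]])            # phi(x) = A for every x
A_jordan = np.array([[omega, 1.0], [0.0, omega]])  # psi(x) = A for every x

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

def spans_line(v, w):
    """True if w lies on the line spanned by v."""
    return np.isclose(np.linalg.det(np.column_stack([v, w])), 0.0)

# Diagonal case: both coordinate lines are invariant -> completely reducible.
assert spans_line(e1, A_diag @ e1) and spans_line(e2, A_diag @ e2)

# Jordan case: the line through e1 is invariant, but the complementary line
# through e2 is not, since A_jordan e2 = e1 + omega * e2.
assert spans_line(e1, A_jordan @ e1)
assert not spans_line(e2, A_jordan @ e2)
```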
### An explicit construction
Let \(\pi\) denote the regular representation of the dihedral quandle \(\mathbb{Z}_{6}\), \(\pi:\mathbb{Z}_{6}\to\operatorname{Aut}(\mathbb{C}\mathbb{Z}_{6})\). We use the quandle structure of \(\mathbb{Z}_{6}\) to find the matrix representations of all elements of \(\mathbb{Z}_{6}\). We'll do this with two bases: first with the standard basis \(\{e_{i}\}_{i=0}^{5}\), and next with the basis taken from the decomposition of \(\mathbb{C}\mathbb{Z}_{6}\) into irreducible representations.
With respect to the standard basis, we have
\[\pi(0)=\pi(3)=\begin{bmatrix}1&0&0&0&0&0\\ 0&0&0&0&0&1\\ 0&0&0&0&1&0\\ 0&0&0&1&0&0\\ 0&0&1&0&0&0\\ 0&1&0&0&0&0\end{bmatrix}\]
\[\pi(1)=\pi(4)=\begin{bmatrix}0&0&1&0&0&0\\ 0&1&0&0&0&0\\ 1&0&0&0&0&0\\ 0&0&0&0&0&1\\ 0&0&0&0&1&0\\ 0&0&0&1&0&0\end{bmatrix}\]
\[\pi(2)=\pi(5)=\begin{bmatrix}0&0&0&0&1&0\\ 0&0&0&1&0&0\\ 0&0&1&0&0&0\\ 0&1&0&0&0&0\\ 1&0&0&0&0&0\\ 0&0&0&0&0&1\end{bmatrix}\]
Next we list the matrix representation with respect to the basis
\[v_{1}=(1,1,1,1,1,1),\] \[v_{2}=(1,-1,1,-1,1,-1),\] \[v_{3}=e_{0}-e_{2},\] \[v_{4}=e_{2}-e_{4},\] \[v_{5}=e_{1}-e_{3},\] \[v_{6}=e_{3}-e_{5}.\]
Then, as described above, \(\mathbb{C}\mathbb{Z}_{6}\) decomposes into four irreducible representations.
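These matrices and the invariant block structure in the new basis can be reproduced numerically. The sketch below assumes the dihedral quandle operation \(i\triangleright j = 2j - i \pmod 6\) (which reproduces the matrices listed above); it only verifies the block decomposition and is an illustration, not part of the construction.

```python
import numpy as np

def pi(j, n=6):
    # Regular representation of the dihedral quandle Z_n under the (assumed)
    # operation i |> j = 2j - i (mod n): the matrix sends e_i to e_{2j - i mod n}.
    P = np.zeros((n, n))
    for i in range(n):
        P[(2 * j - i) % n, i] = 1.0
    return P

# pi(j) = pi(j + 3) for Z_6, as stated above
assert all(np.array_equal(pi(j), pi(j + 3)) for j in range(3))

e = np.eye(6)
B = np.column_stack([
    np.ones(6),                         # v1
    np.array([1., -1, 1, -1, 1, -1]),   # v2
    e[0] - e[2],                        # v3
    e[2] - e[4],                        # v4
    e[1] - e[3],                        # v5
    e[3] - e[5],                        # v6
])

# In the basis (v1, ..., v6) every pi(j) is block diagonal with blocks of
# sizes 1, 1, 2, 2, i.e. four invariant subspaces.
blocks = [slice(0, 1), slice(1, 2), slice(2, 4), slice(4, 6)]
for j in range(3):
    M = np.linalg.inv(B) @ pi(j) @ B
    mask = np.ones((6, 6), dtype=bool)
    for s in blocks:
        mask[s, s] = False
    assert np.allclose(M[mask], 0.0)
```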
|
2310.00836
|
Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical
Reasoning Capabilities of Language Models
|
Logical reasoning is fundamental for humans yet presents a substantial
challenge in the domain of Artificial Intelligence. Initially, researchers used
Knowledge Representation and Reasoning (KR) systems that did not scale and
required non-trivial manual effort. Recently, the emergence of large language
models (LLMs) has demonstrated the ability to overcome various limitations of
formal Knowledge Representation (KR) systems. Consequently, there's a growing
interest in using LLMs for logical reasoning via natural language. This work
strives to understand the proficiency of LLMs in logical reasoning by offering
a brief review of the latest progress in this area; with a focus on the logical
reasoning datasets, tasks, and the methods adopted to utilize LLMs for
reasoning. To offer a thorough analysis, we have compiled a benchmark titled
LogiGLUE. This includes 24 varied datasets encompassing deductive, abductive,
and inductive reasoning. Utilizing LogiGLUE as a foundation, we have trained an
instruction fine-tuned language model, resulting in LogiT5. We study
single-task training, multi-task training, and "chain-of-thought" knowledge
distillation fine-tuning technique to assess the performance of the model across
the different logical reasoning categories. We also assess various LLMs using
LogiGLUE, and the findings indicate that LLMs excel most in abductive
reasoning, followed by deductive reasoning, while they are least effective at
inductive reasoning. We aim to shed light on the capabilities and potential
pathways for enhancing logical reasoning proficiency in LLMs, paving the way
for more advanced and nuanced developments in this critical field.
|
Man Luo, Shrinidhi Kumbhar, Ming shen, Mihir Parmar, Neeraj Varshney, Pratyay Banerjee, Somak Aditya, Chitta Baral
|
2023-10-02T01:00:50Z
|
http://arxiv.org/abs/2310.00836v3
|
Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models
###### Abstract
Logical reasoning is fundamental for humans yet presents a substantial challenge in the domain of Artificial Intelligence. Initially, researchers used Knowledge Representation and Reasoning (KR) systems that did not scale and required non-trivial manual effort. Recently, the emergence of large language models (LLMs) has demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems. Consequently, there's a growing interest in using LLMs for logical reasoning via natural language.
This work strives to understand the proficiency of LLMs in logical reasoning by offering a brief review of the latest progress in this area; with a focus on the logical reasoning datasets, tasks, and the methods adopted to utilize LLMs for reasoning. To offer a thorough analysis, we've compiled a benchmark titled LogiGLUE. This includes 24 varied datasets encompassing deductive, abductive, and inductive reasoning. We have standardized these datasets into Seq2Seq tasks to facilitate straightforward training and evaluation for future research. Utilizing LogiGLUE as a foundation, we have trained an instruction fine-tuned language model, resulting in LogiT5. We study single-task training, multi-task training, and a "chain-of-thought" knowledge distillation fine-tuning technique to assess the model's performance across the different logical reasoning categories. By this comprehensive process, we aim to shed light on the capabilities and potential pathways for enhancing logical reasoning proficiency in LLMs, paving the way for more advanced and nuanced developments in this critical field1.
Footnote: \({}^{1}\) Arizona State University, \({}^{2}\) Mayo Clinic, \({}^{3}\) Amazon Alexa AI, \({}^{4}\) IIT KGP
{mluo26, skumbha4}@asu.edu
Footnote 1: The dataset and models are available on Huggingface: logicreasoning/logi_glue and logicreasoning/LogiT5
## 1 Introduction
With logical reasoning, humans can explain an answer to a question via step-wise deduction, make robust plans and decisions, or even reason about the workings of an unseen universe. Black holes serve as a compelling example of the power of logical reasoning. Even before the first observational evidence of black holes, scientists like Stephen Hawking used the principles of relativity and quantum mechanics to predict their existence and properties. Through pure thought and mathematical calculations alone, they logically deduced the presence of these mysterious cosmic entities, which were later confirmed through real observation by MIT researchers in 2015, fifty years after the theorem had been derived. This underscores the profound capability of human reasoning to unveil truths about the universe that lie beyond our immediate perception2.
Footnote 2: [https://news.mit.edu/2021/hawkings-black-hole-theorem-confirm-0701](https://news.mit.edu/2021/hawkings-black-hole-theorem-confirm-0701)
In the field of Artificial Intelligence (AI), there has been significant attention directed towards the aspiration to develop machines equipped with logical reasoning capabilities McCarthy (1989); Colmerauer and Roussel (1996). Early approaches in logical reasoning were primarily dedicated to the design of formal logic languages to encapsulate rules and knowledge, along with the development of automated theorem provers Lifschitz (2019). This paradigm, however, necessitated a deep understanding of the syntax and semantics of the formal logic for manual rule formulation - making knowledge _representation_ and knowledge _acquisition_ a hard and expert-driven endeavor. Due to these challenges, contemporary research has progressively turned towards addressing logical reasoning tasks Clark et al. (2020); Tian et al. (2021); Han et al. (2022) by employing transformer-based Vaswani et al. (2017) pre-trained language models Devlin et al. (2019); Brown et al. (2020).
The language models (LMs) that are pretrained using objectives such as masked language modeling Devlin et al. (2019) and next-word prediction learn
adequate syntax and semantics of language, alongside commonsense knowledge. These language models can excel in numerous natural language understanding tasks, owing to the unsupervised pretraining on a vast array of unstructured text data. However, it is unclear if the current pretraining objectives are sufficient for the models to infer logical reasoning, because this involves understanding structure, coupled with inductive, deductive, and abductive reasoning skills. This question has drawn intense attention and inspired different research directions to examine if LMs can learn logical reasoning ability (Wu et al., 2023; Lanham et al., 2023; Clark et al., 2020; Joshi et al., 2020). For instance, Clark et al. (2020) shows that pre-trained language models can serve as a "soft-reasoner" based on their near-perfect performance on synthetic datasets. Creswell et al. (2022) showed that large LMs are few-shot logical reasoners. On the other hand, Liu et al. (2020); Joshi et al. (2020); Han et al. (2022) show that logical reasoning remains challenging for language models. Furthermore, Wu et al. (2023); Lanham et al. (2023) showed that LLMs may be retrieving or reciting previously seen facts and steps, instead of actually reasoning. Liu et al. (2023) shows that while Chat-GPT and GPT-4 generally perform well on some benchmarks, their performance noticeably diminishes when faced with new or out-of-distribution datasets.
To better understand the progress of logical reasoning ability in the current language model era, we first provide a concise survey of its role within current language models. Based on the insights gathered through the survey, we assembled a logical reasoning benchmark termed as LogiGLUE. Subsequently, we trained a model on this benchmark by utilizing diverse training strategies; our contributions are summarized below.
Concise Survey.We provide a brief survey of the recent development of logical reasoning using natural language (see Figure 1). First we discuss three types of logical reasoning. Then we focus on the relevant benchmarks and the methodologies for applying LMs to logical reasoning tasks.
LogiGLUE. One result of this survey is a benchmark for logical reasoning (LogiGLUE), with the aim of facilitating consistent progress on logical reasoning in NLP. The importance of the LogiGLUE benchmark arises from several critical considerations. First, it encompasses diverse logical reasoning tasks and generalization evaluation, ensuring a comprehensive assessment of how a model performs across varied logical paradigms. Second, the unique format of each dataset within LogiGLUE simplifies both training and evaluation processes,
Figure 1: Logical Reasoning Survey: Datasets and Language Model Application.
facilitating swift integration into research workflows. Lastly, researchers can easily compare with established baselines, and the LogiGLUE offers the flexibility to seamlessly integrate new datasets in the future, ensuring its lasting relevance in logical reasoning evaluation.
LogiT5.Drawing inspiration from recent successes in multi-task learning and instruction-fine-tuned models, we trained seq2seq models, specifically Flan-T5 (Chowdhery et al., 2022), using multi-task learning on LogiGLUE's in-domain data. The resulting model, named LogiT5, demonstrated effective generalization on out-of-domain data.
A Concise Survey of Logical Reasoning in NLP: Types of Reasoning, Datasets and Language Models Approach
The advent of large language models has been transformative for the AI community; prompting many to speculate that we are on the cusp of achieving general artificial intelligence (GAI). Yet, as astounding as their capabilities are, these models grapple with numerous challenges, particularly with logical reasoning (Han et al., 2022; Valmeekam et al., 2023, 2022; Guan et al., 2023). Recognizing the significance of this, our survey aims to provide a timely comprehensive overview of advancements in logical reasoning within the context of language models, elucidating their performance, limitations, and the obstacles that remain, which casts a vision for future research directions. While other surveys have touched upon the broader theme of logical reasoning using natural language (Helwe et al., 2022; Yu et al., 2023; Yang et al., 2023), our survey has led us to propose a comprehensive benchmark collection, and include a systematic review of techniques to adopt LLMs for logical reasoning tasks. More importantly, we categorize different ways of using LMs on logical reasoning tasks, highlighting the intricacies and challenges faced by models for such tasks.
### Three Types of Logical Reasoning
Deductive Reasoning. In this predominant form of reasoning, we start with a set of premises which can be facts or rules, and derive a specific conclusion based on the available premises with a valid logical derivation path. In short, deductive reasoning derives specific conclusion(s) from generic observation(s) (Byrne et al., 2019). There are two characteristics related to a deductive reasoning system, _validity_ and _soundness_. A conclusion is _valid_ if and only if it is fully supported by the premises irrespective of the factuality of the premises. A conclusion is _sound_ if and only if it is valid and the premises are true. For example in Figure 2, the conclusion is valid but it is not sound because it is not true that "All kids love animals." Most synthetic deductive reasoning datasets such as RuleTaker (Clark et al., 2020) have valid conclusions, but may not be sound as the rules in the premises are often synthetically generated and may be untrue in the real world. Datasets such as PrOntoQA (Saparov and He, 2023) offer a broader view, by sourcing the premise rules from a true, a false and a fictional ontology.
Inductive Reasoning. For inductive reasoning, one starts with a set of observations and derives a general conclusion that is likely true, but not certain (Heit, 2007; Sauce and Matzel, 2017). In contrast to deductive reasoning, inductive reasoning is a bottom-up reasoning process which starts from specific observations and derives a generic conclusion. Many knowledge graph completion tasks require inductive reasoning, such as WN18RR3. To apply inductive reasoning, one usually relies on a large number of observations (both positive and negative, in support of or against an induced rule). Since large language models are pretrained on large amounts of free text, they learn several generic patterns or conclusions, thereby reasoning inductively (even if the rules may not be represented symbolically or in a human-readable fashion) (Han et al., 2023). In general, commonsense reasoning tasks in NLP require both inductive and deductive reasoning.
Footnote 3: Here, we exclude this task since we are more interested in natural language input. In this paper, we do not discuss knowledge graph completion tasks since most of them are not in natural language form.
Abductive Reasoning. Abductive reasoning typically begins with an incomplete set of observations and proceeds to derive the most likely explanations for the observations to be true (Paul, 1993; Hobbs et al., 1993). Similar to inductive reasoning, this also involves uncertainty, as there can be different explanations. Compared to deductive reasoning, which is a process of deriving a new conclusion from known facts or rules, abductive reasoning goes from an observation to "guess" what could be the reason causing the observation. It is
used more often in our daily decision-making, such as medical diagnoses based on a set of incomplete symptoms.
In previous paragraphs, we mentioned how both inductive and abductive reasoning inherently encompass uncertainty. In fact, deductive reasoning can also operate within the realm of uncertainty (De Raedt et al., 2007; Richardson and Domingos, 2006; Lee and Wang, 2016; Bach et al., 2017; Lee and Luo, 2018). Such reasoning paradigm uses "soft rules" to indicate the likelihood of a rule being true rather than its absolute truth. Consequently, conclusions derived may carry probabilistic true/false values. _Reasoning under uncertainty_ is particularly useful because the real world is inherently unpredictable and full of unknown variables. While many datasets operate under the assumption that rules are unequivocally true, Rulebert (Saeed et al., 2021) deviates by attributing weight values to each rule.
### Logical Reasoning Tasks and Datasets
We discuss the tasks and datasets in terms of the format of the tasks and how they are created.
#### 2.2.1 Four Types of Tasks
Multiple Choice Question Answering (MCQA).In the MCQA task, the given inputs are a paragraph which forms a context, a question, and a list of answer candidates (typically four choices). The goal is to predict which candidate is (most likely) correct. All datasets are pure-text (Yu et al., 2019; Liu et al., 2020)4.
Footnote 4: ReClor and LogiQA source their datasets from real examination questions that may involve images or charts. But they remove such questions and retain only those which are self-contained and answerable from the provided text.
Free Form Question Answering.Unlike MCQA, where a set of answer choices are given, freeform QA only has a context and a question, and the answer to the question can be any format, including but not limited to a single word, a list of words, and a number (Weston et al., 2015; Banerjee et al., 2020).
Fact Checking.In fact verification, given a context and a fact, the goal is to classify the binary truth value of the fact according to the given information (Clark et al., 2020; Saeed et al., 2021; He et al., 2021).
Natural language inference (NLI). NLI is the task of detecting inferential relationships between a premise and a hypothesis. For most NLI datasets, there are three relationships: entailment (the hypothesis follows or can be inferred from the premise), neutral (the truth of the hypothesis is undetermined by the premise), and contradiction (the hypothesis contradicts the premise or some facts in the premise) (Tian et al., 2021a).
Figure 2: Examples (top) of three types of logical reasoning and explanations (bottom) correlating each example with its respective reasoning type.
#### 2.2.2 Dataset Creation Techniques
Human Annotation.Crowdsourcing is one of the major approaches to create datasets, such as for NLI tasks. The advantages of this methodology include a richer linguistic grammar and potentially increased task complexity. However, it comes with drawbacks. In addition to being a cost-intensive process, crowdsourced datasets tend to harbor biases (as highlighted in numerous previous studies Yu et al. (2019)). These biases can be leveraged by neural models to artificially inflate accuracy scores. Furthermore, assembling a dataset for logical reasoning tasks demands a level of expertise that poses a significant challenge.
Extraction from Academic Challenge.It is hard for crowdsourcing workers to produce questions requiring complicated logical reasoning since such reasoning tasks require extensive training and practice. Fortunately, questions in some standardized tests are aligned with the goal of logical reasoning and can be utilized to create such datasets after some preprocessing Yu et al. (2019); Liu et al. (2020). However, the domains of these examinations are limited and the dataset size is small.
Synthetic Generation. Synthetic generation is more efficient for creating large datasets than manual annotation Luo et al. (2022). There are two ways: simulation-based Weston et al. (2015) and rule-based Clark et al. (2020); Saeed et al. (2021); Banerjee et al. (2020). In rule-based methods, logic programs (either written by humans or mined from knowledge graphs) are generated, and then implications are drawn by an automatic theorem prover. Last, the rules and facts in the logic programs are converted into English using natural language patterns. Synthetic generation has the drawback that the rules or facts do not have real-world meaning and the language can be simple.
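As a rough illustration of the rule-based pipeline just described (sample a small logic program, close it under forward chaining, then verbalize facts and rules with templates), here is a minimal sketch. The entities, attributes, and templates are invented for illustration and do not reproduce any particular dataset.

```python
import random

# A toy rule-based generator: sample facts and Horn rules, compute all
# implications by forward chaining, then verbalize them with templates.
ENTITIES = ["Anne", "Bob", "the cat"]
ATTRS = ["red", "kind", "big", "cold"]

def sample_program(n_facts=4, n_rules=3, seed=0):
    rng = random.Random(seed)
    facts = {(rng.choice(ENTITIES), rng.choice(ATTRS)) for _ in range(n_facts)}
    rules = [(rng.choice(ATTRS), rng.choice(ATTRS)) for _ in range(n_rules)]  # "if X is a then X is b"
    return facts, rules

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:  # keep applying rules until no new fact appears
        changed = False
        for (ent, attr) in list(derived):
            for (pre, post) in rules:
                if attr == pre and (ent, post) not in derived:
                    derived.add((ent, post))
                    changed = True
    return derived

def verbalize(facts, rules):
    lines = [f"{e} is {a}." for (e, a) in sorted(facts)]
    lines += [f"If someone is {p} then they are {q}." for (p, q) in rules]
    return " ".join(lines)

facts, rules = sample_program()
context = verbalize(facts, rules)
ent, attr = next(iter(forward_chain(facts, rules)))
print(context)
print(f"Question: Is {ent} {attr}?  Answer: True")
```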
### Language Models for Logical Reasoning over Natural Language
Language models (LMs) have been actively studied for logical reasoning tasks. Dasgupta et al. (2022) demonstrates that large language models (LLMs) show human-level abstract reasoning skill. Creswell et al. (2022) proposes a selection-inference pipeline in which, given a context and question, the model first selects which facts or rules in the context are important for answering the question and decomposes the question into step-by-step reasoning. Wei et al. (2022) demonstrates that language models have the capacity to engage in chain-of-thought (CoT) reasoning. This approach facilitates a step-by-step reasoning process that enhances the performance of the model in downstream tasks such as mathematical reasoning. In the following sections, we summarize five prevalent trends in utilizing language models for logical reasoning over language.
#### 2.3.1 Supervised Finetuning
Fine-tuning a language model on downstream tasks has been a standard way to teach a model to perform a task. Such a paradigm has also been the prevalent method for logical reasoning tasks Clark et al. (2020); Liu et al. (2021); Tian et al. (2021); Saeed et al. (2021); Han et al. (2022); Chan et al. (2023). In general, such a method is usually applied to moderately sized language models such as BERT Devlin et al. (2019), GPT2 Radford et al. (2019), RoBERTa Liu et al. (2019), and XLNet Yang et al. (2019). It has been shown that transformer-based models perform better than other types of neural models such as LSTMs Yu et al. (2019); Liu et al. (2020), probably because such pretrained models have a certain degree of commonsense and logical reasoning Huang et al. (2019). This has been further shown in Clark et al. (2020): when every word in the passage is replaced by a random word, destroying grammaticality, the performance of a transformer-based model dramatically decreases. In addition, larger models perform better than smaller ones, indicating that the deeper a model is, the more complicated reasoning it can execute He et al. (2021). While the IID performance of a fine-tuned model can be nearly perfect, such models have poor generalization. For example, models cannot generalize from lower-depth to higher-depth reasoning Clark et al. (2020), from low language diversity to high diversity Richardson and Sabharwal (2021); Tafjord et al. (2021), or from one domain to another Banerjee et al. (2020). Such observations indicate that models might just learn inductive patterns in the training data rather than the underlying logical reasoning skill Zhang et al. (2022).
#### 2.3.2 Logical Reasoning Pretraining
The next-word prediction or masked language modeling pretraining tasks allow language models to learn language syntax and semantics as well
as the world knowledge, however, it does not guarantee a model to learn logical operations. Thus, researchers have been exploring logical-oriented pretraining tasks to teach a model of logical reasoning from large free data. APOLLO (Sanyal et al., 2022)improves the logical reasoning of a model by two pre-training tasks. The first pretraining task is selective mask language modeling (MLM). Unlike the naive MLM which randomly masks the words, s-MLM selects and masks the logical words (defined by Spacy POS tags). The second pretraining task is entailment classification which aims to classify if there is an entailment relationship within a masked sentence or not. MERIt (Fangkai Jiao, 2022) proposes a meta-path-guided pretraining task to teach a model to learn logical reasoning by self-supervised learning. They construct the training data by converting any document into a graph with entities as the node and the relation between the entities as edges. Then, given a pair of entities, the positive candidates are the sentences that connect this pair of entities, and the negative candidates are obtained by data augmentation. Such training data allows the model trained by contrastive learning manner to identify the positive sentence from the negative sentences. MERIt\({}^{+}\)(Jiao et al., 2023) combines MERIt with the autoregression training objective: rather than using contrastive learning, MERIt\({}^{+}\) optimizes the probability of positive candidate sentences.
#### 2.3.3 Proof Generation
Proof generation is found to be harder than answer generation (Saha et al., 2020; Tafjord et al., 2021). However, models developed for a proof generation task have better performance on out-of-domain datasets or unseen depth reasoning (e.g., train on lower depth and test on higher depth). Kaiyu Yang and Chen (2022) introduce NLProofS, a novel method for generating step-by-step logically valid and relevant proofs given a set of supporting facts and hypothesis. In their proposed method, they employ a prover which generates candidate proofs step-by-step, a verifier to measure the validity of generated proof steps to avoid the prover from hallucinating proof steps, and an algorithm for retrieving the entire proof with highest validity score by aggregating proof step scores. ProofWriter (Tafjord et al., 2021) proposed two ways to generate proofs based on T5 models. The first one is to predict the sequence of proof in one output; the second one is to iteratively generate a proof and specifically, predict one intermediate conclusion and combine it with the given facts and rules as a new input to predict the following conclusion, and repeat this process until no new conclusion is predicted.
Typically, proof strategies fall into one of two categories: backward chaining, also known as top-down reasoning, and forward chaining, or bottom-up reasoning. In forward chaining, the process begins with established facts and rules, cyclically deriving new inferences and integrating them into the known facts until the target is either confirmed or rejected. Conversely, backward chaining initiates with the target in question, employing rules recursively to break it down into sub-goals. These sub-goals are then verified against the known rules and facts. Kazemi et al. (2022) found that backward chaining is more beneficial for LLMs in solving deductive logical reasoning tasks.
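To make the two strategies concrete, here is a minimal backward-chaining sketch over toy Horn rules (the facts and rules are invented for illustration): the goal is recursively reduced to sub-goals that are checked against the known facts, in contrast to the forward-chaining closure sketched earlier.

```python
# Goal-driven (backward-chaining) proof search over toy "if X is a then X is b" rules.
FACTS = {("Anne", "red")}
RULES = [("red", "kind"), ("kind", "big")]

def backward_prove(entity, attr, visited=None):
    visited = visited or set()
    if (entity, attr) in FACTS:
        return True
    if (entity, attr) in visited:      # cycle guard
        return False
    visited.add((entity, attr))
    # pick rules whose conclusion matches the goal and recurse on their premise
    return any(post == attr and backward_prove(entity, pre, visited)
               for (pre, post) in RULES)

assert backward_prove("Anne", "big")      # red -> kind -> big
assert not backward_prove("Anne", "cold")
```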
#### 2.3.4 CoT Knowledge Distillation
The previous approach relies on the proof annotations in the datasets, however, in many cases, the dataset does not come with the proof. It is shown that large language model (LLM) can generate step-by-step reasoning (similar as the proof) (Saparov and He, 2023; Liu et al., 2023). Namgyu Ho (2022) propose Fine-tune-CoT (i.e. chain-of-thought (Wei et al., 2022)) approach which involves three key steps. In the first step, a large teacher model is prompted to address intricate queries, generating multi-step reasoning explanations. These explanations are then filtered based on the accuracy of the final prediction. In the second step, a reasoning sample is constructed, incorporating the question, rationale, and answer, thereby forming a comprehensive prompt and multi-step solution. This collection of carefully curated reasoning samples is leveraged to fine-tune a compact student model, imbuing it with the ability to engage in reasoning tasks. Nonetheless, LLMs encounter difficulties in planning proofs, occasionally making wrong selections when presented with multiple valid choices. This challenge leads to proofs that are not fully developed and consequently produces inaccurate responses.
#### 2.3.5 Neural Symbolic
Recent advancements in pre-trained language models have demonstrated impressive reasoning abilities using explanations or "chain-of-thought" for in-context learning. Conversely, reasoning tasks are considered more straightforward for symbolic
programming. A promising way is to use LLM to translate a natural language input into a symbolic program which can be consumed by a symbolic solver. Such a paradigm has been shown to effectively avoid unfaithfulness of LLM Pan et al. (2023). Hanlin Zhang1 (2022) employs LLMS as Logic Programmers (LMLP) to learn logic rules and examples and reason over knowledge bases (KBs) using Prolog's backward chaining algorithm. They show that LMLP outperforms CoT in deductive reasoning settings, achieving over 25% higher accuracy on length generalization benchmarks, even with fewer parameters. Pan et al. (2023) propose Logic-LM to handle deductive reasoning, first-order logic, and constraint programming tasks. They leverage GPT-3 and in-context learning (providing a few examples) to translate a natural language input to a formal language formulation that can be executed by symbolic engines. They also show that the error messages of the symbolic engines can refine the output of an LLM. Such a paradigm has been investigated for addressing other challenges, wherein LLMs act as planners, and external tools are utilized to execute the plan Lu et al. (2023); Sumers et al. (2023); Paranjape et al. (2023); Guan et al. (2023); Schick et al. (2023).
### Survey Summary
The survey delineates how current datasets address three types of logical reasoning distributed across four task formats. Additionally, the curation process of a dataset can influence its inherent difficulty level. We've also identified five approaches for utilizing LLMs in addressing these reasoning tasks. This structured insight serves as a foundation for future research, offering a roadmap to optimize model performance and curation methodologies. In the following section, we will present a logical reasoning benchmark, positioned alongside established benchmarks like SuperGlue Wang et al. (2019), BigBench Srivastava et al. (2023), and Unicorn Lourie et al. (2021), all aimed at exhaustively gauging system capabilities.
## 3 LogiGLUE: General Logical Reasoning Benchmark
As mentioned in the introduction, the reasoning ability of language models as assessed by various studies seems to differ. One plausible explanation for this variance is the inconsistency in the benchmarks used or differences in task formats, leading to performance disparities. To rectify this, our goal is to offer a standardized testbed. It becomes imperative to meticulously formulate our selection criteria to create a testbed that evaluates a system's logical reasoning capabilities. Guiding our dataset choice are two primary principles outlined in SS3.1. These endeavors have led to the formation of a diverse and comprehensive logical reasoning benchmark, which we've named LogiGLUE (SS3.2).
### Principle of Collecting LogiGLUE
Numerous logical reasoning datasets have been accessible since 2015, such as bAbi Weston et al. (2015). Our selection process for including a dataset in LogiGLUE is primarily driven by principles of diversity and generalization Gokhale et al. (2022).
Diversity. There are two aspects to diversity. The first aspect concerns the types of reasoning in the dataset. We ensure that our coverage encompasses three main reasoning types, which collectively represent the full spectrum of logical reasoning. These three categories have been previously discussed. The second aspect concerns the level of difficulty, with datasets ranging from easy to hard. Our experimental results indicate varied model performance across different datasets - excelling in some, delivering average results on others, and struggling significantly on a few. We discovered a strong correlation between the complexity of a dataset and the methodology employed in its creation. Datasets built using simple templates and basic rule structures tend to be easier. In contrast, those with more sophisticated rules and uncertain elements are relatively more challenging. However, the most difficult datasets are those meticulously crafted by human hands.
Generalization.We also consider the axis of generalization, which aims to quantify (or assess) whether a model trained on the logical reasoning tasks can genuinely acquire reasoning skills. Previous studies have found that the superior performance of a fine-tuned language model primarily stems from learning the patterns exhibited within the dataset, which unfortunately often leads to poor generalization to other datasets. Consequently, the model's performance tends to be overestimated due to the identical and independently distributed (IID) nature of the testing data. To counteract this, LogiGLUE includes an out-of-domain testing set
that also encompasses the three types of reasoning. The out-of-domain testing set is readily adaptable to incorporate future logical reasoning tasks.
Excluded Datasets. Lexicographically speaking, reasoning is defined as "the process of forming conclusions, judgments, or inferences from facts or premises."5 Reasoning is usually associated with an entailment relation, where given a premise, the truth value of a hypothesis depends on whether the latter is entailed by the premise or not. There are many datasets that require reasoning which we decided to exclude from the scope of this work. This includes some well-known NLI datasets, such as SNLI Bowman et al. (2015) and MultiNLI Williams et al. (2018). These datasets use many linguistic forms, unstated background knowledge, and sometimes unsupported inference steps Clark et al. (2020). We also exclude datasets where reasoning with external domain knowledge is required since for such tasks, retrieving the external knowledge is essential and it is hard to diagnose whether the noisy retrieved knowledge affects systems or whether systems lack logical reasoning capacity. This includes QuAIL Rogers et al. (2020), WSC Levesque et al. (2012), QuaRTz Tafjord et al. (2019), ROPES Lin et al. (2019). Commonsense reasoning datasets are not covered in this survey either since they focus on solving a task using commonsense knowledge Sap et al. (2020) and thus it is more important to acquire the commonsense knowledge rather than to do logical reasoning. Other datasets that we exclude are ones that require logical reasoning but are not presented in natural language form, such as logical entailment Evans et al. (2018), NeuroSAT Selsam et al. (2019), and LTL Hahn et al. (2020).
Footnote 5: [https://www.dictionary.com/browse/reasoning](https://www.dictionary.com/browse/reasoning)
### Statistic of LogiGLUE
LogiGLUE is a suite of natural language logical reasoning benchmarks with 10 in-domain and 12 out-of-domain datasets that cover different types of logical reasoning. In addition, LogiGLUE includes three task formats: multiple choice question answering (MCQA), natural language inference (NLI), and fact verification
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline
**Dataset** & **Train size** & **Dev size** & **Test size** & **Synthetic** & **Task Type** & **Reasoning Type** \\ \hline \multicolumn{8}{l}{**In-domain datasets**} \\ \hline \(\alpha\)ARCT 2019 & 2420 & 632 & 888 & ✗ & MCQA & Abductive \\ \(\alpha\)NLI 2019 & 169,654 & - & 1532 & ✗ & NLI & Abductive \\ CLUTTR-Robust 2019 & 10,100 & - & 144 & ✓ & FF & Inductive \\ AbductionRule-Animal 2019 & 23,100 & 3,300 & 6,600 & ✓ & FF & Abductive \\ ANLI 2020 & 162,865 & 3,200 & 3,200 & ✗ & NLI & Deductive \\ LogiQA 2021 & 7,376 & 651 & 651 & ✗ & MCQA & Mixed \\ LogicNLI 2021b & 16,000 & 2,000 & 2000 & ✓ & NLI & Deductive \\ ProofWriter 2021 & 69,814 & 10,158 & 20,058 & ✓ & FV & Deductive \\ Rulebert-Union 2021 & 56,000 & 4,666 & 9,334 & ✓ & FV & Deductive \\ FOLIO 2022 & 1004 & 204 & 227 & ✗ & FV & Deductive \\ \hline \multicolumn{8}{l}{**Out-of-domain datasets**} \\ \hline bAbi 2015a & - & - & 5000 & ✓ & FF & Inductive \\ bAbi 2015a & - & - & 5000 & ✓ & FF & Deductive \\ CLUTTR-Systematic 2019 & - & - & 10100 & ✓ & FF & Inductive \\ AbductionRule-person 2019 & - & - & 4,864 & ✓ & FF & Abductive \\ ReClor 2020 & - & - & 500 & ✗ & MCQA & Mixed \\ Bird-Electricity 2021 & - & - & 5270 & ✗ & FV & Deductive \\ NatlLang 2021 & - & - & 8,008 & ✗ & FV & Deductive \\ Winologic 2021 & - & - & 562 & ✗ & FV & Deductive \\ WaNLI 2022 & - & - & 5000 & ✓ & NLI & Deductive \\ Rulebert-Union 2021 & - & - & 5000 & ✓ & FV & Deductive \\ BigBench 2022 & - & - & 1300 & ✗ & FF & Deductive \\ BigBench 2022 & - & - & 32 & ✗ & FF & Inductive \\ LogiQA 2.0 2023a & - & - & 3238 & ✗ & NLI & Deductive \\ PrOntoQA 2023b & - & - & 200 & ✗ & MCQA & Deductive \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of In-domain (IID) and out-of-domain (OOD) datasets of LogiGLUE benchmark.
(FV) / fact checking (FC). Table 1 shows the statistics.
**Unique Format.** There are many existing practices for standardizing different datasets into a consistent format (Mishra et al., 2022; Lourie et al., 2021), such as transforming all tasks to question answering (McCann et al., 2018) or NLI (Poliak et al., 2018) styles. Through our model analysis, it's evident that certain models can only manage specific reasoning tasks. For instance, classification models are commonly used for NLI and MCQA tasks, where the number of classification heads matches the number of choices (like 3 for NLI and 4 for MCQA). Yet, these models struggle when confronted with free-form question answering, thus limiting their versatility. Hence, to develop a model adept at logical reasoning regardless of task structure, we convert them into a singular format. An added advantage of this standardized format is that it ensures consistency in the input, ruling out performance disparities arising from different inputs. Every dataset is then adapted to this specific format. In the case of MCQA/FV/NLI tasks, each instance encompasses a context, a question, and potential answer options. Conversely, FF tasks don't present any answer choices. The correct answer is the expected output for every instance. For FV tasks, we use the statement as the question with true/false as potential answers. In NLI tasks, the options include neutral, contradiction, and entailment.
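A small helper illustrates this flattening into a single seq2seq format; the exact field names and prompt template are assumptions made here for illustration, not the released LogiGLUE schema.

```python
def to_seq2seq(context, question=None, options=None, answer=None, task="MCQA"):
    """Flatten an MCQA / FV / NLI / FF instance into one (input, target) pair,
    roughly in the spirit of the unified format described above."""
    if task == "FV":                 # the statement itself plays the role of the question
        options = ["true", "false"]
    elif task == "NLI":
        options = ["entailment", "neutral", "contradiction"]
    parts = [f"Context: {context}"]
    if question:
        parts.append(f"Question: {question}")
    if options:                      # FF tasks carry no answer options
        parts.append("Options: " + " | ".join(options))
    return " ".join(parts), answer

src, tgt = to_seq2seq(context="All kids love animals. Tom is a kid.",
                      question="Tom loves animals.",   # FV: statement as question
                      answer="true", task="FV")
print(src)
print(tgt)
```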
## 4 Experiments and Results
We selected Flan-T5-large (Chowdhery et al., 2022) as our base model for training for two pivotal reasons. Firstly, Flan-T5 is an instruction-fine-tuned iteration of T5, exhibiting enhanced performance compared to its peers. Secondly, Flan-T5's manageable size renders it trainable on hardware available in an academic setting. In the following, we present results that encompass both quantitative and qualitative aspects.
### In-Domain Performance
#### 4.1.1 Single Task Fine-tuning
We fine-tune the Flan-T5-large on each individual dataset and the result is presented in Table 2. We draw some interesting observations. It is apparent that the model exhibits superior performance when operating on synthetic data compared to handcrafted alternatives. In support of this, the datasets that garnered the top 5 performances are predominantly synthetic. This trend holds even when considering the ANLI dataset, which, despite having a more substantial training set than its synthetic counterparts, yielded inferior results. Moreover, we ventured to explore if the model displayed a predilection for one form of reasoning over another. Preliminary insights suggest a potential preference towards abductive reasoning in comparison to deductive reasoning, as evidenced in the disparity in performance between the \(\alpha\)NLI and ANLI datasets - both of which are similar in terms of training size and are hand-crafted. This, however, is a mild observation and warrants further exploration to derive a conclusive statement. For instance, our statistical analysis revealed that the average context length for ANLI is 105, whereas for \(\alpha\)NLI, it is 66, potentially leading to varying degrees of difficulty.
#### 4.1.2 Multitask Fine-tuning
We fine-tune Flan-T5 on all in-domain datasets utilizing a weighted sampling technique to accommodate the unbalanced sizes of the training datasets. We find that this sampling is better than random sampling; the comparison is given in the Appendix. We term this model LogiT5.
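The paper does not spell out the weighting scheme; a common choice, assumed here purely for illustration, is temperature-scaled size-proportional sampling, which up-weights small datasets relative to plain proportional sampling.

```python
import numpy as np

# Dataset sizes taken from Table 1; the temperature tau is an assumption,
# not a value reported in the paper.
sizes = {"FOLIO": 1004, "aARCT": 2420, "ANLI": 162865, "ProofWriter": 69814}

def mixing_weights(sizes, tau=0.5):
    s = np.array(list(sizes.values()), dtype=float) ** tau
    return dict(zip(sizes, s / s.sum()))

print(mixing_weights(sizes, tau=1.0))  # proportional ("random") sampling
print(mixing_weights(sizes, tau=0.5))  # small tasks get sampled more often
```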
One benefit of multi-task training compared to single-task training is that low-resource datasets can benefit from other tasks (Parmar et al., 2022; Luo et al., 2022). From Table 2, it is apparent that the multi-task training model holds a significant advantage when dealing with tasks with small training sets. It showcases higher proficiency compared to its single-task counterpart, notably performing better by 5% and 8% on the \(\alpha\)ARCT and FOLIO tasks, respectively. These datasets, characterized by their smaller training size (limited to 1-2K training samples), benefited notably from the multi-task training approach. Contrastingly, the tasks with large training sets did not reap any benefits from multi-task training, such as the \(\alpha\)NLI and ANLI datasets. A potential explanation for this could be that the substantial training set already facilitates optimum learning for the model, rendering the multi-task training approach redundant. This observation underlines a critical limitation in leveraging multi-task training when the individual training datasets are already sufficiently large.
#### 4.1.3 Fine-tuned LogiT5 on Single Dataset
Here, we further fine-tune LogiT5 on each dataset. However, upon analyzing the performance displayed in Table 2, we did not observe any notable advantages from this additional fine-tuning, even though small marginal gains are achieved. This suggests that LogiT5 has likely already learned the majority of knowledge from these tasks.
### Out-of-Domain Generalization
When we study out-of-domain generalization, we compare three models: Flan-T5, LogiT5, and LLama-2 (7B) (Touvron et al., 2023). In addition, for LLama-2, we also study chain-of-thought prompting (Wei et al., 2022). Here, we evaluate the model's zero-shot capabilities rather than its few-shot in-context learning performance (Luo et al., 2023). Investigating the latter is reserved for future research. More specifically, we add the prompt "let's think step by step" after the question. However, in our results, we do not see an advantage from CoT prompting, probably because the model already generates the reasoning even without such a prompt. Evaluating the LLama-2 answers poses a challenge since the output is usually free-form and does not use the exact answer option. On the other hand, Flan-T5 generates answers in a more structured way that is easier to evaluate, probably because Flan-T5 is already trained on instruction fine-tuning data that follow structured templates. The preliminary results of LLama-2 were poor. Upon manually reviewing the predictions, we observed that LLama-2 occasionally produces synonyms of the ground truth. To address this, we employed ConceptNet (Speer et al., 2017) to identify synonyms and verify if the prediction aligns with any of them, a strategy akin to the one explored in (Luo et al., 2021). Furthermore, on the bAbi dataset, we have seen that the LLama model sometimes ignores the input text and generates answers based on its pre-trained knowledge. This is similar to the findings revealed in Varshney et al. (2023).
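The synonym-aware matching can be pictured as follows; `get_synonyms` is a stand-in for the ConceptNet lookup (a tiny hand-written table is used so the sketch runs offline), and the exact matching rule used in the paper may differ.

```python
import re

def normalize(text):
    return re.sub(r"[^a-z0-9 ]", " ", text.lower()).split()

def get_synonyms(word):
    # Placeholder for a ConceptNet synonym lookup.
    table = {"sofa": {"couch"}, "couch": {"sofa"}}
    return table.get(word, set())

def is_correct(prediction, gold):
    """Accept the prediction if every gold token, or one of its synonyms,
    appears among the prediction tokens."""
    pred_tokens = set(normalize(prediction))
    for g in normalize(gold):
        if g in pred_tokens or get_synonyms(g) & pred_tokens:
            continue
        return False
    return True

assert is_correct("The answer is the couch in the hallway", "sofa")
assert not is_correct("The answer is the kitchen", "sofa")
```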
### CoT Distillation
As shown by previous work (Namgyu Ho, 2022), distilling the chain-of-thought from a large model into a small student model can boost the performance of the student model. We apply such a CoT fine-tuning strategy and conduct experiments on LogiQA, identified as the most challenging task, by distilling the CoT from LLama-7B to Flan-T5. Initially, we generated a single answer for each question, retaining only the samples where the predicted answer was correct, resulting in approximately 3K valid samples. Alternatively, we created 10 answers for each question and preserved the samples with at least one correct predicted answer, which generated a unique set of 6K questions. It is worth noting that some questions offered multiple correct reasoning paths. In such cases, we either opted for a single path or utilized all available paths, the latter approach amassing a total of 15K training samples. With CoT fine-tuning, we observe that the fine-tuning takes longer and a larger learning rate in the beginning is helpful. Thus, instead of using 1e-4 as the learning rate, we use 3e-4. We train the model for 40 epochs. We do see that model performance increases as the number of epochs increases.
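The data-construction step described above can be sketched as follows; `teacher_generate` is a placeholder for sampling rationale-answer pairs from the teacher model, and the output template is an assumption.

```python
import random

def teacher_generate(question, n_samples, seed=0):
    # Stand-in for sampling rationales and answers from a large teacher model;
    # it emits canned outputs so the sketch runs end to end.
    rng = random.Random(seed)
    return [("Because the premises entail it.", rng.choice(["A", "B"]))
            for _ in range(n_samples)]

def build_cot_training_set(examples, n_samples=10, keep_all_paths=True):
    samples = []
    for ex in examples:
        # keep only rationales whose final answer matches the gold label
        correct = [(r, a) for r, a in teacher_generate(ex["question"], n_samples)
                   if a == ex["gold"]]
        if not correct:
            continue
        kept = correct if keep_all_paths else correct[:1]
        samples += [{"input": ex["question"],
                     "target": f"{r} So the answer is {a}."} for r, a in kept]
    return samples

data = build_cot_training_set([{"question": "Q1 ...", "gold": "A"}])
print(len(data), data[0]["target"] if data else None)
```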
Following this, we trained the Flan-T5 model utilizing datasets consisting of 3K, 6K, and 15K samples derived from the generated CoT, with the results delineated in Table 4. Our findings indicate that the training with 3K and 6K samples did not enhance the CoT's fine-tuning efficacy. However, an increased dataset size of 15K samples facilitated a 4% improvement in performance, suggesting that
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**Dataset** & **Single-Task (Flan-T5-large)** & **Multi-Task (LogiT5)** & **Single Task (LogiT5)** \\ \hline \(\alpha\)ARCT 2019 & 72.31 & **77.22** & 76.74 \\ \(\alpha\)NLI 2019 & 78.26 & 76.37 & **78.46** \\ CLUTTR-Robust 2019 & **97.22** & 96.53 & **97.22** \\ AbductionRule-Animal 2019 & **100** & **100** & **100** \\ ANLI 2020 & **61.16** & 59.53 & 60.53 \\ LogiQA 2021 & 37.94 & 38.56 & **39.94** \\ LogicNLI 2021b & 82.60 & 88.40 & **88.65** \\ ProofWriter 2021 & 99.42 & 98.85 & **99.55** \\ Rulebert-Union 2021 & 99.69 & 99.36 & **99.70** \\ FOLIO 2022 & 66.66 & **74.02** & 72.06 \\ \hline Average & 79.52 & 80.88 & **81.28** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Three training strategies for models and the performance on In-domain Dataset.
CoT distillation becomes more beneficial with a larger volume of data.
## 5 Conclusion
In this study, we concentrate our efforts on a crucial area of research: logical reasoning over natural language. Initially, we offer a survey to provide a thorough comprehension of this domain, emphasizing the role of large language models in addressing this demanding task. Following this, we assemble a benchmark for logical reasoning named LogiGLUE, set to be publicly available to aid forthcoming research. Finally, we refine a language model utilizing LogiGLUE, demonstrating encouraging results across both in-domain and out-of-domain datasets.
|
2308.01962
|
Creating kinks with quantum mediation
|
We consider the creation of kink-antikink pairs of a scalar field $\phi$ by
the scattering of classical wavepackets of a second scalar field, $\psi$, when
there are no direct interactions between $\phi$ and $\psi$. The creation
becomes possible only due to a quantum field that interacts with both $\phi$
and $\psi$. We scan parameter space and find it favorable for kink production
when the initial wavepackets have large total energy and wide spatial extent
but scatter at low velocities.
|
Omer Albayrak, Tanmay Vachaspati
|
2023-08-03T18:00:04Z
|
http://arxiv.org/abs/2308.01962v2
|
# Creating kinks with quantum mediation
###### Abstract
We consider the creation of kink-antikink pairs of a scalar field \(\phi\) by the scattering of classical wavepackets of a second scalar field, \(\psi\), when there are no direct interactions between \(\phi\) and \(\psi\). The creation becomes possible only due to a quantum field that interacts with both \(\phi\) and \(\psi\). We scan parameter space and find it favorable for kink production when the initial wavepackets have large total energy and wide spatial extent but scatter at low velocities.
Footnote †: preprint: CERN-TH-2023-154
## I Introduction
One of the most fascinating aspects of quantum field theory is the existence of non-perturbative topological structures (solitons) and their interactions with the perturbative excitations (particles) of the model [1; 2; 3; 4; 5]. This area of research has received much attention but most of it has been relegated to treating the soliton sector as a fixed static or dynamical classical background. In any attempt in which solitons annihilate or are created, one is faced with the additional conceptual issue that solitons are described in terms of classical fields while particles are quantum excitations. A bridge between the soliton and particle sectors must also bridge between classical and quantum behavior, unless one can overcome the difficult problem of treating the soliton as a fully quantum object.
The creation of solitons by the scattering of particles [6; 7; 8; 9; 10; 11; 12; 13; 14; 15] is of particular physical interest. Sphalerons are classical solutions in the standard model that are intermediate states in baryon number violating processes that are necessary to generate the cosmic matter-antimatter asymmetry [16; 17]. If baryon number violation is to be experimentally tested in particle accelerators, it will be necessary to understand the creation of a sphaleron in particle collisions [18]. The expectation is that the process will be exponentially suppressed because perturbative expansions are in powers of the coupling constant, while the sphaleron and its interactions depend inversely on the coupling constant. Another process of interest is the production of magnetic monopoles in proton-proton or heavy ion scatterings such as at the Large Hadron Collider, a process that is being searched by the MoEDAL experiment [19; 20; 21]. Monopole creation by the scattering of large classical initial states has been considered in Ref. [22].
While the two particle to soliton-antisoliton process is of interest because of the way accelerators operate, one can envision situations where many particles may scatter and lead to the creation of solitons. This is the case for baryon number violation at high temperatures such as in cosmology. It may be possible that future particle machines may also involve \(N\) particle scattering where \(N\) can be large. Then the initial scattering state may be described classically and the final state with solitons may also be adequately described using classical physics. For example, we may be interested in the production of magnetic monopoles in the scattering of intense light. The problem in this setup is that the classical description of light is given by Maxwell equations that are linear and classically light does not interact with light. Colliding beams of intense light will simply pass through each other in the classical description. Only when we include quantum effects such as box diagrams does light interact with light [23; 24; 25]. Such quantum effects need only occur at intermediate stages in the scattering - the initial and final states can be described classically.
Guided by these motivations we have studied the creation of 1+1 dimensional kinks in the scattering of classical initial states, but ones that interact with the classical kink degrees of freedom only through a quantum "bridge". Then we have three fields: \(\phi\), the classical field that has kink configurations, \(\psi\), the classical field that defines the initial scattering state, and \(\rho\), the quantum field that bridges between \(\phi\) and \(\psi\), the two classical fields. We will set up the field theory model in more detail in Sec. II. In Sec. III we describe the kink solution and its energy along with the initial conditions for the model. We describe the numerical method in Sec. IV and present the parameters that we have used. In Sec. V we analyze a few typical cases in detail and display the parameter space suitable for kink production. Finally we discuss our results in Sec. VI.
## II Model
The Lagrangian for the model we study is,
\[L = \frac{1}{2}(\partial_{\mu}\phi)^{2}-\frac{\lambda}{4}(\phi^{2}- \eta^{2})^{2}+\frac{1}{2}(\partial_{\mu}\psi)^{2}-\frac{m_{\psi}^{2}}{2}\psi^{2} \tag{1}\] \[+ \frac{1}{2}(\partial_{\mu}\rho)^{2}-\frac{1}{2}\left(m_{\rho}^{ 2}+\alpha\phi^{2}+\beta\psi^{2}\right)\rho^{2}\]
and the equations of motion are
\[\Box\phi+\lambda\phi(\phi^{2}-\eta^{2})+\alpha\rho^{2}\phi = 0 \tag{2}\] \[\Box\psi+m_{\psi}^{2}\psi+\beta\rho^{2}\psi = 0\] (3) \[\Box\rho+\left(m_{\rho}^{2}+\alpha\phi^{2}+\beta\psi^{2}\right)\rho = 0. \tag{4}\]
Since \(\rho\) is a quantum operator that also appears in the \(\phi\) and \(\psi\) classical equations of motion, we use the semiclassical approximation to write
\[\square\phi+\lambda\phi(\phi^{2}-\eta^{2})+\alpha\langle\rho^{2} \rangle\phi = 0 \tag{5}\] \[\square\psi+m_{\psi}^{2}\psi+\beta\langle\rho^{2}\rangle\psi = 0 \tag{6}\]
where the expectation of \(\rho^{2}\) is taken in its initial quantum state. (We work in the Heisenberg representation in which operators evolve but the quantum states do not.) The equation for the quantum operator \(\rho\) can be solved since the equation is linear in \(\rho\). As discussed in Refs. [26; 27; 28], the solution is obtained using a "classical-quantum correspondence" (CQC) that we now summarize.
We start with the action for the field \(\rho(t,x)\),
\[\mathcal{S}_{\rho}=\int d^{2}x\Bigg{[}\frac{1}{2}(\partial_{\mu}\rho)^{2}- \frac{1}{2}(m_{\rho}^{2}+\alpha\phi^{2}+\beta\psi^{2})\rho^{2}\Bigg{]} \tag{7}\]
This action describes the massive quantum field \(\rho\) in the time-dependent background of \(\phi(t,x)\) and \(\psi(t,x)\). We now discretize the action in space. On a lattice of \(N\) sites with lattice spacing \(a\), for any field \(f\) we define
\[f(t,x)\to f(t,ja)=f_{j}(t) \tag{8}\]
\[\nabla^{2}f_{j}(t)=\frac{1}{a^{2}}(f_{j+1}(t)-2f_{j}(t)+f_{j-1}(t)). \tag{9}\]
where \(j=1,2,...,N\). The lattice under consideration is subject to periodic boundary conditions such that for any field \(f(t,x)\), \(f_{j+N}(t)=f_{j}(t)\). The discretized action (7) reads
\[\mathcal{S}_{\rho}=\int dt\frac{1}{a}\Bigg{[}\frac{1}{2}\dot{\mathbf{x}}^{T} \dot{\mathbf{x}}-\frac{1}{2}\mathbf{x}^{T}\mathbf{\Omega}^{2}\mathbf{x}\Bigg{]} \tag{10}\]
where \(\mathbf{x}=(a\rho_{1},...,a\rho_{N})^{T}\) and \(\mathbf{\Omega}^{2}\) is an \(N\times N\) matrix given by
\[\Omega_{jk}^{2}=\begin{cases}2/a^{2}+(m_{\rho}^{2}+\alpha\phi_{j}^{2}+\beta \psi_{j}^{2})&j=k\\ -1/a^{2}&j=k\pm 1(\text{mod}N)\\ 0&\text{otherwise}.\end{cases} \tag{11}\]
The mod \(N\) is due to the periodic boundary conditions of the lattice. The Hamiltonian of the system described by the action \(\mathcal{S}_{\rho}\) is
\[H_{\rho}=\frac{a}{2}\mathbf{p}^{T}\mathbf{p}+\frac{1}{2a}\mathbf{x}^{T} \mathbf{\Omega}^{2}\mathbf{x} \tag{12}\]
where \(\mathbf{p}=\dot{\mathbf{x}}/a\). This expression is precisely the Hamiltonian of \(N\) coupled harmonic oscillators with time-dependent spring constant matrix.
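To make the construction concrete, a minimal NumPy sketch (illustrative only, not the authors' code) of the spring-constant matrix \(\mathbf{\Omega}^{2}\) of Eq. (11) for given lattice profiles \(\phi_{j}\) and \(\psi_{j}\) could look as follows; the parameter values are those quoted later in Eq. (40).

```python
import numpy as np

def omega_squared(phi, psi, a, m_rho=1.0, alpha=0.5, beta=0.5):
    """Spring-constant matrix Omega^2 of Eq. (11) on a periodic lattice."""
    N = len(phi)
    W2 = np.zeros((N, N))
    np.fill_diagonal(W2, 2.0 / a**2 + m_rho**2 + alpha * phi**2 + beta * psi**2)
    idx = np.arange(N)
    W2[idx, (idx + 1) % N] = -1.0 / a**2   # j = k + 1 (mod N)
    W2[idx, (idx - 1) % N] = -1.0 / a**2   # j = k - 1 (mod N)
    return W2
```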
The Hamiltonian in (12) can be mapped to a classical system. The technique is to use the Bogoliubov transformations to map the \(N\) coupled quantum harmonic oscillator problem to an \(N^{2}\) classical harmonic oscillator problem whose variables are written as an \(N\times N\) matrix \(\mathbf{Z}(t)=[Z_{jk}(t)]\) and the corresponding momentum matrix \(\mathbf{P}=[P_{jk}(t)]=\dot{\mathbf{Z}}/a\)[28]. The mapping is given by
\[\mathbf{x}=\mathbf{Z}^{*}\mathbf{a}_{0}+\mathbf{Z}\mathbf{a}_{0} ^{\dagger T} \tag{13}\] \[\mathbf{p}=\mathbf{P}^{*}\mathbf{a}_{0}+\mathbf{P}\mathbf{a}_{0} ^{\dagger T} \tag{14}\]
where \(\mathbf{a}=(a_{1},...,a_{N})^{T}\) and \(\mathbf{a}^{\dagger}=(a_{1}^{\dagger},...,a_{N}^{\dagger})\) are the ladder operators for each of the \(N\) harmonic oscillators and the subscript "0" represents the operators at the initial time \(t_{0}\). The quantum field \(\rho(x,t)\) can now be represented in terms of corresponding expressions of \(\mathbf{Z}\) and \(\mathbf{P}\) (or equivalently \(\dot{\mathbf{Z}}\)) using (13) and (14).
The resulting classical system of \(\mathbf{Z}(t)\) has the following action
\[\mathcal{S}_{c}=\int dt\frac{1}{2a}\text{Tr}\ [\dot{\mathbf{Z}}^{\dagger}\dot{ \mathbf{Z}}-\mathbf{Z}^{\dagger}\mathbf{\Omega}^{2}\mathbf{Z}] \tag{15}\]
and the equations of motion are
\[\ddot{\mathbf{Z}}+\mathbf{\Omega}^{2}\mathbf{Z}=0. \tag{16}\]
which are to be solved with the initial conditions,
\[\mathbf{Z}_{0}=-i\sqrt{\frac{a}{2}}\sqrt{\mathbf{\Omega}}^{-1}\quad\text{and} \quad\dot{\mathbf{Z}}_{0}=\sqrt{\frac{a}{2}}\sqrt{\mathbf{\Omega}}. \tag{17}\]
Since the CQC provides an exact correspondence of the quantum problem into its classical counterpart, from now on we only need equation (16) and initial conditions (17) to fully understand the time evolution of the quantum field. The quantum evolution of \(\rho\) is then obtained from (13) and (14).
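As an illustration, the initial conditions (17) might be set up as below, with the matrix square root \(\sqrt{\mathbf{\Omega}}=(\mathbf{\Omega}^{2})^{1/4}\) obtained from the eigendecomposition of the symmetric matrix \(\mathbf{\Omega}^{2}\); this is a sketch of one possible implementation, not the authors' code.

```python
import numpy as np

def cqc_initial_conditions(W2, a):
    """Eq. (17): Z_0 = -i sqrt(a/2) (sqrt(Omega))^{-1}, Zdot_0 = sqrt(a/2) sqrt(Omega),
    where W2 = Omega^2 is evaluated on the initial phi and psi."""
    evals, evecs = np.linalg.eigh(W2)      # W2 is real and symmetric
    quarter = evals**0.25                  # eigenvalues of (Omega^2)^{1/4} = sqrt(Omega)
    sqrt_Omega = evecs @ np.diag(quarter) @ evecs.T
    inv_sqrt_Omega = evecs @ np.diag(1.0 / quarter) @ evecs.T
    Z0 = -1j * np.sqrt(a / 2.0) * inv_sqrt_Omega
    Zdot0 = np.sqrt(a / 2.0) * sqrt_Omega.astype(complex)
    return Z0, Zdot0
```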
The vacuum expectation value of \(\rho^{2}\) at the spatial point labelled by \(i\) can be written in terms of \(\mathbf{Z}\) as,
\[\langle\rho_{i}^{2}\rangle=\frac{1}{a^{2}}\sum_{j=1}^{N}Z_{ij}^{*}Z_{ij}. \tag{18}\]
using (13). Therefore, from (5) and (6), the discretized equations we would like to solve for \(\phi\) and \(\psi\) are
\[\ddot{\phi}_{i}-\nabla^{2}\phi_{i}+\lambda\phi_{i}(\phi_{i}^{2}- \eta^{2})+\frac{\alpha}{a^{2}}\sum_{j=1}^{N}Z_{ij}^{*}Z_{ij}\phi_{i}=0 \tag{19}\] \[\ddot{\psi}_{i}-\nabla^{2}\psi_{i}+m_{\psi}^{2}\psi_{i}+\frac{ \beta}{a^{2}}\sum_{j=1}^{N}Z_{ij}^{*}Z_{ij}\psi_{i}=0 \tag{20}\]
where we use second order spatial differences as in (9) to calculate the Laplacians. The equation for \(Z_{ij}\) is
\[\ddot{Z}_{ij}+\Omega_{ik}^{2}Z_{kj}=0. \tag{21}\]
The system of equations (19), (20) and (21) needs to be solved with suitable boundary conditions that we will discuss below. Before proceeding to the solution, however, the issue of renormalization needs to be addressed.
The parameters appearing in the above equations of motion are bare parameters that will get dressed by quantum effects. This can also be seen by realizing that the quantity \(\langle\rho_{i}^{2}\rangle\) in (18) diverges as \(\log(N)\) as \(N\rightarrow\infty\). The divergence can be absorbed in the mass parameters \(m_{\phi}\) and \(m_{\psi}\). Equivalently, we can subtract out the fluctuations in the trivial vacuum,
\[\langle\rho_{i}^{2}\rangle\rightarrow\langle\rho_{i}^{2}\rangle-\langle\rho_{ i}^{2}\rangle_{0} \tag{22}\]
where,
\[\langle\rho_{i}^{2}\rangle_{0}\equiv\frac{1}{a^{2}}\sum_{j=1}^{N}Z_{ij}^{*}Z_{ ij}\bigg{|}_{0}. \tag{23}\]
The "0" subscript refers to the trivial vacuum with \(\phi=\eta\) and \(\psi=0\).
The energy in the quantum field \(\rho\) can now be written as
\[E_{\rho}=\frac{1}{2a}\text{Tr}\ [\mathbf{\dot{Z}^{\dagger}\dot{Z}}+\mathbf{Z^{ \dagger}\Omega^{2}Z}]. \tag{24}\]
and the discrete energy density,
\[\epsilon_{\rho,i}=\frac{1}{a^{2}}\sum_{j=1}^{N}\biggl{\{}\frac{1}{2}|\dot{Z}_{ij}|^{2}+\frac{1}{4a^{2}}\Bigl{[}|Z_{i+1j}-Z_{ij}|^{2}+|Z_{ij}-Z_{i-1j}|^{2}\Bigr{]}+\frac{1}{2}\left[m_{\rho}^{2}+\alpha\phi_{i}^{2}+\beta\psi_{i}^{2}\right]|Z_{ij}|^{2}\biggr{\}}. \tag{25}\]
Owing to the last term, this expression also suffers from the divergence mentioned above. We use the same renormalizing scheme to remove the lattice dependence and to obtain a finite expression even as \(N\rightarrow\infty\),
\[\epsilon_{\rho,i}^{R}=\epsilon_{\rho,i}-\frac{1}{2}\left[m_{\rho}^{2}+\alpha \phi_{i}^{2}+\beta\psi_{i}^{2}\right]\left\langle\hat{\rho}_{i}^{2}\right\rangle _{0}-\epsilon_{\rho,i}|_{{}_{0}} \tag{26}\]
where the last term is added for the purpose of subtracting out the zero-point energy. The total energy of \(\rho\) is similarly defined,
\[E_{\rho}^{R}=E_{\rho}-\frac{1}{2}\sum_{i=1}^{N}\left[m_{\rho}^{2}+\alpha\phi_ {i}^{2}+\beta\psi_{i}^{2}\right]\left\langle\hat{\rho}_{i}^{2}\right\rangle_{ 0}-E_{\rho}|_{{}_{0}}. \tag{27}\]
By adding the energy of the fields \(\phi\) and \(\psi\) the total conserved energy of the system is,
\[E=E_{\phi+\psi}+E_{\rho}^{R} \tag{28}\]
where \(E_{\phi+\psi}\) is defined as
\[E_{\phi+\psi}=\sum_{i}\biggl{\{}\frac{1}{2}\left[\dot{\phi}_{i} ^{2}+\phi_{i}^{\prime 2}+\dot{\psi}_{i}^{2}+\psi_{i}^{\prime 2}\right]\] \[+\frac{1}{2}m_{\psi}^{2}\psi_{i}^{2}+\frac{\lambda}{4}\left(\phi_ {i}^{2}-\eta^{2}\right)^{2}\biggr{\}}. \tag{29}\]
Spatial first derivatives are calculated using central differencing,
\[f_{i}^{\prime}=\frac{f_{i+1}-f_{i-1}}{2a}. \tag{30}\]
To summarize this section, the final equations we wish to solve are,
\[\ddot{\phi}_{i}-\nabla^{2}\phi_{i}+\lambda\phi_{i}(\phi_{i}^{2}- \eta^{2})\] \[+\frac{\alpha}{a^{2}}\sum_{j=1}^{N}\left(Z_{ij}^{*}Z_{ij}-Z_{ij}^ {*}Z_{ij}\bigg{|}_{0}\right)\phi_{i}=0 \tag{31}\] \[\ddot{\psi}_{i}-\nabla^{2}\psi_{i}+m_{\psi}^{2}\psi_{i}\] \[+\frac{\beta}{a^{2}}\sum_{j=1}^{N}\left(Z_{ij}^{*}Z_{ij}-Z_{ij}^ {*}Z_{ij}\bigg{|}_{0}\right)\psi_{i}=0 \tag{32}\]
and also Eq. (21) for \(\mathbf{Z}\).
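For orientation, the right-hand sides of Eqs. (31), (32) and (21) can be assembled as in the sketch below (NumPy, reusing the `omega_squared` helper from the earlier sketch; `rho2_vac` stands for the vacuum value of Eq. (23), computed once in the trivial background \(\phi=\eta\), \(\psi=0\)). The function names are illustrative, not taken from the authors' code.

```python
import numpy as np

def lap(f, a):
    """Periodic lattice Laplacian, Eq. (9)."""
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / a**2

def accelerations(phi, psi, Z, rho2_vac, a, lam=1.0, eta=1.0,
                  m_psi=1.0, m_rho=1.0, alpha=0.5, beta=0.5):
    """Accelerations of phi, psi (Eqs. (31), (32)) and Z (Eq. (21))."""
    rho2 = np.sum(np.abs(Z)**2, axis=1) / a**2 - rho2_vac   # renormalized <rho_i^2>
    phi_dd = lap(phi, a) - lam * phi * (phi**2 - eta**2) - alpha * rho2 * phi
    psi_dd = lap(psi, a) - m_psi**2 * psi - beta * rho2 * psi
    Z_dd = -omega_squared(phi, psi, a, m_rho, alpha, beta) @ Z
    return phi_dd, psi_dd, Z_dd
```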
## III Initial conditions
We are interested in the creation of \(Z_{2}\) kinks of \(\phi\) due to collisions of classical wavepackets of \(\psi\). The kink configurations are solutions of the model,
\[L_{\phi}=\frac{1}{2}(\partial_{\mu}\phi)^{2}-\frac{\lambda}{4}(\phi^{2}-\eta^ {2})^{2} \tag{33}\]
and boosted kinks are given by the solutions,
\[\phi_{K}(t,x)=\pm\eta\tanh\left(\sqrt{\frac{\lambda}{2}}\ \eta\gamma(x-vt)\right) \tag{34}\]
where the Lorentz boost factor \(\gamma=1/\sqrt{1-v^{2}}\), the \(+\) sign denotes a kink, and a \(-\) sign an antikink. The energy of the kink (or antikink) is given by
\[E_{K}=\gamma\ \frac{2\sqrt{2}}{3}\sqrt{\lambda}\eta^{3}. \tag{35}\]
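As a quick cross-check of Eq. (35), the static (\(v=0\)) kink energy can be integrated numerically from the profile (34); for \(\lambda=\eta=1\) both numbers printed below should be close to \(2\sqrt{2}/3\approx 0.943\). This is a small illustrative script, not part of the simulation code.

```python
import numpy as np

lam, eta = 1.0, 1.0
x = np.linspace(-20.0, 20.0, 4001)
phi_K = eta * np.tanh(np.sqrt(lam / 2.0) * eta * x)          # Eq. (34) at v = 0
dphi = np.gradient(phi_K, x)
energy_density = 0.5 * dphi**2 + 0.25 * lam * (phi_K**2 - eta**2)**2
print(np.sum(energy_density) * (x[1] - x[0]), 2.0 * np.sqrt(2.0) / 3.0)
```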
In the classical scattering of \(\psi\), kinks cannot be created without the participation of the quantum field \(\rho\) because \(\phi\) and \(\psi\) have no direct coupling. Initially there are no kinks and we take \(\phi\) to be in its vacuum state,
\[\phi(t=0,x)=\eta,\ \ \dot{\phi}(t=0,x)=0. \tag{36}\]
Figure 1: Initial field configurations for \(\phi\) (blue) and \(\psi\) (orange) and initial renormalized energy density of \(\rho\) (dashed purple).
Our choice for the initial conditions for \(\psi\) contains two Gaussian wavepackets that move towards each other with velocity \(v\). Then,
\[\psi(t=0,x)=F(\gamma(x+x_{0}))+F(\gamma(x-x_{0})) \tag{37}\]
where \(\gamma=1/\sqrt{1-v^{2}}\), \(2x_{0}\) is the initial (\(t=0\)) wavepacket separation, and
\[F(x)=Ae^{-kx^{2}}. \tag{38}\]
We also have,
\[\dot{\psi}(t=0,x)=\gamma v\left[F^{\prime}(\gamma(x+x_{0}))-F^{\prime}(\gamma( x-x_{0}))\right] \tag{39}\]
where primes denote derivatives with respect to the argument.
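A short sketch of how the initial data (37)-(39) might be built on the lattice is given below; the particular values of \(A\), \(k\), \(v\) and the lattice parameters in the usage example are purely illustrative.

```python
import numpy as np

def psi_initial(x, A, k, v, x0):
    """Two boosted Gaussian wavepackets, Eqs. (37)-(39)."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    F = lambda X: A * np.exp(-k * X**2)          # Eq. (38)
    Fp = lambda X: -2.0 * k * X * F(X)           # derivative of F
    psi = F(gamma * (x + x0)) + F(gamma * (x - x0))
    psi_dot = gamma * v * (Fp(gamma * (x + x0)) - Fp(gamma * (x - x0)))
    return psi, psi_dot

a, N = 0.4, 1000
x = (np.arange(N) - N // 2) * a
psi0, psi0_dot = psi_initial(x, A=11.0, k=0.1, v=0.3, x0=15.0)
```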
The quantum field \(\rho\) is initially assumed to be in its ground state in the background of \(\phi(t=0,x)\) and \(\psi(t=0,x)\). In terms of **Z** this is given by Eq. (17) where the matrix \(\mathbf{\Omega}_{0}^{2}\) is evaluated from (11) using the initial values of \(\phi\) and \(\psi\).
In Fig. 1 we show the \(\phi\) and \(\psi\) fields and the renormalized energy density in \(\rho\) at the initial time.
## IV Numerical method
There are a large number of parameters that we need to fix before we can solve the equations. We choose
\[\lambda=1,\ \ \eta=1,\ \ \alpha=0.5,\ \ m_{\rho}=1,\ \ m_{\psi}=1,\ \ \beta=0.5 \tag{40}\]
The equations of motion are evolved using the position Verlet method with lattice spacing \(a=0.4\) and time step \(dt=a/50\) on a periodic lattice with \(N=1000\). The system is evolved for less than a light-crossing time to prevent interference from excitations that propagate all the way across the lattice.
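The drift-kick-drift form of the position Verlet update can be sketched as follows, building on the `accelerations` helper sketched above (again an illustration, not the production code).

```python
def verlet_step(fields, vels, rho2_vac, a, dt):
    """One position-Verlet (drift-kick-drift) step for the coupled (phi, psi, Z) system."""
    phi, psi, Z = fields
    phi_v, psi_v, Z_v = vels
    # half drift
    phi, psi, Z = phi + 0.5 * dt * phi_v, psi + 0.5 * dt * psi_v, Z + 0.5 * dt * Z_v
    # full kick using the accelerations sketched above
    phi_dd, psi_dd, Z_dd = accelerations(phi, psi, Z, rho2_vac, a)
    phi_v, psi_v, Z_v = phi_v + dt * phi_dd, psi_v + dt * psi_dd, Z_v + dt * Z_dd
    # half drift
    phi, psi, Z = phi + 0.5 * dt * phi_v, psi + 0.5 * dt * psi_v, Z + 0.5 * dt * Z_v
    return (phi, psi, Z), (phi_v, psi_v, Z_v)
```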
There are also several parameters associated with the initial conditions: \(x_{0}\), \(A\), \(v\) and \(k\). The initial separation of the Gaussian wavepackets is fixed to be \(2x_{0}=30\). This is large enough that the overlap of the Gaussian wavepackets is minimal for all runs. We scan over \(A\), \(v\) and \(k\) in the following intervals,
\[A\in[7,16],\ \ v\in[0.1,0.8],\ \ k=0.03,0.1,0.3. \tag{41}\]
There is some ambiguity in deciding if the scattering has led to kink-antikink production. The simplest definition is to identify a zero of \(\phi\) as a kink or an antikink (depending on the gradient of \(\phi\) at the location of the zero). However, two zeros representing a kink and an antikink may be very close to each other and they may eventually annihilate. The refined criterion that we adopt is to require that the distance between zeros be larger than four times the kink width and that it increase with time.
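The zero-counting part of this criterion can be sketched as below; the additional requirement that the separations grow with time needs the zeros to be tracked across snapshots and is not shown. The function name and details are illustrative.

```python
import numpy as np

def count_kinks(phi, x, lam=1.0, eta=1.0, min_widths=4.0):
    """Count zeros of phi that are at least min_widths kink widths from every other zero."""
    width = 1.0 / (np.sqrt(lam / 2.0) * eta)                 # kink width scale from Eq. (34)
    idx = np.where(phi[:-1] * phi[1:] < 0.0)[0]              # sign changes of phi
    zeros = 0.5 * (x[idx] + x[idx + 1])                      # approximate zero positions
    kept = [z for i, z in enumerate(zeros)
            if all(abs(z - zc) > min_widths * width
                   for j, zc in enumerate(zeros) if j != i)]
    return len(kept)
```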
## V Results
In this section, we present our simulation results based on the methods mentioned above. Initially, we analyze a few distinct cases as examples for kink creation. Subsequently, we study the regions within parameter space that satisfy the conditions necessary for kink formation.
In Fig. 2 we illustrate a clear case of kink production. The three snapshots of the evolution for \(v=0.3\) and \(A=11.0\) show the collision of the \(\psi\) wavepackets and the creation of a kink-antikink pair that separates with velocities \(\pm 0.68\). Initially there is some energy in \(\rho\) that propagates together with the incoming wavepackets. After the collision, if kinks are created, they too carry some \(\rho\) energy along with them. In addition, we observe that there is energy in \(\rho\) not directly related to the interactions with the initial Gaussian wavepackets or the final kinks. This energy is in the form of quantum radiation and can be seen in Fig. 3a. The evolution of the total energy in the various fields is shown in Fig. 3b.
From Fig. 3b we see that the initial energy is \(\sim 500\) in units of \(\sqrt{\lambda}\) whereas the energy of a kink is \(\sim 1\) from (35) and that of a kink-antikink pair is \(\sim 2\). The collision has therefore converted less than a percent of the initial energy into solitons; the rest is in radiative modes.
With somewhat different parameters, the evolution can be quite different, with the production of several kink-antikink pairs. An example is shown in Fig. 4 for the parameters \(v=0.25\) and \(A=13.5\). Now the evolution leads to a lot more fluctuations of \(\phi\) and there are many zeros of \(\phi\) at the final time. With further evolution we expect some of the zeros to annihilate but by our criteria, described in Sec. IV, this final state contains five kink-antikink pairs. Now the energy density in \(\rho\) is more spread out as in Fig. 5a and the total energies in the fields show an interesting crossover in Fig. 5b where most of the energy ends up in the quantum field \(\rho\).
In this case, we start out with a higher initial energy \(\sim 750\) but we end up with five kink-antikink pairs with energy \(\sim 10\), which is a higher fraction of the initial energy than in the case of Fig. 2. However, it is not clear how many of the five kink-antikink pairs will survive at very late times. The complexity is shown in Fig. 6 where we plot the zeros of \(\phi\) as a function of time. In the case of Fig. 2, the zeros are shown in Fig. 6a and there is only one kink-antikink pair, separating with velocities \(\sim\pm 0.68\). In the case corresponding to Fig. 4, the plot of zeros of \(\phi\) is shown in Fig. 6b. The outermost zeros are moving apart with velocities \(\sim\pm 0.78\) but the inner ones are slower and some annihilations are very likely.
In order to find initial conditions that are favorable for the production of kinks we have evolved the system for the range of initial conditions given in (41) and checked which initial conditions lead to kink production. Our results are shown in Fig. 7 and indicate favorable conditions for kink production for large \(A\) and small \(v\) (at least in the \(k=0.1,0.3\) cases). However, the results suggest a
fractal structure and there are lots of holes in the parameter space where otherwise one may expect kink production. There are also isolated special places in parameter space where a large number of kinks are produced.
From Fig. 7 it is clear that choosing wider Gaussian wavepackets (smaller \(k\)) is more favorable to kink production. A first thought is that smaller \(k\) might imply higher initial energy which would explain the greater rate of kink production but that is not necessarily the case since the initial energy in \(\psi\) can be calculated explicitly in the limit of large \(x_{0}\) by using (37) and (39) in (29) (see Appendix A),
\[E_{\psi}(t=0)=\sqrt{\frac{\pi}{2}}\,\gamma A^{2}\bigg{[}(1+v^{2})\sqrt{k}+ \frac{m_{\psi}^{2}}{\sqrt{k}}(1-v^{2})\bigg{]} \tag{42}\]
For fixed \(v\), \(E_{\psi}(t=0)\) is minimum when
\[k=k_{*}=m_{\psi}^{2}\left(\frac{1-v^{2}}{1+v^{2}}\right) \tag{43}\]
and the energy does not monotonically increase with decreasing \(k\). While it is true that for our choice of values
Figure 3: (a) Evolution of the energy density \(\epsilon_{\rho,i}\) for \(v=0.3\) and \(A=11.0\). At the initial time, the quantum fluctuations are affected by the wavepackets of \(\psi\) and there is non-vanishing energy density of \(\rho\) within the wavepackets. Once kinks are created (\(t\gtrsim 50\)), energy in the quantum fluctuations of \(\rho\) is carried by the kink-antikink pair. In addition, \(\rho\) particles are radiated. (b) Total energies of the individual fields over time. For \(\phi\) and \(\psi\) only their kinetic, gradient and potential terms are included (see (29)). The suitably renormalized interaction energy is included in \(\rho\). The final energy in the kink field, \(\phi\), is \(E_{\phi,\text{final}}\sim 21.02\), which is about 10 times the energy in the kink-antikink pair.
Figure 2: Three snapshots of the time evolution of the fields \(\phi\) and \(\psi\) with initial parameters \(v=0.3\) and \(A=11.0\). This is a clear case where a kink-antikink pair is produced.
of \(k\) and \(v\) in (41) the initial energy is higher for smaller values of \(k\), the energies for \(k=0.1\) and \(k=0.3\) at \(v=0.8\) are very close, to within \(6\%\), yet there is much more kink production with \(k=0.03\) than with \(k=0.1\). This suggests that a more spread out wavepacket in the initial conditions is favorable for kink production.
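For reference, Eqs. (42) and (43) are simple to evaluate; the sketch below (with \(m_{\psi}=1\) as in Eq. (40); function names are illustrative) reproduces the near-equality of the \(k=0.1\) and \(k=0.3\) energies at \(v=0.8\) quoted above.

```python
import numpy as np

def initial_energy(A, k, v, m_psi=1.0):
    """E_psi(t=0) of Eq. (42)."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return np.sqrt(np.pi / 2.0) * gamma * A**2 * (
        (1.0 + v**2) * np.sqrt(k) + m_psi**2 * (1.0 - v**2) / np.sqrt(k))

def k_star(v, m_psi=1.0):
    """Width parameter minimizing E_psi(t=0) at fixed v, Eq. (43)."""
    return m_psi**2 * (1.0 - v**2) / (1.0 + v**2)

print(initial_energy(11.0, 0.1, 0.8), initial_energy(11.0, 0.3, 0.8), k_star(0.8))
```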
To explore the effect of changing Gaussian width and wavepacket velocity, we have performed several runs in which the total initial energy in \(\psi\) is fixed. We implement this by choosing
\[A^{2}=\frac{E_{i}}{\sqrt{\frac{\pi}{2}}\,\gamma\bigg{[}(1+v^{2})\sqrt{k}+ \frac{m_{\psi}^{2}}{\sqrt{k}}(1-v^{2})\bigg{]}} \tag{44}\]
for some choice of initial energy \(E_{i}\). We scan over parameters \(k\) and \(v\), adjusting \(A^{2}\) according to (44) so that the initial energy stays fixed (up to very tiny corrections due to the quantum fluctuations of \(\rho\) and exponentially small corrections due to the overlap of the two wavepackets). The results are shown in Fig. 8 for fixed initial energy of \(250\), \(400\) and \(550\). The first feature that stands out is that
Figure 4: Three snapshots of the time evolution of the fields \(\phi\) and \(\psi\) with initial parameters \(v=0.25\) and \(A=13.5\). We observe a somewhat chaotic behaviour in which multiple kink-antikink pairs are created.
Figure 5: (a) Space-time plot of the energy density \(\epsilon_{\rho,i}\). Imprints of the initial \(\psi\) wave packets and of the resulting kink-antikinks in \(\phi\) are observed. (b) Energies of the individual fields over time. For \(\phi\) and \(\psi\) only the kinetic, gradient and potential terms are included. The interaction energy is included in \(\rho\) with appropriate renormalization. \(E_{\phi,\mathrm{final}}\sim 101.8\,E_{K}|_{\gamma=1}\). The parameters are \(v=0.25\) and \(A=13.5\).
for fixed \(E_{\psi,0}=250\) there is only a very small area that yields kink production, as seen in Fig. 8a, which suggests an energy threshold for kink production for the model. Figs. 8b and 8c exhibit similar band patterns, although the location and size of these bands are slightly different; for example, the gap between the two bright bands is larger for \(E_{\psi,0}=400\). The plots show the general trend that higher energy, wider wavepackets, and slower scattering velocities create favorable conditions for kink production.
## VI Conclusions
We have studied the creation of classical kinks by scattering classical wavepackets but where the wavepacket and kink interactions are mediated by a quantum field. This setup was motivated by the case of monopole production in light on light scattering, since classical light on light scattering is trivial and only becomes non-trivial when quantum effects, such as box diagrams, are included. However, there are differences between our toy model and the physical case of magnetic monopole production. In the latter, light on light scattering would produce heavy gauge bosons due to quantum interactions
Figure 6: The space-time graphs of zeros of \(\phi\) for cases (a) \(v=0.3\), \(A=11.0\) and (b) \(v=0.25\), \(A=13.5\). Only zeros that are well separated (four kink widths) and moving away from their neighbors with time are counted as kink-antikinks. These plots also display the kink-antikink pairs that are created but annihilate during the simulation.
Figure 7: Kink-antikink pair production in the amplitude (\(A\)) and velocity (\(v\)) plane as a contour plot for (a) \(k=0.03\), (b) \(k=0.1\), and (c) \(k=0.3\). The color bar shows the total number of kink-antikinks produced – a kink-antikink pair counts as 2 on the color bar. The gaps are genuine and show chaotic behavior – simply increasing the amplitude, for example, does not guarantee kink production. Since the width of the initial wavepackets decreases as \(k\) increases, the plots show that kink production is favored for larger widths.
and the heavy gauge bosons themselves would form the magnetic monopoles. This is unlike in our toy model where we have chosen a classical field, distinct from the quantum fields, that composes the kinks. Our choice was necessary because kinks are conveniently described as classical configurations, not as a conglomerate of quantum particles.
We have scanned a set of parametrized initial conditions for successful kink production. Certain trends are clear within our analysis. The initial conditions that led to kink production in our simulations all have total energy that is \(10^{2}-10^{3}\) times the energy in a kink-antikink pair. However, the energy per quanta need not be large and is of order \(m_{\psi}\) as the velocities are only mildly relativistic. In fact, we found that it is somewhat favorable to choose moderate velocities, \(v\sim 0.5\), but to have large values of the amplitude \(A\) corresponding to a large number of quanta in the initial state, \(N\sim E/m_{\psi}\sim 10^{2}-10^{3}\). There is no systematic trend, however, and there are "holes" in our scan of parameter space as seen in Fig. 7. This suggests that there may be resonances at work - if certain frequencies match, kink production is more favorable. It would be of interest to find initial conditions with less energy and that convert into kink-antikink pairs more efficiently.
We have investigated the effect of the width of the initial wavepackets and the scattering velocity on the kink production with fixed initial energies. It is clear from Fig. 8 that there is a lower energy threshold, and that wider wavepackets with lower velocities provide better conditions for kink production. We also observe a band structure where certain widths seem more favorable than others. It is also worth noting that production is significantly lower for smaller widths (higher \(k\)) in all the cases.
Our analysis is of an exploratory nature as magnetic monopole production in the real world is much more complicated. Yet our analysis suggests that scattering at high luminosities is much more desirable than scattering at high energies if the goal is to produce magnetic monopoles.
###### Acknowledgements.
TV is grateful to the University of Geneva for hospitality. This work was supported by the U.S. Department of Energy, Office of High Energy Physics, under Award No. DE-SC0019470.
## Appendix A Energy of the initial wavepackets
We ignore the quantum corrections to the energy of the initial wavepackets as these are small (see Figs. 3b and 5b) and consider widely separated wavepackets, in which case the initial energy is just twice that of a single wavepacket,
\[E_{\psi,0}=2\int dx\ \left[\frac{1}{2}\ (\dot{\psi}^{2}+\psi^{{}^{\prime}2})+ \frac{m_{\psi}^{2}}{2}\psi^{2}\right] \tag{10}\]
with
\[\psi=Ae^{-kX^{2}},\ \ \dot{\psi}=\gamma vA(-2kX)e^{-kX^{2}} \tag{11}\]
where \(X=\gamma(x+x_{0})\). This leaves us with Gaussian integrals and we get,
\[E_{\psi}(t=0)=\sqrt{\frac{\pi}{2}}\,\gamma A^{2}\bigg{[}(1+v^{2})\sqrt{k}+ \frac{m_{\psi}^{2}}{\sqrt{k}}(1-v^{2})\bigg{]} \tag{12}\]
This expression can also be written as,
\[E_{\psi}(t=0)=\gamma E_{\psi,v=0}+\sqrt{\frac{\pi}{2}}\,\gamma v^{2}A^{2} \bigg{[}\sqrt{k}-\frac{m_{\psi}^{2}}{\sqrt{k}}\bigg{]} \tag{13}\]
Note that \(E_{\psi}(t=0)\) is not \(\gamma E_{\psi,v=0}\) as we might expect from special relativistic boosts. This is because the
Figure 8: Kink-antikink pair production at fixed initial energies (a) \(E_{\psi,0}=250\), (b) \(E_{\psi,0}=400\), (c) \(E_{\psi,0}=550\), showing enhanced production for small \(k\) (wider wavepackets) and low scattering velocities. In panel (a) there is only one case of kink production, at \(k=0.14\) and \(v=0.78\).
Gaussian wavepacket is not a solution of the static equations of motion. Only static solutions of the equations of motion obey the special relativistic transformation when boosted.
For \(k<m_{\psi}^{2}\), the term in square brackets in (13) can be negative and it may happen that the initial energy _decreases_ with increasing \(v\).
|
2302.05587
|
Hierarchical Optimization-Derived Learning
|
In recent years, by utilizing optimization techniques to formulate the
propagation of deep model, a variety of so-called Optimization-Derived Learning
(ODL) approaches have been proposed to address diverse learning and vision
tasks. Although having achieved relatively satisfying practical performance,
there still exist fundamental issues in existing ODL methods. In particular,
current ODL methods tend to consider model construction and learning as two
separate phases, and thus fail to formulate their underlying coupling and
depending relationship. In this work, we first establish a new framework, named
Hierarchical ODL (HODL), to simultaneously investigate the intrinsic behaviors
of optimization-derived model construction and its corresponding learning
process. Then we rigorously prove the joint convergence of these two sub-tasks,
from the perspectives of both approximation quality and stationary analysis. To
our best knowledge, this is the first theoretical guarantee for these two
coupled ODL components: optimization and learning. We further demonstrate the
flexibility of our framework by applying HODL to challenging learning tasks,
which have not been properly addressed by existing ODL methods. Finally, we
conduct extensive experiments on both synthetic data and real applications in
vision and other learning tasks to verify the theoretical properties and
practical performance of HODL in various application scenarios.
|
Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
|
2023-02-11T03:35:13Z
|
http://arxiv.org/abs/2302.05587v2
|
# Hierarchical Optimization-Derived Learning
###### Abstract
In recent years, by utilizing optimization techniques to formulate the propagation of deep model, a variety of so-called Optimization-Derived Learning (ODL) approaches have been proposed to address diverse learning and vision tasks. Although having achieved relatively satisfying practical performance, there still exist fundamental issues in existing ODL methods. In particular, current ODL methods tend to consider model construction and learning as two separate phases, and thus fail to formulate their underlying coupling and depending relationship. In this work, we first establish a new framework, named Hierarchical ODL (HODL), to simultaneously investigate the intrinsic behaviors of optimization-derived model construction and its corresponding learning process. Then we rigorously prove the joint convergence of these two sub-tasks, from the perspectives of both approximation quality and stationary analysis. To our best knowledge, this is the first theoretical guarantee for these two coupled ODL components: optimization and learning. We further demonstrate the flexibility of our framework by applying HODL to challenging learning tasks, which have not been properly addressed by existing ODL methods. Finally, we conduct extensive experiments on both synthetic data and real applications in vision and other learning tasks to verify the theoretical properties and practical performance of HODL in various application scenarios.
Optimization-derived learning, meta optimization, hierarchical convergence analysis, constrained and regularized learning applications, bilevel optimization.
## 1 Introduction
Optimization-Derived Learning (ODL) is a class of methods for constructing deep models based on optimization techniques [1, 2] and has been widely used in different vision tasks in the past years [3, 4, 5, 6, 7]. Specifically, each optimization iteration is regarded as a layer of the network. All of these layers are concatenated to form a deep model. Passing through the network is equivalent to performing a finite number of optimization iterations. In addition, the optimization algorithm parameters (e.g., model parameters and regularization coefficients) are transferred to the learning variables in the network. In this way, the trained network can be naturally interpreted as a parameterized optimization model, effectively overcoming the lack of interpretability in most traditional neural networks and leading to excellent performance as well.
Hence, a core problem of ODL is how to design the network structure based on the optimization model. In other words, ODL focuses on how to embed learnable modules into the optimization model. Depending on the way in which the learnable module is handled, existing approaches can be broadly classified into two main categories, respectively called ODL based on Unrolling with Numerical Hyperparameters (UNH), which aims to embed learnable modules under reliable theories and focuses on the convergence guarantee of the algorithm [8, 9, 10], and ODL Embedded with Network Architectures (ENA), which heuristically embeds learnable modules into the optimization algorithm, focusing more on the performance of practical tasks [11, 12, 13, 14, 15, 16]. Unfortunately, UNH usually treats optimization and networks as two separate modules, and ENA ignores the optimization process after designing the network structure. As a new framework, our Hierarchical ODL (HODL) also treats optimization and networks as two modules. However, unlike existing approaches, HODL establishes the nested relationship between optimization and networks via a hierarchical structure, and specifies the influence of learnable networks on the optimization process.
### _Related Works_
As introduced in the last paragraph, existing ODL approaches are classified into UNH and ENA. Earlier UNH methods usually set the learnable modules to be hyper-parameters of the optimization algorithm that do not affect the convergence, such as the step size [8]. These methods avoid damaging the convergence results, but the number of learnable parameters is limited, making them hard to apply flexibly to various practical tasks. In recent years, some UNH methods have taken the novel perspective of embedding learnable modules that replace the descent direction of the optimization [9, 10]. In particular, they use the learnable module to provide the actual descent directions and set a convergence criterion to adjust it. For these methods, if the learnable modules are decoupled from the optimization model, the system flexibility is greatly enhanced. As for ENA, it often considers the optimization objective as a prototype to motivate its network model design for specific tasks. Specifically, ENA greatly improves the flexibility of traditional optimization by replacing some structures in the optimization model
directly with learnable modules having similar effects. Some ENA methods only regard the linear layers of networks as learnable matrices. In LISTA [11] and CPSS [12], a non-linear feed-forward predictor is trained to produce the best approximation of sparse coding; in DLADMM [13], some learnable network modules are embedded into LADMM, and some learnable parameters are embedded into the proximal operator. In addition, some other ENA methods utilize more general networks. For instance, ISTA-Net [17] is based on ISTA as the fundamental iteration scheme, and adds a range of filters to learn parameters for image compressive sensing (CS); Plug-and-Play ADMM [14] replaces the projection gradient operator in ADMM with an implicit denoising module; pre-trained-CNN-based modules such as DPSR and DPIR [15, 16] are introduced to handle image restoration problems such as deconvolution, denoising, and super-resolution.
However, these existing ODL methods ignore the relationship between learnable modules and optimization models, leading to some drawbacks in methodology and theory. From the methodological perspective, since the convergence of UNH depends entirely on the original optimization algorithm, its performance is limited to manually designed target features, and it is impossible to further narrow the gap between target features and real-world tasks. ENA relies on the pre-trained network modules indeed behaving like the parts they replace, which usually can only be guaranteed by proper pre-training. Furthermore, existing ODL approaches have another common shortcoming in methodology: they deal with the optimization model and the learnable module separately, meaning that existing learnable modules are often trained independently of the optimization model. While it is still possible to obtain the modules needed by the optimization model on the macroscopic level (e.g., replacing the soft-thresholding operation with denoising modules), this has led to a gap between the modules needed by the optimization model and those that are actually learned. Although new methods exist to better isolate the learnable modules, this gap cannot be fundamentally addressed.
In terms of theoretical perspective, some works have analyzed the convergence of optimization process with the help of classic optimization techniques. To be specific, in [14, 21, 22] authors consider the non-expansive property of optimization iterative process under the condition that the embedded networks are bounded; in [20] the convergence is achieved when the Lipschitz constant of network residuals is strictly smaller than one. However, these works only focus on the convergence towards the fixed points of the approximated optimization model, but not the solution to the intrinsic task considering both optimization models and learnable modules. An intuitive treatment to handle this problem is to learn fewer learning variables. For example, in [8], only the step size of ISTA is learned, which nevertheless restricts the model. In addition, for ODL, additional artificially designed corrections are needed when learning the network. For example, in [10, 23, 24] authors manually design various rules to decide updates from the temporary updates generated by networks and optimization algorithms. However, the lack of learning variables and the manual design of rules severely limit these methods. Furthermore, ignoring the convergence of learning process also leads to some theoretical defects. First, as aforementioned, learning variables are fixed in the optimization process, and thus it is only able to consider the convergence of optimization variables, instead of the convergence of learning variables in learnable modules. Second, the learnable modules for ODL are too complex to determine the relationship between the true solution to the task and the obtained fixed points. Moreover, the convergence analysis of most existing ODL methods is developed from a specific optimization framework, so it is difficult to be extended to other optimization models.
### _Our Contributions_
To address the aforementioned problems, we explicitly model ODL as a hierarchical relationship paradigm between the learnable module and the optimization algorithm, called HODL. Subsequently, in order to jointly train the optimization variables and the learning variables, we propose the corresponding solution strategy for solving HODL. We further put forward its simplified version to speed up the algorithm, and the simplified solution strategy can contain existing gradient-based unrolling algorithms as special cases. After that we provide the convergence analysis for this algorithmic
| Method | Category | Base Model | \(\mathbf{u}^{K}\rightarrow\mathbf{u}^{*}\) | \(\inf\varphi_{K}(\mathbf{\omega})\rightarrow\inf\varphi(\mathbf{\omega})\) | \(\nabla\varphi(\mathbf{\omega})\to 0\) | Application |
| --- | --- | --- | --- | --- | --- | --- |
| ISTA-Net [17] | ENA | PG | ✗ | ✗ | ✗ | CS Reconstruction |
| ADMM-Net [18] | ENA | ALM | ✗ | ✗ | ✗ | CS-MRI |
| DUBLID [19] | ENA | HQS | ✗ | ✗ | ✗ | Image Deconvolution |
| LISTA [12] | ENA | PG | ✗ | ✗ | ✗ | Sparse Coding |
| DLADMM [13] | ENA | ALM | ✗ | ✗ | ✗ | Image Deblurring |
| PnP [20] | ENA | ALM, PG | ✗ | ✗ | ✗ | Image Super Resolution |
| PADNet [9] | UNH | ALM | ✗ | ✗ | ✗ | Image Haze Removal |
| FIMA [10] | UNH | PG | ✗ | ✗ | ✗ | Image Restoration |
| OISTA [8] | UNH | PG | ✗ | ✗ | ✗ | Sparse Coding |
| HODL (Ours) | Both | Flexible | ✓ | ✓ | ✓ | Sparse Coding, Image Restoration, Hyper-parameter Optimization, Few-shot Learning, Generative Adversarial Learning |

TABLE I: Various ODL methods whose base models include Proximal Gradient (PG), Augmented Lagrangian Method (ALM), and Half-Quadratic Splitting (HQS). Existing ODL methods are widely used in many fields, but the analysis of convergence, especially the convergence of the learning variables \(\mathbf{\omega}\), is still insufficient.
framework. To be specific, we strictly prove the detailed theoretical properties that guarantee the joint convergence of optimization variables and learning variables, covering both the approximation quality analysis and the stationary analysis. We also conduct extensive experiments on various learning and vision tasks to verify the effectiveness and wide applicability of HODL. Our contributions can be summarized as follows, and the overall comparison of our HODL and existing ODL methods is displayed in Table I.
* Unlike existing works that only pay attention to either learning or optimization process in ODL, we take both learning and optimization into consideration as two nested solution processes and formulate the general ODL paradigm, allowing us to further analyze the hierarchical relationship between the optimization and learning variables.
* From the hierarchical perspective, we build up the HODL framework and provide the novel and general ODL solution strategy. Our framework considers the nested relationship between optimization and learning, making it possible to jointly train optimization variables and learning variables.
* This work provides the strict joint convergence analysis of optimization variables and learning variables under the HODL framework, both on the approximation quality and on the stationary convergence. We additionally put forward a fast algorithm for HODL and its convergence analysis, which significantly extend the results in [3].
* We apply our HODL framework and the solution strategies to various learning tasks, including sparse coding as a toy example and image processing tasks (e.g., rain streak removal, image deconvolution, and low-light enhancement). In addition, our HODL can also handle bilevel optimization tasks that cannot be handled by existing ODL methods, such as adversarial learning, hyper-parameter optimization, and few-shot learning.
## 2 The Proposed Algorithmic Framework
In this section, we first put forward the general ODL paradigm, and introduce our Hierarchical Optimization-Derived Learning (HODL) framework to unify the optimization algorithms and learnable modules. Then the solution strategies for this HODL framework are provided.
### _The General ODL Paradigm_
ODL usually translates the application problem into two parts of the optimization problem, the task term and the learnable term, with respect to the optimization variable \(\mathbf{u}\in U\). The task term is usually an objective function \(f(\mathbf{u})\) that represents the dependence of the solution of \(\mathbf{u}\) on the task itself. The learnable term, on the other hand, can be classified into two common forms, the regularization term \(g(\mathbf{u})\) and the linear constraint term \(\mathcal{A}(\mathbf{u})=\mathbf{y}\), which are used to represent the task prior that aids in solving the problem. Hence, ODL usually transforms the specific task into the following form
\[\min_{\mathbf{u}\in U}\overbrace{f(\mathbf{u})}^{\text{Task Term}}+\overbrace{\underbrace{g(\mathbf{u},\boldsymbol{\omega})}_{\text{Regularization}}\text{ and/or }\underbrace{\text{s.t. }\mathcal{A}(\mathbf{u},\boldsymbol{\omega})=\mathbf{y}(\boldsymbol{\omega})}_{\text{Constraint}}}^{\text{Learnable Term}}, \tag{1}\]
where \(\boldsymbol{\omega}\), the parameters of learnable term, is called the learning variable. Denote the solution set with respect to \(\mathbf{u}\) for a given \(\boldsymbol{\omega}\) to be \(\mathcal{S}(\boldsymbol{\omega})\), and denote the corresponding algorithmic operator for solving Eq. (1) to be \(\mathcal{D}\). In classical optimization methods \(\mathcal{D}\) is usually constructed manually by optimization experts based on theory and experience. As a paradigm for designing network structures, ODL designs the network from an optimization perspective. To be specific, by building the model based on classical optimization process as the structural basis and embedding learnable modules, ODL generates a complete network structure with both interpretability of optimization models and learnability of neural networks. This paradigm is flexible enough that the learnable module can be not only the hyper-parameters in the numerical optimization process, but also the entire networks used to replace certain process steps. The corresponding networks are respectively denoted as \(\mathcal{D}_{\text{num}}\) and \(\mathcal{D}_{\text{net}}\).
Unfortunately, existing ODL methods only consider optimization when building the initial network structure, and follow the ordinary deep neural network strategy during training, instead of combining optimization and learning. This splits ODL into two parts: during the training procedure, they only care about the convergence of learning variables \(\boldsymbol{\omega}\) and ignore the iterations of optimization variables \(\mathbf{u}\); while in testing, they fix \(\boldsymbol{\omega}\) and hope \(\mathbf{u}\) to converge in the optimization process under the fixed network structure.
### _Our Meta Optimization Framework_1
Footnote 1: In numerical optimization, meta-optimization is the use of one optimization method to tune another optimization method [25].
To address the fragmentation of optimization and learning processes, we use the idea of optimization not only when building the network structure, but also during the training procedure. Despite the embedded learnable module, the network structure of ODL can still be considered as an optimization process for solving a specific problem. Hence, by nesting the results of the optimization process into the inputs of the learning process, we can transform the problem in Eq. (1) into the following
\[\min_{\mathbf{u}\in U,\boldsymbol{\omega}\in\Omega}\ell(\mathbf{u}, \boldsymbol{\omega}),\text{ s.t. }\mathbf{u}\in\mathcal{S}(\boldsymbol{\omega}), \tag{2}\]
where \(\ell\) is the objective function.
Next, we put forward a unified form in dealing with all kinds of problems in Eq. (1), which also facilitates our subsequent analysis. Specifically, each iteration of the ODL method constitutes an operator origin from the optimization algorithm but embedded with a learnable module, and the result of a stable iteration is taken as the output of ODL. Therefore, a reasonable assumption is to consider the operator as non-expansive and the output of ODL as the fixed point of the corresponding iterative operator for solving Eq. (1). Therefore, we model the optimal solution of ODL uniformly by \(\mathbf{u}=\mathcal{D}(\mathbf{u},\boldsymbol{\omega})\) to find the fixed point, where \(\boldsymbol{\omega}\) is the learning variable, and \(\mathcal{D}\) is the non-expansive
operator. Here \(\mathcal{D}(\cdot,\mathbf{\omega})\in\{\mathcal{D}_{\mathsf{num}}(\cdot,\mathbf{\omega})\circ\mathcal{D}_{\mathsf{net}}(\cdot,\mathbf{\omega})\}\), where \(\circ\) represents compositions of operators. As introduced in Section 2.1, \(\mathcal{D}_{\mathsf{num}}\) regards the hyper-parameters in the numerical optimization process as learnable modules, while \(\mathcal{D}_{\mathsf{net}}\) directly replaces certain steps of the process with networks. Hence, this form not only includes optimization algorithms, but also contains other implicitly defined models that originate from optimization but are additionally equipped with learnable modules. The process to find the fixed point can be implemented via the classical Krasnoselskii-Mann updating scheme [26] generalized with learning variables \(\mathbf{\omega}\), in the form of \(\mathcal{T}(\mathbf{u}^{k},\mathbf{\omega})=\mathbf{u}^{k}+\alpha(\mathcal{D}(\mathbf{u}^{k},\mathbf{\omega})-\mathbf{u}^{k})\) as the \(k\)-th iteration step, where \(\alpha\in(0,1)\). Note that if \(\mathcal{D}\) is non-expansive, then \(\mathcal{T}\) is an \(\alpha\)-averaged non-expansive operator. Furthermore, the fixed point of \(\mathcal{D}\) is also a fixed point of \(\mathcal{T}\). In experiments, to guarantee that \(\mathcal{D}\) is non-expansive, normalization techniques such as spectral normalization [27] are applied to the parameters. By choosing the solution of the fixed point problem \(\mathbf{u}=\mathcal{T}(\mathbf{u},\mathbf{\omega})\) as the input for learning \(\mathbf{\omega}\), the hierarchical formulation of a general ODL problem can be expressed in the following form
\[\min_{\mathbf{u}\in\mathcal{U},\mathbf{\omega}\in\Omega}\ell(\mathbf{u},\mathbf{ \omega}),\text{ s.t. }\mathbf{u}=\mathcal{T}(\mathbf{u},\mathbf{\omega}), \tag{3}\]
where \(\ell\) is the loss function corresponding to the learning process, and \(\mathcal{T}\) denotes the optimization process. We call problems of this formulation Hierarchical Optimization-Derived Learning (HODL), which also serves as our meta optimization framework.
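As a toy illustration of this fixed-point view (not taken from the paper's code), the generalized Krasnoselskii-Mann iteration can be run for any non-expansive operator \(\mathcal{D}(\cdot,\mathbf{\omega})\); below, \(\mathcal{D}(\mathbf{u},\mathbf{\omega})=\mathbf{M}\mathbf{u}+\mathbf{\omega}\) with \(\|\mathbf{M}\|\leq 1\) plays that role, and all names are illustrative.

```python
import numpy as np

def km_fixed_point(D, u0, w, alpha=0.5, n_iter=200):
    """u^{k+1} = T(u^k, w) = u^k + alpha * (D(u^k, w) - u^k), with alpha in (0, 1)."""
    u = u0
    for _ in range(n_iter):
        u = u + alpha * (D(u, w) - u)
    return u

M = np.array([[0.6, 0.2], [0.2, 0.6]])             # spectral norm 0.8, so D(., w) is non-expansive
w = np.array([1.0, -1.0])
u_fix = km_fixed_point(lambda u, w: M @ u + w, np.zeros(2), w)
print(u_fix, np.linalg.solve(np.eye(2) - M, w))    # agrees with the exact fixed point
```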
Actually, HODL can overcome several shortcomings in existing ODL methods mentioned in Section 1 thanks to its hierarchical modeling. From the viewpoint of theory, HODL makes it possible to study the joint convergence of \(\mathbf{\omega}\) and \(\mathbf{u}\) under their nested relationship, in place of only considering one of them independently. Hence, instead of only obtaining the fixed points of the optimization process for a fixed \(\mathbf{\omega}\), we can approach the true optimal solution of the whole problem. We will provide the detailed convergence analysis in Section 3. From the viewpoint of applications, in the practical training procedure, the learning variables \(\mathbf{\omega}\) are also adjusted along with the iterations of optimization variables \(\mathbf{u}\), rather than just embedding a network that ignores the optimization structure.
### _Efficient Solution Strategy_
Next we establish the algorithm to simultaneously solve the optimization variables \(\mathbf{u}\) and learning variables \(\mathbf{\omega}\). Existing ODL methods usually update the optimization variables with fixed pre-trained learning variables, ignoring the nested relationships in ODL when training the optimization variables and learning variables and failing to solve them together.
**The Nested Learning Iteration.** To begin with, the learning variables \(\mathbf{\omega}\) are nested into the optimization variables \(\mathbf{u}\). Note that existing ODL approaches ignore the hierarchical structure of \(\mathbf{\omega}\) and \(\mathbf{u}\) in modeling, so their algorithms do not reflect this hierarchy and are not applicable under our HODL framework. We design the training of \(\mathbf{\omega}\) so that the nested relationship between \(\mathbf{u}\) and \(\mathbf{\omega}\) can be effectively exploited. Specifically, each iterative step of \(\mathbf{u}\) is parameterized by \(\mathbf{\omega}\), so the iteration result of \(\mathbf{u}\) is a function of \(\mathbf{\omega}\), i.e., \(\mathbf{u}^{k}(\mathbf{\omega})\). This reveals the dependence of the optimization variables \(\mathbf{u}\) on the learning variables \(\mathbf{\omega}\), and thus the complete optimization iteration of \(\mathbf{u}\) (inner loop) is embedded within the learning iteration of \(\mathbf{\omega}\) (outer loop). Hence, the objective function for learning \(\mathbf{\omega}\) contains the entire iterative trajectory of \(\mathbf{u}\), which effectively exploits their nested relationship.
**The Nested Optimization Iteration.** For the iteration of optimization variables \(\mathbf{u}\), we also add an additional nested structure related to the learning process. To begin with, we compute the iterative direction \(\mathbf{v}_{l}\) from the optimization process corresponding to \(\mathcal{T}\) in Eq. (3) (lower level). At the \(k\)-th step, to approach the fixed point of \(\mathcal{T}(\cdot,\mathbf{\omega})\) for a given \(\mathbf{\omega}\), \(\mathbf{v}_{l}^{k}=\mathcal{T}\left(\mathbf{u}^{k-1},\mathbf{\omega}\right)\) is defined as an update direction of \(\mathbf{u}\). Note that here the operator \(\mathcal{T}\) is adjusted to be non-expansive under the induced norm \(\|\cdot\|_{\mathbf{G}_{\mathbf{\omega}}}\) where \(\mathbf{G}_{\mathbf{\omega}}\) is a positive-definite correction matrix parameterized by \(\mathbf{\omega}\) and will be discussed in detail in Section 3. Next, we compute another iterative direction \(\mathbf{v}_{u}\) from the learning process in Eq. (3) (upper level). It makes our updating direction of \(\mathbf{u}\) able to utilize the information of \(\mathbf{\omega}\) by using the gradient of loss function \(\ell\) with respect to \(\mathbf{u}\). Nevertheless, directly applying its gradient may destroy the non-expansive property with respect to \(\|\cdot\|_{\mathbf{G}_{\mathbf{\omega}}}\). Consequently, for the consistent non-expansive property with direction \(\mathbf{v}_{l}\), we further add an additional correction \(\mathbf{G}_{\mathbf{\omega}}^{-1}\) to the gradient of \(\ell\), and request the corresponding step sizes \(s_{k}\) to be a decreasing sequence for assuring the correctness of this iterative direction \(\mathbf{v}_{u}\), i.e., \(\mathbf{v}_{u}^{k}=\mathbf{u}^{k-1}-s_{k}\mathbf{G}_{\mathbf{\omega}}^{-1}\frac{ \partial}{\partial u}\ell(\mathbf{u}^{k-1},\mathbf{\omega})\), where \(s_{k}\to 0\) as \(k\) increases. Lastly, inspired by [28], we generate the final updating direction of \(\mathbf{u}\) by aggregating the two iterative directions \(\mathbf{v}_{l}\) and \(\mathbf{v}_{u}\) via a linear combination under the projection, i.e., \(\mathbf{u}^{k}=\texttt{Proj}_{U,\mathbf{G}_{\mathbf{\omega}}}\left(\mu\mathbf{v}_{u} ^{k}+(1-\mu)\mathbf{v}_{l}^{k}\right)\), where \(\mu\in(0,1)\). Here the projection operator \(\mathrm{Proj}_{U,\mathbf{G}_{\mathbf{\omega}}}(\cdot)\) is associated to \(\mathbf{G}_{\mathbf{\omega}}\) with the definition \(\texttt{Proj}_{U,\mathbf{G}_{\mathbf{\omega}}}(\mathbf{u})=\mathrm{argmin}_{\mathbf{u }\in U}\|\bar{\mathbf{u}}-\mathbf{u}\|_{\mathbf{G}_{\mathbf{\omega}}}\). Note that in the theoretical analysis part, the projection is only used to guarantee the boundedness of \(\mathbf{u}^{k}\); while in practical experiments and applications, generally \(U\) is set to be such a large bounded set or even unbounded \(\mathbb{R}^{n}\) that the projection operator can be ignored. To conclude, the iterations of optimization variables \(\mathbf{u}\) in our solution strategy for HODL reads as
\[\begin{cases}\mathbf{v}_{l}^{k}(\mathbf{\omega})=\mathcal{T}(\mathbf{u}^{k-1}(\mathbf{ \omega}),\mathbf{\omega}),\\ \mathbf{v}_{u}^{k}(\mathbf{\omega})=\mathbf{u}^{k-1}(\mathbf{\omega})-s_{k}\mathbf{G}_{ \mathbf{\omega}}^{-1}\frac{\partial}{\partial u}\ell(\mathbf{u}^{k-1}(\mathbf{\omega}), \mathbf{\omega}),\\ \mathbf{u}^{k}(\mathbf{\omega})=\texttt{Proj}_{U,\mathbf{G}_{\mathbf{\omega}}}\big{(}\mu \mathbf{v}_{u}^{k}(\mathbf{\omega})+(1-\mu)\mathbf{v}_{l}^{k}(\mathbf{\omega})\big{)}, \end{cases} \tag{4}\]
where \(k=1,\ldots,K\).
Here our solution strategy solves the HODL problem with an aggregation of \(\mathbf{v}_{l}\) and \(\mathbf{v}_{u}\), so it is shortened as HODL with aggregation (aHODL for short). On the other hand, from the viewpoint of computational efficiency in practical applications, the algorithm can be further improved. To be specific, a computational drawback comes from the need for gradual decay of \(s_{k}\) in Eq. (4), which leads to an increase in the number of training iterations. In addition, \(\mathbf{G}_{\mathbf{\omega}}^{-1}\) may be challenging to compute depending on the form of \(\mathcal{D}\), and even \(\mathbf{G}_{\mathbf{\omega}}\) itself may be hard to estimate. Therefore, we adjust aHODL and put forward a simplified HODL (sHODL for short) without the aggregation step in Eq. (4). That is,
we let \(\mu\) in Eq. (4) be \(0\), and then \(\mathbf{u}^{K}(\mathbf{\omega})\) is iterated as
\[\mathbf{u}^{k}(\mathbf{\omega})=\texttt{Proj}_{U,\mathbf{G}_{\mathbf{\omega}}}\left( \mathbf{v}_{l}^{k}(\mathbf{\omega})\right), \tag{5}\]
where \(\mathbf{v}_{l}^{k}(\mathbf{\omega})=\mathcal{T}(\mathbf{u}^{k-1}(\mathbf{\omega}),\mathbf{\omega})\), and \(k=1,\ldots,K\). Compared with aHODL in Eq. (4), sHODL, the strategy without aggregation, is simpler to implement and more efficient, and thus serves as a fast algorithm. Hence, our HODL framework can be extended to more application tasks. Convergence of aHODL and sHODL will be discussed in the next section, which also indicates the superiority of aHODL over sHODL in theory. The algorithmic flow for aHODL and sHODL is summarized in Algorithm 1.
```
0: Step sizes \(\{s_{k}\}\), \(\gamma\) and parameter \(\mu\).
1: Initialize \(\mathbf{\omega}^{0}\).
2: for \(t=1\to T\) do
3:   Initialize \(\mathbf{u}^{0}\).
4:   for \(k=1\to K\) do
5:     Compute \(\mathbf{u}^{k}\) by Eq. (4) (aHODL) or the simplified version Eq. (5) (sHODL).
6:   end for
7:   \(\mathbf{\omega}^{t}=\texttt{Proj}_{\Omega,\mathbf{G}_{\mathbf{\omega}}}\big{(}\mathbf{\omega}^{t-1}-\gamma\frac{\partial}{\partial\mathbf{\omega}}\ell(\mathbf{u}^{K}(\mathbf{\omega}^{t-1}),\mathbf{\omega}^{t-1})\big{)}\).
8: end for
```
**Algorithm 1** HODL
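A minimal PyTorch sketch of Algorithm 1 in its sHODL form (Eq. (5)) on a toy problem is given below. The projection onto \(\Omega\) is omitted, the operator `D`, the loss `ell`, and all parameter values are illustrative stand-ins rather than the paper's implementation, and the aHODL variant would additionally mix in the corrected upper-level direction \(\mathbf{v}_{u}\) of Eq. (4). The point of the sketch is simply that the gradient with respect to \(\mathbf{\omega}\) is taken through the unrolled inner fixed-point iterations.

```python
import torch

torch.manual_seed(0)
n, K, T_outer, alpha, gamma = 5, 30, 200, 0.5, 1e-2

M = 0.5 * torch.eye(n)                       # fixed contraction, so D(., w) is non-expansive
w = torch.zeros(n, requires_grad=True)       # learning variables
u_target = torch.ones(n)                     # used by the toy upper-level loss

def D(u, w):
    return M @ u + w

def ell(u, w):
    return 0.5 * ((u - u_target) ** 2).sum()

opt = torch.optim.SGD([w], lr=gamma)
for t in range(T_outer):
    u = torch.zeros(n)
    for k in range(K):                       # inner loop: Eq. (5), Krasnoselskii-Mann steps
        u = u + alpha * (D(u, w) - u)
    loss = ell(u, w)                         # outer step: gradient flows through the unrolling
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```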
## 3 Theoretical Analysis
In this section, we present the convergence analysis of the solution strategies for HODL problems in Eq. (3) with respect to both the optimization variables \(\mathbf{u}\) and the learning variables \(\mathbf{\omega}\). Our analysis of the solution strategy for HODL is divided into two parts: the approximation quality analysis on the convergence of the optimal value in Section 3.1, and the stationary analysis on the convergence of stationary points in Section 3.2. For sHODL, the simplified solution strategy without aggregation mentioned in Section 2.3, we also provide further analysis in Section 3.3. Note that since HODL in Eq. (3) is a general form of ODL problems, our analysis also serves as a unified route of theoretical analysis for other methods and more problems with hierarchical structures.
To begin with, we denote the fixed point set of operator \(\mathcal{T}\) to be \(\texttt{Fix}(\mathcal{T}(\cdot,\mathbf{\omega}))\) for a given \(\mathbf{\omega}\), and then the HODL problem in Eq. (3) can be rewritten as
\[\min_{\mathbf{\omega}\in\Omega}\;\varphi(\mathbf{\omega}),\quad\text{where}\quad \varphi(\mathbf{\omega}):=\inf_{\mathbf{u}\in\texttt{Fix}(\mathcal{T}(\cdot,\mathbf{ \omega}))\cap U}\;\ell(\mathbf{u},\mathbf{\omega}). \tag{6}\]
In Algorithm 1, \(\mathbf{u}^{K}(\mathbf{\omega})\) is obtained by iterating Eq. (4) (aHODL) or its simplification Eq. (5) (sHODL) to solve the simple bilevel problem \(\inf_{\mathbf{u}\in\texttt{Fix}(\mathcal{T}(\cdot,\mathbf{\omega}))\cap U}\;\ell(\mathbf{u},\mathbf{\omega})\). Substituting \(\mathbf{u}^{K}(\mathbf{\omega})\) for \(\mathbf{u}\) in \(\ell(\mathbf{u},\mathbf{\omega})\) of Eq. (6), we obtain the approximation problem
\[\min_{\mathbf{\omega}\in\Omega}\;\varphi_{K}(\mathbf{\omega}):=\ell(\mathbf{u}^{K}( \mathbf{\omega}),\mathbf{\omega}), \tag{7}\]
which depends only on the variable \(\mathbf{\omega}\), and is solved by the sequence \(\{\mathbf{\omega}^{t}\}\) generated by Algorithm 1.
### _Approximation Quality Analysis_
In this part, we show that Eq. (7) obtained by aHODL is actually an appropriate approximation to Eq. (3), meaning that any limit point \((\bar{\mathbf{u}},\bar{\mathbf{\omega}})\) of the sequence \(\big{\{}(\mathbf{u}^{K}(\mathbf{\omega}^{K}),\mathbf{\omega}^{K})\big{\}}\) is a solution to the HODL problem in Eq. (3), where \(\mathbf{\omega}^{K}\in\operatorname*{argmin}_{\mathbf{\omega}\in\Omega}\varphi_{K}( \mathbf{\omega})\) as a solution to Eq. (7) is generated by Algorithm 1 and \(\mathbf{u}^{K}(\mathbf{\omega})\) is computed from Eq. (4). Hence, we can approach the optimal solution of HODL in Eq. (3) by solving Eq. (7).
We make the following standing assumptions throughout this part, and then show that Algorithm 1 can achieve convergence in the sense of approximation quality under mild conditions.
**Assumption 3.1**: \(\Omega\) _is a compact set and \(U\) is a convex compact set. \(\texttt{Fix}(\mathcal{T}(\cdot,\mathbf{\omega}))\) is nonempty for any \(\mathbf{\omega}\in\Omega\). \(\ell(\mathbf{u},\mathbf{\omega})\) is continuous on \(\mathbb{R}^{n}\times\Omega\). For any \(\mathbf{\omega}\in\Omega\), \(\ell(\cdot,\mathbf{\omega}):\mathbb{R}^{n}\to\mathbb{R}\) is \(L_{\ell}\)-smooth, convex and bounded below by \(M_{0}\)._
Please notice that function \(\ell\) is usually defined to be the MSE loss, so Assumption 3.1 is quite standard for ODL problems [20, 16]. Next we present some necessary preliminaries. For any two matrices \(\mathbf{G}_{1},\mathbf{G}_{2}\in\mathbb{R}^{n\times n}\), we consider the following partial ordering relation:
\[\mathbf{G}_{1}\succeq\mathbf{G}_{2}\quad\Leftrightarrow\quad\langle\mathbf{u },\mathbf{G}_{1}\mathbf{u}\rangle\geq\langle\mathbf{u},\mathbf{G}_{2}\mathbf{u }\rangle,\quad\forall\mathbf{u}\in\mathbb{R}^{n}.\]
If \(\mathbf{G}\succ 0\), then \(\langle\mathbf{u}_{1},\mathbf{G}\mathbf{u}_{2}\rangle\) for \(\mathbf{u}_{1},\mathbf{u}_{2}\in\mathbb{R}^{n}\) defines an inner product on \(\mathbb{R}^{n}\). Denote the induced norm with \(\|\cdot\|_{\mathbf{G}}\), i.e., \(\|\mathbf{u}\|_{\mathbf{G}}:=\sqrt{\langle\mathbf{u},\mathbf{G}\mathbf{u}\rangle}\) for any \(\mathbf{u}\in\mathbb{R}^{n}\). We assume that \(\mathcal{D}(\cdot,\mathbf{\omega})\) satisfies the following assumptions throughout this part.
**Assumption 3.2**: _There exist \(\mathbf{G}_{ub}\succeq\mathbf{G}_{lb}\succ 0\), such that for each \(\mathbf{\omega}\in\Omega\), there exists \(\mathbf{G}_{ub}\succeq\mathbf{G}_{\mathbf{\omega}}\succeq\mathbf{G}_{lb}\) such that_
1. \(\mathcal{D}(\cdot,\mathbf{\omega})\) _is non-expansive with respect to_ \(\|\cdot\|_{\mathbf{G}_{\mathbf{\omega}}}\)_, i.e., for all_ \((\mathbf{u}_{1},\mathbf{u}_{2})\in\mathbb{R}^{n}\times\mathbb{R}^{n}\)_,_ \[\|\mathcal{D}(\mathbf{u}_{1},\mathbf{\omega})-\mathcal{D}(\mathbf{u}_{2},\mathbf{\omega}) \|_{\mathbf{G}_{\mathbf{\omega}}}\leq\|\mathbf{u}_{1}-\mathbf{u}_{2}\|_{\mathbf{G}_{ \mathbf{\omega}}}.\]
2. \(\mathcal{D}(\cdot,\mathbf{\omega})\) _is closed, i.e.,_ \(\operatorname*{gph}\mathcal{D}(\cdot,\mathbf{\omega})\) _is closed, where_ \[\operatorname*{gph}\mathcal{D}(\cdot,\mathbf{\omega}):=\{(\mathbf{u},\mathbf{v})\in \mathbb{R}^{n}\times\mathbb{R}^{n}\mid\mathbf{v}=\mathcal{D}(\mathbf{u},\mathbf{ \omega})\}.\]
The non-expansive property of \(\mathcal{T}(\cdot,\mathbf{\omega})\) in Eq. (3) can be obtained immediately from that of \(\mathcal{D}(\cdot,\mathbf{\omega})\) in Assumption 3.2 [29, Proposition 4.25]. Then we can prove that the sequence \(\{\mathbf{u}^{k}(\mathbf{\omega})\}\) generated by Eq. (4) not only converges to the solution set of \(\inf_{\mathbf{u}\in\texttt{Fix}(\mathcal{T}(\cdot,\mathbf{\omega}))\cap U}\;\ell(\mathbf{u},\mathbf{\omega})\), but also admits uniform convergence towards the fixed point set \(\texttt{Fix}(\mathcal{T}(\cdot,\mathbf{\omega}))\) with respect to \(\|\mathbf{u}^{k}(\mathbf{\omega})-\mathcal{T}(\mathbf{u}^{k}(\mathbf{\omega}),\mathbf{\omega})\|_{\mathbf{G}_{lb}}^{2}\) for \(\mathbf{\omega}\in\Omega\). Thanks to this uniform convergence property of the sequence \(\{\mathbf{u}^{k}(\mathbf{\omega})\}\), and inspired by the arguments used in [28], we can establish the convergence of Algorithm 1 in both \(\mathbf{u}\) and \(\mathbf{\omega}\) towards the solution of the HODL problem in Eq. (3). The convergence results on approximation quality are summarized in the following theorem. Please refer to our conference version in [3] for detailed proofs.
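As a small numerical illustration of the \(\mathbf{G}_{\mathbf{\omega}}\)-weighted non-expansiveness required by Assumption 3.2, the snippet below builds a toy positive-definite metric \(\mathbf{G}\) and an affine operator obtained by conjugating a scaled orthogonal map with \(\mathbf{G}^{1/2}\), and then checks \(\|\mathcal{D}(\mathbf{u}_{1})-\mathcal{D}(\mathbf{u}_{2})\|_{\mathbf{G}}\leq\|\mathbf{u}_{1}-\mathbf{u}_{2}\|_{\mathbf{G}}\) on random pairs; the specific \(\mathcal{D}\) and \(\mathbf{G}\) are toy choices, not the operators used in the paper.

```
import numpy as np

rng = np.random.default_rng(1)
n = 8
M = rng.standard_normal((n, n))
G = M @ M.T + n * np.eye(n)                        # toy positive-definite metric G

evals, evecs = np.linalg.eigh(G)                   # symmetric square roots of G
G_half = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
G_half_inv = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T

# Conjugate a 1-Lipschitz map (0.95 times an orthogonal matrix) by G^{1/2}; by
# construction the resulting affine operator D is non-expansive w.r.t. ||.||_G.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = G_half_inv @ (0.95 * Q) @ G_half
d = rng.standard_normal(n)
D = lambda u: B @ u + d

norm_G = lambda x: np.sqrt(x @ G @ x)
for _ in range(5):
    u1, u2 = rng.standard_normal(n), rng.standard_normal(n)
    lhs, rhs = norm_G(D(u1) - D(u2)), norm_G(u1 - u2)
    assert lhs <= rhs + 1e-10
    print(f"||D(u1)-D(u2)||_G = {lhs:.4f}  <=  ||u1-u2||_G = {rhs:.4f}")
```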
**Theorem 3.1**: _Suppose Assumptions 3.1 and 3.2 are satisfied. Let \(\{\mathbf{u}^{k}(\mathbf{\omega})\}\) be the sequence generated by Eq. (4) with \(\mu\in(0,1)\) and \(s_{k}=\frac{s}{k+1}\), where \(s\in(0,\frac{\lambda_{\min}(\mathbf{G}_{lb})}{L_{\ell}})\), and \(\lambda_{\min}(\mathbf{G}_{lb})\) denotes the smallest eigenvalue of matrix \(\mathbf{G}_{lb}\)._
1. _For any_ \(\mathbf{\omega}\in\Omega\)_, we have_ \[\lim_{k\to\infty}\operatorname{dist}(\mathbf{u}^{k}(\mathbf{\omega}),\mathtt{Fix}(\mathcal{T}(\cdot,\mathbf{\omega})))=0,\] _and_ \[\lim_{k\to\infty}\ell(\mathbf{u}^{k}(\mathbf{\omega}),\mathbf{\omega})=\varphi(\mathbf{\omega}).\] _Furthermore, there exists_ \(C>0\) _such that for any_ \(\mathbf{\omega}\in\Omega\)_,_ \[\|\mathbf{u}^{k}(\mathbf{\omega})-\mathcal{T}(\mathbf{u}^{k}(\mathbf{\omega}),\mathbf{\omega})\|_{\mathbf{G}_{lb}}^{2}\leq C\sqrt{\frac{1+\ln(1+k)}{k^{\frac{1}{4}}}}.\]
2. _Let_ \(\mathbf{\omega}^{K}\in\operatorname{argmin}_{\mathbf{\omega}\in\Omega}\varphi_{K}( \mathbf{\omega})\)_, and we have any limit point_ \((\bar{\mathbf{u}},\bar{\mathbf{\omega}})\) _of the sequence_ \(\{(\mathbf{u}^{K}(\mathbf{\omega}^{K}),\mathbf{\omega}^{K})\}\) _is a solution to the problem in Eq. (_3_), i.e.,_ \(\bar{\mathbf{\omega}}\in\operatorname{argmin}_{\mathbf{\omega}\in\Omega}\varphi(\mathbf{ \omega})\) _and_ \(\bar{\mathbf{u}}=\mathcal{T}(\bar{\mathbf{u}},\bar{\mathbf{\omega}})\)_. Furthermore,_ \(\inf_{\mathbf{\omega}\in\Omega}\varphi_{K}(\mathbf{\omega})\to\inf_{\mathbf{\omega}\in \Omega}\varphi(\mathbf{\omega})\) _as_ \(K\to\infty\)_._
### _Stationary Analysis_
Next, we put forward the convergence analysis of our solution strategy with aggregation aHODL (using Eq. (4) to compute \(\mathbf{u}^{K}\) in Algorithm 1) on stationary points. That is, for any limit point \(\bar{\mathbf{\omega}}\) of the sequence \(\{\mathbf{\omega}^{K}\}\), we have \(\nabla\varphi(\bar{\mathbf{\omega}})=0\), where \(\varphi(\mathbf{\omega})\) is defined in Eq. (6).
Here we make \(U=\mathbb{R}^{n}\) and suppose the operator \(\mathcal{T}\) has a unique fixed point, which means the fixed point set \(\mathtt{Fix}(\mathcal{T}(\cdot,\mathbf{\omega}))\) is a singleton. We denote the unique solution by \(\mathbf{u}^{*}(\mathbf{\omega})\). Our analysis is partly inspired by [28] and [30].
**Assumption 3.3**: \(\Omega\) _is a compact set and \(U=\mathbb{R}^{n}\). \(\mathtt{Fix}(\mathcal{T}(\cdot,\mathbf{\omega}))\) is nonempty for any \(\mathbf{\omega}\in\Omega\). \(\ell(\mathbf{u},\mathbf{\omega})\) is twice continuously differentiable on \(\mathbb{R}^{n}\times\Omega\). For any \(\mathbf{\omega}\in\Omega\), \(\ell(\cdot,\mathbf{\omega}):\mathbb{R}^{n}\to\mathbb{R}\) is \(L_{\ell}\)-smooth, convex and bounded below by \(M_{0}\)._
For \(\mathcal{D}(\cdot,\mathbf{\omega})\) we request a stronger assumption than Assumption 3.2 that \(\mathcal{D}(\cdot,\mathbf{\omega})\) is contractive with respect to \(\|\cdot\|_{\mathbf{G}_{\mathbf{\omega}}}\) throughout this part, to guarantee the uniqueness of the fixed point.
**Assumption 3.4**: _There exist \(\mathbf{G}_{ub}\succeq\mathbf{G}_{lb}\succ 0\), such that for each \(\mathbf{\omega}\in\Omega\), there exists \(\mathbf{G}_{ub}\succeq\mathbf{G}_{\mathbf{\omega}}\succeq\mathbf{G}_{lb}\) such that_
1. \(\mathcal{D}(\cdot,\mathbf{\omega})\) _is contractive with respect to_ \(\|\cdot\|_{\mathbf{G}_{\mathbf{\omega}}}\)_, i.e., there exists_ \(\bar{\rho}\in(0,1)\)_, such that for all_ \((\mathbf{u}_{1},\mathbf{u}_{2})\in\mathbb{R}^{n}\times\mathbb{R}^{n}\)_,_ \[\|\mathcal{D}(\mathbf{u}_{1},\mathbf{\omega})-\mathcal{D}(\mathbf{u}_{2},\mathbf{\omega })\|_{\mathbf{G}_{\mathbf{\omega}}}\leq\bar{\rho}\|\mathbf{u}_{1}-\mathbf{u}_{2} \|_{\mathbf{G}_{\mathbf{\omega}}}.\]
2. \(\mathcal{D}(\cdot,\mathbf{\omega})\) _is closed._
Denote \(\hat{\mathcal{S}}(\mathbf{\omega}):=\operatorname{argmin}_{\mathbf{u}\in\mathtt{ Fix}(\mathcal{T}(\cdot,\mathbf{\omega}))\cap U}\ell(\mathbf{u},\mathbf{\omega})\), and we have the following stationary analysis results.
**Theorem 3.2**: _Suppose Assumptions 3.3 and 3.4 are satisfied, \(\frac{\partial}{\partial\mathbf{u}}\mathcal{T}(\mathbf{u},\mathbf{\omega})\) and \(\frac{\partial}{\partial\mathbf{\omega}}\mathcal{T}(\mathbf{u},\mathbf{\omega})\) are Lipschitz continuous with respect to \(\mathbf{u}\), and \(\hat{\mathcal{S}}(\mathbf{\omega})\) is nonempty for all \(\mathbf{\omega}\in\Omega\). Let \(\{\mathbf{u}^{k}(\mathbf{\omega})\}\) be the sequence generated by Eq. (4) with \(\mu\in(0,1)\) and \(s_{k}=\frac{s}{k+1}\), where \(s\in(0,\frac{\lambda_{\min}(\mathbf{G}_{lb})}{L_{\ell}})\)._
1. _We have_ \[\sup_{\mathbf{\omega}\in\Omega}\|\nabla\varphi_{k}(\mathbf{\omega})-\nabla\varphi(\mathbf{ \omega})\|_{\mathbf{G}}\to 0,\text{ as }k\to\infty.\]
2. _Let_ \(\mathbf{\omega}^{K}\) _be an_ \(\varepsilon_{K}\)_-stationary point of_ \(\varphi_{K}(\mathbf{\omega})\)_, i.e.,_ \[\varepsilon_{K}=\nabla\varphi_{K}(\mathbf{\omega}^{K}).\]
_Then if \(\varepsilon_{K}\to 0\), we have that any limit point \(\bar{\mathbf{\omega}}\) of the sequence \(\{\mathbf{\omega}^{K}\}\) is a stationary point of \(\varphi\), i.e.,_
\[0=\nabla\varphi(\bar{\mathbf{\omega}}).\]
For detailed proofs of the above results, please refer to our conference version in [3].
### _Convergence of HODL without Aggregation (sHODL)_
In Sections 3.1 and 3.2, we discuss the convergence properties (approximation quality and stationary analysis) of the solution strategy with aggregation aHODL (using Eq. (4) to compute \(\mathbf{u}^{K}\)). Now we further extend these convergence properties to the solution strategy without aggregation sHODL introduced in Section 2.3 (using Eq. (5) to compute \(\mathbf{u}^{K}\)).
On the approximation quality, based on Assumptions 3.1 and 3.2, under the further assumptions that \(\ell(\cdot,\mathbf{\omega})\) is uniformly Lipschitz continuous and \(\mathcal{T}(\cdot,\mathbf{\omega})\) has a unique fixed point, the approximation quality result for the solution strategy without aggregation can be obtained. For detailed discussions please refer to [31, 32]. Note that, regarding the convergence guarantee, the simplified strategy sHODL reduces the computational burden compared with aHODL but requires the stronger assumption that the operator \(\mathcal{T}\) is contractive, i.e., that the set \(\mathtt{Fix}(\mathcal{T}(\cdot,\mathbf{\omega}))\) is a singleton; in this situation the convexity of \(\ell\) is not required. Correspondingly, classic gradient-based unrolling algorithms without linear constraints require the objective function in Eq. (1) to be strongly convex [33, 31]. If the solutions of the optimization process are not unique (e.g., when \(f\) is only convex, i.e., the corresponding operator is only non-expansive) and are substituted into the learning process directly, the obtained solution may be far away from the true solution of the original bilevel problem; see the counter-example in [32]. In contrast, the solution strategy with aggregation aHODL (using Eq. (4) to compute \(\mathbf{u}^{K}\)), which aggregates the upper and lower iterative directions \(\mathbf{v}_{u}\) and \(\mathbf{v}_{l}\), can still approach the true solution with joint convergence even if the fixed points are not unique (the lower iterative operator is merely non-expansive).
On the stationary analysis, please note that our stationary analysis in Section 3.2 is also a unified convergence analysis of our solution strategies with and without aggregation (aHODL and sHODL), so it is applicable to all kinds of hierarchical problems. Specifically, \(\mu\) in aHODL (using Eq. (4) to compute \(\mathbf{u}^{K}\)) is taken to be between 0 and 1, while in the solution strategy without aggregation sHODL (using the simplified form Eq. (5) to compute \(\mathbf{u}^{K}\)), it is taken to be 0. Taking \(\mu=0\), Theorem 3.2 also holds, and the proofs are parallel. Please also refer to [33] for the stationary analysis of the classic gradient-based unrolling algorithms as a special case of our solution strategy without aggregation. The above discussion of the convergence properties of HODL without aggregation (sHODL) is summarized in the following proposition.
**Proposition 3.1**: _Suppose \(\{\mathbf{u}^{k}(\mathbf{\omega})\}\) to be the sequence generated by sHODL in Section 2.3._
1. _Suppose Assumptions_ 3.1 _and_ 3.2 _are satisfied,_ \(\ell(\cdot,\mathbf{\omega})\) _is uniformly Lipschitz continuous and_ \(\mathcal{T}(\cdot,\mathbf{\omega})\) _has a
unique fixed point. Then, let \(\mathbf{\omega}^{K}\in\operatorname*{argmin}_{\mathbf{\omega}\in\Omega}\varphi_{K}(\mathbf{ \omega})\), and we have any limit point \((\bar{\mathbf{u}},\bar{\mathbf{\omega}})\) of the sequence \(\{(\mathbf{u}^{K}(\mathbf{\omega}^{K}),\mathbf{\omega}^{K})\}\) is a solution to the problem in Eq. (3), i.e., \(\bar{\mathbf{\omega}}\in\operatorname*{argmin}_{\mathbf{\omega}\in\Omega}\varphi(\mathbf{ \omega})\) and \(\bar{\mathbf{u}}=\mathcal{T}(\bar{\mathbf{u}},\bar{\mathbf{\omega}})\). Further, \(\inf_{\mathbf{\omega}\in\Omega}\varphi_{K}(\mathbf{\omega})\to\inf_{\mathbf{\omega}\in \Omega}\varphi(\mathbf{\omega})\) as \(K\to\infty\)._
2. _Suppose Assumptions_ 3.3 _and_ 3.4 _are satisfied,_ \(\frac{\partial}{\partial\mathbf{u}}\mathcal{T}(\mathbf{u},\mathbf{\omega})\) _and_ \(\frac{\partial}{\partial\mathbf{\omega}}\mathcal{T}(\mathbf{u},\mathbf{\omega})\) _are Lipschitz continuous with respect to_ \(\mathbf{u}\)_, and_ \(\hat{\mathcal{S}}(\mathbf{\omega})\) _is nonempty for all_ \(\mathbf{\omega}\in\Omega\)_. Let_ \(\mathbf{\omega}^{K}\) _be an_ \(\varepsilon_{K}\)_-stationary point of_ \(\varphi_{K}(\mathbf{\omega})\)_, i.e.,_ \(\varepsilon_{K}=\nabla\varphi_{K}(\mathbf{\omega}^{K})\)_. Then if_ \(\varepsilon_{K}\to 0\)_, we have that any limit point_ \(\bar{\mathbf{\omega}}\) _of the sequence_ \(\{\mathbf{\omega}^{K}\}\) _is a stationary point of_ \(\varphi\)_, i.e.,_ \(0=\nabla\varphi(\bar{\mathbf{\omega}})\)_._
## 4 Applications
In this section, we first compare HODL with other established ODL methods in detail, and then demonstrate the applications of HODL in solving practical problems of various forms and the specific settings under these forms. Summary of operators \(\mathcal{D}_{\text{num}}\) and \(\mathcal{D}_{\text{net}}\) for problems of various forms and corresponding applications is shown in Table II, where the applications for other learning tasks regarded as hierarchical models will be discussed in Section 5.
### _Comparison with Existing ODL Methods_
Compared with existing ODL methods, HODL additionally considers the optimal update of the learning variables \(\mathbf{\omega}\), thus providing better theoretical guarantees and higher application value. Existing ODL methods only focus on the output optimization model, i.e., the final iterative results of the optimization variables \(\mathbf{u}\). Usually, their selection of learning variables \(\mathbf{\omega}\) is just a direct extraction of network modules from similar learning tasks [17, 18]. This selection ignores the convergence of the learning variables \(\mathbf{\omega}\) and amounts to a random search over the space of learning variables \(\mathbf{\omega}\) guided by similar learning tasks. On the contrary, HODL focuses on the iterative results of both the optimization variables \(\mathbf{u}\) and the learning variables \(\mathbf{\omega}\), and performs gradient descent on the learning variables \(\mathbf{\omega}\), thus providing sufficient theoretical guarantees and a clear application framework. In short, compared with existing ODL methods, HODL makes up for the theoretical weakness of ODL and upgrades the treatment of the learning variables from random search to gradient descent, providing theoretical guarantees and usability that existing methods cannot achieve. Under the HODL framework, the difference among algorithms for various applications lies in the operator \(\mathcal{D}\) introduced in Section 2.2. Next we introduce the specific forms of \(\mathcal{D}\) in these applications.
### _Application for Sparse Coding_
Taking sparse coding as an example, we first describe how HODL can be applied to constrained and regularized problems and show how the coupling between the optimization model and optimization variables can be handled. Specifically, the sparse coding task is dedicated to representing given data \(\mathbf{b}\) as a sparse coefficient representation \(\mathbf{u}\) of a set of basis vectors \(\mathbf{Q}\), i.e., \(\mathbf{Qu}=\mathbf{b}\). As the basis vectors in the transform matrix \(\mathbf{Q}\) are usually overcomplete, we introduce additional sparsity criterion to address the degeneracy problem caused by overcompleteness. Depending on how to force the algorithm to provide a satisfactory representation of \(\mathbf{b}\), sparse coding can be considered as a constrained or regularized problem. Note that in both cases, usually we set the objective function \(\ell\) in Eq. (3) to be MSE loss.
**Constrained Sparse Coding.** The constrained sparse coding form is based on the linear equality constraint \(\mathbf{Qu}=\mathbf{b}\), corresponding to the constraint term in Eq. (1) as a guarantee of reconstructability. The reconstruction is usually imperfect: since the transform matrix \(\mathbf{Q}\) is usually generated from clean data, the noise in the given data \(\mathbf{b}\) cannot be perfectly restored, so the noise estimation term \(\mathbf{u}_{n}\) is added as a complement to the task information, i.e., \(\mathbf{Qu}+\mathbf{u}_{n}=\mathbf{b}\). Note that here we need to additionally estimate the noise term \(\mathbf{u}_{n}\); in other cases, if the noise is a constant vector, we simply denote it by \(\mathbf{n}\). Since the task is overcomplete, the \(\ell_{1}\) norm is usually used as a sparsity penalty, forcing the representations \(\mathbf{u}\) and \(\mathbf{u}_{n}\) to be sparse. We model the constrained sparse coding problem as follows:
\[\min_{\mathbf{u},\mathbf{u}_{n}}\kappa\|\mathbf{u}\|_{1}+\|\mathbf{u}_{n}\|_{1} \quad\text{s.t.}\quad\quad\mathbf{Qu}+\mathbf{u}_{n}=\mathbf{b} \tag{8}\]
where \(\kappa\) is a scaling constant to determine the relative importance of the two norms. In order to solve the constrained optimization problem while satisfying the assumptions of HODL, we use the ALM method to determine \(\mathcal{D}_{\text{ALM}}\) as \(\mathcal{D}_{\text{num}}\) as shown in Table II. It can be proved that corresponding \(\mathcal{D}_{\text{ALM}}\) for Eq. (8) satisfies Assumption 3.2 under mild conditions. Please refer to [3, Appendix B] for details.
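To illustrate how an augmented-Lagrangian operator can act on Eq. (8), the following NumPy sketch runs an inexact ALM loop: the primal pair \((\mathbf{u},\mathbf{u}_{n})\) is updated by a few proximal-gradient steps on the augmented Lagrangian (soft-thresholding handles the two \(\ell_{1}\) terms), and the multiplier is then updated with the constraint residual. This is a generic hand-written instance under simplifying assumptions (Euclidean proximal term, fixed \(\beta\), plain Gaussian test data), not the exact \(\mathcal{D}_{\text{ALM}}\) of Table II.

```
import numpy as np

def soft(x, tau):                                  # proximal operator of tau * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def alm_sparse_coding(Q, b, kappa=0.1, beta=1.0, outer=50, inner=20):
    m, n = Q.shape
    u, u_n, lam = np.zeros(n), np.zeros(m), np.zeros(m)
    step = 1.0 / (beta * (np.linalg.norm(Q, 2) ** 2 + 1.0))   # 1/L for the smooth part
    for _ in range(outer):
        for _ in range(inner):                     # inexact primal solve by proximal gradient
            r = Q @ u + u_n - b
            g = lam + beta * r                     # gradient of <lam,r> + beta/2*||r||^2 in r
            u = soft(u - step * (Q.T @ g), step * kappa)
            u_n = soft(u_n - step * g, step)
        lam = lam + beta * (Q @ u + u_n - b)       # multiplier (dual) update
    return u, u_n

rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 100))
u_true = soft(rng.standard_normal(100), 1.0)       # a sparse ground-truth code
b = Q @ u_true + 0.01 * rng.standard_normal(50)
u, u_n = alm_sparse_coding(Q, b)
print("constraint residual:", np.linalg.norm(Q @ u + u_n - b))
```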
**Regularized Sparse Coding.** Another common type of task prior is to add regularization terms as the learnable module in Eq. (1) to the objective function. The regularized sparse coding form is based on the reconstruction term \(\|\mathbf{Qu}-\mathbf{b}\|_{2}\) as a guarantee of reconstructability. Since the task is overcomplete, it also uses the \(\ell_{1}\) norm as a sparsity penalty to force the representation \(\mathbf{u}\) to be sparse. We define the objective function for regularized sparse coding as
\[\min_{\mathbf{u}}\|\mathbf{Qu}-\mathbf{b}\|_{2}+\kappa\|\mathbf{u}\|_{1} \tag{9}\]
where \(\kappa\) is a scaling constant to determine the relative importance between reconstruction term and regularization term. In order to solve the regularized optimization problem while satisfying the assumptions of HODL, we use the PG method to determine \(\mathcal{D}_{\text{PG}}\) as \(\mathcal{D}_{\text{num}}\) as shown in Table II. In [3, Appendix B], it is proved that corresponding \(\mathcal{D}_{\text{PG}}\) satisfies Assumption 3.2 under mild conditions.
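Analogously, a minimal proximal-gradient (ISTA-style) iteration for Eq. (9) is sketched below; for simplicity it uses the squared data-fidelity term \(\|\mathbf{Qu}-\mathbf{b}\|_{2}^{2}\) that is standard for such updates and a fixed step size, both of which are assumptions of this illustration rather than the exact \(\mathcal{D}_{\text{PG}}\) of Table II.

```
import numpy as np

def prox_grad_lasso(Q, b, kappa=0.1, iters=300):
    """ISTA: u <- soft(u - s * Q^T (Q u - b), s * kappa), with s = 1/||Q||_2^2."""
    u = np.zeros(Q.shape[1])
    s = 1.0 / (np.linalg.norm(Q, 2) ** 2)
    for _ in range(iters):
        grad = Q.T @ (Q @ u - b)                   # gradient of 0.5 * ||Q u - b||^2
        z = u - s * grad
        u = np.sign(z) * np.maximum(np.abs(z) - s * kappa, 0.0)   # soft-thresholding
    return u

rng = np.random.default_rng(0)
Q = rng.standard_normal((60, 120))
u_true = rng.standard_normal(120) * (rng.random(120) < 0.1)      # sparse ground truth
b = Q @ u_true + 0.01 * rng.standard_normal(60)
print("recovery error:", np.linalg.norm(prox_grad_lasso(Q, b, kappa=0.05) - u_true))
```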
**Composition of \(\mathcal{D}_{\text{num}}\) and \(\mathcal{D}_{\text{net}}\).** In the above discussion we use a fully connected layer network with spectral normalization as \(\mathcal{D}_{\text{net}}\). When compositing \(\mathcal{D}_{\text{num}}\) and \(\mathcal{D}_{\text{net}}\) for better performance, we use a non-expansive \(\mathcal{D}_{\text{net}}\) as shown in Table II and composite them to satisfy the assumptions of HODL solution strategy.
The convergence guarantee will hold when compositing \(\mathcal{D}_{\text{num}}\) and \(\mathcal{D}_{\text{net}}\), because if \(\mathcal{D}_{\text{num}}\) and \(\mathcal{D}_{\text{net}}\) satisfy Assumption 3.2 (or 3.4) with the same \(\mathbf{G}_{\mathbf{\omega}}\), then \(\mathcal{D}_{\text{num}}\circ\mathcal{D}_{\text{net}}\) also satisfies these assumptions. To be specific, the non-expansive (or contractive) property of \(\mathcal{D}_{\text{num}}\circ\mathcal{D}_{\text{net}}\) with \(\mathbf{G}_{\mathbf{\omega}}\) can be easily verified from the definition. As for the closedness of \(\mathcal{D}_{\text{num}}(\cdot,\mathbf{\omega})\circ\mathcal{D}_{\text{net}}(\cdot,\mathbf{\omega})\) for a fixed \(\mathbf{\omega}\in\Omega\), we consider a sequence \(\{(\mathbf{u}^{k},\mathbf{v}^{k})\}\subseteq\operatorname{gph}(\mathcal{D}_{\text{num}}(\cdot,\mathbf{\omega})\circ\mathcal{D}_{\text{net}}(\cdot,\mathbf{\omega}))\) satisfying \((\mathbf{u}^{k},\mathbf{v}^{k})\to(\bar{\mathbf{u}},\bar{\mathbf{v}})\). From the boundedness of \(\{\mathbf{u}^{k}\}\) and the non-expansive (or contractive) property of \(\mathcal{D}_{\text{num}}\circ\mathcal{D}_{\text{net}}\) with \(\mathbf{G}_{\mathbf{\omega}}\succ 0\), it can be obtained that \(\mathcal{D}_{\mathtt{net}}(\mathbf{u}^{k},\mathbf{\omega})\) is bounded, so there exists a subsequence \(\{(\mathbf{u}^{i},\mathbf{v}^{i})\}\subseteq\{(\mathbf{u}^{k},\mathbf{v}^{k})\}\) such that \(\mathcal{D}_{\mathtt{net}}(\mathbf{u}^{i},\mathbf{\omega})\rightarrow\bar{\mathbf{w}}\). Then it follows from the closedness of \(\mathcal{D}_{\mathtt{net}}(\cdot,\mathbf{\omega})\) and \(\mathcal{D}_{\mathtt{num}}(\cdot,\mathbf{\omega})\) that \((\bar{\mathbf{u}},\bar{\mathbf{w}})\in\mathrm{gph}\,\mathcal{D}_{\mathtt{net}}(\cdot,\mathbf{\omega})\) and \((\bar{\mathbf{w}},\bar{\mathbf{v}})\in\mathrm{gph}\,\mathcal{D}_{\mathtt{num}}(\cdot,\mathbf{\omega})\). Hence, \((\bar{\mathbf{u}},\bar{\mathbf{v}})\in\mathrm{gph}(\mathcal{D}_{\mathtt{num}}(\cdot,\mathbf{\omega})\circ\mathcal{D}_{\mathtt{net}}(\cdot,\mathbf{\omega}))\). Note that given any non-expansive \(\mathcal{D}_{\mathtt{net}}\) (which can be achieved by spectral normalization) and positive-definite matrix \(\mathbf{G}_{\mathbf{\omega}}\), by setting \(\mathcal{D}_{\mathtt{net}^{*}}=\mathbf{G}_{\mathbf{\omega}}^{-1/2}\mathcal{D}_{\mathtt{net}}\mathbf{G}_{\mathbf{\omega}}^{1/2}\), the resulting \(\mathcal{D}_{\mathtt{net}^{*}}\) satisfies Assumption 3.2 (or 3.4) with \(\mathbf{G}_{\mathbf{\omega}}\).
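A minimal PyTorch construction along these lines is sketched below: spectral normalization makes each linear layer (approximately) 1-Lipschitz and ReLU is 1-Lipschitz, so the network block is non-expansive in the Euclidean norm; composing it with a non-expansive numerical gradient step and conjugating the whole composition by \(\mathbf{G}^{\pm 1/2}\) then yields an operator that is non-expansive with respect to \(\|\cdot\|_{\mathbf{G}}\). The layer sizes, the quadratic used for \(\mathcal{D}_{\mathtt{num}}\), and the metric \(\mathbf{G}\) are illustrative placeholders, and conjugating the composed operator (rather than \(\mathcal{D}_{\mathtt{net}}\) alone) is a simplification of the construction described above.

```
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 16

# D_net: spectral normalization makes each linear map (approximately) 1-Lipschitz,
# and ReLU is 1-Lipschitz, so the whole block is non-expansive in the Euclidean norm.
d_net = nn.Sequential(
    nn.utils.spectral_norm(nn.Linear(n, n)),
    nn.ReLU(),
    nn.utils.spectral_norm(nn.Linear(n, n)),
)

# D_num: a gradient step on a toy quadratic, also non-expansive in the Euclidean norm.
A = torch.randn(n, n) / n
c = torch.randn(n)
L = torch.linalg.matrix_norm(A, ord=2) ** 2

def d_num(u):
    return u - (A.T @ (A @ u - c)) / L

# Toy metric G (stand-in for G_omega) and its symmetric square roots.
M = torch.randn(n, n)
G = M @ M.T + n * torch.eye(n)
evals, evecs = torch.linalg.eigh(G)
G_half = evecs @ torch.diag(evals.sqrt()) @ evecs.T
G_half_inv = evecs @ torch.diag(evals.rsqrt()) @ evecs.T

def d_composed(u):
    # Conjugating the Euclidean-non-expansive composition by G^{1/2} yields an
    # operator that is non-expansive w.r.t. ||.||_G (simplified construction).
    return G_half_inv @ d_num(d_net(G_half @ u))

gnorm = lambda x: torch.sqrt(x @ G @ x)
with torch.no_grad():
    u1, u2 = torch.randn(n), torch.randn(n)
    ratio = gnorm(d_composed(u1) - d_composed(u2)) / gnorm(u1 - u2)
    print("G-norm contraction ratio (should be <= ~1):", float(ratio))
```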
### _Applications for Vision Tasks_
In this subsection, we illustrate the applications of ODL in vision tasks, describe the shortcomings of existing ODL methods, and demonstrate how to apply HODL in vision tasks. In these applications, we use \(\mathcal{D}_{\mathtt{ALR}}\) and \(\mathcal{D}_{\mathtt{pq}}\) as \(\mathcal{D}_{\mathtt{num}}\) for constrained and regularized problems, respectively, consistent with the discussion for sparse coding.
**Rain Streak Removal.** An application scenario of constrained HODL requires using variable separation to aid in problem solving. As an example, in the rain streak removal task, the sparse solutions of rain line and background are solved separately by adding auxiliary variables [34]. This scenario requires the auxiliary variables and the original variables to be kept equal, and it is suitable to use HODL framework with equality constraints. Specifically, given the input rainy image \(\mathbf{I}_{r}\), the goal is to decompose it into a rain-free background \(\mathbf{u}_{b}\) and a rain streak layer \(\mathbf{u}_{r}\), i.e., \(\mathbf{I}_{r}=\mathbf{u}_{b}+\mathbf{u}_{r}\), to enhance the visibility. The problem can be reformulated as \(\min\limits_{\mathbf{u}_{b},\mathbf{u}_{r}}\frac{1}{2}\|\mathbf{u}_{b}+\mathbf{ u}_{r}-\mathbf{I}_{r}\|_{2}^{2}+\psi_{b}(\mathbf{u}_{b})+\psi_{r}(\mathbf{u}_{r})\), where \(\psi_{b}(\mathbf{u}_{b})\) and \(\psi_{r}(\mathbf{u}_{r})\) are set to be \(\psi_{b}(\mathbf{u}_{b})=\kappa_{b}\|\mathbf{u}_{b}\|_{1}\) and \(\psi_{r}(\mathbf{u}_{r})=\kappa_{r}\|\nabla\mathbf{u}_{r}\|_{1}\), representing the priors on the background layer and rain streak layer respectively. Then we introduce auxiliary variables \(\mathbf{v}_{b}\) and \(\mathbf{v}_{r}\), and transfer the problem to be \(\min\limits_{\mathbf{u}_{b},\mathbf{u}_{r},\mathbf{v}_{b},\mathbf{v}_{r}}\frac {1}{2}\|\mathbf{u}_{b}+\mathbf{u}_{r}-\mathbf{b}\|_{2}^{2}+\kappa_{b}\|\mathbf{ v}_{b}\|_{1}+\kappa_{r}\|\mathbf{v}_{r}\|_{1}\), s.t., \(\mathbf{v}_{b}=\mathbf{u}_{b},\mathbf{v}_{r}=\nabla\mathbf{u}_{r}\), where \(\nabla=[\nabla_{h};\nabla_{v}]\) denotes the gradient in horizontal and vertical directions. Existing ODL methods usually solve \(\mathbf{u}_{b},\mathbf{u}_{r}\) using \(\mathcal{D}_{\mathtt{num}}\) and solve \(\mathbf{v}_{b},\mathbf{v}_{r}\) by a pre-trained \(\mathcal{D}_{\mathtt{net}}\), usually leading to a gap between the pre-trained task and current task. HODL, in contrast, ensures that \(\mathcal{D}_{\mathtt{net}}\) learns valid rain streak information by using a regularized \(\mathcal{D}_{\mathtt{net}}\) trained on current task jointly with \(\mathcal{D}_{\mathtt{num}}\).
**Image Deconvolution**. As an application of regularized HODL, image deconvolution does not strive for perfect image restoration, but pursues a balance between restoration and deconvolution effects whenever possible [12]. Specifically, the input image can be expressed as \(\mathbf{b}=\mathbf{Q}\ast\mathbf{u}+\mathbf{n}\), where \(\mathbf{Q},\mathbf{u}\), and \(\mathbf{n}\) respectively denote the blur kernel, latent clean image, and additional noise, and \(\ast\) denotes the two-dimensional convolution operator. Here the regularization is implemented based on Maximum A Posteriori (MAP) estimation. Then the problem is transferred to \(\min_{\mathbf{u}\in U}\|\mathbf{Q}\ast\mathbf{u}-\mathbf{b}\|_{2}^{2}+g(\mathbf{ u})\), where \(g(\mathbf{u})\) is the prior function of the image. We set \(g(\mathbf{u})\) to be \(\kappa\|\mathbf{W}\mathbf{u}\|_{1}\), where \(\mathbf{W}\) is the wavelet transform matrix, considering that there is usually a sparse image after the wavelet transform. In this task, existing ODL approaches typically have two ideas. One uses \(\mathcal{D}_{\mathtt{num}}\) for task fidelity term \(\|\mathbf{Q}\ast\mathbf{u}-\mathbf{b}\|_{2}^{2}\) and pre-trained \(\mathcal{D}_{\mathtt{net}}\) for regularization term \(g(\mathbf{u})\) to guarantee clarity. Similar to the previous task, this makes the pre-trained \(\mathcal{D}_{\mathtt{net}}\) not well adapted to the current task details such as convolution kernels and object edges. The other is to train \(\mathcal{D}_{\mathtt{net}}\) in the current task, but ignore \(\mathcal{D}_{\mathtt{num}}\) during training after having built \(\mathcal{D}_{\mathtt{net}}\) from \(\mathcal{D}_{\mathtt{num}}\). HODL, on the other hand, ensures \(\mathcal{D}_{\mathtt{num}}\) to control over the iteration and enables \(\mathcal{D}_{\mathtt{net}}\) to adapt to the current task through joint training.
**Low-Light Enhancement.** As another application of regularized HODL, low-light enhancement usually employs a complex network to estimate illumination in order to achieve higher image quality. Hence, compared with linear equality constraint terms, it is more appropriate to use regularization terms as the prior. Specifically, we follow the simple Retinex rule \(\mathbf{y}=\mathbf{x}\otimes\mathbf{u}\), where \(\mathbf{y}\) is the captured underexposed observation, i.e., a given low-light image, \(\mathbf{x}\) is the desired recovery, \(\mathbf{u}\) is the illumination to be determined for enhancement, and the operator \(\otimes\) denotes element-wise multiplication. To accurately estimate \(\mathbf{u}\), inspired by the work in [35], we estimate \(\mathbf{u}\) by \(\min\limits_{\mathbf{u}}\|\mathbf{u}-\phi(\mathbf{y})\|_{2}^{2}+\psi(\mathbf{u})\), where \(\phi\) is a given estimated illumination mapping, and \(\psi\) is a regularization function estimated implicitly from a CNN. In this task, existing ODL methods usually construct the network for solving the task term \(\|\mathbf{u}-\phi(\mathbf{y})\|_{2}^{2}\) and the regularization term \(\psi(\mathbf{u})\) from an optimization problem, but along with the training procedure, the network structure will drift away from the original optimization structure. In contrast, HODL is able to retain the optimization structure in training, thus effectively improving the image fidelity.
## 5 Extensions to Other Learning Tasks
In this section we illustrate how to apply the hierarchical modeling of HODL to a wide range of learning tasks beyond ODL. Specifically, as a methodology, HODL framework with hierarchical structures is not limited to specific methods and can be used to uncover the hierarchical relationships in multi-task coupled learning tasks as well. Since learning tasks
\begin{table}
\begin{tabular}{c|c|c|c} \hline Model & \(\mathcal{D}_{\mathtt{num}}\) & \(\mathcal{D}_{\mathtt{net}}\) & Applications \\ \hline \begin{tabular}{c} Constrained \\ Problems \\ \end{tabular} & \(\mathtt{ALM}:\left\{\begin{array}{l}\mathbf{u}^{k+1}=\underset{\mathbf{u}}{\operatorname{argmin}}\ \left\{f(\mathbf{u})+\langle\lambda^{k},\mathcal{A}(\mathbf{\omega})\mathbf{u}-\mathbf{y}(\mathbf{\omega})\rangle+\frac{\beta}{2}\|\mathcal{A}(\mathbf{\omega})\mathbf{u}-\mathbf{y}(\mathbf{\omega})\|^{2}+\frac{1}{2}\|\mathbf{u}-\mathbf{u}^{k}\|_{\alpha}^{2}\right\}\\ \lambda^{k+1}=\lambda^{k}+\beta(\mathcal{A}(\mathbf{\omega})\mathbf{u}^{k+1}-\mathbf{y}(\mathbf{\omega}))\end{array}\right.\) & NE-net & \begin{tabular}{c} Sparse Coding \\ Rain Streak Removal \\ \end{tabular} \\ \hline \begin{tabular}{c} Regularized \\ Problems \\ \end{tabular} & \(\mathtt{PG}:\mathbf{u}^{k+1}=\underset{\mathbf{u}}{\operatorname{argmin}}\ \left\{f(\mathbf{u}^{k})+\langle\nabla f(\mathbf{u}^{k}),\mathbf{u}-\mathbf{u}^{k}\rangle+g(\mathbf{u},\mathbf{\omega})+\frac{1}{2s_{k}}\|\mathbf{u}-\mathbf{u}^{k}\|_{\alpha_{\omega}}^{2}\right\}\) & NE-net & \begin{tabular}{c} Sparse Coding \\ Image Deconvolution \\ Low-Light Enhancement \\ \end{tabular} \\ \hline \begin{tabular}{c} Hierarchical \\ Models \\ \end{tabular} & \(\mathtt{GD}:\mathbf{u}^{k+1}=\mathbf{u}^{k}-\nabla f(\mathbf{u}^{k})\) & N/A & \begin{tabular}{c} Adversarial Learning \\ Hyper-parameter Optimization \\ Few-shot Learning \\ \end{tabular} \\ \hline \end{tabular}
\end{table} TABLE II: Summary of operators \(\mathcal{D}_{\mathtt{num}}\) and \(\mathcal{D}_{\mathtt{net}}\) for problems of various forms and corresponding applications.
can be considered as optimization problems based on loss functions and specific optimizers, HODL, which is dedicated to modeling hierarchical relationships between optimization and learning, can also accommodate hierarchical coupling in multiple learning tasks. For example, by setting the optimization operator \(\mathcal{T}\) in Eq. (3) as the gradient descent operator for optimizing the sub-task loss function, and \(\ell\) in Eq. (3) as the loss function for another sub-task, HODL can be easily migrated to any learning application with multiple sub-tasks. Therefore, HODL can be widely applied in adversarial learning [36, 37], hyper-parameter optimization [38, 39], few-shot learning [31], and so on, as shown in Table II.
**Adversarial Learning.** As the best-known application of adversarial learning, Generative Adversarial Networks (GAN) has received much attention in recent years, which adversarially trains generators to solve real-world tasks by means of additional discriminators. In GAN, the generator depends on the discrimination from the discriminator to learn the features, while the discriminator depends on the output of generator to learn the classification. Therefore, by taking the update of discriminator as the operator \(\mathcal{T}\) in Eq. (3) and the learning process of generator as \(\ell\) in Eq. (3), HODL can effectively model the coupling relationship between the two sub-tasks of GAN.
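This correspondence can be written down directly: one (or a few) discriminator updates play the role of the operator \(\mathcal{T}\), and the generator loss plays the role of \(\ell\). The sketch below shows this nesting on a trivial one-dimensional GAN; the architectures, losses, and step counts are placeholder choices for illustration only and do not reproduce the experiments of Section 6.3.

```
import torch
import torch.nn as nn

torch.manual_seed(0)
G_net = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))   # generator (omega)
D_net = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))   # discriminator (u)
opt_d = torch.optim.Adam(D_net.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G_net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(bs=64):                      # toy real data: a 1-D Gaussian N(2, 0.5^2)
    return 2.0 + 0.5 * torch.randn(bs, 1)

for step in range(200):
    # Operator T: a few discriminator updates for the current (fixed) generator.
    for _ in range(3):
        x, z = real_batch(), torch.randn(64, 1)
        d_loss = bce(D_net(x), torch.ones(64, 1)) + \
                 bce(D_net(G_net(z).detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Upper-level loss ell: the generator is trained against the current discriminator.
    z = torch.randn(64, 1)
    g_loss = bce(D_net(G_net(z)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("mean of generated samples:", float(G_net(torch.randn(256, 1)).mean()))
```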
**Hyper-parameter Optimization.** The increasing complexity of machine learning algorithms has driven plenty of research in the field of hyper-parameter optimization. In machine learning, hyper-parameter optimization aims at choosing a set of optimal hyper-parameters for learning algorithms. Hyper-parameters are a class of parameters whose values are used to control the learning process. Therefore, by taking the learning process as the operator \(\mathcal{T}\) in Eq. (3) and the objective function to choose optimal hyper-parameters as \(\ell\) in Eq. (3), our HODL approach is equally effective when dealing with hyper-parameter optimization.
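For the data hyper-cleaning instance used later in Section 6.3, this mapping reads: \(\mathbf{\omega}\) are per-sample weights on a partially corrupted training set, \(\mathcal{T}\) is a gradient step on the weighted training loss, and \(\ell\) is the loss on a clean validation set. The following sketch unrolls this bilevel structure on a tiny linear model; the dimensions, learning rates, and corruption pattern are illustrative assumptions, not the experimental setting of Section 6.3.

```
import torch

torch.manual_seed(0)
d, n_tr, n_val = 5, 60, 40
w_true = torch.randn(d)
X_tr, X_val = torch.randn(n_tr, d), torch.randn(n_val, d)
y_tr = ((X_tr @ w_true) > 0).float()
y_val = ((X_val @ w_true) > 0).float()
y_tr[:20] = 1.0 - y_tr[:20]                       # corrupt one third of the training labels

bce = torch.nn.functional.binary_cross_entropy_with_logits

def unrolled_training(weights, K=30, lr=0.5):
    """Operator T: gradient steps on the per-sample-weighted training loss."""
    theta = torch.zeros(d, requires_grad=True)
    for _ in range(K):
        losses = bce(X_tr @ theta, y_tr, reduction="none")
        inner = (torch.sigmoid(weights) * losses).mean()
        g = torch.autograd.grad(inner, theta, create_graph=True)[0]
        theta = theta - lr * g
    return theta

weights = torch.zeros(n_tr, requires_grad=True)   # hyper-parameters omega (sample weights)
opt = torch.optim.Adam([weights], lr=0.1)
for t in range(100):
    theta_K = unrolled_training(weights)
    val_loss = bce(X_val @ theta_K, y_val)        # upper-level loss ell on clean validation data
    opt.zero_grad(); val_loss.backward(); opt.step()

# Corrupted samples should end up with smaller weights than clean ones.
print("mean weight (corrupted vs. clean):",
      float(torch.sigmoid(weights[:20]).mean()), float(torch.sigmoid(weights[20:]).mean()))
```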
**Few-shot Learning.** Few-shot learning (\(N\)-way \(M\)-shot) is a multi-task \(N\)-way classification which aims to learn the feature extraction structure with generalization ability, so that each new task can be solved only through \(M\)-training samples. This task has nested hierarchies, which respectively classify \(M\) samples and learn a feature structure that can be used for new tasks. Therefore, by taking the classification optimization process as the operator \(\mathcal{T}\) in Eq. (3) and the learning process of feature structure as function \(\ell\) in Eq. (3), our HODL approach can also be applied.
Besides, in some applications, the operator \(\mathcal{T}\) corresponds to optimizing an implicit energy function that is solved indirectly through a neural network. In this case, by applying spectral normalization to the network, we can still obtain a non-expansive mapping. We verify the necessity of the non-expansive property of neural network in Section 6.1.
## 6 Experimental Results
In this section, we first verify the theoretical properties of HODL on synthetic experiments in the sparse coding task. We subsequently apply HODL to visual experiments containing rain streak removal, image deconvolution, and low-light enhancement. Finally, we extend HODL to other applications with hierarchies, including adversarial learning, hyper-parameter optimization, and few-shot learning. We conduct our experiments mainly on a PC with Intel Core i9-10900KF CPU (3.70GHz), 128GB RAM and two NVIDIA GeForce RTX 3090 24GB GPUs. All experiments are implemented on synthetic datasets, and the Adam optimizer is adopted to update variable \(\mathbf{\omega}\).
### _Model Evaluation_
This part first verifies that HODL improves the overall performance compared with existing ODL methods. More specifically, we analyze the performance on convergence by HODL in terms of learning variables and optimization variables for the learning process and optimization process, respectively. After that, we investigate some factors that may affect the performance of HODL. To illustrate the generality of HODL, we verify the performance on constrained and regularized sparse coding problems.
For regularized problems, we use the regularized sparse coding model introduced in Section 4.2. We set \(m=500,n=250\) (\(\mathbf{Q}\) in Eq. (9) is a \(m\times n\) matrix), and the training and testing samples are 10000 and 1000, respectively. The elements of matrix \(\mathbf{Q}\) are sampled from the standard Gaussian distribution, and the column vector of matrix \(\mathbf{Q}\) is standardized to have the unit \(\ell_{2}\) norm. The sparse vector \(\mathbf{u}\) is sampled from the standard Gaussian distribution, and the distribution of non-zero elements follows the Bernoulli distribution with probability 0.1. The intensity of noise \(\mathbf{n}\) is 0.01 times the standard Gaussian distribution, and all data are generated by the model \(\mathbf{b}=\mathbf{Q}\mathbf{u}+\mathbf{n}\). To be fair for the comparisons, \(\mathbf{Q}\) and \(\mathbf{b}\) are fixed in the experiment. We use the MSE loss as the supervised loss for \(\mathbf{\omega}\). To show the performance of HODL for regularized problems, we compare the convergence of different methods in the optimization process for \(\mathbf{u}\) and learning process for \(\mathbf{\omega}\) in Figure 1. It can be seen that, HODL performs better in the convergence of optimization and learning than other methods.
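The synthetic data described above can be generated in a few lines; the sketch below follows the stated recipe (standard Gaussian \(\mathbf{Q}\) with unit-\(\ell_{2}\)-norm columns, Bernoulli(0.1) support with Gaussian amplitudes, noise at 0.01 times a standard Gaussian, and \(\mathbf{b}=\mathbf{Qu}+\mathbf{n}\)); the seed and the data handling details are illustrative rather than the exact experimental script.

```
import numpy as np

rng = np.random.default_rng(1126)                 # seed value is illustrative
m, n = 500, 250
Q = rng.standard_normal((m, n))
Q /= np.linalg.norm(Q, axis=0, keepdims=True)     # columns standardized to unit l2 norm

def sample_batch(n_samples):
    # Sparse codes: standard Gaussian amplitudes on a Bernoulli(0.1) support.
    U = rng.standard_normal((n_samples, n)) * (rng.random((n_samples, n)) < 0.1)
    noise = 0.01 * rng.standard_normal((n_samples, m))
    B = U @ Q.T + noise                           # b = Q u + n, one sample per row
    return U, B

U_train, B_train = sample_batch(10000)
U_test, B_test = sample_batch(1000)
print(Q.shape, B_train.shape, B_test.shape)
```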
Fig. 1: The convergence behavior of \(\mathbf{\omega}\) and \(\mathbf{u}\) by UNH, ENA, and HODL for regularized sparse coding. It can be seen that for regularized problems using PG, our HODL has better convergence results.
\begin{table}
\begin{tabular}{c|c c|c} \hline Methods & Layers & PSNR & SSIM \\ \hline \multirow{2}{*}{UNH} & 5 & 10.47\(\pm\)2.36 & 0.41\(\pm\)0.14 \\ \cline{2-4} & 25 & 11.31\(\pm\)2.29 & 0.41\(\pm\)0.15 \\ \hline \multirow{2}{*}{ENA} & 5 & 15.59\(\pm\)0.81 & 0.52\(\pm\)0.13 \\ \cline{2-4} & 25 & 15.64\(\pm\)0.87 & 0.52\(\pm\)0.13 \\ \hline \multirow{2}{*}{HODL} & 5 & 18.82\(\pm\)1.59 & 0.63\(\pm\)0.16 \\ \cline{2-4} & 25 & 18.98\(\pm\)2.53 & 0.65\(\pm\)0.15 \\ \hline \end{tabular}
\end{table} TABLE III: PSNR and SSIM results for constrained sparse coding on Set14. Best and second best results are marked in red and blue respectively.
For constrained problems, we follow the setting in [12] and use the classic Set14 dataset as experimental data, in which salt-and-pepper noise is added to \(10\%\) of the pixels of each image. Each image is divided into non-overlapping patches of size \(16\times 16\). We use the patch dictionary method to learn a \(256\times 512\) dictionary \(\bar{\mathbf{Q}}\). We set batch size \(=128\), training set size \(=10000\), and random seed \(=1126\). The testing set size depends on the size of each image. Because we conduct unsupervised single-image training, we do not use the MSE loss between the clean image and the generated image as the loss for \(\mathbf{\omega}\), but instead use the same unsupervised loss as in [13].
To show the performance of HODL for constrained sparse coding, we present the PSNR and SSIM results in Table III. It can be seen that the performance of our HODL on both PSNR and SSIM is superior to UNH and ENA. This is because UNH can only train a few learning variables (such as the step size) to maintain convergence, and ENA's neglect of the original optimization structure during training leads to a gap from the real fixed-point model. In contrast, thanks to the hybrid strategy incorporating the optimization and learning processes, HODL allows more learning variables and thereby improves the performance. Considering the consistent performance of constrained HODL and regularized HODL, for simplicity, we base our subsequent analysis on the constrained HODL.
To illustrate in detail how HODL improves the performance of ODL, we next analyze the convergence of the learning variables \(\mathbf{\omega}\) and the optimization variables \(\mathbf{u}\), respectively. In Figure 2, we first analyze the convergence behavior of the learning variables \(\mathbf{\omega}\) in the objective function of the learning process \(\varphi_{K}(\mathbf{\omega})=\ell(\mathbf{u}^{K}(\mathbf{\omega}),\mathbf{\omega})\) defined in Eq. (7) with a fixed \(K\). ENA and UNH perform poorly in the convergence of the learning objective function, while HODL effectively obtains better convergence. Next, we verify the convergence of the optimization variables \(\mathbf{u}\). In Figure 3, it can be seen that HODL performs better than the other methods in convergence stability and convergence speed. UNH converges fast at first, but it cannot further improve the convergence performance. ENA converges slowly because of its neglect of the optimization structure during training.
In practical applications, limited by the high computational burden on training time, one tends to train in a smaller number of optimization iterations and subsequently expects to obtain higher performance in testing. This requires ODL methods to be able to learn a stable non-expansive mapping. Therefore, we also observe the convergence curves of the optimization variables \(\mathbf{u}\) in this case to further verify the stability and non-expansive property of the trained optimization iterative module. In Figure 4, we show the convergence curve when the number of optimization iterations of \(\mathbf{u}\) in testing is more than those in training. Note that since for ENA the number of iterations of \(\mathbf{u}\) is fixed in training, it cannot be compared in this case. Still, it can be seen that HODL is superior to UNH, and the mapping learned by our HODL can indeed continue to converge in the testing iterations beyond training steps, implying that we have effectively learned a non-expansive mapping with convergence.
In addition, we investigate some factors that may affect the performance of HODL, including the necessity of non-expansive property, and the influence of parameter \(\mu\) on convergence. In Figure 5, we verify the effect of non-expansive property of \(\mathcal{D}\) on the convergence. It can be seen that the non-expansive property reduces the gradient of the learning objective \(\varphi\) by an order of magnitude, and also provides a better convergence of the optimization iteration. These verify the importance of the non-expansive property on the convergence. In Figure 6, we show the impact of different values of parameter \(\mu\) in Eq. 4 (aHODL) in
Fig. 4: Convergence curves of \(\|\mathbf{u}^{k+1}-\mathbf{u}^{k}\|/\|\mathbf{u}^{k}\|\) with respect to \(k\), the number of iterations of \(\mathbf{u}\) in testing, after (a) \(K=15\) and (b) \(K=25\) as iterations of \(\mathbf{u}\) in training. The green background indicates when the training iteration of \(\mathbf{u}\) is less than testing iteration, while the pink background represents the testing iteration is beyond training iteration. It can be seen that our method can converge better after different iterations in training.
Fig. 3: The convergence curves of \(\|\mathbf{u}^{k+1}-\mathbf{u}^{k}\|/\|\mathbf{u}^{k}\|\) with respect to \(\mathbf{u}\) after (a) \(K=15\) and (b) \(K=25\) as iterations of \(\mathbf{u}\) in training, while \(k\) is the number of iterations of \(\mathbf{u}\) for optimization in testing. It can be seen that our method can successfully learn the non-expansive mapping after different training iterations.
Fig. 2: The convergence curves of \(\varphi\) and \(\partial\varphi/\partial\mathbf{\omega}\) with respect to \(\mathbf{\omega}\) for constrained sparse coding. UNH does not add learnable knowledge to optimization and ENA ignores the optimization structure during training. It can be seen that our method achieves the optimal convergence of loss function with a stationary gradient curve.
Algorithm 1 on the convergence of the optimization process. It can be seen that, for a smaller number of iterations of \(\mathbf{u}\), HODL with an appropriate \(\mu\) achieves better performance when the network is relatively more fully trained. For a larger number of iterations of \(\mathbf{u}\), where the training is not fully adequate, the best performance is obtained for smaller \(\mu\), even for the smallest value 0 (i.e., sHODL in Eq. (5)). Therefore, considering the computational burden in practical applications, we focus on sHODL from now on, including in the subsequent experiments on vision tasks and extended applications.
Finally, it should be noted that for constrained problems, existing methods typically use ALM or ADMM, while HODL uses ALM as \(\mathcal{D}_{\mathtt{num}}\). For the fairness of comparisons, we examine the performance of UNH and ENA using ALM and ADMM, and HODL using ALM. As can be seen in Table IV, the performance using ALM is weaker than that using ADMM for both the existing UNH and ENA methods, while our HODL, using only ALM, is able to outperform the other methods, further demonstrating the effectiveness of HODL. Actually, the aforementioned comparison experiments of UNH and ENA are conducted using ADMM as the base method.
### _Vision Tasks_
This subsection provides experimental results in vision tasks including rain streak removal, image deconvolution, and low-light enhancement.
**Rain Streak Removal.** In the rain streak removal task, we use datasets Rain100L and Rain100H [40]. As a constrained problem, \(\mathcal{D}_{\mathtt{num}}\) is set to be \(\mathcal{D}_{\mathtt{ALM}}\). For the network architecture \(\mathcal{D}_{\mathtt{net}}\), we adopt a 2-layer convolutional network with \(\mathbf{u}_{r}\) and \(\mathbf{b}\) as the network input to estimate \(\mathbf{u}_{b}\), and a 3-layer convolutional network with \(\mathbf{u}_{b},\mathbf{u}_{r}\), and \(\mathbf{b}\) as the input to estimate \(\mathbf{u}_{r}\). In the network to estimate \(\mathbf{u}_{r}\), some prior information of \(\mathbf{u}_{r}\) is employed as input just like in [41]. In practice, we decide proper \(\Omega\) such that for all \(\mathbf{\omega}\in\Omega\) it holds that \(\mathbf{G}_{\mathbf{\omega}}\succ 0\), and \(\mathbf{G}_{\mathbf{\omega}}\) can be inverted fast by Fourier transform. Here we use MSE as the loss function for \(\mathbf{\omega}\).
We report the quantitative comparison of HODL with a series of state-of-the-art methods in Table V. It can be seen that on both benchmark datasets HODL achieves higher PSNR and SSIM. Note that HODL has competitive performance compared with RCDNet, and it possesses superior theoretical properties as well. In Figure 7, we visually present the performance of the rain streak removal task on two images from Rain100L [40], compared with DDN [42], JORDER [40], PReNet [43] and RCDNet [41]. From both rows, one can observe that our HODL preserves the original contour lines of the wall and roof in the background and performs the best on PSNR and SSIM, while the other methods produce some unsatisfactory distortion, blur some textures, or even leave noticeable rain streaks.
**Image Deconvolution**. In the image deconvolution task, similar to [16], we use a large dataset containing 400 images from Berkeley Segmentation Dataset, 4744 images from Waterloo Exploration Database, 900 images from DIV2K Dataset, and 2750 images from Flick2K Dataset. As for the network architectures \(\mathcal{D}_{\mathtt{net}}\), we use DRUNet containing four scales, each of which has an identity skip connection between \(2\times 2\) strided convolution downscaling and \(2\times 2\) transposed convolution upscaling operators. From the first scale to the fourth scale, the numbers of channels in each layer are respectively 64, 128, 256, and 512. We employ four successive
\begin{table}
\begin{tabular}{c|c|c|c} \hline Methods & UNH & ENA & HODL \\ \hline ADMM & 11.27\(\pm\)2.71 & **15.58\(\pm\)0.89** & \(\mathrm{N/A}\) \\ \hline ALM & 7.32\(\pm\)4.65 & 13.78\(\pm\)5.23 & **18.64\(\pm\)0.74** \\ \hline \end{tabular}
\end{table} TABLE IV: PSNR results of constrained sparse coding using ALM and ADMM. ADMM performs better than ALM for UNH and ENA, but our approach achieves the best performance only using ALM. Best and second best results are marked in red and blue respectively.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Datasets & Rain 100L & Rain 100H \\ Metrics & PSNR & SSIM & PSNR & SSIM \\ \hline DSC & 27.34 & 0.849 & 13.77 & 0.319 \\ GMM & 29.05 & 0.871 & 15.23 & 0.449 \\ JCAS & 28.54 & 0.852 & 14.62 & 0.451 \\ Clear & 30.24 & 0.934 & 15.33 & 0.742 \\ DDN & 32.38 & 0.925 & 22.85 & 0.725 \\ RESCAN & 38.52 & 0.981 & 29.62 & 0.872 \\ PReNet & 37.45 & 0.979 & 30.11 & **0.905** \\ SPANet & 35.33 & 0.969 & 25.11 & 0.833 \\ JORDER\_E & 38.59 & **0.983** & 30.50 & 0.896 \\ SIRR & 32.37 & 0.925 & 22.47 & 0.716 \\ MPRNet & 36.40 & 0.965 & 30.41 & 0.890 \\ RCDNet & **40.00** & **0.986** & **31.28** & **0.903** \\ HODL & **40.07** & **0.986** & **30.96** & **0.905** \\ \hline \end{tabular}
\end{table} TABLE V: Averaged PSNR and SSIM results for the single image rain removal task on two widely used synthesized datasets, Rain100L and Rain100H [40]. Best and second best results are marked in red and blue respectively.
Fig. 6: Convergence curves of \(\|\mathbf{u}^{k+1}-\mathbf{u}^{k}\|/\|\mathbf{u}^{k}\|\) with different \(\mu\) in Eq. (4) (aHODL) in Algorithm 1. Note that HODL with \(\mu=0\) is equivalent to sHODL. In the case of complex networks, sHODL is more likely to achieve satisfying performance early in the iteration.
residual blocks in the downscaling and upscaling of each scale. For the numerical update operator \(\mathcal{D}_{\text{num}}\), by introducing the auxiliary variable \(\mathbf{z}=\mathbf{W}\mathbf{u}\), we transform the objective function to be \(\|\mathbf{Q}\mathbf{W}^{-1}\mathbf{z}-\mathbf{b}\|_{2}^{2}\) with a regularization term \(\|\mathbf{z}\|_{1}\). Here we use MSE as the loss function for \(\mathbf{\omega}\).
For the practical application in image deconvolution, we verify the performance of HODL on three classical testing images in Table VI, and compare our method with representative methods. For traditional methods, we compare with numerically designed method EPLL [44] and learning-based method FDN [45]. For ODL methods, we compare with IRCNN, IRCNN+, and DPIR [46, 16]. By applying a meta-optimization perspective on handcrafted network \(\mathcal{D}_{\text{net}}\) and numerical schemes \(\mathcal{D}_{\text{Pog}}\) as a regularized problem, HODL performs best in the last five columns and achieves top two in the first, in three testing images of different noise levels.
Fig. 8: Visual results of the image deconvolution task on two samples, compared with FDN, IRCNN, IRCNN+, and DPIR. The hierarchical modeling of HODL improves the clarity of details and maintains the high level of color restoration. Two metrics (PSNR / SSIM) are listed below each image to quantify the quality of generated images. Best and second best results are marked in red and blue respectively.
Fig. 7: Visual results of the rain streak removal task on two samples from Rain100L, compared with DDN, JORDER, PReNet and RCDNet. The hierarchical structure of HODL reduces the distortion and blur introduced by removing rain lines. Two metrics (PSNR / SSIM) are listed below each image to quantify the quality of generated images. Best and second best results are marked in red and blue respectively.
Note that here we choose DRUNet in DPIR [16] as \(\mathcal{D}_{\mathtt{net}}\), and the overall preferable results of HODL than directly using DPIR demonstrate the effect of compositing of \(\mathcal{D}_{\mathtt{num}}\) and \(\mathcal{D}_{\mathtt{net}}\). In addition, we show the visual results in Figure 8. It can be seen that our method is superior to other methods in color restoration, detail retention and quantitative metrics.
**Low-Light Enhancement.** To further verify the effectiveness of our method on low-level vision tasks, we conduct experiments on the low-light enhancement task. Specifically, we perform experiments on the two prominent MIT and LOL datasets, and adopt PSNR, SSIM and LPIPS as our evaluation metrics. For a complete evaluation, we compare HODL with MBLLEN [47], GLADNet [48] as UNH, and RetinexNet [49], KinD [50], ZeroDCE [51], FIDE [52], EnGAN [53], and DRBN [54] as ENA. In the first three rows of Table VII, we evaluate HODL quantitatively on the MIT Adobe 5K dataset as a simple real-world scenario. In the last three rows of Table VII, we also perform a quantitative assessment on the LOL dataset, whose noticeable noise increases the difficulty of enhancement, as a demonstration on extremely challenging real-world scenarios. It can be seen that HODL obtains the best results on both datasets.
### _Extended Applications_
The followings are experimental results on other learning tasks beyond ODL introduced in Section 5 as the extended applications of HODL.
**Adversarial Learning.** In the adversarial learning task, we visualize the two-dimensional mixed Gaussian distribution data to verify the effectiveness of our method. Performance of HODL is investigated compared to current mainstream and well-known GAN architectures which mitigate mode collapse and maintain stable training, including vanilla GAN (VGAN) [55], WGAN [56], ProxGAN [57], LCGAN [58]. Figure 9 visually shows a comparison of results by various methods regarding the number of samples generated. One can find that mainstream GAN methods only capture a part of distributions, getting into severe mode collapse dilemma and failing to achieve satisfactory performance, while our HODL generates all modes and is significantly better than other methods.
**Hyper-parameter Optimization**. In this experiment, we consider a widely used hyper-parameter optimization example, i.e., data hyper-cleaning, to evaluate the HODL. Assuming that some labels in our dataset are contaminated, the purpose of data hyper-cleaning is to reduce the impact of incorrect samples by adding hyper-parameters. We follow the settings in [32] and conduct experiments on MNIST and FashionMNIST datasets. To demonstrate the advantage of our method, we show the accuracy and F1 scores in Table VIII, compared with different methods containing Reverse Hyper-Gradient (RHG) [59], Truncated RHG (TRHG) [60], Conjugate Gradient (CG) [61], and Neumann Series (NS) [62]. Figure 10 also shows the accuracy and validation loss using different methods. It can be seen that our method achieves higher accuracy, higher F1 score, and lower loss.
**Few-shot Learning**. Next, we test the application in few-shot learning under high dimensions on Omniglot and MiniImageNet datasets to verify the computational efficiency of our method. In this experiment, we follow the settings in [32]. It can be seen in Table IX that our HODL gives the best performance in different tasks.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline Noise level & \multicolumn{4}{c|}{\(\sigma=1\%\)} & \multicolumn{4}{c}{\(\sigma=3\%\)} \\ Image & Butterfly & Leaves & Starfish & Butterfly & Leaves & Starfish \\ \hline EPLL & 20.55 & 19.22 & 24.84 & 18.64 & 17.54 & 22.47 \\ FDN & 27.40 & 26.51 & 27.48 & 24.27 & 23.53 & 24.71 \\ IRCNN & 32.74 & 33.22 & 33.53 & 28.53 & 28.45 & 28.42 \\ IRCNN+ & 32.48 & 33.59 & 32.18 & 28.40 & 28.14 & 28.20 \\ DPIR & **34.18** & **35.12** & **33.91** & **29.45** & **30.27** & **29.46** \\ HODL & **33.67** & **35.39** & **33.98** & **29.46** & **30.69** & **29.64** \\ \hline \end{tabular}
\end{table} TABLE VI: PSNR (dB) results compared with state-of-the-art methods for the image deconvolution task with noise levels \(\sigma=1\%\) and \(3\%\). Best and second best results are marked in red and blue respectively.
Fig. 10: Comparison of the accuracy and validation loss \(\ell\) for data hyper-cleaning as an example of hyper-parameter optimization with other unrolling algorithms.
Fig. 9: Comparison among four mainstream GAN methods (i.e., vanilla GAN (VGAN), WGAN, ProxGAN, and LCGAN) and HODL on the synthetic 2D ring mixed of Gaussian distribution data. The gap of generated samples (generated/targeted number of classes) is listed on the top. The shading of dots represents the density of final distribution, with darker dots representing greater density.
## 7 Conclusions
This paper first proposes the HODL framework to nest the optimization and learning processes in ODL problems, and then presents solution strategies for HODL to jointly solve the optimization variables and learning variables. We prove the joint convergence of optimization variables and learning variables from the perspective of both the approximation quality, and the stationary analysis. Experiments demonstrate our efficiency on sparse coding, real-world applications in image processing (e.g., rain streak removal, image deconvolution, and low-light enhancement), and other learning tasks (e.g., adversarial learning, hyper-parameter optimization and few-shot learning).
## Acknowledgments
This work is partially supported by the National Natural Science Foundation of China (Nos. U22B2052, 61922019, 12222106), the National Key R&D Program of China (2020YFB1313503, 2022YFA1004101), Shenzhen Science and Technology Program (No. RCYX2020071414700072), the Guangdong Basic and Applied Basic Research Foundation (No. 2022B1515020082), and Pacific Institute for the Mathematical Sciences (PIMS).
|
2305.10893
|
Student-friendly Knowledge Distillation
|
In knowledge distillation, the knowledge from the teacher model is often too
complex for the student model to thoroughly process. However, good teachers in
real life always simplify complex material before teaching it to students.
Inspired by this fact, we propose student-friendly knowledge distillation (SKD)
to simplify teacher output into new knowledge representations, which makes the
learning of the student model easier and more effective. SKD contains a
softening processing and a learning simplifier. First, the softening processing
uses the temperature hyperparameter to soften the output logits of the teacher
model, which simplifies the output to some extent and makes it easier for the
learning simplifier to process. The learning simplifier utilizes the attention
mechanism to further simplify the knowledge of the teacher model and is jointly
trained with the student model using the distillation loss, which means that
the process of simplification is correlated with the training objective of the
student model and ensures that the simplified new teacher knowledge
representation is more suitable for the specific student model. Furthermore,
since SKD does not change the form of the distillation loss, it can be easily
combined with other distillation methods that are based on the logits or
features of intermediate layers to enhance its effectiveness. Therefore, SKD
has wide applicability. The experimental results on the CIFAR-100 and ImageNet
datasets show that our method achieves state-of-the-art performance while
maintaining high training efficiency.
|
Mengyang Yuan, Bo Lang, Fengnan Quan
|
2023-05-18T11:44:30Z
|
http://arxiv.org/abs/2305.10893v1
|
# Student-friendly Knowledge Distillation
###### Abstract
In knowledge distillation, the knowledge from the teacher model is often too complex for the student model to thoroughly process. However, good teachers in real life always simplify complex material before teaching it to students. Inspired by this fact, we propose student-friendly knowledge distillation (SKD) to simplify teacher output into new knowledge representations, which makes the learning of the student model easier and more effective. SKD contains a softening processing and a learning simplifier. First, the softening processing uses the temperature hyperparameter to soften the output logits of the teacher model, which simplifies the output to some extent and makes it easier for the learning simplifier to process. The learning simplifier utilizes the attention mechanism to further simplify the knowledge of the teacher model and is jointly trained with the student model using the distillation loss, which means that the process of simplification is correlated with the training objective of the student model and ensures that the simplified new teacher knowledge representation is more suitable for the specific student model. Furthermore, since SKD does not change the form of the distillation loss, it can be easily combined with other distillation methods that are based on the logits or features of intermediate layers to enhance its effectiveness. Therefore, SKD has wide applicability. The experimental results on the CIFAR-100 and ImageNet datasets show that our method achieves state-of-the-art performance while maintaining high training efficiency.
## 1 Introduction
In recent years, deep neural networks have achieved great success in many computer vision tasks, such as image classification [12; 16; 17; 28; 37], object recognition [11; 33; 25], and semantic segmentation [5; 27; 54]. As the performance of neural network models improves, their computational and storage costs also increase, making model compression an important research problem [3]. Knowledge distillation is an important model compression method [15].
Knowledge distillation enables a smaller model with fewer parameters (the student model) to learn from a larger model (the teacher model) to achieve better performance. The vanilla knowledge distillation (KD) method uses the Kullback-Leibler (KL) divergence [23] to mimic the teacher model's logits using the student model [15], as shown in Figure 1(a). The student model learns from the logits of the teacher model to improve its performance. As researchers increasingly studied knowledge distillation, they enabled the student model to learn from the features of intermediate layers of the teacher model [13; 40; 6; 1; 30]. However, as the outputs of intermediate layers differ across deep learning models, the design complexity and computational costs of feature-based methods increase. Recently, some researchers have begun investigating distillation methods based on the logits of the teacher model [53; 18; 46]. The distillation loss is modified to enable the student model to
effectively utilize knowledge of the teacher model, achieving distillation results comparable to or even superior to those of feature-based methods.
In logit-based methods, the logits of both the teacher and student models are softened using temperature, which leads to a softer label distribution, reduces the gap between the target class and other classes, and allows the distillation loss to focus more on other classes, thereby improving the training effect of the student model [15]. However, even with temperature, student models still cannot closely imitate a teacher model's logits due to insufficient capacity and limited data [39].
In real life, good teachers often simplify new knowledge according to students' abilities to help them better understand it. Based on the educational experience of human teachers, we propose a new method called student-friendly knowledge distillation (SKD), outlined in Figure 1(b), to optimize the output knowledge of the teacher model, making it easier for students to learn.
SKD utilizes the learning simplifier to transform the output distribution of the teacher model into a new distribution that serves as the learning target for the student network. During the training process, the learning simplifier and the student model are jointly optimized using the distillation loss for gradient backpropagation. This allows the new logit distribution to better fit the characteristics of the student model, making it easier for the student model to imitate the teacher model. We design the learning simplifier using self-attention to better construct a simplified logit distribution for the student. The self-attention mechanism enables SKD to adjust the logit distribution of the teacher model along with the output of the student model by using the similarity relationships among the data in the output of the teacher model. This makes it easier for the student network to imitate the simplified logit distribution and learn the knowledge of the teacher network. To improve the learning effect of the learning simplifier on the relationships between data, we incorporate softening processing at the beginning of SKD. We use the temperature-scaled \(\mathrm{LogSoftmax}\) function to soften the output of the teacher model, similar to temperature softening in the distillation loss. Larger models tend to produce sharper output distributions than smaller models [24], which means that the softened label distribution is more suitable for smaller student models to learn. We conducted extensive experiments, and the results showed that our SKD achieved the best performance in many combinations of knowledge distillation models.
Furthermore, most existing knowledge distillation methods aim to improve either the distillation of the intermediate features or the distillation loss function of the logits. However, our SKD changes the logits of the teacher model without changing the distillation loss function. Therefore, we can use SKD in conjunction with existing knowledge distillation methods. Experimental results show that the combined method significantly improves upon the original methods, resulting in better-performing student models.
Figure 1: Illustration of vanilla knowledge distillation (KD) and our student-friendly knowledge distillation (SKD). (a) KD uses the outputs of the teacher and student models to calculate the distillation loss. (b) Our SKD transforms the outputs of the teacher model to obtain the new SKD logits, which are then compared with the logits of the student model to calculate the distillation loss. The gradient obtained through backpropagation optimizes the student model and simultaneously optimizes the learning simplifier. Our SKD achieves better results than KD using the same loss function.
To summarize, the main contributions of our paper are as follows:
* We re-evaluate the knowledge representation of the teacher model in logit-based knowledge distillation methods. Inspired by real-life teaching scenarios, to enhance the learning performance of the student model, we propose a new direction for improving knowledge distillation. The core of this new direction is to consider the poor capacity of the student model and decrease the knowledge difficulty of the teacher model accordingly.
* Based on the above principle, we propose a knowledge distillation method called SKD. SKD uses temperature softening to enable the teacher model to utilize a higher distillation temperature than the student model, resulting in smoother teacher output. We also create a learning simplifier to further simplify the teacher output based on the similarity relationships among the data. By jointly optimizing the learning simplifier and the student model using distillation loss, the teacher's output can better adapt to the student model, reducing the student's learning difficulty. The experimental results demonstrate that SKD achieves state-of-the-art performance while maintaining high training efficiency.
* Our proposed SKD uses the same distillation loss as KD for optimization, which makes it easy to blend with other knowledge distillation models, including feature-based or logit-based distillation methods. The experimental results show that SKD can improve the performance of student networks in present methods and achieve the current best results.
## 2 Preliminaries
### Vanilla knowledge distillation
The process of vanilla knowledge distillation (KD) [15] is shown in Figure 1(a). For the training data \(x\) with the label \(y\) in a dataset with \(K\) classes, the outputs of the teacher model and the student model are \(g^{t}\in\mathbb{R}^{K}\) and \(g^{s}\in\mathbb{R}^{K}\), respectively. Using the softmax function yields the student's prediction \(p^{s}=\operatorname{softmax}\left(g^{s}\right)\in\mathbb{R}^{K}\), and we can then compute the cross-entropy loss between the student's prediction and the ground-truth label:
\[\mathcal{L}_{\mathrm{CE}}=-\sum_{i=1}^{K}y_{i}\log(p_{i}^{s}). \tag{1}\]
Using the softmax function with temperature, we obtain the softened teacher prediction \(\widetilde{p}^{t}=\operatorname{softmax}\left(g^{t}/T\right)\in\mathbb{R}^{K}\) and the softened student prediction \(\widetilde{p}^{s}=\operatorname{softmax}\left(g^{s}/T\right)\in\mathbb{R}^{K}\). Through the temperature, the predictions become smoother over each class, so the distillation loss is better able to reflect the differences between the other classes in addition to the correct one. Then, we can compute the distillation loss between the softened predictions with the KL divergence:
\[\mathcal{L}_{\mathrm{KL}}=\mathrm{KL}(\widetilde{p}^{t}||\widetilde{p}^{s})= \sum_{i=1}^{K}\widetilde{p}_{i}^{t}\log(\frac{\widetilde{p}_{i}^{t}}{ \widetilde{p}_{i}^{s}}). \tag{2}\]
The total loss of KD is:
\[\mathcal{L}_{total}=\alpha\mathcal{L}_{\mathrm{CE}}+\beta\mathcal{L}_{\mathrm{ KL}}, \tag{3}\]
where \(\alpha\) and \(\beta\) are coefficients used to balance the two parts. Knowledge distillation optimizes the student model by optimizing this loss function.
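For concreteness, a minimal PyTorch-style sketch of the objective in Eqs. (1)-(3) is given below. The function name is illustrative, and the \(T^{2}\) rescaling of the KL term is a common convention assumed here rather than a detail stated in the equations above.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=1.0, beta=1.0):
    """Vanilla KD objective: cross-entropy plus a temperature-softened KL term."""
    # Cross-entropy between the student prediction and the ground-truth label, Eq. (1).
    ce = F.cross_entropy(student_logits, labels)
    # Temperature-softened teacher and student predictions.
    p_t = F.softmax(teacher_logits.detach() / T, dim=1)   # softened teacher prediction
    log_p_s = F.log_softmax(student_logits / T, dim=1)    # log of softened student prediction
    # KL divergence between the softened predictions, Eq. (2); T**2 keeps gradient
    # magnitudes comparable across temperatures (an assumed convention).
    kl = F.kl_div(log_p_s, p_t, reduction="batchmean") * (T ** 2)
    return alpha * ce + beta * kl                          # total loss, Eq. (3)
```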
### Attention
The attention used in our SKD is the standard self-attention in the Transformer [44]. First, the input data are encoded through a linear projection \(\mathrm{Linear}_{1}\) to obtain the corresponding query \(Q\), key \(K\), and value \(V\) of dimension \(D\). Then, based on the query and key, the attention matrix \(A\) relating them is calculated:
\[A=\operatorname{softmax}(QK^{\top}/\sqrt{D}). \tag{4}\]
Then, the weighted sum of values is calculated based on the attention matrix \(A\), and encoded through another linear projection \(\mathrm{Linear}_{2}\) to obtain the output of the self-attention:
\[Output=\mathrm{Linear}_{2}(AV). \tag{5}\]
Through self-attention, new representations of the data based on the relationships among these data are obtained.
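As a rough sketch of this single-head self-attention, treating a batch of vectors as the set over which relations are computed (the class and layer names below are illustrative, not taken from any released code):

```python
import torch
import torch.nn as nn

class BatchSelfAttention(nn.Module):
    """Single-head self-attention over a batch of vectors, following Eqs. (4)-(5)."""
    def __init__(self, in_dim, attn_dim):
        super().__init__()
        self.linear1 = nn.Linear(in_dim, 3 * attn_dim)  # Linear_1: joint Q, K, V projection
        self.linear2 = nn.Linear(attn_dim, in_dim)      # Linear_2: output projection
        self.scale = attn_dim ** 0.5

    def forward(self, x):                               # x: (batch, in_dim)
        q, k, v = self.linear1(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)  # Eq. (4)
        return self.linear2(attn @ v)                   # Eq. (5)
```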
## 3 Methods
### Motivation
In knowledge distillation, the capacity of the teacher model is greater than that of the student model, making it difficult for the student model to accurately simulate the output distribution of the teacher model. In real life, good teachers simplify complex knowledge before teaching it to their students. Inspired by this fact, we propose the student-friendly knowledge distillation (SKD) method, as shown in Figure 1(b). As it is difficult for the student model to generate complex and sharp outputs such as those of the teacher model [24], our SKD first softens the output of the teacher model via softening processing, making it easier for the student model to learn and more advantageous for the learning simplifier to handle.
The learning simplifier in SKD is used to modify the teacher output to reduce the difficulty of the student model to mimic the teacher model's output. The learning simplifier and the student model jointly optimize the distillation loss. The learning simplifier uses the real-time logits of the student model as its optimization objective. Therefore, the learning simplifier can transform the output of the teacher model, which is difficult for the student model to mimic, into a distribution that is more similar to the student model's output, thus reducing the difficulty of the student model to mimic the teacher model's output. By using SKD to make minor changes to the output of the teacher model, the student model can better mimic the teacher model, thereby improving the effectiveness of knowledge distillation.
### Overall
Our SKD performs softening processing and the learning simplifier on the output of the teacher model to obtain a new teacher distribution \(g^{\rm{SKD}}\). The calculation process is shown in Figure 1(b).
The output of the teacher model \(g^{t}\in\mathbb{R}^{K}\) is first softened to obtain the softened logit distribution:
\[g^{t}_{soft}={\rm Softening}(g^{t}). \tag{6}\]
Then, through the learning simplifier, the change in logits is obtained:
\[\Delta_{Simplifier}={\rm Simplifier}(g^{t}_{soft}). \tag{7}\]
\(\Delta_{Simplifier}\) is added to the softened teacher logits \(g^{t}_{soft}\), and the output distribution of SKD is obtained:
\[g^{\rm{SKD}}=\Delta_{Simplifier}+g^{t}_{soft}. \tag{8}\]
Using the softmax function with temperature, we obtain the softened prediction of SKD \(\widetilde{p}^{\rm{SKD}}={\rm softmax}\left(g^{\rm{SKD}}/T\right)\in\mathbb{R} ^{K}\). Finally, similar to the original knowledge distillation method, the distillation loss \(\mathcal{L}_{\rm{SKD}}\) can be calculated using the softmax function with temperature and the KL divergence, as in Eq. 2:
\[\mathcal{L}_{\rm{SKD}}={\rm KL}(\widetilde{p}^{\rm{SKD}}||\widetilde{p}^{s}). \tag{9}\]
Therefore, the total loss of SKD is:
\[\mathcal{L}_{total}=\mathcal{L}_{\rm{CE}}+\alpha\mathcal{L}_{\rm{SKD}}, \tag{10}\]
where \(\mathcal{L}_{\rm{CE}}\) is the cross-entropy loss between the student model's predictions and the true labels, as defined in Eq. (1). To facilitate parameter tuning, the coefficient of the cross-entropy loss is fixed at 1.0. \(\alpha\) is the weight coefficient of the SKD loss, which adjusts the relative contributions of the distillation loss and the cross-entropy loss. The effect of the distillation loss is to make the student model mimic the output of the teacher model. The larger the value of \(\alpha\), the more the student model needs to pursue the output of the teacher model during training. However, if the gap between the teacher model and the student model is large, the student model will find it difficult to mimic the output of the teacher model. For models of the same type, the smaller the difference in performance between the teacher and student models, the larger the optimal value of \(\alpha\), which enables the student model to mimic the output of the teacher model better. However, the optimal value of \(\alpha\) still needs to be determined through experiments.
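Taken together, Eqs. (6)-(10) correspond to a training step of roughly the following form. This is only a sketch under our reading of the method: the simplifier is assumed to be the self-attention module described in the next subsection, the same temperature is reused for softening and for the distillation loss, and all names are chosen for illustration.

```python
import torch.nn.functional as F

def skd_loss(student_logits, teacher_logits, labels, simplifier, T=4.0, alpha=1.0):
    """SKD objective: the softened teacher logits are reshaped by the learning simplifier."""
    # Softening processing, Eq. (6): temperature-scaled LogSoftmax of the teacher output.
    g_soft = F.log_softmax(teacher_logits.detach() / T, dim=1)
    # Learning simplifier, Eq. (7): attention over the batch yields a change of logits.
    delta = simplifier(g_soft)
    # New SKD logits, Eq. (8).
    g_skd = g_soft + delta
    # Distillation loss, Eq. (9): KL between the softened SKD and student predictions.
    p_skd = F.softmax(g_skd / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    loss_skd = F.kl_div(log_p_s, p_skd, reduction="batchmean")
    # Total loss, Eq. (10); backpropagation updates the student and the simplifier jointly.
    return F.cross_entropy(student_logits, labels) + alpha * loss_skd
```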
### Learning simplifier
For the design of the learning simplifier, we considered using either fully connected (FC) layers or self-attention, both of which can construct output distributions that are closer to the student model based on the output of the teacher model. The student model mainly learns the relationships between categories in logit-based knowledge distillation methods. Unlike FC layers, which only consider the input of a single data point, self-attention can learn the relationships within each batch of data and weight their values according to these relationships to obtain the final output. We conducted experiments comparing the different implementation methods on the CIFAR-100 dataset, where the teacher model is ResNet32\(\times\)4 and the student model is ResNet8\(\times\)4, and the results are shown in Table 1. Based on the results, we choose self-attention to implement our learning simplifier.
### Softening processing
The self-attention used in our learning simplifier focuses on the relationships among input data. Teacher models often have high confidence, resulting in the output distribution being dominated by the target class, with little difference in output distribution between similar classes. This makes it difficult for self-attention to learn the relationships among different data of the same class. In knowledge distillation, the logits are softened by temperature, which smooths the label distribution so that the logits can reflect the relationships among classes other than the target class [15]. Therefore, inputting the distribution softened by temperature into self-attention can improve the learning simplifier's ability to learn the relationships among different data, thereby enhancing the effectiveness of SKD.
On the other hand, using temperature to soften the logits of the teacher model is equivalent to using a higher temperature for the teacher model than for the student model, rather than using the same distillation temperature for both as in vanilla knowledge distillation; consequently, the output distribution of the teacher model is softer. Because models with more parameters tend to have sharper output distributions after training than models with fewer parameters [24], smaller student models more easily imitate softened distributions.
We conducted experiments to verify the effectiveness of using a temperature-scaled \(\mathrm{LogSoftmax}\) function on the CIFAR-100 dataset, where the teacher model is ResNet32\(\times\)4 and the student model is ResNet8\(\times\)4. Table 2 shows the results. Softening the output of the teacher model with a \(\mathrm{LogSoftmax}\) function with a temperature set to \(4.0\) leads to better results with our SKD.
### Combination with other methods
Notably, our SKD improves knowledge distillation by modifying the logits of the teacher model. As shown in Eq. (9), SKD uses the same distillation loss as vanilla KD. Therefore, SKD can be easily combined with other logit-based methods that modify the distillation loss. The modified logits in SKD can be used as the logits of the teacher model in other methods. Fusing SKD with other models can improve their effectiveness.
On the other hand, SKD only modifies the logits of the teacher model and does not change the structure of the intermediate layers in the model. Therefore, it does not conflict with feature-based methods. By adding the distillation losses, SKD is easily combined with feature-based methods to improve the performance of the original methods.
According to our experiments in section 4.2, integrating SKD with the currently best-performing distillation methods from two categories results in state-of-the-art performance.
\begin{table}
\begin{tabular}{c c} Learning Simplifier & Top-1 (\(\%\)) \\ \hline
1-layer FC & 75.18 \\
2-layer FC & 75.28 \\
**Self-attention** & **75.83** \\ \end{tabular}
\end{table}
Table 1: Comparison of different implementations of the learning simplifier.
\begin{table}
\begin{tabular}{c c c} Softening & Temperature & Top-1 (\(\%\)) \\ \hline \hline ✗ & - & 75.83 \\ ✓ & 3.0 & 76.42 \\ ✓ & 4.0 & **76.84** \\ ✓ & 5.0 & 76.51 \\ \end{tabular}
\end{table}
Table 2: Softening Processing Effectiveness.
## 4 Experiments
We conducted comprehensive experiments on image classification tasks on **CIFAR-100**[22] and **ImageNet**[34]. The detailed experimental settings are given in Appendix A.1.
### Comparison with state-of-the-art methods
The results on the **CIFAR-100** dataset are shown in Table 3 and Table 4, where Table 3 shows the results where the teacher and student models had the same architectures, and Table 4 shows the results where the teacher and student models had different architectures.
Comparing the experimental results of KD with those of SKD, SKD performs significantly better when using the same distillation loss function as KD. The largest improvement occurred when the teacher model was ResNet32\(\times\)4 and the student model was ResNet8\(\times\)4. The improvement reached 3.28\(\%\). This indicates that SKD can improve the learning performance of the student model simply by modifying the logits of the teacher model.
Of the six experiments where the teacher and student models had the same architectures, SKD achieved the best performance compared to all feature-based and logit-based methods in five experiments. It ranked third in only one experiment where the feature-based method OFD [13] and
\begin{table}
\begin{tabular}{c c c c c c} Teacher & ResNet32\(\times\)4 & WRN-40-2 & ResNet50 & VGG13 & ResNet32\(\times\)4 \\ & 79.55 & 75.61 & 79.34 & 74.64 & 79.55 \\ Student & ShuffleNet-V2 & ShuffleNet-V1 & MobileNet-V2 & MobileNet-V2 & VGG8 \\ & 71.82 & 70.50 & 64.60 & 64.60 & 70.36 \\ \hline \multicolumn{6}{c}{Feature-based methods} \\ FitNet [1] & 74.29 & 73.54 & 63.11 & 63.66 & 71.72 \\ RKD [30] & 74.08 & 73.27 & 65.05 & 64.90 & 70.90 \\ CRD [40] & 76.04 & 75.94 & 69.55 & **69.36** & 73.65 \\ OFD [13] & 77.09 & 76.63 & 65.81 & 65.23 & 73.52 \\ ReviewKD [6] & **77.19** & **77.40** & 67.07 & 69.00 & 74.19 \\ \multicolumn{6}{c}{Logits-based methods} \\ KD [15] & 75.37 & 75.52 & 68.73 & 68.02 & 72.70 \\ DKD [53] & 76.90 & 76.65 & **70.46** & **69.41** & **74.32** \\
**SKD** & **76.96** & **76.96** & **69.99** & 69.25 & **74.59** \\ \(\Delta\) & +1.59 & +1.44 & +1.26 & +1.23 & +1.89 \\ \end{tabular}
\end{table}
Table 4: Results on the **CIFAR-100** dataset. Teacher and student models have **different** architectures. All reported accuracy results are averaged over five trials. \(\Delta\) denotes the improvement of SKD over the KD method. The results marked in red and blue are the best and second-best results, respectively.
\begin{table}
\begin{tabular}{c c c c c c c} Teacher & ResNet32\(\times\)4 & ResNet110 & ResNet56 & WRN-40-2 & WRN-40-2 & VGG13 \\ & 79.55 & 74.31 & 72.34 & 75.61 & 75.61 & 74.64 \\ Student & ResNet8\(\times\)4 & ResNet32 & ResNet20 & WRN-16-2 & WRN-40-1 & VGG8 \\ & 72.50 & 71.14 & 69.06 & 73.26 & 71.98 & 70.36 \\ \hline \multicolumn{6}{c}{Feature-based methods} \\ FitNet [1] & 73.52 & 70.98 & 69.02 & 73.59 & 72.08 & 71.37 \\ RKD [30] & 72.50 & 71.90 & 69.81 & 72.91 & 71.80 & 70.43 \\ CRD [40] & 75.73 & 73.71 & 71.31 & 75.66 & 74.36 & 73.90 \\ OFD [13] & 74.88 & 72.78 & 69.96 & 75.50 & **75.23** & 73.30 \\ ReviewKD [6] & 75.68 & **73.73** & 71.23 & **76.28** & **75.11** & 73.80 \\ \multicolumn{6}{c}{Logits-based methods} \\ KD [15] & 73.56 & 73.42 & 71.08 & 75.01 & 73.75 & 73.43 \\ DKD [53] & **76.13** & 73.71 & **71.64** & 75.52 & 74.43 & **74.57** \\
**SKD** & **76.84** & **74.06** & **71.73** & **76.29** & 74.52 & **74.94** \\ \(\Delta\) & +3.28 & +0.64 & +0.65 & +1.28 & +0.77 & +1.51 \\ \end{tabular}
\end{table}
Table 3: Results on the **CIFAR-100** dataset. Teacher and student models have the **same** architectures. All reported accuracy results are averaged over five trials. \(\Delta\) denotes the improvement of SKD over the KD method. The results marked in red and blue are the best and second-best results, respectively.
ReviewKD [6] performed better. Of the five experiments where the teacher and student models were of different types, SKD achieved the best performance in one experiment, ranked second in three experiments, and ranked third in one experiment.
Furthermore, SKD outperformed other state-of-the-art knowledge distillation methods, such as ReviewKD [6] and DKD [53], in 8 and 9 out of 11 teacher-student model combination experiments, respectively, achieving the best performance. Only in one teacher-student model combination did SKD not achieve a top-two result. This suggests that SKD can achieve the best distillation effect while keeping the design and training simple.
The results on the **ImageNet** dataset are shown in Table 5. Our SKD continued to perform significantly better than the classical KD. Moreover, compared with other distillation methods, SKD achieved the first and second-best results in two experiments based on top-1 accuracy and the second and third-best results in two experiments based on top-5 accuracy. This suggests that the performance of SKD is superior to that of most of the current best methods and that it achieved the best performance among logit-based methods.
### Combination with other methods
We combined SKD with the current best-performing logit-based method DKD [53] and the feature-based method ReviewKD [6] and performed experiments on the CIFAR-100 dataset. The experimental results are shown in Table 6. Using SKD in combination with the two other methods significantly improves the student model's accuracy, and the combined model achieved better performance than the two standalone methods. This strongly verifies the effectiveness of SKD and its compatibility with other knowledge distillation methods.
## 5 Analysis
To elucidate the principles of SKD, we conducted analyses of five aspects of SKD: (1) the changes in the logits, (2) the distillation fidelity of the student model, (3) the visualization of the student model's output features, (4) the attention matrix in the learning simplifier (see Appendix A.3), and (5) the training efficiency of SKD (see Appendix A.3). The dataset used in the experiment of this section was CIFAR-100, the teacher model was ResNet32\(\times\)4, and the student model was ResNet8\(\times\)4.
\begin{table}
\begin{tabular}{c c c c c} Teacher & ResNet32\(\times\)4 & VGG13 & ResNet32\(\times\)4 & VGG13 \\ & 79.55 & 74.64 & 79.55 & 74.64 \\ Student & ResNet8\(\times\)4 & VGG8 & ShuffleNet-V2 & MobileNet-V2 \\ & 72.50 & 70.36 & 71.82 & 64.60 \\ \hline ReviewKD [6] & 75.68 & 73.80 & 77.19 & 69.00 \\
**SKD+ReviewKD** & **77.20** & **75.07** & **77.35** & **69.82** \\ \(\Delta\) & +1.52 & +1.27 & +0.16 & +2.75 \\ \hline DKD [53] & 76.13 & 74.57 & 76.90 & 69.41 \\
**SKD+DKD** & **76.68** & **75.15** & **77.52** & **69.44** \\ \(\Delta\) & +0.55 & +0.58 & +0.62 & +0.03 \\ \end{tabular}
\end{table}
Table 6: Accuracy (\(\%\)) of SKD combined with other methods. All reported accuracy results are averaged over five trials on the CIFAR-100 dataset. \(\Delta\) represents the difference in the accuracy before and after fusion with SKD.
\begin{table}
\begin{tabular}{c c|c c c|c c|c c} Teacher(Student) & AT [21] & OFD [13] & CRD [40] & ReviewKD [6] & KD [15] & DKD [53] & **SKD** & \(\Delta\) \\ \hline ResNet34 & Top-1 & 70.96 & 70.81 & 71.17 & 71.61 & 70.66 & **71.70** & **71.86** & +1.20 \\ (ResNet18) & Top-5 & 90.01 & 89.98 & 90.13 & **90.51** & 89.88 & 90.41 & **90.44** & +0.56 \\ \hline ResNet50 & Top-1 & 69.56 & 71.25 & 71.37 & **72.56** & 68.58 & 72.05 & **72.24** & +3.66 \\ (MobileNet-V2) & Top-5 & 89.33 & 90.34 & 90.41 & **91.00** & 88.98 & **91.05** & 90.56 & +1.58 \\ \end{tabular}
\end{table}
Table 5: Results on the **ImageNet** dataset. The SKD results are averaged over three trials. The results of other methods are cited in [53]. \(\Delta\) denotes the improvement of SKD over the KD method. The results marked in red and blue are the best and second-best results, respectively.
**Changes in the logits.** By observing the changes in the logits before and after the application of SKD, we can see the changes in the learning objectives of the student model. Based on the attention matrix, we obtained the final output of the learning simplifier, which is the change in the logits. We calculated the average change in the target class and other classes of the data distribution on the training set, and the results are shown in Table 7. Compared with the value for other classes, the target class value is significantly reduced by the learning simplifier. This allows the student model to learn the relationships between other classes more effectively when using distillation loss for training, thereby improving the learning effectiveness of the student model.
We also conducted experiments on the accuracy of the output of the teacher model before and after SKD was applied on the validation set, as shown in Table 8. We found that SKD did not change the accuracy of the teacher model, indicating that SKD did not improve the accuracy of the teacher's knowledge. Instead, SKD reduced the learning difficulty for the student model based on the relationships between classes in the output of the teacher model. This led to an improvement in the performance of knowledge distillation.
To study the changes in the output of the teacher model caused by SKD, we visualized the original teacher logit distribution and the SKD logit distribution. To make the visualization results clearer, we visualized the logits processed by the temperature-scaled \(\mathrm{LogSoftmax}\) function. The visualization results are shown in Figure 2.
From Figure 2, the distribution after being processed by SKD becomes smoother compared to the output of the teacher model. The value of the target class in the distribution is significantly lower than that of other classes. SKD makes the model output smoother and simpler for the student model, which makes it easier for the student model to learn. In addition, unlike the distillation temperature, SKD uses the learning simplifier to individually process each data point based on its similarity to other data in a batch. This allows SKD to obtain more finely tuned changes to the teacher logits compared to the distillation temperature, resulting in high distillation fidelity.
**Comparison of distillation fidelity.** We use the average agreement between the predictions of the student model and the teacher model to measure the distillation fidelity [39]. A higher average agreement reflects a more faithful imitation of the teacher by the student model. The calculation of the average agreement for \(n\) data points is as follows:
\[\mathrm{Average\;Agreement}:=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}\{\operatorname {arg\,max}_{j}p_{i,j}^{s}=\operatorname{arg\,max}_{j}p_{i,j}^{t}\}. \tag{11}\]
By comparing the distillation fidelity using SKD and KD, we can determine whether SKD makes it easier for student models to mimic the knowledge of teacher models. The comparison results are shown in Table 9.
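A direct transcription of Eq. (11), assuming the logits are stored as tensors of shape \((n,K)\):

```python
def average_agreement(student_logits, teacher_logits):
    """Fraction of samples on which student and teacher predict the same class, Eq. (11)."""
    same = student_logits.argmax(dim=1) == teacher_logits.argmax(dim=1)
    return same.float().mean().item()
```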
\begin{table}
\begin{tabular}{l c} Class & \(\Delta\) \\ \hline Target & -0.38 \\ Others & +0.02 \\ \end{tabular}
\end{table}
Table 7: The outputs of the learning simplifier for different classes.
\begin{table}
\begin{tabular}{c c c} SKD & Top-1 (\(\%\)) & Top-5 (\(\%\)) \\ \hline ✗ & 79.55 & 94.62 \\ ✓ & 79.55 & 94.62 \\ \end{tabular}
\end{table}
Table 8: Top-1 and top-5 accuracy (\(\%\)) of the teacher output on the validation set before (✗) and after (✓) applying SKD.
Figure 2: Visualization of the teacher logits and the SKD logits for 100 classes on a random image.
The experimental results are consistent with our design idea of SKD; that is, our SKD helps student models to mimic teacher models more easily, significantly improving the distillation fidelity and consequently enhancing the effect of knowledge distillation.
**Features visualization.** We also visualized the student model's features using t-SNE [43] as shown in Figure 3. The features of the student model trained with SKD are more compact within the same category, and the differences between different categories are more pronounced. This proves that SKD enables the student model to learn clearer relationships between categories and perform more accurate classification.
## 6 Conclusion
In this paper, to improve the learning effect of a student model, we propose student-friendly knowledge distillation (SKD) to simplify the teacher output into new knowledge representations. Unlike other knowledge distillation methods, our SKD focuses on changing the knowledge output of the teacher model. First, we use softening processing to soften the output of the teacher model, making it easier for the student to learn. Then, through the learning simplifier, based on the mutual relationships of various data in the logits of the teacher model output, we simplify the logits to produce a new learning objective that is more suitable for the student model. This helps the student model mimic the teacher model's logits more easily, thereby enhancing the effect of distillation while maintaining high training efficiency. At the same time, SKD can be combined with other knowledge distillation methods, including logit-based and feature-based methods, to enhance the distillation effect. We hope this paper can inspire new research ideas regarding knowledge distillation.
**Limitations.** SKD, as a logit-based knowledge distillation method, could not outperform state-of-the-art feature-based methods on object detection tasks due to the lack of location knowledge in the logits. In addition, SKD currently offers no principled way to determine the optimal value of the parameter \(\alpha\) for a given teacher-student combination. We plan to find a method to determine the optimal \(\alpha\) in future work.
\begin{table}
\begin{tabular}{c c c} Method & Training Set & Validation Set \\ \hline KD & 0.86 & 0.75 \\
**SKD** & **0.92** & **0.79** \\ \end{tabular}
\end{table}
Table 9: Comparison results of the average agreement.
Figure 3: t-SNE visualization of the student logits.
|
2307.06888
|
Magnon-magnon coupling in synthetic ferrimagnets
|
Magnetic multilayers with interlayer exchange coupling have been widely
studied for both static and dynamic regimes. Their dynamical responses depend
on the exchange coupling strength and magnetic properties of individual layers.
Magnetic resonance spectra in such systems are conveniently discussed in terms
of coupling of acoustic and optical modes. At a certain value of applied
magnetic field, the two modes come close to being degenerate and the spectral
gap indicates the strength of mode hybridisation. In this work, we
theoretically and experimentally study the mode hybridisation of
interlayer-exchange-coupled moments with dissimilar magnetisation and thickness
of two ferromagnetic layers. In agreement with symmetry analysis for
eigenmodes, our low-symmetry multilayers exhibit sizable spectral gaps for all
experimental conditions. The spectra agree well with the predictions from the
Landau-Lifshitz-Gilbert equation at the macrospin limit whose parameters are
independently fixed by static measurements.
|
A. Sud, K. Yamamoto, K. Z. Suzuki, S. Mizukami, H. Kurebayashi
|
2023-07-13T16:39:27Z
|
http://arxiv.org/abs/2307.06888v2
|
# Magnon-magnon coupling in synthetic ferrimagnets
###### Abstract
Magnetic multilayers with interlayer exchange coupling have been widely studied for both static and dynamic regimes. Their dynamical responses depend on the exchange coupling strength and magnetic properties of individual layers. Magnetic resonance spectra in such systems are conveniently discussed in terms of coupling of acoustic and optical modes. At a certain value of applied magnetic field, the two modes come close to being degenerate and the spectral gap indicates the strength of mode hybridisation. In this work, we theoretically and experimentally study the mode hybridisation of interlayer-exchange-coupled moments with dissimilar magnetisation and thickness of two ferromagnetic layers. In agreement with symmetry analysis for eigenmodes, our low-symmetry multilayers exhibit sizable spectral gaps for all experimental conditions. The spectra agree well with the predictions from the Landau-Lifshitz-Gilbert equation at the macrospin limit whose parameters are independently fixed by static measurements.
## I Introduction
In two magnetic layers separated by a thin nonmagnetic spacer, conduction electrons in the spacer magnetically couple two spatially separated moments, via the so-called Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction Ruderman and Kittel (1954); Kasuya (1954); Yosida (1954). This interlayer exchange coupling arises from coherent propagation of electron spin across the spacer layer Ruderman and Kittel (1954); Kasuya (1954); Kasuya (1954). Due to the Friedel-like oscillation of the electron phase, the exchange coupling changes its sign as a function of the interlayer distance, switching between ferromagnetic and antiferromagnetic ordering of the two magnetic layers Ruderman and Kittel (1954); Kasuya (1954); Kasuya (1954); Kasuya (1954). The antiferromagnetically ordered states of two identical magnetic layers, often called synthetic antiferromagnets (SyAFs), have served as a testbed for studying antiferomagnetism where SyAFs' relatively weak RKKY exchange coupling, comparable to the strength of magnetic fields achievable in laboratories, helps realise experiments otherwise difficult in atomically-ordered, crystalline antiferromagnets Kittel (1954); Kasuya (1954); Kasuya (1954). One such property is the magnetic resonances in SyAFs whose typical frequency resides within a range of GHz that is readily accessible by modern microwave techniques Kittel (195
that the coupled Landau-Lifshitz-Gilbert (LLG) equations due to the interlayer exchange interaction are symmetric under a twofold rotation around the applied field direction combined with a layer swap, as long as the field is within the film plane (Fig. 1(a)) [51]. The acoustic and optical modes are odd and even under this symmetry operation, respectively, and therefore unable to hybridise with each other, leading to the mode degeneracy at a resonance point [14; 28]. This specific symmetry can be broken in several ways, for example by tilting the external magnetic field towards the out-of-plane direction (Fig. 1(b)) [52; 23; 51]. For general expressions of spin-wave mode frequencies in interlayer exchange-coupled systems, Layadi presented analytical solutions with a particular focus on the effect of the biquadratic exchange coupling and in-plane anisotropy on the spectra for in-plane magnetised cases [54]. While spectroscopic measurements of interlayer exchange-coupled tri-layers with different magnetic-layer thicknesses were reported by some groups in the past [55; 56; 18; 57; 22], there seems to be no study fully dedicated to a quantitative discussion of the mode hybridisation in such asymmetric interlayer-exchange-coupled systems.
In this paper, we present our detailed experimental and theoretical study of the magnon-magnon coupling phenomena in synthetic ferrimagnets where two magnetically coupled layers are not identical (Fig. 1(c)). We systematically compare spin-wave spectra measured by broadband ferromagnetic resonance (FMR) experiments and calculated using magnetic parameters deduced from static magnetometry. In all cases examined, we find excellent agreement between experiment and theory, suggesting that the coupled LLG equations at the macrospin limit are indeed a reliable tool for designing and analysing the spectral properties of the magnetic multilayers. Our calculations further reveal dissimilar roles of quadratic and biquadratic exchange interactions for the size of the gap. Our results help design and control magnetic resonance spectra in exchange-coupled magnetic moments that can be synthetic antiferro(ferri)magnets, van der Waals antiferromagnets [58; 59; 60; 51] and ferromagnetic bilayers [61; 62; 63].
## II A macrospin model of synthetic ferrimagnet
For our purposes, a theoretical model that extends the result of Ref. [54] for arbitrary direction of the static magnetic field is required, which we present in this section with an emphasis on breaking of the two-fold rotation symmetry. Let \(\mathbf{M}_{A},\mathbf{M}_{B}\) be the magnetisations of the two ferromagnetic layers. We are interested in the situations where the two magnetic materials are not identical \(|\mathbf{M}_{A}|\equiv M_{A}\neq|\mathbf{M}_{B}|\equiv M_{B}\), and the two layers have different thicknesses \(d_{A}\neq d_{B}\). The film normal is chosen \(z\) axis and the film is regarded infinitely extended in the \(x,y\) directions, as shown in Fig. 1(a).
The static state of the magnetisations corresponds to the minimum of free energy per unit area \(W\). We include the external magnetic field \(\mathbf{H}\), demagnetising field, and biquadratic as well as the usual quadratic interlayer exchange interactions:
\[W= \,d_{A}\left\{-\mu_{0}M_{A}\mathbf{H}\cdot\mathbf{n}_{A}+\frac{\mu_{0}M_{ A}^{2}}{2}\left(n_{A}^{z}\right)^{2}\right\}\] \[+d_{B}\left\{-\mu_{0}M_{B}\mathbf{H}\cdot\mathbf{n}_{B}+\frac{\mu_{0}M_{ B}^{2}}{2}\left(n_{B}^{z}\right)^{2}\right\}\] \[+J_{1}\mathbf{n}_{A}\cdot\mathbf{n}_{B}+J_{2}\left(\mathbf{n}_{A}\cdot\mathbf{n}_ {B}\right)^{2}. \tag{1}\]
Here we have normalised the magnetisations \(\mathbf{n}_{A(B)}=\mathbf{M}_{A(B)}/M_{A(B)}\), and introduced the phenomenological exchange energies per unit area \(J_{1}\) and \(J_{2}\). Without loss of generality, with the weak crystalline anisotropy being ignored, the magnetic field can be taken
Figure 1: (a) In the laboratory frame, we define the \(z\) direction normal to the plane, and \(x\) direction such that the static external magnetic field lies in the \(x\)-\(z\) plane. In the canted regime when applying the field (\(\mathbf{H}\)) in-plane, two sub-lattice moments (\(\mathbf{M}_{A}\) and \(\mathbf{M}_{B}\)) reside within the plane, canted towards \(\mathbf{H}\). For general static states, we introduce new coordinate axes \(\mathbf{X},\mathbf{Y},\mathbf{Z}\) adapted to the two-fold rotation \(\mathcal{C}_{2}\) that brings the unit vector along \(\mathbf{M}_{A}\) to that along \(\mathbf{H}\). See Eq. (9) for the concrete definition. For \(\mathbf{H}\) in-plane and identical magnetic layers, \(\mathcal{C}_{2}\) combined with interchanging A and B layers is a symmetry of the system. (b) and (c) When we apply \(\mathbf{H}\) with the polar angle \(\theta\neq 90^{\circ}\) or two magnetic moments are not identical, \(\mathcal{C}_{2}\) followed by the magnetic layer interchange ceases to be a symmetry. This impacts on the mode coupling as discussed in this study.
\(H\left(\hat{\mathbf{x}}\sin\theta+\hat{\mathbf{z}}\cos\theta\right)\). We determine the static state \(\mathbf{n}_{A(B)}^{0}\) by numerical minimisation of \(W\), which is parameterised by
\[\mathbf{n}_{A(B)}^{0}=\begin{pmatrix}\sin\theta_{A(B)}\cos\phi_{A(B)}\\ \sin\theta_{A(B)}\sin\phi_{A(B)}\\ \cos\theta_{A(B)}\end{pmatrix}. \tag{2}\]
If the magnetic field is in-plane \(\theta=90^{\circ}\) and \(0<2J_{2}<J_{1}\), the static state undergoes two phase transitions at \(H_{\text{sf}}\) and \(H_{\text{ff}}\) as \(\left|H\right|\) is increased from zero, where
\[H_{\text{sf}}= \left|\frac{1}{d_{B}M_{B}}-\frac{1}{d_{A}M_{A}}\right|\frac{J_{1 }-2J_{2}}{\mu_{0}}, \tag{3}\] \[H_{\text{ff}}= \left|\frac{1}{d_{B}M_{B}}+\frac{1}{d_{A}M_{A}}\right|\frac{J_{1 }+2J_{2}}{\mu_{0}}. \tag{4}\]
Below \(H_{\text{sf}}\), the static state is antiferromagnetic \(\mathbf{n}_{B}^{0}=-\mathbf{n}_{A}^{0}\) with \(\mathbf{n}_{A}^{0}\cdot\mathbf{H}\gtrless 0\) according to \(d_{A}M_{A}\gtrless d_{B}M_{B}\). Above \(H_{\text{ff}}\), the system is in a forced ferromagnetic state \(\mathbf{n}_{A}^{0}=\mathbf{n}_{B}^{0}=\mathbf{H}/\left|H\right|\). In between lies the spin-flop, or canted, state where \(H\cos\phi_{A,B}>0,\sin\phi_{A}\sin\phi_{B}<0\).
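As a numerical illustration, the static states and transition fields used above can be obtained from Eq. (1) and Eqs. (3)-(4) along the lines of the following sketch. SI units are assumed (\(H\) and \(M\) in A/m, \(d\) in m, \(J_{1,2}\) in J/m\({}^{2}\)), the moments are restricted to the film plane with the field along \(x\), and the set of starting points for the minimiser is an ad hoc choice to avoid local minima.

```python
import numpy as np
from scipy.optimize import minimize

MU0 = 4e-7 * np.pi

def free_energy_inplane(phis, H, MA, MB, dA, dB, J1, J2):
    """Free energy per unit area, Eq. (1), for in-plane moments (theta_A = theta_B = 90 deg)."""
    phiA, phiB = phis
    zeeman = -dA * MU0 * MA * H * np.cos(phiA) - dB * MU0 * MB * H * np.cos(phiB)
    dot = np.cos(phiA - phiB)                    # n_A . n_B for in-plane unit vectors
    return zeeman + J1 * dot + J2 * dot**2

def canted_state(H, MA, MB, dA, dB, J1, J2):
    """Numerically minimise W over (phi_A, phi_B) for a given field."""
    starts = [(0.5, -2.5), (2.5, -0.5), (0.0, np.pi)]
    best = min((minimize(free_energy_inplane, x0, args=(H, MA, MB, dA, dB, J1, J2))
                for x0 in starts), key=lambda res: res.fun)
    return best.x                                 # (phi_A, phi_B) of the lowest minimum found

def transition_fields(MA, MB, dA, dB, J1, J2):
    """Spin-flop and forced-ferromagnetic fields, Eqs. (3)-(4)."""
    H_sf = abs(1/(dB*MB) - 1/(dA*MA)) * (J1 - 2*J2) / MU0
    H_ff = abs(1/(dB*MB) + 1/(dA*MA)) * (J1 + 2*J2) / MU0
    return H_sf, H_ff
```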
To calculate the magnetic resonance frequencies, let us introduce the linear perturbation \(\mathbf{n}_{A(B)}\approx\mathbf{n}_{A(B)}^{0}+\mathbf{n}_{A(B)}^{1}\) where \(\mathbf{n}_{A(B)}^{0}\cdot\mathbf{n}_{A(B)}^{1}=0\). The Landau-Lifshitz equations follow from the free energy \(W\) through the usual procedure [64]. Although one can press on using \(\mathbf{n}_{A(B)}^{1}\) as the dynamical variables [52], we normalise them so as to make them canonical in the sense of Hamiltonian mechanics [64], which ensures that the resulting eigenvalue problem retains the correct Bogoliubov form [65]:
\[\mathbf{\delta}_{A}=\sqrt{\frac{Sd_{A}M_{A}}{\hbar\left|\gamma_{A}\right|}}\mathbf{n} _{A}^{1},\quad\mathbf{\delta}_{B}=\sqrt{\frac{Sd_{B}M_{B}}{\hbar\left|\gamma_{B} \right|}}\mathbf{n}_{B}^{1}, \tag{5}\]
where \(S\) denotes the area of the film, and \(\gamma_{A(B)}<0\) are the gyromagnetic ratios. The linearised equations of motion read
\[\mathbf{n}_{A}^{0}\times\frac{d\mathbf{\delta}_{A}}{dt} =\] \[\quad-\frac{\gamma_{A}}{d_{A}M_{A}}\left\{J_{1}+2\left(\mathbf{n}_{A }^{0}\cdot\mathbf{n}_{B}^{0}\right)J_{2}\right\}\left[\left(\mathbf{n}_{A}^{0}\cdot \mathbf{n}_{B}^{0}\right)\mathbf{\delta}_{A}-\sqrt{\frac{\gamma_{B}dAM_{A}}{\gamma_{A }d_{B}M_{B}}}\left\{\mathbf{\delta}_{B}-\left(\mathbf{n}_{A}^{0}\cdot\mathbf{\delta}_{B} \right)\mathbf{n}_{A}^{0}\right\}\right]\] \[\quad+\frac{2\gamma_{A}}{d_{A}M_{A}}J_{2}\left\{\mathbf{n}_{B}^{0}- \left(\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}\right)\mathbf{n}_{A}^{0}\right\}\left(\mathbf{ n}_{B}^{0}\cdot\mathbf{\delta}_{A}+\sqrt{\frac{\gamma_{B}d_{A}M_{A}}{\gamma_{A }d_{B}M_{B}}}\mathbf{n}_{A}^{0}\cdot\mathbf{\delta}_{B}\right), \tag{6}\] \[\mathbf{n}_{B}^{0}\times\frac{d\mathbf{\delta}_{B}}{dt} =\] \[\quad-\frac{\gamma_{B}}{d_{B}M_{B}}\left\{J_{1}+2\left(\mathbf{n}_{A} ^{0}\cdot\mathbf{n}_{B}^{0}\right)J_{2}\right\}\left[\left(\mathbf{n}_{A}^{0}\cdot\mathbf{ n}_{B}^{0}\right)\mathbf{\delta}_{B}-\sqrt{\frac{\gamma_{A}d_{B}M_{B}}{\gamma_{B}d_{A}M_{A}}} \left\{\mathbf{\delta}_{A}-\left(\mathbf{n}_{B}^{0}\cdot\mathbf{\delta}_{A}\right)\mathbf{n}_{ B}^{0}\right\}\right]\] \[\quad+\frac{2\gamma_{B}}{d_{B}M_{B}}J_{2}\left\{\mathbf{n}_{A}^{0}- \left(\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}\right)\mathbf{n}_{B}^{0}\right\}\left(\mathbf{ n}_{A}^{0}\cdot\mathbf{\delta}_{B}+\sqrt{\frac{\gamma_{A}d_{B}M_{B}}{\gamma_{B}d_{A}M_{A}}} \mathbf{n}_{B}^{0}\cdot\mathbf{\delta}_{A}\right). \tag{7}\]
Equations (6) and (7) describe two coupled harmonic oscillators, i.e. there are four independent real functions of time to be determined. We are interested in the resonance properties, which can be analyzed in terms of any consistent choice of the four independent variables. Had it not been for the shape anisotropy and the asymmetry between \(d_{A},M_{A},\gamma_{A}\) and \(d_{B},M_{B},\gamma_{B}\), two-fold rotation around \(\mathbf{H}\) would have mapped \(\mathbf{n}_{A}^{0}\) to \(\mathbf{n}_{B}^{0}\) and the symmetry-adapted variables would have been convenient. Following MacNeil _et al._[51], let \(\mathcal{C}_{2}\) denote the two-fold rotation that brings \(\mathbf{n}_{A}^{0}\) to \(\mathbf{n}_{B}^{0}\) whose axis coincides with \(X\) direction in Fig. 1. Algebraically the action of \(\mathcal{C}_{2}\) on an arbitrary vector \(\mathbf{v}\) is given by
\[\mathcal{C}_{2}\mathbf{v}=\frac{\left(\mathbf{n}_{A}^{0}+\mathbf{n}_{B}^{0}\right)\cdot\mathbf{v} }{1+\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}}\left(\mathbf{n}_{A}^{0}+\mathbf{n}_{B}^{0} \right)-\mathbf{v}. \tag{8}\]
Although \(\mathcal{C}_{2}\) is not in general a symmetry of the problem, it helps make sense of the results in terms of the familiar notions used in previous studies [51; 23]. The definition of \(\mathcal{C}_{2}\) becomes ambiguous for \(\left|H\right|<H_{\text{sf}}\) and what follows does not work for \(\left|H\right|>H_{\text{ff}}\) either, but these collinear cases are simple and separately handled in the Appendix. Focusing on the spin-flop phase, we introduce \(\mathbf{\delta}_{\pm}=\left(\mathbf{\delta}_{A}\pm\mathcal{C}_{2}\mathbf{\delta}_{B}\right)/ \sqrt{2}\) that are even and odd eigenvectors of \(\mathcal{C}_{2}\times\left\{A\leftrightarrow B\right\}\). To pick out two independent components each for \(\mathbf{\delta}_{\pm}\), we define a new coordinate frame \(XYZ\) (Fig. 1) given by
\[\hat{\mathbf{X}}=\frac{\mathbf{n}_{A}^{0}+\mathbf{n}_{B}^{0}}{\sqrt{2+2\mathbf{n}_{A}^{0}\cdot\mathbf{n }_{B}^{0}}},\quad\hat{\mathbf{Y}}=\frac{\mathbf{n}_{A}^{0}-\mathbf{n}_{B}^{0}}{\sqrt{2-2 \mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}}}, \tag{9}\]
and \(\hat{\mathbf{Z}}=\hat{\mathbf{X}}\times\hat{\mathbf{Y}}\). By construction, \(\mathbf{n}_{A}^{0}\cdot\mathbf{\delta}_{\pm}=0\) so
that one may write \(\mathbf{\delta}_{\pm}=\delta_{\pm}^{\perp Z}\mathbf{\hat{Z}}\times\mathbf{n}_{A}^{0}+\delta_{ \pm}^{\parallel Z}\mathbf{\hat{Z}}\). As is usual in cavity magnonics, we work with the complex variables \(\alpha=\delta_{-}^{\perp Z}-i\delta_{-}^{\parallel Z},\beta=\delta_{+}^{\perp Z }-i\delta_{+}^{\parallel Z}\) that would represent annihilation operators in the quantum regime. This change of variables brings Eqs. (6) and (7) into
\[i\frac{d}{dt}\begin{pmatrix}\alpha\\ -\overline{\alpha}\\ \beta\\ -\overline{\beta}\end{pmatrix}=\begin{pmatrix}f_{1}-h_{1}&f_{2}-h_{2}-if_{3}&g_{1}&g_{2}-ig_{3}\\ f_{2}-h_{2}+if_{3}&f_{1}-h_{1}&g_{2}+ig_{3}&g_{1}\\ g_{1}&g_{2}-ig_{3}&f_{1}+h_{1}&f_{2}+h_{2}-if_{3}\\ g_{2}+ig_{3}&g_{1}&f_{2}+h_{2}+if_{3}&f_{1}+h_{1}\end{pmatrix}\begin{pmatrix}\alpha\\ \overline{\alpha}\\ \beta\\ \overline{\beta}\end{pmatrix}, \tag{10}\]
where overbars denote complex conjugation. Note that the equation is in the Bogoliubov form with the matrix on the right-hand-side being Hermitian. For succinct expressions of the matrix coefficients, let us introduce two distinct orthogonal decompositions of the film normal \(\mathbf{\hat{z}}=z_{A}\mathbf{n}_{A}^{0}+z_{\perp A}\mathbf{\hat{Z}}\times\mathbf{n}_{A}^{0}+ z_{Z}\mathbf{\hat{Z}}=z_{B}\mathbf{n}_{B}^{0}+z_{\perp B}\mathbf{\hat{Z}}\times\mathbf{n}_{B}^{0}+ z_{Z}\mathbf{\hat{Z}}\), where \(z_{A}=\mathbf{n}_{A}^{0}\cdot\mathbf{\hat{z}},z_{\perp A}=\left(\mathbf{\hat{Z}}\times\mathbf{n }_{A}^{0}\right)\cdot\mathbf{\hat{z}},z_{Z}=\mathbf{\hat{Z}}\cdot\mathbf{\hat{z}}\) and similarly for the \(B\) layer. The coefficients are then given by
\[f_{1} = \mu_{0}\mathbf{H}\cdot\frac{\left|\gamma_{A}\right|\mathbf{n}_{A}^{0}+ \left|\gamma_{B}\right|\mathbf{n}_{B}^{0}}{2}-\frac{1}{2}\left(\frac{\left|\gamma_ {A}\right|}{d_{A}M_{A}}+\frac{\left|\gamma_{B}\right|}{d_{B}M_{B}}\right)\left[ J_{1}\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}+J_{2}\left\{3\left(\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B} ^{0}\right)^{2}-1\right\}\right] \tag{11}\] \[+\left|\gamma_{A}\right|\mu_{0}M_{A}\frac{z_{\perp A}^{2}+z_{Z}^{ 2}-2z_{A}^{2}}{4}+\left|\gamma_{B}\right|\mu_{0}M_{B}\frac{z_{\perp B}^{2}+z_{Z }^{2}-2z_{B}^{2}}{4}\] \[f_{2} = \left|\gamma_{A}\right|\mu_{0}M_{A}\frac{z_{\perp A}^{2}-z_{Z}^{ 2}}{4}+\left|\gamma_{B}\right|\mu_{0}M_{B}\frac{z_{\perp B}^{2}-z_{Z}^{2}}{4}+ \frac{1}{2}\left(\frac{\left|\gamma_{A}\right|}{d_{A}M_{A}}+\frac{\left|\gamma_ {B}\right|}{d_{B}M_{B}}\right)J_{2}\left\{1-\left(\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B }^{0}\right)^{2}\right\},\] (12) \[f_{3} = \frac{\left|\gamma_{A}\right|\mu_{0}M_{A}z_{\perp A}+\left|\gamma _{B}\right|\mu_{0}M_{B}z_{\perp B}}{2}z_{Z},\] (13) \[g_{1} = \mu_{0}\mathbf{H}\cdot\frac{\left|\gamma_{A}\right|\mathbf{n}_{A}^{0}- \left|\gamma_{B}\right|\mathbf{n}_{B}^{0}}{2}-\frac{1}{2}\left(\frac{\left|\gamma_ {A}\right|}{d_{A}M_{A}}-\frac{\left|\gamma_{B}\right|}{d_{B}M_{B}}\right)\left[ J_{1}\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}+J_{2}\left\{3\left(\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B} ^{0}\right)^{2}-1\right\}\right]\] (14) \[+\left|\gamma_{A}\right|\mu_{0}M_{A}\frac{z_{\perp A}^{2}+z_{Z}^{ 2}-2z_{A}^{2}}{4}-\left|\gamma_{B}\right|\mu_{0}M_{B}\frac{z_{\perp B}^{2}+z_{ Z}^{2}-2z_{B}^{2}}{4}\] \[g_{2} = \left|\gamma_{A}\right|\mu_{0}M_{A}\frac{z_{\perp A}^{2}-z_{Z}^{ 2}}{4}-\left|\gamma_{B}\right|\mu_{0}M_{B}\frac{z_{\perp B}^{2}-z_{Z}^{2}}{4}+ \frac{1}{2}\left(\frac{\left|\gamma_{A}\right|}{d_{A}M_{A}}-\frac{\left|\gamma_ {B}\right|}{d_{B}M_{B}}\right)J_{2}\left\{1-\left(\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B }^{0}\right)^{2}\right\},\] (15) \[g_{3} = \mu_{0}\frac{\left|\gamma_{A}\right|M_{A}z_{\perp A}-\left|\gamma_ {B}\right|M_{B}z_{\perp B}}{2}z_{Z},\] (16) \[h_{1} = -\sqrt{\frac{\gamma_{A}\gamma_{B}}{d_{A}M_{A}d_{B}M_{B}}}\left[ \frac{1+\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}}{2}J_{1}+\left\{2\left(\mathbf{n}_{A}^{0} \cdot\mathbf{n}_{B}^{0}\right)^{2}+\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}-1\right\}J_{ 2}\right],\] (17) \[h_{2} = \sqrt{\frac{\gamma_{A}\gamma_{B}}{d_{A}M_{A}d_{B}M_{B}}}\frac{1- \mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}}{2}\left\{J_{1}+2\left(1+2\mathbf{n}_{A}^{0}\cdot \mathbf{n}_{B}^{0}\right)J_{2}\right\}. \tag{18}\]
The eigenfrequencies of Eq. (10) can be calculated as
\[\omega^{2} = f_{1}^{2}-f_{2}^{2}-f_{3}^{2}+g_{1}^{2}-g_{2}^{2}-g_{3}^{2}+h_{1}^{ 2}-h_{2}^{2}\] \[\pm 2\sqrt{\left(f_{1}g_{1}-f_{2}g_{2}-f_{3}g_{3}\right)^{2}+ \left(f_{1}h_{1}-f_{2}h_{2}\right)^{2}-\left(g_{1}h_{2}-g_{2}h_{1}\right)^{2}-g _{3}^{2}\left(h_{1}^{2}-h_{2}^{2}\right)}.\]
One can observe that the "couplings" \(g_{1,2,3}\) between \(\alpha\) and \(\beta\) all vanish if the two layers are identical and \(\mathbf{H}\) is in the plane. For identical layers with \(\theta\neq 90^{\circ}\), \(g_{1}=g_{2}=0,g_{3}\neq 0\) due to \(z_{A\perp}=-z_{B\perp}\) and the problem reduces to that of Refs. [23; 51]. The variables \(\alpha,\beta\) represent oscillations that are odd and even under \(\mathcal{C}_{2}\times\{A\leftrightarrow B\}\), and can be considered generalisations of the acoustic and optical modes in SyAFs, respectively. When \(g_{1,2,3}\) become comparable with \(f_{1,2,3},h_{1,2}\), however, \(\alpha\) and \(\beta\) evenly contribute to the eigenmodes for all values of \(H\). This makes it meaningless to talk about hybridisation between odd and even modes, which would require the modes be weakly coupled away from a resonance region and come almost degenerate upon tuning some parameters. Indeed, there is no simple relation between \(g_{1,2,3}\) and the spectral gap in general.
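For reference, Eq. (19) and the matrix of Eq. (10) translate into the short numerical sketch below; the names are illustrative. As a cross-check, for a stable canted state the eigenvalues of \(\mathrm{diag}(1,-1,1,-1)\,M\) should come in \(\pm\) pairs whose magnitudes coincide with the frequencies returned by `mode_frequencies`.

```python
import numpy as np

def mode_frequencies(f1, f2, f3, g1, g2, g3, h1, h2):
    """Two resonance frequencies from Eq. (19), given the coefficients of Eqs. (11)-(18)."""
    a = f1**2 - f2**2 - f3**2 + g1**2 - g2**2 - g3**2 + h1**2 - h2**2
    b = np.sqrt((f1*g1 - f2*g2 - f3*g3)**2 + (f1*h1 - f2*h2)**2
                - (g1*h2 - g2*h1)**2 - g3**2 * (h1**2 - h2**2))
    return np.sqrt(a + 2*b), np.sqrt(a - 2*b)

def bogoliubov_matrix(f1, f2, f3, g1, g2, g3, h1, h2):
    """Hermitian matrix on the right-hand side of Eq. (10)."""
    return np.array([
        [f1 - h1,          f2 - h2 - 1j*f3,  g1,               g2 - 1j*g3      ],
        [f2 - h2 + 1j*f3,  f1 - h1,          g2 + 1j*g3,       g1              ],
        [g1,               g2 - 1j*g3,       f1 + h1,          f2 + h2 - 1j*f3 ],
        [g2 + 1j*g3,       g1,               f2 + h2 + 1j*f3,  f1 + h1         ],
    ])
```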
## III Sample growth and magnetometry characterisation
Samples used in this study were grown using magnetron sputtering techniques inside a chamber at a base pressure better than 5\(\times\)10\({}^{-6}\) Pa.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Sample & \(\mu_{0}M_{A}\) & \(\mu_{0}M_{B}\) & \(\mu_{0}H_{\rm ex}\) & \(\mu_{0}H_{\rm 2ex}\) & \(d_{A}\) & \(d_{B}\) \\ geometry & (T) & (T) & (T) & (T) & (nm) & (nm) \\ \hline Ta/NiFe/Ru(0.4)/NiFe/Ta/.. & 0.95 & 0.9 & 0.145 & 0.03 & 5 & 3 \\ Ta/NiFe/Ru(0.4)/NiFe/Ta/.. & 0.95 & 0.9 & 0.1 & 0.02 & 3 & 5 \\ Ta/CoFeB/Ru(0.45)/NiFe/Ta/.. & 1.5 & 1.0 & 0.048 & 0.005 & 3 & 3 \\ Ta/CoFeB/Ru(0.5)/NiFe/Ta/.. & 1.5 & 1.0 & 0.02 & 0.003 & 3 & 3 \\ Ta/CoFeB/Ru(0.55)/NiFe/Ta/.. & 1.5 & 1.0 & 0.03 & 0.002 & 3 & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the VSM magnetometry parameters used to obtain the theoretical magnetisation curves shown in Fig. 2 according to Eqs. (1) and (20). The left column gives the sample geometry, where ".." indicates the thermally oxidized Si substrate; the FM layer nearer to the substrate is the second FM layer, referred to as the \(B\) layer. \(\mu_{0}H_{\rm ex}\) and \(\mu_{0}H_{\rm 2ex}\) are the quadratic and biquadratic exchange fields, respectively, and \(M_{A(B)}\), \(d_{A(B)}\) are the magnetisation and thickness of the two ferromagnetic layers (NiFe/CoFeB).
Figure 2: (a-e) Normalised \(M\)-\(H\) loops for different set of samples (a) NiFe(5)/Ru(0.4)/NiFe(3) (b) NiFe(3)/Ru(0.4)/NiFe(5) (c) CoFeB(3)/Ru(0.45)/NiFe(3) (d) CoFeB(3)/Ru(0.5)/NiFe(3) and (e) CoFeB(3)/Ru(0.55)/NiFe(3). The field is applied along the in-plane easy axis. The solid lines are fit obtained by the theoretical static state calculations based on Eq. (1). (f-j) Static state angles of magnetisation for two FM layers calculated for the best fit parameters corresponding to (a-e) respectively. (k-o) Angle between the two magnetisations.
As summarised in Table 1, we studied five different multilayers Ta(5)/FM\({}_{1}\)(\(d_{\rm A}\))/Ru/FM\({}_{2}\)(\(d_{\rm B}\))/Ta(5)/thermally oxidized Si substrate (numbers in brackets represent layer thickness in nm) after optimising growth conditions [23; 43; 66]. Figure 2 shows normalised hysteresis loops for the samples measured with the static external field in the plane by vibrating sample magnetometer (VSM) techniques. Three regions, distinguished by the alignment of the magnetisations of the two layers, are indicated in different colours. As explained in the previous section, due to the competition between the exchange and Zeeman energies, our samples undergo two phase transitions. In the small magnetic field limit \(H<H_{\rm sf}\) (shaded in pink), the exchange energy dominates and the two moments are aligned antiferromagnetically. As the field is increased, the spin-flop transition takes place, after which the two moments tilt away from the field in a canted state. Finally, at higher field values \(H>H_{\rm ff}\), the Zeeman energy prevails and the magnetic moments point along the field direction, entering the forced ferromagnetic regime as indicated in green for each plot.
Equation (1) was used for fitting to determine the static states of each moment. For obtaining the ground state, we find the values of \(\cos\phi_{\rm A,B}\) that minimise Eq. (1) for \(\theta_{A}=\theta_{B}=90^{\circ}\) in an iterative manner for each magnetic field. The orange curves in the first row of Fig. 2 are the normalised magnetisation calculated for each field value as:
\[\frac{M(H)}{M_{s}}=\frac{d_{A}M_{A}\cos\phi_{A}+d_{B}M_{B}\cos\phi_{B}}{d_{A}M _{A}+d_{B}M_{B}}. \tag{20}\]
where \(M_{s}\) is the total saturation magnetisation of the sample. Optimisation with respect to the experimental curves yielded the best-fit values of \(M_{A},M_{B}\) as well as the quadratic (\(\mu_{0}H_{\rm ex}=J_{1}/\sqrt{d_{A}d_{B}M_{A}M_{B}}\)) and biquadratic (\(\mu_{0}H_{\rm 2ex}=J_{2}/\sqrt{d_{A}d_{B}M_{A}M_{B}}\)) exchange fields, which are summarised in Table 1. While the microscopic origin of \(J_{1}\) is well explained by the RKKY interaction mediated by electrons in the spacer layer [67], identifying the physical origin of \(J_{2}\) is more challenging, with several proposals [68] including an intrinsic mechanism [69; 70], extrinsic fluctuations [71] and a magnetic-dipole origin [72]. We note, however, that our theoretical model and spin-dynamics measurements treat the \(J_{2}\) term phenomenologically and do not depend on its microscopic origin.
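The static-state fitting described above can be illustrated with a short numerical sketch. The code below assumes a purely in-plane free energy of the standard bilinear-plus-biquadratic macrospin form (used here only as a stand-in for Eq. (1), which is not reproduced in this section), with the exchange constants \(J_{1,2}\) reconstructed from the exchange fields defined above; the material parameters are taken from the first row of Table 1 and the normalised magnetisation is evaluated through Eq. (20). The sweep is only meant to reproduce the qualitative sequence of antiferromagnetic, spin-flop and forced-ferromagnetic regimes.

```python
import numpy as np
from scipy.optimize import minimize

# Parameters from the first row of Table 1 (NiFe(5)/Ru(0.4)/NiFe(3)); fields in T, thicknesses in nm.
MA, MB = 0.95, 0.90          # mu_0 M_A, mu_0 M_B
Hex, H2ex = 0.145, 0.03      # quadratic and biquadratic exchange fields
dA, dB = 5.0, 3.0            # layer thicknesses

# Assumed in-plane macrospin form (stand-in for Eq. (1)); J_1,2 follow the field definitions above.
J1 = Hex * np.sqrt(dA * dB * MA * MB)
J2 = H2ex * np.sqrt(dA * dB * MA * MB)

def energy(phi, H):
    """Reduced free energy for in-plane moments at angles (phi_A, phi_B) from the field."""
    pA, pB = phi
    zeeman = -H * (dA * MA * np.cos(pA) + dB * MB * np.cos(pB))
    exch = J1 * np.cos(pA - pB) + J2 * np.cos(pA - pB) ** 2
    return zeeman + exch

fields = np.linspace(0.0, 0.5, 201)   # mu_0 H sweep
phi = np.array([0.0, np.pi])          # antiparallel seed at low field
m_curve = []
for H in fields:
    # each field step is seeded with the previous static state (mimics a field sweep,
    # so metastable configurations may persist slightly beyond their stability range)
    res = minimize(energy, phi, args=(H,), method="Nelder-Mead")
    phi = res.x
    # normalised magnetisation, Eq. (20)
    m = (dA * MA * np.cos(phi[0]) + dB * MB * np.cos(phi[1])) / (dA * MA + dB * MB)
    m_curve.append(m)

for H, m in zip(fields[::40], m_curve[::40]):
    print(f"mu0 H = {H:.3f} T   M/Ms = {m:+.3f}")
```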
## IV Spin dynamics
High frequency responses of the coupled moment systems were characterised by broadband on-chip microwave absorption techniques. As illustrated in Fig. 3(a), each sample chip was placed face-down on a coplanar waveguide [73]. For each measurement, we fixed the frequency \(f\) and swept a dc external magnetic field \(\mu_{0}H\) while applying an ac magnetic field at 12 Hz for lock-in detection techniques. Here we show our measurements on the samples NiFe(5)/Ru(0.4)/NiFe(3) and CoFeB(3)/Ru/NiFe(3), both showing avoided crossing [29; 51; 23] due to the asymmetry of thickness and magnetic moment size, respectively.
### NiFe(5 nm)/Ru(0.4 nm)/NiFe(3 nm)
Figures 3(b) and (c) show individual measurement curves targeting the two resonance modes in the sample NiFe(5 nm)/Ru(0.4 nm)/NiFe(3 nm) for \(\theta=90^{\circ}\) and different frequencies. These individual scans are used to produce a two-dimensional \(f\)-\(\mu_{0}H\) plot, shown in Fig. 3(d), that captures the absorption spectrum. At \(\mu_{0}H\approx 0.25\) T, instead of mode degeneracy, we observe an avoided crossing, suggesting that the in- and out-of-phase oscillations are strongly hybridised [29; 23]. Figure 3(e) plots the peak positions extracted by fitting the individual curves with derivative Lorentzian functions [74; 75; 76]. Equation (20), with material parameters independently extracted from the static VSM measurements (Table 1), generates curves that are in reasonable agreement with experiment. This demonstrates the applicability of the macrospin model with a minimal set of phenomenological parameters for this type of experiment. To highlight the role of thickness asymmetry in the gap opening, we also show two additional sets of model calculations for (\(d_{A}\),\(d_{B}\))=(5 nm, 4 nm) and (5 nm, 5 nm). The model shows that the spectral gap widens as the thickness asymmetry is increased and that the gap disappears in a symmetric system. Figures 3(f-g) confirm this prediction for the symmetric sample NiFe(5 nm)/Ru(0.4 nm)/NiFe(5 nm), in which the two ferromagnets have equal thickness. The two modes cross each other at \(\mu_{0}H=0.15\) T, with no gap in this case. The mode symmetry prevents them from hybridising and the two modes are degenerate at the crossing point. Due to the asymmetry \(d_{A}\neq d_{B}\), some of the coupling parameters in the off-diagonal blocks in Eq. (10), i.e. \(g_{1}\) and \(g_{2}\), become non-zero, for instance through the prefactor \(\left|\gamma_{A}\right|/\mu_{0}M_{A}-\left|\gamma_{B}\right|/\mu_{0}M_{B}\). Therefore, even for \(\theta=90^{\circ}\), the thickness asymmetry generates hybridisation of the in- and out-of-phase oscillations.
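The peak extraction mentioned above can be sketched as follows. The example assumes a purely symmetric derivative-Lorentzian lineshape (the lineshapes used in Refs. [74; 75; 76] may contain an additional antisymmetric component) and applies it to a synthetic field sweep standing in for a single fixed-frequency scan such as those in Figs. 3(b,c).

```python
import numpy as np
from scipy.optimize import curve_fit

def dlorentz(H, A, Hr, dH, offset):
    """Derivative of a symmetric Lorentzian absorption line centred at Hr with half-width dH."""
    return -2.0 * A * dH**2 * (H - Hr) / ((H - Hr) ** 2 + dH**2) ** 2 + offset

# Synthetic field-swept trace standing in for one fixed-frequency scan (values are illustrative).
rng = np.random.default_rng(0)
H = np.linspace(0.10, 0.40, 400)                       # mu_0 H in tesla
truth = dict(A=1.0, Hr=0.25, dH=0.012, offset=0.0)
signal = dlorentz(H, **truth) + 0.02 * rng.standard_normal(H.size)

# Data-driven seed for the resonance field, then least-squares fit of the lineshape.
p0 = [0.5, H[np.argmax(signal)], 0.02, 0.0]
popt, pcov = curve_fit(dlorentz, H, signal, p0=p0)
Hr_fit, Hr_err = popt[1], np.sqrt(pcov[1, 1])
print(f"resonance field = {Hr_fit:.4f} +/- {Hr_err:.4f} T")
```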
We can further increase the gap size by tilting the moments out of the plane, as we previously demonstrated in the symmetric cases [23]. Figures 4(a-d) summarise the experimentally measured \(\theta\) dependence of the magnetic resonances. We performed a peak-position analysis of these \(\theta\)-dependent results, as shown in Figs. 4(e-h), together with the \(\Delta\)-\(\theta\) relationship plotted in Fig. 4(i). Here \(\Delta\) is defined as the minimum of the difference between the upper and lower resonance frequencies, as shown in the Fig. 4(i) inset. Our theoretical curves reproduce the experimental results without any tunable parameters. As the out-of-plane field component increases, the gap is enhanced compared with that due to the thickness asymmetry alone, which can be attributed to an increase of \(g_{3}\) (Eq. (16)) with decreasing \(\theta\). The observed trend is further supported by repeated experiments with a sample
with the inverted growth order, i.e. NiFe(3 nm)/Ru(0.4 nm)/NiFe(5 nm), which shows approximately the same quantitative behaviour, as seen in Fig. 4(i). This shows that the angle dependence of the gap is a robust feature, independent of the assignment of the top and bottom layers and of small fluctuations in material parameters across different fabrication conditions.
### CoFeB(3 nm)/Ru/NiFe(3 nm)
In order to experimentally demonstrate the effect of symmetry breaking due to the asymmetry in magnetic moments (\(M_{\rm A}\neq M_{\rm B}\)) [52], we grew multilayers of CoFeB/Ru/NiFe where the thickness of the two FM materials was kept fixed at 3 nm. Figure 5 summarises the spectral measurements/analysis for the sample CoFeB(3 nm)/Ru(0.45 nm)/NiFe(3 nm) for different values of \(\theta\). A clear avoided-crossing gap is visible in the spectra shown in Fig. 5(a) for \(\theta=\pi/2\) and the model calculations (solid curves) reproduce the dispersion curves with the degree of moment asymmetry fixed by the static VSM measurements in this stack as shown in Fig. 5(e). This is because \(g_{1}\) and \(g_{2}\) become non-zero when \(M_{A}\neq M_{B}\) (see Eqs. (14)-(15)). \(g_{3}\) further adds to the coupling when the two moments have out-of-plane components and this tendency is experimentally demonstrated as shown in Figs. 5(a)-(h).
Figure 5(i) displays the gap size \(\Delta\) as a function of \(\theta\) for the samples CoFeB(3 nm)/Ru(\(t\))/NiFe(3 nm) with three different Ru thicknesses, i.e. \(t=0.45,0.50\) and 0.55 nm; the magnetic parameters of these samples extracted from VSM measurements are listed in Table 1. The Ru thickness does not directly enter the free energy equation or LLG equation, instead mostly influencing the interlayer exchange coupling strength \(\mu_{0}H_{\rm ex}\). Hence, comparing these three samples can be a good experimental demonstration of the effect of the exchange coupling strength on GHz spectra for the coupled moments. There is indeed direct correlation between \(\Delta\) and \(\mu_{0}H_{\rm ex}\) as shown in the inset of Fig. 5(i) for \(\theta=90^{\circ}\). We also perform further simulations using the same parameters in the sample CoFeB(3 nm)/Ru(0.45 nm)/NiFe(3 nm), except for \(\mu_{0}H_{\rm ex}\) being 0.1 T. \(\Delta\) of this simulation as a function of \(\theta\) is plotted in Fig. 5(j), supporting our claim.
We have so far shown the reliability of our macrospin model in reproducing the experimental results of magnetic resonance spectra in coupled moments via the interlayer exchange interaction. Here we present our theoretical predictions to discuss the magnetic-parameter dependence of \(\Delta\). The asymmetry of coupled moments, i.e. \(M_{A}\) and \(M_{B}\), can be further enhanced in simulation and we
Figure 3: (a) Schematic of the sample structure. (b)-(c) Absorption spectra for the sample NiFe(5)/Ru(0.4)/NiFe(3) at (b) low and (c) high field for \(\theta\)=90\({}^{\circ}\). (d) Microwave transmission as a function of frequency and field for the sample NiFe(5)/Ru(0.4)/NiFe(3). The field is applied within the plane, \(\theta\)=90\({}^{\circ}\). A clear avoided-crossing gap is visible at field \(\mu_{0}H=0.25\) T. (e) Fitting results for data as in (d). The solid lines are fitted curves obtained from macrospin model. The increasing transparencies of the lines correspond to the model calculations for the case (\(d_{A}\),\(d_{B}\))=(5 nm, 3 nm), (5 nm, 4 nm) and (5 nm, 5 nm) respectively. It is seen from the calculations that the spectral gap widens as the thickness asymmetry is increased. (f-g) Similar plots as in (d-e) for sample NiFe(5)/Ru(0.4)/NiFe(5) at \(\theta\)=90\({}^{\circ}\). A clear crossing is seen at at field \(\mu_{0}H=0.15\) T. This crossing indicates that the two modes are degenerate due to the inter-layer symmetry.
find that \(\Delta\) increases monotonically as the difference between \(M_{A}\) and \(M_{B}\) is enlarged at a fixed value of \(\mu_{0}H_{\rm ex}\), as shown in Fig. 6(a), reaching approximately 7.5 GHz with \(\mu_{0}M_{A}=1.5\) T and \(\mu_{0}M_{B}=0.4\) T. This might be achieved by selecting a low-moment magnet as the counterpart of CoFeB to form a synthetic-ferrimagnet stack. Our model simulations also suggest that in such synthetic ferrimagnets with large moment asymmetry, \(\mu_{0}H_{\rm ex}\), which can be tuned by the thickness of the intermediate layer, can act as a knob to further enhance \(\Delta\), as presented in Fig. 6(b). See Appendix C for the individual spectra used to extract \(\Delta\). Finally, the \(\theta\) dependence of \(\Delta\) for different values of \(\mu_{0}H_{\rm 2ex}\) is plotted in Fig. 6(c). In these simulations, an increase of \(\mu_{0}H_{\rm 2ex}\) decreases \(\Delta\), which is qualitatively different from the role of \(\mu_{0}H_{\rm ex}\), e.g. in Fig. 5(j). This is partially because of the general competition between \(J_{1}\) and \(J_{2}\), which prefer different static-state configurations and therefore combine to soften the order and decrease the scale of the resonance frequencies. While \(\mu_{0}H_{\rm 2ex}\) is not a material parameter that can easily be tuned by growth conditions, it is interesting to note that the biquadratic coupling enters the spectral response very differently from its quadratic counterpart. In general, when the off-diagonal block elements \(g_{1},g_{2},g_{3}\) become comparable with the diagonal-block ones, as in the present case, the notion of coupling between acoustic and optical modes becomes inappropriate, leading to a complex dependence of \(\Delta\) not only on the asymmetry-related parameters but also on the symmetry-respecting ones such as \(\mu_{0}H_{\rm ex}\) and \(\mu_{0}H_{\rm 2ex}\). We also note that our model does not include the mutual spin-pumping term between the two magnetic layers [77]. The good agreement between experiment and theory without this term indicates that its contribution is insignificant.
## V Conclusion
We have studied the dynamics of synthetic ferrimagnets and have shown, both theoretically and experimentally, their magnon-magnon coupling when the two ferromagnetic layers have dissimilar materials or thicknesses. We presented analytical expressions for the coupled-mode resonance frequencies and used them to discuss the experimental results quantitatively. With their rich and controllable spin-wave spectra arising from interlayer-coupled magnetic moments, these materials may find important use in future magnonic/spintronic applications [30; 31; 32; 78; 79].
## Acknowledgements
A. S. thanks the JSPS Postdoctoral Fellowship for Research in Japan (P21777) and EPSRC for their support through the NPIF EPSRC Doctoral studentship (EP/R512400/1) during her PhD at UCL. K. Y. is supported by JST PRESTO Grant No. JPMJPR20LB,
Figure 4: (a)-(d) Microwave transmission as a function of frequency and applied field for the sample NiFe(5)/Ru(0.4)/NiFe(3) for different \(\theta\). The angle \(\theta\) is defined as in Fig. 1. (e)-(h) Resonance frequency as a function of field obtained by derivative Lorentzian fitting of the experimental data. The solid lines in the figure are theoretical results obtained from the macrospin model. (i) Spectral gap as a function of \(\theta\) obtained from theoretical model calculations. It can be seen that a maximum gap of \(\approx 4.5\) GHz is achieved. The spectral gap is defined as the minimum of the difference between the upper (\(f_{\rm u}\)) and lower (\(f_{\rm l}\) ) resonance frequencies as a function of \(\mu_{0}H\) as shown by the dotted line in inset for the sample NiFe(5)/Ru(0.4)/NiFe(3).
Japan and JSPS KAKENHI (No. 21K13886). SM thanks CSRN in CSIS at Tohoku Univ. and JSPS KAKENHI (No. 21H04648, 22F21777, 22KF0030).
## Appendix A Collinear ground states
The coordinate axes we used in the main text, Eq. (9) are not well-defined for \(\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}=\pm 1\), namely when the two magnetisations are collinear in the static state. This happens for \(H\leq H_{\text{sf}}\) and \(H\geq H_{\text{ff}}\) if the magnetic field
Figure 5: (a)-(d) Microwave transmission as a function of frequency and applied field for the sample CoFeB(3)/Ru(0.45)/NiFe(3) for different \(\theta\). The spectral gap increases as \(\theta\) is decreased. (e)-(h) Resonance frequency as a function of field obtained by derivative Lorentzian fitting of the experimental data. The solid lines in (e)-(h) are theoretical results obtained from the macrospin model. (i) The spectral gap as a function of \(\theta\) for different Ru thicknesses, which shows a gradual increase as \(\theta\) is decreased. Inset shows the variation of \(\Delta\) as a function of \(\mu_{0}H_{\text{ex}}\). (j) Spectral gap as a function of \(\theta\) for sample with Ru thickness 0.45 nm at \(\mu_{0}H_{\text{ex}}=0.1\) T and 0.05 T. The gap shows an increase as \(\mu_{0}H_{\text{ex}}\) is increased. The spectra used for extracting the spectral gap is given in Appendix C.
is in-plane \(\theta=90^{\circ}\), and more generally at high fields if the two layers are identical.
Let us first discuss the antiferromagnetic state \(\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}=-1\), for which we can assume \(\mathbf{H}=H\hat{\mathbf{x}}\). In place of \(\mathcal{C}_{2}\) given in Eq. (8), the static state satisfies \(\mathcal{C}_{2}^{\prime}\mathbf{n}_{A}^{0}=\mathbf{n}_{B}^{0}\) where
\[\mathcal{C}_{2}^{\prime}\mathbf{v}=2\left(\hat{\mathbf{y}}\cdot\mathbf{v}\right)\hat{\mathbf{y}}-\mathbf{v}. \tag{10}\]
One may still then define \(\mathbf{\delta}_{\pm}=\left(\mathbf{\delta}_{A}\pm\mathcal{C}_{2}^{\prime}\mathbf{\delta} _{B}\right)/\sqrt{2}\) and decompose them as \(\mathbf{\delta}_{\pm}=\delta_{\pm}^{\pm}\hat{\mathbf{z}}\times\mathbf{n}_{A}^{0}+\delta_{ \pm}^{[\pm}\hat{\mathbf{z}}\). The rest does not have to be changed with \(z_{A}=z_{\perp A}=z_{B}=z_{\perp B}=0,z_{Z}=1\) and \(\mathbf{n}_{A}^{0}=-\mathbf{n}_{B}^{0}=\pm\hat{\mathbf{x}}\) according to \(d_{A}M_{A}\gtrless d_{B}M_{B}\).
For the ferromagnetic state \(\mathbf{n}_{A}^{0}\cdot\mathbf{n}_{B}^{0}=1\), \(\hat{\mathbf{X}}=\mathbf{n}_{A}^{0}\) is well-defined and one may redefine \(\hat{\mathbf{Y}}=\hat{\mathbf{y}}\). With this provision, \(\mathcal{C}_{2}\) is simply a two-fold rotation about \(\hat{\mathbf{X}}\) and \(\mathbf{\delta}_{\pm}=\mathbf{\delta}_{A}\mp\mathbf{\delta}_{B}\). Again nothing needs to be modified in Eq. (10) and beyond with \(z_{\perp A}=z_{\perp B}=0\).
## Appendix B Additional magnetisation-dynamics results for other samples measured in this study
This section provides supplementary results for samples measured in this study, which further supports the observations and claims as described in the main text. Top panels (a-b) in Figs. 7 and 8 show measurements for some remaining angles not shown in the main text for the samples with stacking pattern CoFeB/Ru(0.45)/NiFe and NiFe(5)/Ru(0.4)/NiFe(3) respectively. The fittings produced by our macrospin model are shown in bottom panel, which agree well with the experimental data.
The measurements were repeated for the other sets of samples following the procedure outlined in the main text, and we observed a similar variation of the spectral gap as the applied field is rotated towards the out-of-plane direction, as shown in Fig. 9 for the sample CoFeB/Ru(0.5)/NiFe and Fig. 11 for the sample CoFeB/Ru(0.55)/NiFe. The resonance frequencies obtained by fitting the experimental data with derivative Lorentzian functions are plotted, together with the theoretical predictions, in Figs. 10 and 12, corresponding to Figs. 9 and 11, respectively. For the samples with Ru thickness 0.5 and 0.55 nm the gap opening is smaller than for the sample with Ru thickness 0.45 nm, as shown in Figs. 9 and 11. This is due to the lower value of \(\mu_{0}H_{\text{ex}}\) in these samples. These results further support our observation of a spectral gap controlled by the out-of-plane
Figure 6: Spectral gap obtained from simulations by varying (a) \(M_{B}\) and (b) \(\mu_{0}H_{\text{ex}}\). The other fixed parameters used for the simulations are indicated on the plot. As the asymmetry is increased, a very large spectral gap \(\approx\) 12 GHz is obtained for \(\mu_{0}H_{\text{ex}}\) = 0.15 T as shown in (b). (c) Spectral gap as a function of \(\theta\) for different biquadratic exchange field values \(\mu_{0}H_{\text{2ex}}\) for the sample with Ru thickness of 0.45 nm. The other parameters used for the simulation are the same as given in Table 1. For low \(\mu_{0}H_{\text{2ex}}\) values, the increase in gap size is not prominent as \(\theta\) is varied.
Figure 7: (a)-(b) Extra plots of microwave transmission as a function of frequency and applied field for CoFeB(3)/Ru(0.45)/NiFe(3) for different \(\theta\) values. The spectral gap increases as \(\theta\) is decreased. Figures (c-d) shows resonance frequency obtained using derivative Lorentzian fitting of the experimental data and the solid lines are the theoretical curves obtained from macrospin model.
angle \(\theta\) and exchange field strength \(\mu_{0}H_{\mathrm{ex}}\) as mentioned in the main text.
## Appendix C Numerical simulations to study the impact of varying parameters on coupling gap
Using numerical simulations, we explored different parameter regimes beyond the experimental conditions. In an effort to understand the magnetic-parameter dependence of \(\Delta\), we performed numerical simulations by varying
Figure 11: (a)-(d) Microwave transmission as a function of frequency and applied field for CoFeB(3)/Ru(0.55)/NiFe(3) for different \(\theta\) values. A small variation in spectral gap is seen as the \(\theta\) varied.
Figure 8: (a)-(b) Extra plots of microwave transmission as a function of frequency and applied field for NiFe(5)/Ru(0.4)/NiFe(3) for different \(\theta\) values. Figures (c-d) shows resonance frequency obtained using derivative Lorentzian fitting of the experimental data and the solid lines are the theoretical curves obtained from macrospin model for the experimental data as in (a-b).
Figure 10: (a)-(d) Resonance frequency extracted from derivative Lorentzian fitting of experimental data as a function of applied field along with theoretical prediction for CoFeB(3)/Ru(0.5)/NiFe(3). These correspond to the data shown in Fig. 9.
Figure 9: (a)-(d) Microwave transmission as a function of frequency and applied field for CoFeB(3)/Ru(0.5)/NiFe(3) for different \(\theta\). The gap opening is smaller as compared to sample with Ru thickness 0.45 nm due to smaller \(\mu_{0}H_{\mathrm{ex}}\) of this sample.
different parameters \(\mu_{0}M_{\rm A}\), \(\mu_{0}H_{\rm ex}\) and \(\theta\), as shown in Fig. 13.
The values of \(\Delta\) corresponding to Fig. 13 are shown in Fig. 6 of the main text. These numerical simulations suggest that \(\Delta\) can be tuned by varying these parameters.
|
2305.01746
|
Emergent U(1) symmetry in non-particle-conserving one-dimensional models
|
The properties of stable Luttinger liquid phases in models with a
non-conserved number of particles are investigated. We study the Luttinger
liquid phases in one-dimensional models of hard-core boson and spinless fermion
chains where particles can be created and annihilated three by three on
adjacent sites. We provide an intuitive and systematic method based on flow
equations approach, which accounts for additional terms in the correlations
generated by the $\mathbb{Z}_3$-symmetric interactions. We find that despite
the emergence of U(1) symmetry under renormalization, the observables are still
affected by its breaking in the bare Hamiltonian. In particular, the standard
bosonization mapping becomes insufficient to capture the full behavior of
correlation functions.
|
Zakaria Jouini, Natalia Chepiga, Loic Herviou, Frédéric Mila
|
2023-05-02T19:28:49Z
|
http://arxiv.org/abs/2305.01746v2
|
# Emergent U(1) symmetry in non-particle-conserving 1D models
###### Abstract
The properties of stable Luttinger liquid phases in models with a non-conserved number of particles are investigated. We study the Luttinger liquid phases in one-dimensional models of hard-core boson and spinless fermion chains where particles can be created and annihilated three by three on adjacent sites. We provide an intuitive and systematic method based on flow equations approach, which accounts for additional terms in the correlations generated by the \(\mathbb{Z}_{3}\)-symmetric interactions. We find that despite the emergence of U(1) symmetry under renormalization, the observables are still affected by its breaking in the bare Hamiltonian. In particular, the standard bosonization mapping becomes insufficient to capture the full behavior of correlation functions.
## I Introduction
The observation of density-wave order in the recent experiments on one-dimensional systems of Rydberg atoms [1; 2] has brought back unsolved questions about the nature of the commensurate melting of period-\(p\) phases [3; 4; 5; 6; 7; 8]. For \(p>2\), a floating phase, characterized by incommensurate and algebraic correlations, separates the \(\mathbb{Z}_{p}\)-ordered and the disordered phases [3; 8; 9; 10]. For \(p=3\) and \(p=4\), the extension of the floating phase is still debated [8; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. One open question is the existence of a direct and continuous transition in the chiral universality class that would occur before the floating phase develops [4; 8]. The quantum version of this problem can be formulated in terms of hard-core bosons associated with domain walls in the commensurate structure of the density wave [21]. The resulting Hamiltonian exhibits a \(\mathbb{Z}_{p}\) symmetry as it contains terms that create and annihilate \(p\) adjacent particles. When these perturbations to the free-fermion fixed point are irrelevant, the U(1) charge conservation is restored at the scaling limit in an extended incommensurate Luttinger liquid phase [17; 22], equivalent to the floating phase of the 2D classical problem [8].
The properties of the Luttinger liquid phase are described by a bosonic conformal field theory [23; 24]. The correspondence between the operators at the lattice scale and the bosonic fields in the continuum limit can be in principle constructed from selection rules dictated by the symmetry of the lattice Hamiltonian. When the latter exhibits U(1) symmetry, this correspondence is generally given by the standard bosonization mapping [24]. For models that do not conserve the number of particles, the latter becomes insufficient to capture the long-distance behavior of the correlations. A similar problem was discussed recently in the case of nonsymmorphic 1D models [25; 26]. We provide in this paper a way to derive a bosonic representation of lattice operators in \(\mathbb{Z}_{p}\)-symmetric 1D models. Using the flow equation approach [27], introduced by Wegner in 1994, a continuous unitary transformation is designed to restore the U(1) symmetry perturbatively in an effective Hamiltonian. The action of these transformations on the lattice operator generates an expansion in terms of the bosonic fields of the Luttinger liquid theory. The bosonization mappings obtained from this procedure are sufficient to capture the long-distance behavior of the correlations. We illustrate this method on two chains of spinless fermions and hard-core bosons where particles are created and annihilated three by three on adjacent sites. Our findings are assessed by density matrix renormalization group (DMRG) [30; 31; 32; 33] simulations.
The paper is organized as follows. In Section II, we discuss the phase diagrams of the hard-core boson and spinless fermion models. Section III provides a brief introduction to the flow equation approach. In Section IV, the flow equation approach is applied to the fermionic model to derive perturbatively a U(1)-symmetric effective Hamiltonian. A modified bosonic representation of the single-fermion operator is then derived and used to calculate correlation functions inside the Luttinger liquid phase. In Section V, the same procedure is applied to the continuum limit of the hard-core boson model, using the generator that diagonalizes the dual sine-Gordon model. The results are summarized in Section VI.
## II Models and Phase Diagrams
We consider one-dimensional models of hard-core bosons and spinless fermions where particles are created and annihilated three by three on adjacent sites. The two models share the feature of a Luttinger liquid phase that remains stable when the \(\mathbb{Z}_{3}\)-symmetric interaction is turned on. In this phase, the low-energy properties of the system are described by the Luttinger liquid Hamiltonian
\[H_{\rm LL}=\frac{1}{2\pi}\int dx\,\frac{v}{K}[\partial_{x}\theta(x)]^{2}+vK[ \partial_{x}\phi(x)]^{2}, \tag{1}\]
where \(K\) is the Luttinger parameter and \(v\) is the velocity. The fields \(\phi\) and \(\theta\) are bosonic in nature and satisfy the commutation relation \([\phi(x),\theta(y)]=i\pi\,{\rm sign}(y-x)/2\). The correlations decay algebraically with an exponent controlled by \(K\) and oscillate with an incommensurate
wave vector proportional to the Fermi wave vector \(k_{F}\). We present in this section the arguments for the stability of the Luttinger liquid phase and discuss the nature of the transitions out of it.
### Hard-core bosonic model
The Hamiltonian of the hard-core boson model is given by
\[H=\sum_{i}-t(b^{\dagger}_{i+1}b_{i}+\text{h.c.})-\mu n_{i}+\lambda(b^{\dagger}_{ i}b^{\dagger}_{i+1}b^{\dagger}_{i+2}+\text{h.c.}), \tag{2}\]
where \(b^{\dagger}_{i}\) and \(b_{i}\) are respectively the creation and annihilation operators of hard-core bosons at site \(i\), and \(n_{i}=b^{\dagger}_{i}b_{i}\) is the density operator. The hard-core constraint amounts to a restriction of the occupation number to \(n_{i}=0\) and \(n_{i}=1\). Accordingly, the operators satisfy the commutation relation \([b_{i},b^{\dagger}_{j}]=(1-2n_{i})\delta_{i,j}\). The hard-core boson model was recently introduced as a dual description of the transition to phases with a density-wave order of period 3 in chains of Rydberg atoms [15]. Its phase diagram is studied extensively in Ref. [17]. We recall here the main results. Without loss of generality, the hopping amplitude is set to \(t=1\). At \(\lambda=0\), Eq.(2) is a free fermion Hamiltonian that can be mapped in the continuum limit to the Luttinger liquid Hamiltonian (1) with a Luttinger exponent \(K=1\) and a velocity \(v=2\sin(k_{F})\). When \(\lambda\neq 0\), the U(1) symmetry is reduced to a \(\mathbb{Z}_{3}\) symmetry. The stability of the Luttinger liquid phase follows from the scaling analysis of the Hamiltonian in the continuum limit, obtained by applying the bosonization mapping [24]. Since the bosonic representation of the hard-core boson operators takes the form \(b\sim e^{i\theta}\), the Hamiltonian reduces, at half filling (\(\mu=0\)) and up to the most relevant term, to
\[H\sim H_{\text{LL}}+\frac{g}{\pi\alpha^{2}}\int dx\cos(3\theta(x)), \tag{3}\]
where \(g\) is a dimensionless coupling and \(\alpha\) is a real-space cutoff. A standard renormalization group (RG) analysis [24] yields the RG equations
\[\begin{split}\frac{dK}{dl}&=\frac{9}{4}g^{2},\\ \frac{dg}{dl}&=\Big{(}2-\frac{9}{4K}\Big{)}g.\end{split} \tag{4}\]
When \(K<9/8\), the coupling constant decays exponentially under renormalization such that the Luttinger liquid Hamiltonian is recovered with effective parameters \(K^{\star}\) and \(v^{\star}\) that depend on the bare coupling constants in Eq.(6). The U(1) symmetry is thus restored at the scaling limit. It manifests itself as a symmetry under translation of the dual field \(\theta\). At \(K=9/8\), the system undergoes a Kosterlitz-Thouless (KT) transition [28] into a \(\mathbb{Z}_{3}\)-ordered phase by pinning the dual field in the minima of the cosine, i.e., at \(\theta_{n}=2\pi n/3\). Upon varying the chemical potential \(\mu\) inside the Luttinger liquid phase, a commensurate-incommensurate transition into a disordered phase occurs. This transition is in the Pokrovsky-Talapov (PT) universality class [29], characterized by a dynamical exponent \(z=2\) and an incommensurate correlations wave vector that approaches its commensurate value with a singularity proportional to \(|\mu-\mu_{\text{c}}|^{1/2}\). The KT and PT lines are expected to meet at a Lifshitz point that would appear before the three-state Potts point [8]. Between the two points, the commensurate melting is direct and takes place in the chiral universality class. These predictions are confirmed numerically in Ref. [17].
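For illustration, the RG equations (4) can be integrated numerically to visualise the separatrix near \(K=9/8\); the sketch below uses an arbitrarily chosen bare coupling \(g(0)=0.05\) and is not meant to reproduce the phase boundary of the lattice model (2).

```python
import numpy as np
from scipy.integrate import solve_ivp

def rg_flow(l, y):
    """RG equations (4) for the cos(3*theta) perturbation of the Luttinger liquid."""
    K, g = y
    return [9.0 / 4.0 * g**2, (2.0 - 9.0 / (4.0 * K)) * g]

for K0 in (1.0, 1.05, 1.2):                        # bare Luttinger parameter
    sol = solve_ivp(rg_flow, (0.0, 10.0), [K0, 0.05], rtol=1e-8)
    K_end, g_end = sol.y[:, -1]
    trend = "decays (Luttinger liquid)" if abs(g_end) < 0.05 else "grows (flows towards Z3 order)"
    print(f"K(0)={K0:.2f}:  K(l=10)={K_end:.3f},  g(l=10)={g_end:.3e}  ->  g {trend}")
```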
### Fermionic model
The Hamiltonian of the fermionic model is given by
\[H=\sum_{i}-t(c^{\dagger}_{i+1}c_{i}+\text{h.c.})-\mu n_{i}+\lambda(c^{\dagger}_ {i}c^{\dagger}_{i+1}c^{\dagger}_{i+2}+\text{h.c.}), \tag{5}\]
where \(c^{\dagger}_{i}\) and \(c_{i}\) are the creation and annihilation operator of spinless fermions. It differs from the hard-core boson model (2) by a string operator when the Jordan-Wigner transformation is applied.
Similar to the bosonic case, the stability of the Luttinger liquid phase in Eq.(5) can be investigated using
Figure 1: (a)-(b) Scaling of the incommensurate wave vector as a function of the chemical potential close to the PT transition for (a) \(\lambda=0.2\) and (b) \(\lambda=2\). (c) Luttinger parameter \(K\) as a function of \(\lambda\) at \(\mu=0\), extracted numerically from the decay of the two-fermion correlations. This procedure is only valid when \(K\) is larger than \(\sqrt{3}\) (see Section IV.3.2). The system is simulated with \(N=600\) sites.
the bosonization mapping [24]. In terms of the fields \(\phi\) and \(\theta\), the Hamiltonian reads
\[H\sim H_{\rm LL}+\frac{g}{\pi\alpha^{2}}\int dx\,\cos(3\phi(x)-3k_{F}x)\cos(3 \theta(x)), \tag{6}\]
where terms that oscillate at \(k_{F}\) are neglected as they vanish under the integration for all fillings of the band inside the Luttinger liquid phase. Eq.(6) becomes non-oscillating at \(k_{F}=2\pi/3\). The perturbation to the Luttinger liquid Hamiltonian has a scaling dimension \(\Delta=9(K+1/K)/4\) and becomes relevant when \(\Delta<2\). Since \(K+1/K\geq 2\), one has \(\Delta\geq 9/2>2\) for all \(K\), so this inequality has no solution and the coupling constant \(g\) decays exponentially under renormalization for all values of \(K\). It should be noted that terms such as \(\cos(6\theta)\) and \(\cos(6\phi)\) are generated along the RG flow and can lead to a KT transition when \(K>9/2\) or \(K<2/9\). DMRG computations of \(K\) suggest however that it increases slowly enough from \(K=1\) for the generated terms to remain irrelevant even at fairly large values of \(\lambda\). For instance, the numerical results at \(\mu=0\) indicate an extension of the Luttinger liquid phase at least up to \(\lambda\sim 50\) (Fig.1). Nevertheless, a KT transition at larger values of \(\lambda\) cannot be excluded. As the chemical potential is varied in the Luttinger liquid phase, a PT transition line can be identified numerically using the scaling behavior of the wave vector (Fig.1).
## III Flow equation approach
The general idea behind the flow equation approach introduced by Wegner [27] is to apply continuous unitary transformations to the Hamiltonian in order to bring it into a more band-diagonal form. The approach consists in a renormalization scheme where states with large energy differences are first decoupled while smaller energy differences are later suppressed along the flow.
The formalism of the method is based on the parametrization of a set of unitarily equivalent Hamiltonians \(H(l)=U(l)H(0)U^{\dagger}(l)\). By taking the derivative with respect to \(l\), the problem is recast into the differential equation
\[\frac{dH(l)}{dl}=[\eta(l),H(l)], \tag{7}\]
where
\[\eta(l)=\frac{dU(l)}{dl}U^{\dagger}(l) \tag{8}\]
is the anti-hermitian generator of the flow. The latter can be chosen appropriately to diagonalize the bare Hamiltonian \(H(0)\). The canonical choice proposed by Wegner [27] is given by \(\eta(l)=[H_{\rm d}(l),H_{\rm od}(l)]\), where \(H_{\rm d}(l)\) and \(H_{\rm od}(l)\) are the diagonal and the off-diagonal parts of the flowing Hamiltonian \(H(l)\). From this definition of the generator, it can be shown [34] that
\[\frac{d\,\mathrm{Tr}\big{(}H_{\rm od}^{\dagger}(l)H_{\rm od}(l)\big{)}}{dl}=-2\mathrm{Tr}(\eta^{\dagger}(l)\eta(l))\leq 0, \tag{9}\]
which indicates that the flow gradually brings the Hamiltonian into a more band-diagonal form. A fixed point of the flow is reached when \(\eta(l)\) vanishes. The diagonal and off-diagonal parts of the flowing Hamiltonian then commute and the Hamiltonian becomes block-diagonal with respect to the symmetry of the non-interacting part. Thus, the flow equation approach provides a systematic way to design a unitary transformation that recovers the U(1) symmetry in models that do not conserve the number of particles. Its drawback is the proliferation of terms along the flow. Truncation schemes are hence needed to keep the calculations tractable.
Once the diagonal Hamiltonian is obtained from the flow equation procedure, the change of basis associated with \(\eta(l)\) can be applied to the operators. Given an operator \(O\), its transformation along the flow is dictated by the flow equation
\[\frac{dO(l)}{dl}=[\eta(l),O(l)]. \tag{10}\]
The expectation value in the ground state of the bare Hamiltonian can be evaluated using the relation
\[\left\langle\psi_{\rm gs}\right|O\left|\psi_{\rm gs}\right\rangle=\left\langle \psi_{\rm gs}(\infty)\right|O(\infty)\left|\psi_{\rm gs}(\infty)\right\rangle, \tag{11}\]
where \(\left|\psi_{\rm gs}(\infty)\right\rangle=U^{\dagger}(\infty)\left|\psi_{\rm gs }\right\rangle\) is the ground state of the diagonal Hamiltonian \(H(\infty)\).
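A minimal numerical illustration of the procedure, not part of the calculations of this paper, is obtained by integrating Eq. (7) with the canonical generator for a small random symmetric matrix: the off-diagonal weight decreases in accordance with Eq. (9) and the flowing diagonal entries approach the exact eigenvalues.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
n = 6
H0 = rng.standard_normal((n, n))
H0 = (H0 + H0.T) / 2                       # random real symmetric "Hamiltonian"

def split(H):
    Hd = np.diag(np.diag(H))               # diagonal part
    return Hd, H - Hd                      # off-diagonal part

def flow(l, h):
    H = h.reshape(n, n)
    Hd, Hod = split(H)
    eta = Hd @ Hod - Hod @ Hd              # canonical generator eta = [H_d, H_od]
    return (eta @ H - H @ eta).ravel()     # dH/dl = [eta, H], Eq. (7)

sol = solve_ivp(flow, (0.0, 100.0), H0.ravel(), rtol=1e-8)
Hinf = sol.y[:, -1].reshape(n, n)

print("Tr(Hod^2) at l=0:  ", np.sum(split(H0)[1] ** 2))
print("Tr(Hod^2) at l=100:", np.sum(split(Hinf)[1] ** 2))   # decreases along the flow, cf. Eq. (9)
print("flowing diagonal:  ", np.sort(np.diag(Hinf)).round(3))
print("exact eigenvalues: ", np.sort(np.linalg.eigvalsh(H0)).round(3))
```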
In the following sections, we apply the flow equation procedure to the models in Eqs.(2) and (5). The transformed bosonic representations of the bosonic and fermionic operators are derived from the generator of the flow that restores the U(1) symmetry in the Hamiltonians.
## IV Flow equation approach to the fermionic model
We derive in this section a U(1)-symmetric effective Hamiltonian that describes the low-energy properties of the fermionic Hamiltonian (5).
### Flow of the Hamiltonian
We proceed by writing the Hamiltonian in the Fourier basis in order to separate it into a diagonal part \(H_{0}\) and an interaction part \(\lambda H_{3}\). We have
\[H=\sum_{k}\xi_{k}c_{k}^{\dagger}c_{k}+\frac{\lambda}{3!\sqrt{N}}\sum_{k,q}B_{ k,q}(c_{k}^{\dagger}c_{q}^{\dagger}c_{-k-q}^{\dagger}-\mathrm{h.c.}), \tag{12}\]
where
\[\begin{split}\xi_{k}&=-2\cos(k)-\mu\\ B_{k,q}&=2i[\sin(2k+q)-\sin(2q+k)-\sin(k-q)].\end{split} \tag{13}\]
Along the flow, other interaction terms that are not initially present are generated. They are incorporated in the following ansatz for the flowing Hamiltonian \(H(l)\):
\[H(l)=H_{0}(l)+\lambda H_{3}(l)+\lambda^{2}H_{U}(l), \tag{14}\]
where
\[H_{U}(l)=\frac{1}{N}\sum_{k,q,p}U_{k,q,p}(l)c^{\dagger}_{k+p}c^{\dagger}_{q-p}c_{k }c_{q}. \tag{15}\]
Three-body interaction and six-fermion terms are also generated along the flow. Due to their large scaling dimension, they are neglected in the ansatz (14). For brevity, we denote the bare Hamiltonian by \(H\equiv H(0)\). Taking the generator as \(\eta(l):=[H_{0}(l),\lambda H_{3}(l)]\), which reads
\[\eta(l)=\frac{\lambda}{3!\sqrt{N}}\sum_{k,q}B_{k,q}(l)\alpha_{k,q}(l)(c^{ \dagger}_{k}c^{\dagger}_{q}c^{\dagger}_{-k-q}+\text{h.c.}), \tag{16}\]
with \(\alpha_{k,q}=\xi_{k}+\xi_{q}+\xi_{-k-q}\), we obtain a set of flow equations:
\[\begin{split}\frac{dB_{k,q}(l)}{dl}&=-\alpha_{k,q} ^{2}(l)B_{k,q}(l),\\ \frac{dU_{k,q,p}(l)}{dl}&=-\frac{1}{2}\alpha_{k+p,q- p}(l)B_{k,q}(l)B_{k+p,q-p}(l),\\ \frac{d\xi_{k}(l)}{dl}&=-\frac{\lambda^{2}}{N}\sum _{q}\alpha_{k,q}(l)B_{k,q}^{2}(l),\end{split} \tag{17}\]
with the initial conditions \(B_{k,q}(0)=B_{k,q}\), \(\xi_{k}(0)=\xi_{k}\), and \(U_{k,q,k^{\prime}}(0)=0\). To the leading order in \(\lambda\), it is sufficient to take the bare value of \(\alpha_{k,q}\) in the flow equation of \(B_{k,q}\). The solution shows that the three-site term decays to zero along the flow as \(B_{k,q}(l)=B_{k,q}e^{-\alpha_{k,q}^{2}l}\). On the other hand, the generated terms take finite values at \(l=\infty\) given by
\[\begin{split} U_{k,q,p}(\infty)&=-\frac{1}{2}\frac {\alpha_{k+p,q-p}}{\alpha_{k,q}^{2}+\alpha_{k+p,q-p}^{2}}B_{k,q}B_{k+p,q-p},\\ \xi_{k}(\infty)&=\xi_{k}-\frac{\lambda^{2}}{2N}\sum _{q}\frac{B_{k,q}^{2}}{\alpha_{k,q}},\text{ for }\alpha_{k,q}\neq 0.\end{split} \tag{18}\]
By construction, the two-body interaction matrix \(U_{k,q,p}\) is symmetrized with respect to the permutations of fermion operators in Eq.(15). Its value at \(l=\infty\) does not contain divergences since the denominator is a sum of squared energies. The divergence in the renormalized dispersion is an artifact of the limit \(l\to\infty\). In fact, the contribution of the flow to the dispersion vanishes when \(\alpha_{k,q}=0\). Finally, we note that Wegner's prescription is relaxed by including in the definition (16) of \(\eta(l)\) only the three-site term and not the full interacting part of \(H(l)\). As a result, the approach becomes perturbative in \(\lambda\). Terms that are generated along the flow have a second-order dependence in \(\lambda\) and can be eliminated by including them in a redefinition of the generator. This induces higher-order corrections to the flow equations (17). Thus, the flow equation procedure makes it possible to push the \(\lambda\)-dependence of the terms that break the U(1) symmetry to higher orders, thereby restoring the symmetry perturbatively.
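As a simple consistency check of Eq. (18), the leading-order flow can be integrated numerically for a single momentum triple \((k,q,p)\); the sketch below uses arbitrarily chosen momenta and \(\mu=0\), and compares the integrated flow of \(U_{k,q,p}\) with the closed form.

```python
import numpy as np
from scipy.integrate import quad

mu = 0.0
k, q, p = 0.7, -1.3, 0.4              # an arbitrary momentum triple (illustration only)

xi = lambda k: -2.0 * np.cos(k) - mu                                        # bare dispersion, Eq. (13)
B = lambda k, q: 2j * (np.sin(2*k + q) - np.sin(2*q + k) - np.sin(k - q))   # matrix element, Eq. (13)
alpha = lambda k, q: xi(k) + xi(q) + xi(-k - q)

a1, a2 = alpha(k, q), alpha(k + p, q - p)
B1, B2 = B(k, q), B(k + p, q - p)

# Leading-order flow: B_{k,q}(l) = B_{k,q} exp(-alpha_{k,q}^2 l); integrate dU/dl from Eq. (17).
# B1*B2 is real (product of two purely imaginary matrix elements), so only the real part is kept.
dUdl = lambda l: (-0.5 * a2 * B1 * B2 * np.exp(-(a1**2 + a2**2) * l)).real
U_numeric = quad(dUdl, 0.0, np.inf)[0]
U_closed = (-0.5 * a2 * B1 * B2 / (a1**2 + a2**2)).real                     # Eq. (18)
print(f"integrated flow: {U_numeric:.6f}   closed form: {U_closed:.6f}")
```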
### Bosonization of the effective Hamiltonian
We investigate in this section the low-energy properties of the Hamiltonian (5) starting from the effective Hamiltonian \(H_{\text{eff}}=H_{0}(\infty)+\lambda^{2}H_{U}(\infty)\). Taking the long-wavelength limit of the two-body interaction term, only particle-hole excitations within a momentum range \(\Lambda\) around the Fermi points \(\pm k_{F}\) are retained. The renormalized dispersion relation can then be linearized around the Fermi points and the fermion operators separated into right and left modes:
\[c_{k}=\Theta(\Lambda-|k-k_{F}|)c_{R,k}+\Theta(\Lambda-|k+k_{F}|)c_{L,k}, \tag{19}\]
where \(\Theta\) is the Heaviside function. After constructing fermionic fields from these modes, i.e.,
\[c(x)=\frac{1}{\sqrt{N}}\sum_{k}e^{ikx}c_{k}=:c_{R}(x)+c_{L}(x), \tag{20}\]
we obtain in the continuum limit a Hamiltonian that describes the low-lying states of \(H_{\text{eff}}\). It is given by
\[\begin{split} H_{\text{eff}}=&-i\tilde{v}_{F}\int dx \left[c^{\dagger}_{R}(x)\partial_{x}c_{R}(x)-c^{\dagger}_{L}(x)\partial_{x}c_{ L}(x)\right]\\ &+4\lambda^{2}g_{2}\int dx\,\rho_{R}(x)\rho_{L}(x),\end{split} \tag{21}\]
where \(\tilde{v}_{F}=\partial\xi_{k}(\infty)/\partial k|_{k=k_{F}}\) is the renormalized Fermi velocity, \(\rho_{R}(x)\) and \(\rho_{L}(x)\) are respectively the density operators of the right and left branches, and \(g_{2}=U_{k_{F},-k_{F},0}\) is the forward scattering matrix element. Due to the symmetry of the two-body interaction matrix, \(g_{4}\) scattering processes that couple fermions on the same branch vanish. Finally, the factor 4 accounts for the two possible \(g_{2}\) processes and the two backscattering processes, i.e., \(g_{1}=-U_{k_{F},-k_{F},2k_{F}}\), which for spinless fermions coincide with \(g_{2}\) processes. We note that since \(g_{2}>0\), the interaction is attractive, which indicates that the three-site term in the bare Hamiltonian favors the occupation of three adjacent sites.
We now apply the bosonization mapping between the fermionic fields \(c_{R},c_{L}\) and the bosonic fields \(\phi,\theta\). It is given by [24]
\[c_{r}(x)=\frac{F_{r}}{\sqrt{2\pi\alpha}}e^{irk_{F}x}e^{-i[r\phi(x)-\theta(x)]}, \tag{22}\]
where \(r=1\) for \(r=R\) and \(r=-1\) for \(r=L\). Here, \(\alpha\sim 1/\Lambda\) is a short-distance cut-off and \(F_{r}\) are unitary operators called Klein factors. They follow the commutation relations \(\{F_{r},F_{r^{\prime}}^{\dagger}\}=2\delta_{r,r^{\prime}}\) and ensure the anticommutation of fermions from different species. Using Eq.(22), the Hamiltonian (21) can be reduced to the Luttinger liquid Hamiltonian (1) with a renormalized velocity \(u\) and a Luttinger parameter, given to the second order of \(\lambda\) by
\[K=1+\frac{4}{\pi}\sin(k_{F})\sin^{2}\Big{(}\frac{k_{F}}{2}\Big{)}\lambda^{2}. \tag{23}\]
The derivation of Eq.(23) is detailed in Appendix C. Here, \(k_{F}\) is defined by the filling of the renormalized band \(\xi_{k}(\infty)\) and differs from its value in the non-interacting limit \(\lambda=0\). The relation between \(k_{F}\) in Eq.(23) and the bare chemical potential can be obtained by setting \(\xi_{k_{F}}(\infty)=0\). This leads to the self-consistent equation
\[\mu=-2\cos(k_{F})-\lambda^{2}I(k_{F},\mu), \tag{24}\]
where \(I\) is the sum over the \(q\) modes in Eq.(18).
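A minimal numerical sketch of this self-consistent determination of \(k_{F}\), implementing Eqs. (13), (18), (23) and (24) as written, is given below; the grid size and the exclusion of points with \(\alpha_{k,q}\approx 0\), whose contribution to the flow vanishes, are numerical choices rather than part of the analysis.

```python
import numpy as np

N = 1200                                          # size of the q-grid for the sum in Eq. (18)
q = 2 * np.pi * (np.arange(N) + 0.5) / N - np.pi  # Brillouin-zone grid, shifted off symmetric points

def xi(k, mu):
    return -2.0 * np.cos(k) - mu                  # bare dispersion, Eq. (13)

def B(k, qq):
    return 2j * (np.sin(2*k + qq) - np.sin(2*qq + k) - np.sin(k - qq))   # Eq. (13)

def I_sum(kF, mu):
    # q-sum entering xi_k(infinity), Eq. (18); points with alpha ~ 0 are excluded since
    # their contribution to the flow vanishes (see the remark below Eq. (18))
    alpha = xi(kF, mu) + xi(q, mu) + xi(-kF - q, mu)
    keep = np.abs(alpha) > 1e-6
    return np.real(np.sum(B(kF, q[keep]) ** 2 / alpha[keep])) / (2 * N)

def kF_of_mu(mu, lam, n_iter=100):
    # fixed-point iteration of xi_{kF}(infinity) = 0, i.e. Eq. (24)
    kF = np.arccos(np.clip(-mu / 2.0, -1.0, 1.0))           # non-interacting seed
    for _ in range(n_iter):
        kF = np.arccos(np.clip(-(mu + lam**2 * I_sum(kF, mu)) / 2.0, -1.0, 1.0))
    return kF

lam, mu = 0.1, 1.2
kF = kF_of_mu(mu, lam)
K = 1 + 4 / np.pi * np.sin(kF) * np.sin(kF / 2) ** 2 * lam**2            # Eq. (23)
print(f"kF = {kF:.4f},  K = {K:.5f}")
```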
The behavior of the Luttinger parameter is compared to DMRG results, along a small \(\lambda\) cut, where the perturbative calculations still hold (Fig.2). \(K\) is obtained numerically by fitting the profile of the local density, which for open boundary conditions exhibits Friedel oscillations. According to conformal field theory, we have [35; 36]
\[\langle n_{j}\rangle\propto\frac{\cos(2k_{F}j+\beta)}{[(N/\pi)\sin(\pi j/N)]^ {K}}, \tag{25}\]
with a phase shift \(\beta\). The DMRG results for \(K\) follow the form described by Eq.(23). The absence of a reflection symmetry with respect to \(\mu\) is due to the breaking of particle-hole symmetry by the three-site interaction in Eq.(5).
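The extraction of \(K\) from the Friedel oscillations can be sketched as follows; the example fits Eq. (25), supplemented by a uniform background, to a synthetic density profile standing in for the DMRG data (edge sites are discarded and all parameter values are illustrative).

```python
import numpy as np
from scipy.optimize import curve_fit

N = 600                                   # chain length (open boundary conditions)
j = np.arange(10, N - 9)                  # interior sites; edge sites are discarded

def friedel(j, A, n0, kF, beta, K):
    """Friedel-oscillation profile of Eq. (25), plus a uniform background n0."""
    envelope = ((N / np.pi) * np.sin(np.pi * j / N)) ** K
    return n0 + A * np.cos(2 * kF * j + beta) / envelope

# Synthetic density profile standing in for the DMRG data (values chosen for illustration only).
true = dict(A=0.35, n0=0.636, kF=2.0, beta=0.4, K=1.03)
rng = np.random.default_rng(2)
density = friedel(j, **true) + 1e-4 * rng.standard_normal(j.size)

popt, _ = curve_fit(friedel, j, density, p0=[0.3, 0.6, 2.0, 0.0, 1.0])
print(f"fitted K = {popt[-1]:.4f}   (input value {true['K']})")
```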
The maximal deviation between the analytical and the numerical results is of order \(10^{-3}\). Its origin is twofold. First, it should be noted that by considering only scattering processes with momentum transfer \(p\sim 0,2k_{F}\) in the effective Hamiltonian (21), we have neglected terms that oscillate with a wave vector \(2k_{F}\). These terms contribute to the renormalization of \(K\) at the \(4^{\text{th}}\) order of \(\lambda\), mainly at half-filling. Secondly, three-body and six-fermion interaction terms are discarded in the early stages of the calculations. Including them in the generator of the flow also provides corrections of order 4 in \(\lambda\) to the two-body interaction matrix.
When the edges of the band \(\xi_{k}(\infty)\) are crossed, the system undergoes a PT transition as the density of fermions respectively vanishes or saturates. For small \(\lambda\), the shape of the transition line can be extracted from the solution of Eq.(24) at \(k_{F}=0,\pi\) (Fig.3). As the transition is approached from the Luttinger liquid phase, the two-body interaction matrix vanishes, i.e., \(U_{k_{F},-k_{F},p}=0\) for all \(p\) and the Luttinger parameter tends, up to the second order in \(\lambda\), towards its non-interacting value \(K=1\). This result indicates that although the particles are not conserved in the bare Hamiltonian, they still behave as free fermions in the low density limit.
### Transformed Fermionic operator
In this section, a modified bosonic representation of the single-fermion operator is derived from the flow equation approach. Consequences of these modifications for the correlation functions of the bare operators are discussed.
#### IV.3.1 Flow of single-fermion operators
In order to evaluate observables in the ground state of \(H_{\text{eff}}\), the operators also need to be transformed. In particular, the form of the single-fermion operator under the transformation \(U(\infty)\) that block-diagonalizes the Hamil
Figure 3: (a)-(b) Theoretical prediction for the Pokrovsky-Talapov (PT) transition line at (a) \(k_{F}=0\) and (b) \(k_{F}=\pi\), as a function of \(\lambda\).
Figure 2: (a) Example of the local density profile at \(\lambda=0.1\) and \(\mu=1.2\). (b) Analytical calculation and DMRG simulations of the Luttinger parameter \(K\) at \(\lambda=0.1\) as a function of the bare chemical potential \(\mu\), obtained by solving Eq.(24).
tonian can be obtained by solving the flow equation
\[\frac{dc_{k}(l)}{dl}=[\eta(l),c_{k}(l)]. \tag{26}\]
Similar to the flow of the Hamiltonian, a closed form of the solution to Eq.(26) is not tractable and truncations need to be carried out. We take here the ansatz
\[\begin{split} c_{k}(l)=c_{k}&+\frac{\lambda}{ \sqrt{N}}\sum_{q}\gamma_{k,q}(l)c_{q}^{\dagger}c_{-k-q}^{\dagger}\\ &-\frac{2\lambda}{3\sqrt{N}}\sum_{q,p}\gamma_{q,p}(l)\big{[}c_{q} ^{\dagger}c_{p}^{\dagger}c_{-p-q}^{\dagger}-c_{q}c_{p}c_{-p-q}\big{]}c_{k}. \end{split} \tag{27}\]
It will later be argued that higher-order terms in \(\lambda\) do not bring any qualitative change to the behavior of the correlation functions. The flow equation reads
\[\frac{d\gamma_{k,q}(l)}{dl}=-\frac{1}{2}B_{k,q}(l)\alpha_{k,q}(l), \tag{28}\]
with the initial condition \(\gamma_{k,q}(0)=0\). The solution is given by \(\gamma_{k,q}(\infty)=-B_{k,q}/2\alpha_{k,q}\). Since the U(1) symmetry is recovered in the rotated basis, the standard bosonization mapping (22) can be applied. We start by constructing a fermionic field \(\tilde{c}(x)\) from the transformed modes \(c_{k}(\infty)\) in the same fashion as in Eq.(20). Given that the mode \(k\) decouples from the other modes in the last term of Eq.(27), the Fourier transform yields a local operator that will at most lead to a renormalization of the numerical prefactors in correlation functions. Hence, we only consider
\[\begin{split}\tilde{c}(x)\sim&\,c(x)+\frac{\lambda }{N}\int dydz\,\Gamma(z,z-y)c^{\dagger}(x+y)c^{\dagger}(x+z),\end{split} \tag{29}\]
where
\[\Gamma(x,y)=\frac{1}{N}\sum_{k,q}e^{-ikx}e^{-iqy}\gamma_{k,q}(\infty). \tag{30}\]
The most relevant operators in the bosonic representation of Eq.(29) are extracted by carrying out operator product expansions (OPE) in the product of vertex operators. We have
\[\begin{split} c^{\dagger}(x+y)c^{\dagger}(x+z)\sim\frac{1}{\pi \alpha}\sin(k_{F}(z-y))e^{-2i\theta(x)}.\end{split} \tag{31}\]
By inserting this expression back into Eq.(29) and carrying out the double integration, we obtain
\[\begin{split}\tilde{c}(x)&\sim\frac{e^{ik_{F}x}}{ \sqrt{2\pi\alpha}}e^{-i[\phi(x)-\theta(x)]}+\frac{e^{-ik_{F}x}}{\sqrt{2\pi \alpha}}e^{i[\phi(x)+\theta(x)]}\\ &\quad+\frac{2\lambda}{\pi\alpha}\sin(k_{F})e^{-2i\theta(x)}. \end{split} \tag{32}\]
From the structure of the generator \(\eta\), the form of higher-order terms in the transformed operator can be guessed. We will only consider operators of order \(\lambda^{2}\) as these can contribute to the second-order expansion of the correlations by combining with the zero\({}^{\text{th}}\)-order part of Eq.(27). Given an operator \(O(c,c^{\dagger})\) generated at the second order in \(\lambda\), it transforms under U(1) rotations as \(O(e^{i\alpha}c,e^{-i\alpha}c^{\dagger})=e^{in\alpha}O(c,c^{\dagger})\), where \(n\) can only take the values \(1,7,-5\). Since the U(1) rotations act on the bosonic fields as \(\phi\rightarrow\phi\) and \(\theta\rightarrow\theta+\alpha\), this indicates that operators with a scaling dimension smaller than those in Eq.(32) cannot be generated. The terms that transform as a single-fermion operator will merely add a \(\lambda^{2}\)-dependence to the prefactor of the corresponding vertex operators. Thus, the transformed fermion operator truncated to the second order in \(\lambda\) takes the form
\[\begin{split}\tilde{c}(x)\sim&\,C_{1}e^{ik_{F}x}e^ {-i[\phi(x)-\theta(x)]}\\ &+C_{1}e^{-ik_{F}x}e^{i[\phi(x)+\theta(x)]}+C_{2}e^{-2i\theta(x) },\end{split} \tag{33}\]
with \(C_{1}\sim 1+O(\lambda^{2})\) and \(C_{2}\sim O(\lambda)\).
#### IV.3.2 Fermionic correlation functions
Using the effective bosonic representation of the transformed fermion operator (33), we can now compute correlation functions in the Luttinger liquid phase. Consider the point-split product of bare fermion operators
\[F_{p}(x)=\lim_{\Delta\to 0}\prod_{n=0}^{p-1}c(x+n\Delta). \tag{34}\]
The correlations in the ground state of the bare Hamiltonian can be evaluated in the Luttinger liquid ground state of \(H_{\text{eff}}\) through the relation
\[\left\langle F_{p}(x)^{\dagger}F_{p}(y)\right\rangle_{\text{GS}}=\left\langle \tilde{F}_{p}^{\dagger}(x)\tilde{F}_{p}(y)\right\rangle_{\text{LL}}, \tag{35}\]
where \(\tilde{F}_{p}(x)=U(\infty)F_{p}(x)U^{\dagger}(\infty)\) is obtained by replacing \(c\) with the transformed field \(\tilde{c}\) in Eq.(34). The correlations of \(p\) fermions are then deduced from the well-known result [24] for the correlation function of vertex operators in the Luttinger liquid Hamiltonian (1):
\[\left\langle e^{i[n\phi(x)+m\theta(x)]}e^{-i[n\phi(y)+m\theta(y)]}\right\rangle _{\text{LL}}\sim\frac{1}{|x-y|^{\frac{n^{2}K}{2}+\frac{m^{2}}{2K}}}. \tag{36}\]
Accordingly, Eqs.(33) and (35) yield the one-fermion correlations
\[\left\langle F_{1}^{\dagger}(x)F_{1}(y)\right\rangle_{\text{GS}}\sim 2C_{1}^{2}\frac{\cos(k_{F}r)}{r^{\frac{1}{2K}+\frac{K}{2}}}+(C_{2})^{2}\frac{1}{r^{\frac{2}{K}}}, \tag{37}\]
where \(r=|x-y|\). Similarly, the two-fermion correlations can be obtained from the bosonic representation of \(F_{2}\). We have
\[\begin{split}\tilde{F}_{2}(x)\sim&\,(C_{1})^{2}e^{2 i\theta(x)}+C_{1}C_{2}e^{ik_{F}x}e^{-i[\phi(x)+\theta(x)]}\\ &+C_{1}C_{2}e^{-ik_{F}x}e^{i[\phi(x)-\theta(x)]},\end{split} \tag{38}\]
which leads to
\[\Big{\langle}F_{2}^{\dagger}(x)F_{2}(y)\Big{\rangle}_{\rm GS}\sim(C_{1})^{4}\frac {1}{r^{\frac{2}{K}}}+2(C_{1}C_{2})^{2}\frac{\cos(k_{F}r)}{r^{\frac{1}{2K}+\frac{K}{2}}}. \tag{39}\]
We note that a single-fermion operator appears in the bosonic representation of \(F_{2}\). It is an example of a term that is not initially present in the standard bosonization mapping but is generated by the \(\mathbb{Z}_{3}\)-symmetric interaction along the flow. Its consequence is a crossover between two power laws in the two-fermion correlations. At short distance, the correlations decay with the standard exponent \(2/K\) of the two-fermion operator. At large distance, a single-fermion part with an exponent \(\frac{K}{2}+\frac{1}{2K}\), oscillating with a wave vector \(k_{F}\), takes over the correlations when \(K<\sqrt{3}\), i.e., when \(\frac{K}{2}+\frac{1}{2K}<\frac{2}{K}\). The crossover takes place at the length scale \(l\sim 1/\lambda^{4K/(3-K^{2})}\). Numerically, it can be observed close to the PT transition line, where the Luttinger parameter tends to \(K=1\) while \(\lambda\) remains small enough for \(l\) to remain smaller than the system size. The numerical results for the correlations (Fig.4), fitted with Eqs.(37) and (39) by the least-squares method, are in agreement with the analytical calculations. Finally, it should be noted that a two-fermion operator is also generated in the bosonic representation of the single-fermion operator (see Eq.(33)). Its effect on the correlations, however, remains small compared to that of the single-fermion part.
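A sketch of such a least-squares analysis of the crossover form (39) is shown below, applied to synthetic data standing in for the DMRG correlations; the amplitudes, \(K\) and \(k_{F}\) are chosen for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def f2_corr(r, a, b, K, kF):
    """Crossover form of Eq. (39): 2/K decay plus a kF-oscillating single-fermion tail."""
    return a / r ** (2.0 / K) + b * np.cos(kF * r) / r ** (0.5 / K + K / 2.0)

# Synthetic data standing in for the DMRG correlations; all parameter values are illustrative.
r = np.arange(4, 800, dtype=float)
true = dict(a=0.20, b=0.02, K=1.05, kF=1.9)
rng = np.random.default_rng(3)
data = f2_corr(r, **true) * (1 + 1e-3 * rng.standard_normal(r.size))

popt, _ = curve_fit(f2_corr, r, data, p0=[0.1, 0.05, 1.0, 1.9])
a, b, K, kF = popt
# Distance beyond which the oscillating single-fermion term dominates (requires K < sqrt(3)).
r_cross = (a / abs(b)) ** (1.0 / (2.0 / K - 0.5 / K - K / 2.0))
print(f"fitted K = {K:.3f},  crossover length ~ {r_cross:.0f} sites")
```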
## V Flow equation approach to the hard-core bosonic model
We turn now to the hard-core boson model (2). The commutation relation of the hard-core boson operator renders the calculation of the flow on the lattice difficult. It is more convenient to consider the Hamiltonian in the continuum limit, where it reduces to the dual sine-Gordon model:
\[H=\frac{v_{F}}{2\pi}\int dx\,[\partial_{x}\theta(x)]^{2}+[\partial_{x}\phi(x)] ^{2}+\frac{g}{\pi\alpha^{2}}\int dx\cos(\beta\theta(x)), \tag{40}\]
with \(\beta=3/\sqrt{K}\). Eq.(40) is obtained from the bosonization mapping (3) by absorbing the Luttinger parameter \(K\) into a redefinition of the bosonic fields, i.e., \(\phi\rightarrow\phi\sqrt{K}\) and \(\theta\rightarrow\theta/\sqrt{K}\). The duality transformation \(\phi\leftrightarrow\theta\) and \(K\to 1/K\) recovers the sine-Gordon Hamiltonian, which is studied with the flow equation approach in Refs.[37] and [38]. We review here the main results of these works and establish the duality correspondence with Eq.(40) to obtain a low-energy effective Hamiltonian.
### Flow of the sine-Gordon Hamiltonian
We introduce the vertex operators \(V_{r}(\beta,x)=:e^{i\beta[r\phi(x)-\theta(x)]}:\), where \(r=R,L\) denotes the right and left species. The relation between our notation and that used in these references is detailed in Appendix B. The colons denote normal ordering with respect to the ground state of the non-interacting part of Eq.(40). After performing the duality transformation, the interaction part of the Hamiltonian (40), hereinafter denoted by \(H_{3}\), reads
\[H_{3}=\frac{g}{2\pi\alpha^{2}}\Big{(}\frac{2\pi\alpha}{L}\Big{)}^{\frac{\tilde {\beta}^{2}}{4}}\!\int\!\!dx\big{[}V_{R}(\tilde{\beta}/2,x)V_{L}(-\tilde{\beta} /2,x)+\text{h.c.}\big{]}, \tag{41}\]
with \(L\) the total length of the chain and \(\tilde{\beta}(K)=\beta(1/K)\). Combined with the non-interacting part, Eq.(41) is the starting point of calculations of the flow equations carried out in Ref.[37] and [38]. The Hamiltonian is diagonalized
Figure 4: One fermion and two-fermion correlation functions in the ground state of the bare Hamiltonian (5) at \(\mu=1.86\) and \(\lambda=3\). The dashed line are provided as a guide to the eye. The DMRG calculations are performed on a system of size \(N=2100\). The Luttinger parameter \(K\) is extracted from Friedel oscillations as discussed in section IV.2 and the coefficients in Eqs.(37) and (39) are obtained from a least square fit.
by a generator \(\eta(l)=\eta^{(1)}(l)+\eta^{(2)}(l)\), where
\[\begin{split}\eta^{(1)}(l)=-2iv_{F}&\int dxdy\frac{ \partial g(y,l)}{\partial y}\\ &\times\big{[}V_{R}(\tilde{\beta}/2,x)V_{L}(-\tilde{\beta}/2,x-y)+ \text{h.c.}\big{]},\end{split} \tag{42}\]
and \(g(x,l)\) is obtained from the Fourier transform of \(g(k,l)=\frac{u(l)}{4\pi^{2}\alpha^{2}}\Big{(}\frac{2\pi\alpha}{L}\Big{)}^{ \frac{\beta^{2}}{4}}e^{-4v_{F}^{2}k^{2}l}\). Here, \(u(l)\) is a running coupling that flows to zero in the weak-coupling regime, i.e., inside the Luttinger liquid phase. It is initially given by the bare coupling constant in Eq.(40), i.e., \(u(0)=g\). \(\eta^{(2)}(l)\) generates the flow of the parameter \(\beta\). Its expression can be found in Ref. [38]. At the end of the flow, an effective Hamiltonian \(H(\infty)=H_{0}+H_{\text{d}}(\infty)\) is obtained, where \(H_{0}\) denotes the non-interacting part in the bare Hamiltonian and
\[\begin{split} H_{\text{d}}(\infty)=\sum_{k>0}\omega_{k}(\infty) \Big{[}&\tilde{P}_{R}(-k)\tilde{P}_{R}^{\dagger}(-k)+\tilde{P}_{ R}^{\dagger}(k)\tilde{P}_{R}(k)\\ &+\tilde{P}_{L}(-k)\tilde{P}_{L}^{\dagger}(-k)+\tilde{P}_{L}^{ \dagger}(k)\tilde{P}_{L}(k)\Big{]},\end{split} \tag{43}\]
with \(\omega_{k}(\infty)=-v_{F}g^{2}\frac{\cos\big{(}\pi\tilde{\beta}^{2}/4\big{)}} {2\Gamma^{2}(\tilde{\beta}^{2}/4)}k|\alpha k|^{(\tilde{\beta}^{2}-8)/2}\). \(\tilde{P}_{R}(k)\) and \(\tilde{P}_{L}(k)\) are soliton and antisoliton creation and annihilation operator defined as the Fourier transform of vertex operators (see Appendix B). The effective Hamiltonian of the dual sine-Gordon Hamiltonian (40) is deduced from the dual transformation of Eq.(43). The latter acts on the soliton and antisoliton operators as \(\tilde{P}_{R}(k)\to P_{R}^{\dagger}(-k)\) and \(\tilde{P}_{L}(k)\to P_{L}(k)\), where \(P_{r}(k)\) is obtained from \(\tilde{P}_{r}(k)\) by replacing \(\tilde{\beta}\) with \(\beta\). Moreover, the flow equations for \(g\) and \(K\) in Eq.(40) can be deduced (see Appendix A):
\[\begin{split}\frac{dK(l_{\text{RG}})}{dl_{\text{RG}}}& =\frac{9}{4\Gamma(\frac{9}{4K}-1)}g^{2}(l_{\text{RG}}),\\ \frac{dg(l_{\text{RG}})}{dl_{\text{RG}}}&=\Big{(}2- \frac{9}{4K}\Big{)}g(l_{\text{RG}}),\end{split} \tag{44}\]
where \(l_{\text{RG}}\) is the parameter of the RG flow. It is related to the parameter of the flow equation approach \(l\) by \(l_{\text{RG}}=\frac{1}{2}\ln\!\left(\frac{32l}{\alpha^{2}}\right)\). The RG equations, derived in Eq.(4), are recovered by an expansion of the Gamma function around the critical value \(K_{c}=9/8\). Finally, we note that since the soliton and antisoliton operators transform under U(1) rotations as \(P_{r}(k)\to e^{i\alpha}P_{r}(k)\), the U(1) symmetry is restored in the effective Hamiltonian.
### Transformed hard-core boson operator
From the Jordan-Wigner transformation and the bosonization mapping (22), a bosonic representation of the hard-core boson can be derived [24]. It is given by
\[b(x)=\frac{e^{i\theta(x)}}{\sqrt{2\pi\alpha}}\big{[}1+\cos(2\phi(x)-2k_{F}x) \big{]}. \tag{45}\]
We calculate in this section the transformation of Eq.(45) along the flow that diagonalizes the dual sine-Gordon model.
#### iii.2.1 Flow of the hard-core boson operator
The flow of the hard-core boson operator needs to be evaluated using the generator of the dual sine-Gordon Hamiltonian. It is given by the dual of Eq.(42), i.e.,
\[\begin{split}\eta_{\text{dual}}^{(1)}(l)=-2iv_{F}& \int dxdy\frac{\partial g(y,l)}{\partial y}\\ &\times\big{[}V_{R}(\beta/2,x)V_{L}(\beta/2,x-y)+\text{h.c.}\big{]}.\end{split} \tag{46}\]
We take the following ansatz for the flowing hard-core boson operator, truncated to the most relevant terms:
\[\begin{split} b(x,l)&=[C_{1}(l)+C_{2}(l)\cos(2\phi(x )-2k_{F}x)]e^{i\theta(x)}\\ &+[C_{3}(l)+C_{4}(l)\cos(2\phi(x)-2k_{F}x)]e^{-2i\theta(x)},\end{split} \tag{47}\]
with \(C_{1}(0)=C_{2}(0)=1/\sqrt{2\pi\alpha}\), and \(C_{3}(0)=C_{4}(0)=0\). Terms with larger scaling dimensions can be neglected in describing the behavior of the correlation functions. Since the bosonic fields in Eq.(47) are those of the original Hamiltonian (3), the flow of the rescaled operator needs to be considered. We calculate here the flow of \(e^{i\theta(x)}\), from which the flow equation for \(C_{3}(l)\) can be deduced. We have
\[e^{i\theta(x)/\sqrt{K}}=\Big{(}\frac{2\pi\alpha}{L}\Big{)}^{\beta^{2}/36}V_{R} (-\beta/6,x)V_{L}(-\beta/6,x) \tag{48}\]
The details of the calculation can be found in Appendix D.1. We summarize here the main steps of the derivation. First, the commutator of Eq.(48) with the hermitian conjugate part of the generator (46) leads to less relevant terms that are truncated in the ansatz (47). The most relevant operator in the remaining part of \([\eta_{\text{dual}}^{(1)}(l),e^{i\theta(x)/\sqrt{K}}]\) is extracted by an OPE of the vertex operators. The parameters of the vertex operators in Eqs.(46) and (48) combine to produce the operator \(e^{-2i\theta(x)}\) in the ansatz (47). Namely,
\[e^{-2i\theta(x)/\sqrt{K}}=\Big{(}\frac{2\pi\alpha}{L}\Big{)}^{\beta^{2}/9}V_{R} (\beta/3,x)V_{L}(\beta/3,x). \tag{49}\]
The flow equation for \(C_{3}(l)\) is then given by
\[\frac{dC_{3}(l)}{dl}=-C_{1}(l)\frac{4v_{F}u(l)}{\Gamma(\beta^{2}/12)^{2}}\sum_{ k>0}k|\alpha k|^{\beta^{2}/6-2}e^{-4v_{F}^{2}k^{2}l}. \tag{50}\]
To the leading order in the bare coupling constant \(g\), we can replace \(C_{1}(l)\) by its bare value. Moreover, the running coupling constant can be replaced by its approximate solution in the weak-coupling regime [39]. From the
RG equations (4), we have
\[u(l_{\rm RG})\sim ge^{(2-\beta^{2}/4)l_{\rm RG}}=g\Big{(}\frac{32l}{\alpha^{2}} \Big{)}^{1-\beta^{2}/8}. \tag{51}\]
Finally, the solution of Eq.(50) at the end of the flow reads
\[C_{3}(\infty)=-\frac{4v_{F}g}{\sqrt{2\pi\alpha}}D_{\beta}\int_{0}^{\infty}dk\, k^{\beta^{2}/6-1}f(2v_{F}k), \tag{52}\]
with \(D_{\beta}=\frac{(32)^{1-\beta^{2}/8}}{\Gamma(\beta^{2}/12)}\) and \(f(k)=k^{\beta^{2}/4-4}\Gamma(2-\beta^{2}/8,k^{2})\).
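As a concrete illustration, the integral in Eq.(52) can be evaluated numerically. The sketch below is not part of the paper's analysis; the values of \(\beta^{2}\), \(g\), \(v_{F}\) and \(\alpha\) are assumed for illustration (as written, the small-\(k\) behaviour of the integrand requires \(\beta^{2}>48/5\) for the \(k\)-integral to converge, so a value in that range is used here).

```python
# Minimal sketch: numerical evaluation of C_3(infinity) from Eq. (52).
# beta2, g, v_F and alpha below are illustrative assumptions.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammaincc

def upper_gamma(s, x):
    # upper incomplete gamma function Gamma(s, x), valid for s > 0
    return gamma(s) * gammaincc(s, x)

def C3_infinity(beta2, g, v_F=1.0, alpha=1.0):
    D = 32.0**(1.0 - beta2 / 8.0) / gamma(beta2 / 12.0)
    f = lambda k: k**(beta2 / 4.0 - 4.0) * upper_gamma(2.0 - beta2 / 8.0, k**2)
    integrand = lambda k: k**(beta2 / 6.0 - 1.0) * f(2.0 * v_F * k)
    # split the integration range to handle the integrable singularity at k = 0
    val = quad(integrand, 0.0, 1.0, limit=200)[0] + quad(integrand, 1.0, np.inf, limit=200)[0]
    return -4.0 * v_F * g / np.sqrt(2.0 * np.pi * alpha) * D * val

print(C3_infinity(beta2=10.5, g=0.1))
```

The linear dependence of \(C_{3}(\infty)\) on the bare coupling \(g\) is explicit in Eq.(52).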
#### iv.2.2 Bosonic correlation functions
We consider the correlation functions of the point-split product of \(p\) hard-core bosonic operators
\[B_{p}(x)=\lim_{\Delta\to 0}\prod_{n=0}^{p-1}b(x+n\Delta). \tag{53}\]
As in section IV.3, the correlation functions in the ground state of the bare Hamiltonian (2) can be evaluated from the bosonic representation of the transformed operator \(\tilde{B}_{p}(x)=U(\infty)B_{p}(x)U^{\dagger}(\infty)\). It is derived from \(B_{p}(x)\) by substituting the hard-core boson operators with their transformed counterpart \(b(x,\infty)\). The one-boson and two-boson operators are given by
\[\tilde{B}_{1}(x) \sim C_{1}e^{i\theta(x)}+C_{3}e^{-2i\theta(x)}, \tag{54}\] \[\tilde{B}_{2}(x) \sim(C_{1})^{2}e^{2i\theta(x)}+2C_{1}C_{3}e^{-i\theta(x)},\]
where the coefficients are taken at the end of the flow, i.e., at \(l=\infty\). The oscillating terms in Eq.(47) are neglected since they have larger scaling dimensions. Using the result (36) for the correlation functions in the Luttinger liquid Hamiltonian, we obtain
\[\Big{\langle}B_{1}^{\dagger}(x)B_{1}(y)\Big{\rangle}_{\rm GS} \sim(C_{1})^{2}\frac{1}{r^{\frac{1}{2K}}}+(C_{3})^{2}\frac{1}{r^{\frac{2}{K}}}, \tag{55}\] \[\Big{\langle}B_{2}^{\dagger}(x)B_{2}(y)\Big{\rangle}_{\rm GS} \sim(C_{1})^{4}\frac{1}{r^{\frac{2}{K}}}+4(C_{1}C_{3})^{2}\frac{1}{r^{\frac{1}{2K}}}, \tag{56}\]
with \(r=|x-y|\). For \(p=3\), a combination of the \(e^{i\theta(x)}\) and \(e^{-2i\theta(x)}\) terms in the product (53) can lead to a vanishing total exponent. In this case, a higher-order expansion in the point-splitting parameter \(\Delta\) needs to be carried out. Moreover, the Klein factors become necessary as their anti-commutation prevents artificial cancellations. They are reintroduced in the ansatz by replacing \(\cos(2\phi)\) with \(F_{R}e^{2i\phi}+F_{L}e^{-2i\phi}\). The details of the derivation are presented in Appendix D. Finally, the correlation function takes the form
\[\Big{\langle}B_{3}^{\dagger}(x)B_{3}(y)\Big{\rangle}_{\rm GS}\sim D_{0}\frac{1}{r^{\frac{9}{2K}}}+D_{1}\frac{1}{r^{2}}+D_{2}\frac{\cos(2k_{F}r)}{r^{2K}} \tag{57}\] \[+D_{3}\frac{\cos(2k_{F}r)}{r^{2K+2}}+D_{4}\frac{1}{r^{4}},\]
where \(D_{1}\), \(D_{2}\), \(D_{3}\) and \(D_{4}\) depend on the coefficients of the transformed operator \(b(x,\infty)\).
As shown in Fig.5, the correlations decay algebraically at short distance with the exponent \(p^{2}/2K\), associated with the operator \(e^{ip\theta(x)}\) in the standard bosonic representation of the product of \(p\) hard-core bosons. At large distances, the two-boson and three-boson correlations undergo a crossover to power-laws with smaller exponents, induced by the terms generated along the flow. In particular, the two-boson correlations exhibit a crossover to the one-boson correlations. Similarly, the three-boson correlations acquire, among other terms, an oscillating part with a wave vector \(2k_{F}\) that decays with an exponent \(2K\). We also note that the presence of gradients of bosonic fields in \(\tilde{B}_{3}(x)\) leads to a non-vanishing
Figure 5: (a) One-boson and two-boson correlation functions at \(\mu=1\), \(\lambda=0.06\) and (b) connected three-boson correlation functions at \(\mu=1.9\), \(\lambda=0.2\), in the ground state of the bare Hamiltonian. The dashed lines are provided as guides to the eye. The DMRG calculations are performed on a system of size \(N=1200\). The Luttinger parameter \(K\) and the Fermi wave vector \(k_{F}\) are extracted from Friedel oscillations as discussed in section IV.2 and the coefficients in the correlations are obtained from a least-squares fit.
expectation value of the three-boson operator inside the Luttinger liquid phase. Since these terms are generated to first order in the coupling \(g\propto\lambda\), the expectation value decays linearly with the interaction strength and vanishes at the non-interacting point \(\lambda=0\). This result is confirmed by the numerical simulations (Fig.6).
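The crossover forms above are what underlie the least-squares fits reported in Figs. 4 and 5. The following is a minimal, self-contained sketch of such a fit on synthetic data (the amplitudes, exponents and noise level are assumed for illustration and are not the paper's results).

```python
# Minimal sketch: least-squares fit of a two-power-law crossover form,
# as used for the correlation functions in Figs. 4-5 (synthetic data only).
import numpy as np
from scipy.optimize import curve_fit

def crossover(r, A, B, a, b):
    # short-distance power law ~ r^-a plus a slower long-distance tail ~ r^-b
    return A * r**(-a) + B * r**(-b)

rng = np.random.default_rng(0)
r = np.arange(2.0, 400.0)
data = crossover(r, 1.0, 1e-3, 2.0, 0.5) * (1 + 0.02 * rng.standard_normal(r.size))

# relative errors (sigma ~ data) keep the long-distance tail visible in the fit
popt, _ = curve_fit(crossover, r, data, p0=[1.0, 1e-2, 1.8, 0.6], sigma=0.02 * data)
print("fitted A, B, a, b:", popt)
```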
## VI Conclusion
Using the flow equation approach, we have provided an analytical derivation of modified bosonic representations of hard-core boson and spinless fermion operators that take into account the \(\mathbb{Z}_{3}\) symmetry of the Hamiltonian. As a consequence, the correlation functions of \(p\) particles are dominated at long distances by the power-law decay of operators that are not initially present in the bosonic representation of the operators. These calculations can be straightforwardly generalized to \(\mathbb{Z}_{n}\)-symmetric models. Since the generator of the flow exhibits the symmetry of the bare Hamiltonian by construction, terms that transform covariantly under \(\mathbb{Z}_{n}\)-rotations will be generated in the bosonic representation of single-particle operators. For instance, in the hard-core boson model with creation/annihilation operators on four adjacent sites, which is of interest in the open problem of commensurate melting of density-waves, the bosonic representation of a product of \(p\) bosons is expected to contain terms that are associated with \(p-4m\) particles, where \(m\in\mathbb{Z}\). Some of these terms have a smaller scaling dimension than the bare operator and will dominate the correlations at long distances.
More generally, this paper demonstrates that even if an emergent U(1) symmetry is present, the long distance correlation functions can have a behaviour that differs from the naive expectation and that the flow equation approach is a very useful tool to identify the correct bosonic representation of lattice operators. It would be interesting to investigate the extension of this method to models with an emergent non-abelian symmetry.
###### Acknowledgements.
The work has been supported by the Swiss National Science Foundation (FM, ZJ) grant 212082 and by the Delft Technology Fellowship (NC). Numerical simulations have been performed on the Dutch national e-infrastructure with the support of the SURF Cooperative and the support of the Scientific IT and the Application Support Center of EPFL.
Figure 6: DMRG calculations of the expectation value of the three-boson operator \(B_{3}\), in the ground state of the bare Hamiltonian (2), as a function of the three-site interaction coupling \(\lambda\) and at \(\mu=1\).
## Appendix A Derivation of the flow equations
### Flow equations for the fermionic Hamiltonian
An efficient way of computing the flow equations is to use Wick's theorem for operator products to collect the contributions to the different generated terms in \([\eta,H_{3}]\). For convenience, we write the interaction term as
\[H_{3}=\frac{1}{3!\sqrt{N}}\sum_{k_{1},k_{2},k_{3}}\tilde{B}_{k_{1},k_{2},k_{3}}c _{k_{1}}^{\dagger}c_{k_{2}}^{\dagger}c_{k_{3}}^{\dagger}+\text{h.c.}, \tag{10}\]
where
\[\tilde{B}_{k_{1},k_{2},k_{3}}=\delta_{k_{1}+k_{2}+k_{3},0}\sum_{\sigma\in S_{3} }\epsilon(\sigma)e^{-ik_{\sigma(2)}}e^{-2ik_{\sigma(3)}}, \tag{11}\]
\(S_{3}\) is the symmetric group of order 3, and \(\epsilon(\sigma)\) is the sign of the permutation. The contribution from \([\eta,H_{3}]\) to the flow of \(U_{k,q,p}\) and \(\xi_{k}\) stems from the following term:
\[[\eta,H_{3}]\rightarrow \frac{1}{(3!)^{2}N}\sum_{\begin{subarray}{c}k_{1},k_{2},k_{3}\\ q_{1},q_{2},q_{3}\end{subarray}}\tilde{B}_{k_{1},k_{2},k_{3}}\tilde{B}_{q_{1},q_{2},q_{3}}\tilde{\alpha}_{k_{1},k_{2},k_{3}}\Big{(}\big{[}c_{k_{1}}^{\dagger}c_{k_{2}}^{\dagger}c_{k_{3}}^{\dagger},c_{q_{1}}c_{q_{2}}c_{q_{3}}\big{]}+\text{h.c.}\Big{)}, \tag{12}\]
where \(\tilde{\alpha}_{k_{1},k_{2},k_{3}}=\sum_{i=1}^{3}\xi_{k_{i}}\). Using Wick's theorem, we can write the sum in Eq.(12) as
\[\sum_{k_{1},k_{2},k_{3}}\tilde{B}_{k_{1},k_{2},k_{3}}\tilde{B}_{q_{1},q_{2},q _{3}}\tilde{\alpha}_{k_{1},k_{2},k_{3}}\Bigg{(}2c_{k_{1}}^{\dagger}c_{k_{2}}^{ \dagger}c_{k_{3}}^{\dagger}c_{q_{1}}c_{q_{2}}c_{q_{3}}-\sum_{(q_{i},k_{j})\ \text{ pairs}}\ :c_{q_{1}}c_{q_{2}}c_{q_{3}}c_{k_{1}}^{\dagger}c_{k_{2}}^{ \dagger}c_{k_{3}}^{\dagger}:\quad+...\Bigg{)}+\text{h.c.}, \tag{13}\]
where the colons denote the vacuum normal-ordering. The contraction is given by \(\langle c_{q_{i}}c_{k_{j}}^{\dagger}\rangle=\delta_{q_{i},k_{j}}\) and the ellipses denote terms resulting from double and triple contractions. Let us consider for example the terms arising from single contractions. These will contribute to the flow equation of \(U_{k,q,p}\). We have
\[-\sum_{\begin{subarray}{c}k_{1},k_{2},k_{3}\\ q_{1},q_{2},q_{3}\end{subarray}}\tilde{B}_{k_{1},k_{2},k_{3}}\tilde{B}_{q_{1 },q_{2},q_{3}}\tilde{\alpha}_{k_{1},k_{2},k_{3}}\frac{1}{(2!)^{2}}\sum_{\sigma, \sigma^{\prime}\in S_{3}}\epsilon(\sigma)\epsilon(\sigma^{\prime})\delta_{q_{ \sigma(1)},k_{\sigma^{\prime}(1)}}:c_{q_{\sigma(2)}}c_{q_{\sigma(3)}}c_{k_{ \sigma^{\prime}(2)}}^{\dagger}c_{k_{\sigma^{\prime}(3)}}^{\dagger}:\] \[=-\frac{1}{(2!)^{2}}\sum_{\sigma,\sigma^{\prime}\in S_{3}}\sum_{ \begin{subarray}{c}k_{1},k_{2},k_{3}\\ q_{1},q_{2},q_{3}\end{subarray}}\tilde{B}_{k_{\sigma^{\prime}-1(1)},k_{ \sigma^{\prime}-1(2)},k_{\sigma^{\prime}-1(3)}}\tilde{B}_{q_{\sigma^{\prime}-1 (1)},q_{\sigma^{\prime}-1(2)},q_{\sigma^{\prime}-1(3)}}\tilde{\alpha}_{k_{ \sigma^{\prime}-1(1)},k_{\sigma-1(2)},k_{\sigma^{\prime}-1(3)}}\delta_{k_{1},q _{1}}:c_{q_{2}}c_{q_{3}}c_{k_{2}}^{\dagger}c_{k_{3}}^{\dagger}:\] \[=-\Big{(}\frac{3!}{2!}\Big{)}^{2}\sum_{\begin{subarray}{c}k_{1}, k_{2},k_{3}\\ q_{1},q_{2},q_{3}\end{subarray}}\tilde{B}_{k_{1},k_{2},k_{3}}\tilde{B}_{q_{1},q _{2},q_{3}}\tilde{\alpha}_{k_{1},k_{2},k_{3}}\delta_{k_{1},q_{1}}:c_{q_{2}}c_{ q_{3}}c_{k_{2}}^{\dagger}c_{k_{3}}^{\dagger}:.\]
The factor \(1/2!\) in the first line avoids over-counting contractions. In the second line, the change of indices \(k_{\sigma^{\prime}(i)}\to k_{i}\) for \(i=1,2,3\) is made in order to shift the permutation dependence to the indices of the interaction matrix. Finally, the last line is obtained from the anti-symmetry of the interaction matrix under permutation, i.e., \(\tilde{B}_{k_{\sigma(1)},k_{\sigma(2)},k_{\sigma(3)}}=\epsilon(\sigma)\tilde{B}_{k_{1},k_{2},k_{3}}\). We note that the numerical factor in the last line of Eq.(11) is the number of non-zero single contractions. A similar result can be obtained for terms generated from double contractions, which contribute to the dispersion \(\xi_{k}\). After writing Eq.(11) in terms of \(B_{k,q}=\tilde{B}_{k,q,-k-q}\) and \(\alpha_{k,q}=\tilde{\alpha}_{k,q,-k-q}\), the contribution to the two-body term reads
\[-2\cdot\frac{1}{(2!)^{2}N}\sum_{k,q,p}B_{k,q}B_{k+p,q-p}\alpha_{k+p,q-p}c_{k+p}^ {\dagger}c_{q-p}^{\dagger}c_{k}c_{q}, \tag{14}\]
where the factor 2 comes from the hermitian conjugation. The flow equation for \(U_{k,q,p}\) is then given by
\[\frac{dU_{k,q,p}(l)}{dl}=-\frac{1}{2}\alpha_{k+p,q-p}(l)B_{k,q}(l)B_{k+p,q-p}(l). \tag{15}\]
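The anti-symmetry of \(\tilde{B}_{k_{1},k_{2},k_{3}}\) under permutations, used above to collapse the permutation sums, can be checked numerically; the following small sketch (illustration only, with arbitrary test momenta) verifies it directly from the definition of \(\tilde{B}_{k_{1},k_{2},k_{3}}\) given above.

```python
# Minimal sketch: numerical check that the interaction matrix B~ defined above
# is antisymmetric under permutations of its indices (momentum-conservation
# delta omitted, arbitrary test momenta).
import itertools
import numpy as np

def sign(perm):
    # sign of a permutation of (0, ..., n-1), computed from its inversions
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def B_tilde(k1, k2, k3):
    ks = (k1, k2, k3)
    return sum(sign(p) * np.exp(-1j * ks[p[1]] - 2j * ks[p[2]])
               for p in itertools.permutations(range(3)))

k = (0.3, 1.1, -1.4)  # test momenta summing to zero
ref = B_tilde(*k)
for p in itertools.permutations(range(3)):
    assert np.allclose(B_tilde(*(k[i] for i in p)), sign(p) * ref)
print("antisymmetry check passed, B~ =", ref)
```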
### Flow equations for the hard-core bosonic Hamiltonian
The flow equations of the sine-Gordon model are derived in Ref. [38]. We have
\[\begin{split}\frac{d\tilde{\beta}^{2}(l_{\text{RG}})}{dl_{\text{RG}}}&=-\frac{\tilde{\beta}(l_{\text{RG}})^{4}}{4\Gamma(\frac{\tilde{\beta}^{2}(l_{\text{RG}})}{4}-1)}g(l_{\text{RG}})^{2},\\ \frac{dg(l_{\text{RG}})}{dl_{\text{RG}}}&=\Big{(}2-\frac{\tilde{\beta}(l_{\text{RG}})^{2}}{4}\Big{)}g(l_{\text{RG}}).\end{split} \tag{100}\]
Eqs.(44) are deduced from the relation \(\tilde{\beta}=3\sqrt{K}\) and the duality transformation \(K\to 1/K\).
## Appendix B Bosonization dictionary
### Bosonic fields
The bosonic field \(\phi\), \(\theta\) are constructed from the density modes \(\rho_{r}^{\dagger}(p)\), where \(r=R,L\) denotes their species. We have
\[\begin{split}\phi(x)&=-\frac{i\pi}{L}\sum_{p\neq 0} \frac{e^{-\alpha|p|/2-ipx}}{p}\big{[}\rho_{R}^{\dagger}(p)+\rho_{L}^{\dagger}(p) \big{]},\\ \theta(x)&=\frac{i\pi}{L}\sum_{p\neq 0}\frac{e^{- \alpha|p|/2-ipx}}{p}\big{[}\rho_{R}^{\dagger}(p)-\rho_{L}^{\dagger}(p)\big{]}, \end{split} \tag{101}\]
where
\[[\rho_{r}^{\dagger}(p),\rho_{r^{\prime}}^{\dagger}(-q)]=-\delta_{r,r^{\prime} }\delta_{p,q}\frac{rpL}{2\pi}. \tag{102}\]
The relation between the density modes and the normal modes of the sine-Gordon model is given by
\[\begin{split}\rho_{R}^{\dagger}(p)&=\sqrt{|p|} \sigma_{1}(p),\\ \rho_{L}^{\dagger}(p)&=\sqrt{|p|}\sigma_{2}(p).\end{split} \tag{103}\]
The relation between the fields in Eq.(101) and the fields \(\tilde{\phi}\) and \(\tilde{\theta}\) introduced in the reference is given by
\[\begin{split}\phi(x)&=\sqrt{\pi}\tilde{\phi}(x), \\ \theta(x)&=-\sqrt{\pi}\tilde{\theta}(x).\end{split} \tag{104}\]
### Vertex operators
We define the vertex operators as
\[V_{r}(\beta,x)=\,:e^{i\beta[r\phi(x)-\theta(x)]}:. \tag{105}\]
Note that this definition coincides with the vertex operators \(\tilde{V}_{r}(\beta,x)=\,:e^{i\sqrt{\pi}\beta[r\tilde{\phi}(x)-\tilde{\theta} (x)]}:\) introduced in the reference. In terms of the density modes, Eq.(105) reads
\[V_{r}(\beta,x)=\,:\exp\!\left(r\beta\frac{2\pi}{L}\sum_{p\neq 0}\frac{e^{- \alpha|p|/2-ipx}}{p}\rho_{r}^{\dagger}(p)\right):. \tag{106}\]
The normal ordering with respect to the vacuum defined by
\[\begin{split}\rho_{R}^{\dagger}(p<0)\left|0\right>& =0,\\ \rho_{L}^{\dagger}(p>0)\left|0\right>&=0,\end{split} \tag{107}\]
yields
\[V_{r}(\beta,x)=\Big{(}\frac{L}{2\pi\alpha}\Big{)}^{\beta^{2}/2}e^{i\beta[r\phi(x) -\theta(x)]}. \tag{100}\]
The bosonization mapping is then given by
\[\psi_{r}(x)=\frac{e^{irk_{F}x}}{\sqrt{L}}V_{r}(-1,x). \tag{101}\]
#### b.2.1 Operator product expansion
The operator product expansion of a product of vertex operators is given by
\[\begin{split} V_{R}(\beta,x)V_{R}(-\gamma,y)& \sim\Big{(}\frac{L/2\pi}{i(y-x)+\alpha}\Big{)}^{\beta\gamma}V_{R}( \beta-\gamma,x),\\ V_{L}(\beta,x)V_{L}(-\gamma,y)&\sim\Big{(}\frac{L /2\pi}{i(x-y)+\alpha}\Big{)}^{\beta\gamma}V_{L}(\beta-\gamma,x).\end{split} \tag{102}\]
#### b.2.2 Exchange relations
The order of vertex operators can be exchanged using the following relations:
\[\begin{split} V_{R}(-\gamma,y)V_{R}(\beta,x)&\sim V _{R}(\beta,x)V_{R}(-\gamma,y)\times\frac{[i(y-x)+\alpha]^{\beta\gamma}}{[i(x-y )+\alpha]^{\beta\gamma}},\\ V_{L}(-\gamma,y)V_{L}(\beta,x)&\sim V_{L}(\beta,x )V_{L}(-\gamma,y)\times\frac{[i(x-y)+\alpha]^{\beta\gamma}}{[i(y-x)+\alpha]^{ \beta\gamma}},\end{split} \tag{103}\]
and \([V_{R}(\gamma,x),V_{L}(\delta,y)]=0\) for all \(\gamma,\delta\).
#### b.2.3 Soliton and antisoliton operators
The soliton and antisoliton operators in the effective Hamiltonian (43) are defined as the Fourier transform of the vertex operators:
\[\tilde{P}_{r}(k)=\left[\frac{\Gamma(\tilde{\beta}^{2}/4)}{2\pi L}\Big{(} \frac{L|k|}{2\pi}\Big{)}^{1-\frac{\tilde{\beta}^{2}}{4}}\right]^{1/2}\int dx \,e^{-ikx}V_{r}(-\tilde{\beta}/2,x),\ \ \text{for}\ r=R,L. \tag{104}\]
## Appendix C Derivation of the Luttinger liquid Hamiltonian
We derive here the Luttinger parameter and the velocity inside the Luttinger liquid phase of the model (5). We consider the particle-hole excitations close to the Fermi points \(\pm k_{F}\) in the two-body interactions of the effective Hamiltonian \(H_{\text{eff}}=H_{0}(\infty)+\lambda^{2}H_{U}(\infty)\). They consist of two \(g_{2}\) processes and two \(g_{1}\) processes. Since the fermions are spinless, these two processes are indistinguishable and the two-body interaction term reduces to
\[H_{U}(\infty)=\frac{4g_{2}}{N}\sum_{p}\rho_{R}(p)\rho_{L}(-p), \tag{105}\]
where
\[g_{2}=U_{k_{F},-k_{F},0}=-\frac{16\sin^{2}(k_{F})\sin^{2}(2k_{F})}{2+4\cos(k_ {F})+3\mu}, \tag{106}\]
\[\rho_{r}(p)=\sum_{k}c_{r,k+p}^{\dagger}c_{r,k} \tag{104}\]
are the Fourier components of the density operators at the right (\(r=R\)) and left branches (\(r=L\)). Using the expressions of the density operators in terms of the fields \(\phi\) and \(\theta\):
\[\begin{split}\rho_{R}(x)&=-\frac{1}{2\pi}\big{[} \partial_{x}\phi(x)-\partial_{x}\theta(x)\big{]},\\ \rho_{L}(x)&=-\frac{1}{2\pi}\big{[}\partial_{x}\phi (x)+\partial_{x}\theta(x)\big{]},\end{split} \tag{105}\]
the two-body interaction becomes
\[H_{U}(\infty)=\frac{4g_{2}}{(2\pi)^{2}}\int dx\,\Big{\{}[\partial_{x}\phi(x)]^{2}-[\partial_{x}\theta(x)]^{2}\Big{\}}. \tag{106}\]
Eq.(106) is then combined with the non-interacting Luttinger liquid Hamiltonian such that the Hamiltonian remains quadratic. We obtain
\[H_{\text{eff}}=\frac{u}{2\pi}\int dx\,\Big{\{}K[\partial_{x}\phi(x)]^{2}+\frac{1}{K}[\partial_{x}\theta(x)]^{2}\Big{\}}, \tag{107}\]
where
\[\begin{split} K&=\Big{[}\frac{1-2g_{2}\lambda^{2} /\pi\tilde{v}_{F}}{1+2g_{2}\lambda^{2}/\pi\tilde{v}_{F}}\Big{]}^{1/2},\\ u&=\tilde{v}_{F}\Big{[}1-\Big{(}\frac{2g_{2}\lambda^ {2}}{\pi\tilde{v}_{F}}\Big{)}^{2}\Big{]}^{1/2},\end{split} \tag{108}\]
and \(\tilde{v}_{F}=\partial\xi_{k}/\partial k|_{k=k_{F}}\) is the renormalized velocity. Eq.(23) is obtained from an expansion of Eq.(108) to second order in \(\lambda\).
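For orientation, the following minimal sketch (illustration only) evaluates \(g_{2}\) and the resulting \(K\) and \(u\) from the expressions above; the values of \(k_{F}\), \(\mu\), \(\lambda\) and the renormalized velocity \(\tilde{v}_{F}\) are assumed placeholders, not results of the paper.

```python
# Minimal sketch: Luttinger parameter K and velocity u from the coupling g2,
# following the expressions above. All numerical inputs are assumptions.
import numpy as np

def g2(kF, mu):
    return -16 * np.sin(kF)**2 * np.sin(2 * kF)**2 / (2 + 4 * np.cos(kF) + 3 * mu)

def luttinger_parameters(kF, mu, lam, vF_tilde):
    x = 2 * g2(kF, mu) * lam**2 / (np.pi * vF_tilde)
    K = np.sqrt((1 - x) / (1 + x))
    u = vF_tilde * np.sqrt(1 - x**2)
    return K, u

print(luttinger_parameters(kF=np.pi / 3, mu=1.0, lam=0.2, vF_tilde=1.0))
```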
## Appendix D Some details of the calculations
### Derivation of the flow of the hard-core boson operator
We give here the detailed calculation of the flow equation for \(C_{3}\) in Eq.(47). We have
\[[\eta_{\rm dual}^{(1)}(l),e^{i\theta(x)/\sqrt{K}}]=-2iv_{F}\Big{(}\frac{2\pi\alpha}{L}\Big{)}^{\beta^{2}/36}\int dydz\frac{\partial g(z,l)}{\partial z}\big{[}V_{R}(\beta/2,y)V_{L}(\beta/2,y-z)+{\rm h.c.},V_{R}(-\beta/6,x)V_{L}(-\beta/6,x)\big{]}. \tag{109}\]
Since the terms obtained from the hermitian conjugate part in Eq.(109) are less relevant, we only consider the commutator
\[\begin{split}\big{[}V_{R}(\beta/2,y)V_{L}(\beta/2,y-z),V_{R}(- \beta/6,x)V_{L}(-\beta/6,x)\big{]}&=V_{R}(\beta/2,y)V_{R}(- \beta/6,x)V_{L}(\beta/2,y-z)V_{L}(-\beta/6,x)\\ &\times\Big{\{}1-\Big{[}\frac{i(x-y)+\alpha}{i(y-x)+\alpha}\Big{]} ^{\beta^{2}/12}\Big{[}\frac{i(y-z-x)+\alpha}{i(x-y+z)+\alpha}\Big{]}^{\beta^{ 2}/12}\Big{\}}\\ \sim\Big{(}\frac{L}{2\pi}\Big{)}^{\beta^{2}/6}& V_{R}(\beta/3,x)V_{L}(\beta/3,x)\\ &\times\Big{\{}[i(x-y)+\alpha]^{-\beta^{2}/12}[i(y-z-x)+\alpha]^{- \beta^{2}/12}-[i(y-x)+\alpha]^{-\beta^{2}/12}[i(x-y+z)+\alpha]^{-\beta^{2}/1 2}\Big{\}},\end{split} \tag{110}\]
where an OPE of the vertex operator is carried out in last line. By inserting this expression in Eq.(46), we obtain, after a few steps of calculations,
\[\begin{split}[\eta^{(1)}_{\text{dual}}(l),e^{i\theta(x)/\sqrt{K}}]& \rightarrow-2iv_{F}\alpha^{\beta^{2}/6}\frac{u(l)}{4\pi^{2}\alpha^{2}}\frac{2 \pi}{L}\sum_{k}(-ik)e^{-4v_{F}^{2}k^{2}l}e^{-2i\theta(x)/\sqrt{K}}\\ &\times\int dydz\,e^{-ikz}\Big{\{}[i(x-y)+\alpha]^{-\beta^{2}/12} [i(y-z-x)+\alpha]^{-\beta^{2}/12}-[i(y-x)+\alpha]^{-\beta^{2}/12}[i(x-y+z)+ \alpha]^{-\beta^{2}/12}\Big{\}}\\ &\qquad\qquad=-\frac{v_{F}u(l)}{\pi^{2}}\frac{2\pi}{L}\sum_{k}ke^ {-4v_{F}^{2}k^{2}l}f_{\beta}(k)e^{-2i\theta(x)/\sqrt{K}},\end{split} \tag{47}\]
with
\[\begin{split} f_{\beta}(k)&=\int dydz\,e^{-ik \alpha z}[1-iy]^{-\beta^{2}/12}[1+i(y-z)]^{-\beta^{2}/12}\\ &=|\alpha k|^{\beta^{2}/6-2}\frac{4\pi^{2}}{\Gamma(\beta^{2}/12) ^{2}}\Theta(k).\end{split} \tag{48}\]
Therefore, the flow equation for \(C_{3}(l)\) is given by
\[\frac{dC_{3}(l)}{dl}=-C_{1}(l)\frac{4v_{F}u(l)}{\Gamma(\beta^{2}/12)^{2}}\sum_ {k>0}k|\alpha k|^{\beta^{2}/6-2}e^{-4v_{F}^{2}k^{2}l}. \tag{49}\]
### Derivation of \(\tilde{B}_{3}(x)\)
We give here the detailed calculation of the three-boson correlations in Eq.(57). We start by reordering the vertex operators in the product \(\tilde{B}_{3}(x)\). This leads to
\[\begin{split}\tilde{B_{3}}(x)=\lim_{\Delta\to 0}\sum_{n,m,l=0,1} \Big{[}A_{n}+B_{n}&\cos\!\Big{(}2\tilde{\phi}(x)\Big{)}\Big{]} \Big{[}A_{m}+B_{m}(-1)^{p_{n}}\cos\!\Big{(}2\tilde{\phi}(x+\Delta)\Big{)} \Big{]}\\ &\qquad\qquad\times\Big{[}A_{l}+B_{l}(-1)^{p_{n}+p_{m}}\cos\! \Big{(}2\tilde{\phi}(x+2\Delta)\Big{)}\Big{]}e^{i[p_{n}\theta(x)+p_{m}\theta( x+\Delta)+p_{l}\theta(x+2\Delta)]},\end{split} \tag{50}\]
where \(\tilde{\phi}(x)=\phi(x)-k_{F}x\), \(p_{n}=1-3n\), \(A_{0}=C_{0}\), \(B_{0}=C_{1}\), \(A_{1}=C_{3}\) and \(B_{1}=C_{4}\). To avoid artificial cancellations, it is necessary to reintroduce the Klein factors in the definition of the ansatz. This amounts to the following substitution:
\[\begin{split} B_{n}\cos(2\phi(x))&\rightarrow\frac{ 1}{2}B_{n}\big{[}F_{R}e^{2i\phi(x)}+F_{L}e^{-2i\phi(x)}\big{]},\\ A_{n}&\rightarrow\frac{1}{2}A_{n}(F_{R}+F_{L}). \end{split} \tag{51}\]
By carrying out an expansion in the splitting parameter \(\Delta\), we obtain for a term at position \(x+(n-1)\Delta\) in the product of Eq.(50):
\[\begin{split}\frac{1}{2}\big{[}F_{R}e^{2i\phi(x+(n-1)\Delta)}+F_ {L}e^{-2i\phi(x+(n-1)\Delta)}\big{]}\\ &\qquad\qquad\sim\frac{1}{2}F_{R}\Big{\{}1+2i[(n-1)\Delta\partial _{x}\phi+\frac{(n-1)^{2}\Delta^{2}}{2}(\partial_{x}\phi)^{2}]-2(n-1)^{2} \Delta^{2}\partial_{x}^{2}\phi\Big{\}}e^{2i\phi}\\ &\qquad\qquad+\frac{1}{2}F_{L}\Big{\{}1-2i[(n-1)\Delta\partial _{x}\phi+\frac{(n-1)^{2}\Delta^{2}}{2}(\partial_{x}\phi)^{2}]-2(n-1)^{2} \Delta^{2}\partial_{x}^{2}\phi\Big{\}}e^{-2i\phi},\end{split} \tag{52}\]
where the position of the fields is at \(x\) and is omitted for brevity. Similarly, we have
\[\begin{split} e^{i[p_{n}\theta(x)+p_{m}\theta(x+\Delta)+p_{l} \theta(x+2\Delta)]}\sim e^{i[p_{n}+p_{m}+p_{l}]\theta}e^{i[(p_{m}+2p_{l}) \Delta\partial_{x}\theta+(\frac{1}{2}p_{m}+2p_{l})\Delta^{2}\partial_{x}^{2} \theta]}\\ \sim e^{i[p_{n}+p_{m}+p_{l}]\theta}\big{[}1+i[(p_{m}+2p_{l}) \Delta\partial_{x}\theta+(\frac{1}{2}p_{m}+2p_{l})\Delta^{2}\partial_{x}^{2} \theta]-\frac{1}{2}(p_{m}+2p_{l})^{2}\Delta^{2}(\partial_{x}\theta)^{2}\big{]}. \end{split} \tag{53}\]
After collecting all the terms that have a scaling dimension smaller than the bare operator \(e^{3i\theta(x)}\), we obtain
\[\begin{split}\tilde{B}_{3}(x)\sim&(C_{0})^{3}e^{3i \theta(x)}+[8(C_{1})^{2}C_{4}-(C_{2})^{2}C_{4}]\cos\Bigl{(}2\tilde{\phi}\Bigr{)} -12\Delta C_{1}C_{2}C_{4}\partial_{x}\theta\\ &+\Delta[32C_{1}C_{2}C_{3}-8(C_{1})^{2}C_{4}+(C_{2})^{2}C_{4}] \partial_{x}\phi\sin\Bigl{(}2\tilde{\phi}\Bigr{)}-72\Delta^{2}(C_{1})^{2}C_{3} (\partial_{x}\theta)^{2}\\ &+i\Delta^{2}[156(C_{1})^{2}C_{3}-2(C_{2})^{2}C_{3}+12C_{1}C_{2}C _{4}]\partial_{x}^{2}\theta+16\Delta^{2}(C_{2})^{2}C_{3}(\partial_{x}\phi)^{2 }.\end{split} \tag{101}\]
It should be noted that for simplicity of the calculation, the normal-ordering of the operators in Eq.(100) is not taken before carrying out the Taylor expansion. The latter modifies the prefactors of the generated terms in Eq.(101).
|
2306.09892
|
Improving Spectrum-Based Localization of Multiple Faults by Iterative
Test Suite Reduction
|
Spectrum-based fault localization (SBFL) works well for single-fault programs
but its accuracy decays for increasing fault numbers. We present FLITSR (Fault
Localization by Iterative Test Suite Reduction), a novel SBFL extension that
improves the localization of a given base metric specifically in the presence
of multiple faults. FLITSR iteratively selects reduced versions of the test
suite that better localize the individual faults in the system. This allows it
to identify and re-rank faults ranked too low by the base metric because they
were masked by other program elements. We evaluated FLITSR over method-level
spectra from an existing large synthetic dataset comprising 75000 variants of
15 open-source projects with up to 32 injected faults, as well as method-level
and statement-level spectra from a new dataset with 326 true multi-fault
versions from the Defects4J benchmark set containing up to 14 real faults. For
all three spectrum types we consistently see substantial reductions of the
average wasted efforts at different fault levels, of 30%-90% over the best base
metric, and generally similarly large increases in precision and recall, albeit
with larger variance across the underlying projects. For the method-level real
faults, FLITSR also substantially outperforms GRACE, a state-of-the-art
learning-based fault localizer.
|
Dylan Callaghan, Bernd Fischer
|
2023-06-16T15:00:40Z
|
http://arxiv.org/abs/2306.09892v1
|
# Improving Spectrum-Based Localization of Multiple Faults by Iterative Test Suite Reduction
###### Abstract.
Spectrum-based fault localization (SBFL) works well for single-fault programs but its accuracy decays for increasing fault numbers. We present FLITSR (Fault Localization by Iterative Test Suite Reduction), a novel SBFL extension that improves the localization of a given base metric specifically in the presence of multiple faults. FLITSR iteratively selects reduced versions of the test suite that better localize the individual faults in the system. This allows it to identify and re-rank faults ranked too low by the base metric because they were masked by other program elements.
We evaluated FLITSR over method-level spectra from an existing large synthetic dataset comprising 75000 variants of 15 open-source projects with up to 32 injected faults, as well as method- and statement-level spectra from a new dataset with 326 true multi-fault versions from the Defects4J benchmark set containing up to 14 real faults. For all three spectrum types we consistently see substantial reductions of the average wasted efforts at different fault levels, of 30%-90% over the best base metric, and generally similarly large increases in precision and recall, albeit with larger variance across the underlying projects. For the method-level real faults, FLITSR also substantially outperforms GRACE, a state-of-the-art learning-based fault localizer.
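To make the iterative idea concrete, the following is a schematic sketch only: one plausible reading of the reduce-and-re-rank loop described above, using the Ochiai formula as the base metric on a toy coverage matrix. It is not the authors' implementation, and the toy data, the choice of base metric, and all structural details are assumptions.

```python
# Schematic sketch (assumptions throughout): iteratively rank elements with an
# SBFL base metric (Ochiai), then drop failing tests explained by the top-ranked
# element so that faults masked by it can surface in later iterations.
import math

def ochiai(ef, ep, total_failing):
    return ef / math.sqrt(total_failing * (ef + ep)) if ef else 0.0

def iterative_reduction(coverage, failing):
    """coverage: test -> set of covered elements; failing: set of failing tests."""
    ranking, remaining = [], set(failing)
    while remaining:
        elements = {e for t in remaining for e in coverage[t]}
        scores = {}
        for e in elements:
            ef = sum(1 for t in remaining if e in coverage[t])
            ep = sum(1 for t in coverage if t not in failing and e in coverage[t])
            scores[e] = ochiai(ef, ep, len(remaining))
        best = max(scores, key=scores.get)
        ranking.append(best)
        remaining = {t for t in remaining if best not in coverage[t]}
    return ranking

coverage = {"t1": {"m1", "m2"}, "t2": {"m1"}, "t3": {"m3"}, "t4": {"m2", "m3"}}
print(iterative_reduction(coverage, failing={"t1", "t2", "t3"}))
```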
Testing and debugging, Spectrum-based fault localization
|
2305.11555
|
Spherical Characters in Families: the unitary Gan-Gross-Prasad case
|
We consider the variation of spherical characters in families. We formulate
conjectures for the rationality and meromorphic property of spherical
characters. As an example, we establish these conjectures in the unitary
Gan-Gross-Prasad case.
|
Li Cai, Yangyu Fan
|
2023-05-19T09:55:13Z
|
http://arxiv.org/abs/2305.11555v1
|
# Spherical characters in families: the unitary Gan-Gross-Prasad case
###### Abstract.
We consider the variation of spherical characters in families. We formulate conjectures for the rationality and meromorphic property of spherical characters. As an example, we establish these conjectures in the unitary Gan-Gross-Prasad case.
###### Contents
* 1 Spherical characters in families
* 2 The unitary Gan-Gross-Prasad case
* 2.1 Zeta integral in families
* 2.2 Rationality of smooth matching
* 2.3 GGP spherical character in families
## 1. Spherical characters in families
Let \(F\) be a \(p\)-adic field and \(H\subset G\) be a pair of \(F\)-reductive groups such that the geometric quotient \(Y:=H\backslash G\) is _spherical_.
Let \(\sigma\) be a complex irreducible unitary \(G(F)\)-representation appearing in the Plancherel decomposition of \(L^{2}(Y(F))\) with a \(G(F)\)-invariant pairing on \(\sigma\) and its contragredient \(\sigma^{\vee}\). Assume the local conjecture of Sakellaridis-Venkatesh [17]. There is a _canonical local period_
\[Z_{\sigma}(\cdot,\cdot):\ \sigma\times\sigma^{\vee}\to\mathbb{C}\]
in the bilinear space \(\operatorname{Hom}_{H(F)^{2}}(\sigma\times\sigma^{\vee},\mathbb{C})\). Globally, under certain conditions (for example, \((G,H)\) is a Gelfand pair), it is conjectured that for any cuspidal automorphic representation over \(G\), (the square of) its period integrals over \(H\) admit an Eulerian decomposition into the product of the above canonical local periods and _certain \(L\)-value_.
In some circumstances, especially in the relative trace formula framework, it is more convenient to consider the _spherical character_ attached to the canonical local period \(Z_{\sigma}\)
\[J_{\sigma}:\mathcal{S}(G(F),\mathbb{C})\to\mathbb{C},\quad f\in\mathcal{H}(K, \mathbb{C})\mapsto\sum_{i}Z_{\sigma}(\sigma(f)\varphi_{i},\varphi^{i}).\]
Here,
* for a coefficient field \(E\), \(\mathcal{S}(G(F),E)\) is the Hecke algebra of \(E\)-valued compactly supported functions on \(G(F)\) and for any open compact subgroup \(K\subset G(F)\), \(\mathcal{H}(K,E)\subset\mathcal{S}(G(F),E)\) is the subspace of bi-\(K\)-invariant functions.
* \(\{\varphi_{i}\}\) and \(\{\varphi^{i}\}\) are bases of \(\sigma^{K}\) and \(\sigma^{\vee,K}\) respectively such that \((\varphi_{i},\varphi^{j})=\delta_{ij}\).
* \(\sigma(f)\varphi=\int_{G(F)}f(g)\sigma(g)\varphi dg\) is the induced action of \(\mathcal{S}(G(F),\mathbb{C})\) on \(\sigma\).
Note that \(J_{\sigma}\) is actually independent of the choice of \((\cdot,\cdot)\). The above Eulerian decomposition of period integrals into canonical local periods and \(L\)-value is equivalent to the Eulerian decomposition of the attached Bessel distributions into spherical characters and the same \(L\)-value (See [19, Lemma 1.7]).
In [3], motivated by the construction of \(p\)-adic \(L\)-functions from the Eulerian decomposition of period integrals, we consider the rationality and meromorphic properties of canonical local periods for families of representations.
Let \(E\) be a fixed coefficient field embeddable into \(\mathbb{C}\), \(R\) be a finitely reduced Noetherian \(E\)-algebra and \(\Sigma\subset X=\operatorname{Spec}\left(R\right)\) be a fixed Zariski dense subset of closed points. Let \(\pi\) be a finitely generated smooth
admissible torsion-free \(R[G(F)]\)-module, i.e. \(\pi\) is a torsion-free \(R\)-module equipped with a \(R\)-linear action of \(G(F)\) such that
* (smooth) any \(\varphi\in\pi\) is fixed by some open compact subgroup of \(G(F)\);
* (admissible) the \(R\)-submodule \(\pi^{K}\subset\pi\) of \(K\)-fixed elements is finitely generated for any compact open subgroup \(K\subset G(F)\);
* (finitely generated) \(\pi\) is finitely generated as a \(R[G(F)]\)-module.
For any closed point \(x\in\Sigma\) with residue field \(k(x)\), denote by \(\mathcal{E}(\pi|_{x})\) the set of field embeddings \(\tau:k(x)\to\mathbb{C}\) such that the base change \((\pi|_{x})_{\tau}=\pi|_{x}\otimes_{k(x),\tau}\mathbb{C}\) of the specialization of \(\pi\) at \(x\) is irreducible and appears in \(L^{2}(Y(F))\).
**Conjecture 1.1**.: [3, Conjecture 1.1] _Assume \(\mathcal{E}(\pi|_{x})\) is non-empty for any \(x\in\Sigma\) and moreover_
* _there exists a finitely generated smooth admissible torsion-free_ \(R[G(F)]\)_-module_ \(\widetilde{\pi}\) _together with a_ \(G(F)\)_-invariant_ \(R\)_-bilinear pairing_ \((\cdot,\cdot):\pi\times\tilde{\pi}\longrightarrow R\) _such that for any_ \(x\in\Sigma\)_,_ \((\cdot,\cdot)\) _induces a non-degenerate_ \(G(F)\)_-invariant pairing_ \((\cdot,\cdot)|_{x}:\ \pi|_{x}\times\tilde{\pi}|_{x}\to k(x)\)_._
_Then_
1. _(Rationality) for any_ \(x\in\Sigma\)_, there exists a unique bi-_\(H(F)\)_-invariant pairing_ \[Z_{\pi|_{x}}:\ \pi|_{x}\times\tilde{\pi}|_{x}\to k(x)\] _such that for any_ \(\tau\in\mathcal{E}(\pi|_{x})\)_, the following diagram commutes_ _Here_ \(Z_{(\pi|_{x})_{\tau}}\) _is defined with respect to the linear extension of_ \((\cdot,\cdot)|_{x}\)_._
2. _(Meromorphy) upon shrinking_ \(\operatorname{Spec}\left(R\right)\) _to an open subset containing_ \(\Sigma\)_, there exists a bi-_\(H(F)\)_-invariant_ \(R\)_-linear pairing_ \[Z_{\pi}:\ \pi\times\tilde{\pi}\to R\] _such that for any_ \(x\in\Sigma\)_, the following diagram is commutative_ _Here both vertical arrows are the specialization maps._
It is natural to consider the rationality and meromorphy property of spherical characters.
**Conjecture 1.2**.: _Let \(\pi\) be a finitely generated smooth admissible torsion-free \(R[G(F)]\)-module such that \(\mathcal{E}(\pi|_{x})\neq\emptyset\) for any \(x\in\Sigma\). Then_
1. _(Rationality) for any_ \(x\in\Sigma\)_, there exists a unique character_ \[J_{\pi|_{x}}:\ \mathcal{S}(G(F),k(x))\to k(x)\] _such that for any_ \(\tau\in\mathcal{E}(\pi|_{x})\)_, the following diagram commutes_
2. _(Meromorphy) Moreover for any_ \(f\in\mathcal{S}(G(F),E)\)_, there exists a unique meromorphic function_ \(J_{\pi}(f)\in\operatorname{Frac}R\) _interpolating_ \(J_{\pi|_{x}}(f)\)_,_ \(x\in\Sigma\)_._
In general, we don't know whether the above two conjectures are equivalent. However, we can prove the following implication result.
**Proposition 1.3**.: _When \(R=E\), the rationality part in Conjecture 1.1 implies that of Conjecture 1.2. For general \(R\), the meromorphy part of Conjecture 1.1 implies that of Conjecture 1.2 for those \(\pi\) whose fiber rank of \(\pi^{K}\) is locally constant on \(\Sigma\) for all open compact subgroups \(K\subset G(F)\), i.e. the function_
\[\phi_{\pi^{K}}:\ X\to\mathbb{N};\quad x\mapsto\dim_{k(x)}(\pi^{K}|_{x})\]
_is locally constant on \(\Sigma\)._
Proof.: Take any open compact subgroup \(K\subset G(F)\) and \(f\in\mathcal{H}(K,E)\). For the case \(R=E\), one can simply take
\[J_{\pi}(f)=\sum_{i}\frac{Z_{\pi}(\pi(f)\varphi_{i},\varphi^{i})}{(\varphi_{i},\varphi^{i})}\]
where \(\{\varphi_{i}\}\) (resp. \(\{\varphi^{i}\}\)) is any basis of \(\pi^{K}\) (resp. \(\pi^{\vee,K}\)) such that \((\varphi_{i},\varphi^{j})=0\) for any \(i\neq j\).
For the general case, upon shrinking \(X\) to an open neighborhood of \(\Sigma\) if necessary and by gluing, we may assume \(\pi^{K},\widetilde{\pi}^{K}\) are free \(R\)-modules and the induced pairing \((\cdot,\cdot):\ \pi^{K}\times\widetilde{\pi}^{K}\to R\) is perfect by [18, Lemma 0FWG] and the local constancy assumption.
Take bases \(\{\varphi_{i}\}\) (resp. \(\{\widetilde{\varphi}_{i}\}\)) of \(\pi^{K}\) (resp. \(\widetilde{\pi}^{K}\)) such that \((\varphi_{i},\widetilde{\varphi}_{j})=\delta_{ij}\). Clearly
\[J_{\pi}(f):=\sum_{i}Z_{\pi}(\pi(f)\varphi_{i},\widetilde{\varphi}_{i})\]
is meromorphic. Note that for any \(x\in\Sigma\), the natural map
\[\operatorname{Hom}_{R}(\pi^{K},R)\otimes k(x)\to\operatorname{Hom}_{k(x)}( \pi^{K}|_{x},k(x))\]
is an isomorphism. By taking inductive limits, one finds the natural map \(\pi^{\vee}|_{x}\to(\pi|_{x})^{\vee}\) is an isomorphism for any \(x\in\Sigma\). Consequently, \(J_{\pi}\) interpolates \(J_{\pi|_{x}}(f)\), \(x\in\Sigma\).
_Remark 1.4_.: Perhaps the local constancy assumption holds for any smooth admissible finitely generated torsion-free \(R[G(F)]\)-module \(\pi\) such that \(\pi|_{x}\) is absolutely irreducible for any \(x\in\Sigma\). For the case \(G=\operatorname{GL}_{n}\), see Proposition 2.9 below.
Now we consider spherical characters in the unitary Gan-Gross-Prasad case. Let \(W_{n}\) be a Hermitian space of dimension \(n\geq 2\) with respect to a given quadratic field extension \(F^{\prime}/F\), and \(w\in W_{n}\) be an anisotropic vector. Let \(U_{n}\) be the unitary group associated to \(W_{n}\) and \(U_{n-1}\) the stabilizer of \(w\) in \(U_{n}\). We will consider spherical characters for the strongly tempered spherical variety \(Y:=H\backslash G\) where \(H=U_{n-1}\) embeds into \(G=U_{n}\times U_{n-1}\) diagonally.
In this case, Conjecture 1.1 holds by [3, Proposition 4.13]. Together with Proposition 1.3, one has the following corollary.
**Corollary 1.5**.: _The rationality part of Conjecture 1.2 holds for \(Y\)._
However, the local constancy of fiber ranks is not known in general, even in the global setting for our motivating applications. To avoid this issue, we consider the quadratic base change functoriality \(\pi\to\operatorname{BC}(\pi)\) from irreducible smooth admissible complex representations over \(G(F)\) to those over \(G^{\prime}(F)\) established in [13, Theorem 3.6.1] and [12, Theorem 1.6.1] so that we can apply the local constancy for families of \(G^{\prime}\)-representations. Here \(G^{\prime}\) is the \(F\)-algebraic group \(\operatorname{Res}_{F^{\prime}/F}(\operatorname{GL}_{n}\times\operatorname{GL} _{n-1})\).
Fix an algebraic closure \(\bar{E}\) of \(E\) and a field embedding \(\tau:\ \bar{E}\to\mathbb{C}\). Our main result is the following:
**Theorem 1.6**.: _Assume that_
1. _For any_ \(x\in\Sigma\)_, there exists an absolutely irreducible smooth admissible_ \(G^{\prime}(F)\)_-representation_ \(\operatorname{BC}(\pi|_{x})\) _over_ \(k(x)\) _such that_ \(\operatorname{BC}(\pi|_{x})_{\tau}\cong\operatorname{BC}((\pi|_{x})_{\tau})\)_;_
2. _There exists a smooth admissible finitely generated torsion-free_ \(R[G^{\prime}(F)]\)_-module_ \(\operatorname{BC}(\pi)\) _such that_ \(\operatorname{BC}(\pi|_{x})\cong\operatorname{BC}(\pi)|_{x}\) _for all_ \(x\in\Sigma\)_._
_Then Conjecture 1.1 holds for \(\pi\)._
In the global setting of interest, one usually takes \(E=\bar{\mathbb{Q}}_{p}\) and \(\tau\) to be an isomorphism. Then Item (i) is automatic and Item (ii) can be deduced from the existence of families of Galois representations, local-global compatibility and the local Langlands correspondence in families.
Under the assumption on base change functoriality, we approach Theorem 1.6 as follows:
1. Apply the spherical character identity of Beuzart-Plessis (See [1, 2]) \[J_{\pi}(f)\doteq I_{\operatorname{BC}(\pi)}(f^{\prime})\]
for any \(f\in\mathcal{S}(G(F),\mathbb{C})\) and \(f^{\prime}\in\mathcal{S}(G^{\prime}(F),\mathbb{C})\) with purely matching orbital integrals to transfer \(J_{\pi}\) to the spherical character \(I_{\mathrm{BC}(\pi)}\) for the Rankin-Selberg and Asai zeta integrals. To apply the character identity, the existence of smooth transfer for \(E\)-valued Schwartz functions is needed (See Proposition 2.7 below).
2. Apply the theory of co-Whittaker modules to establish the meromorphy property of Rankin-Selberg and Asai zeta integrals in families (initiated in [15], see Propositions 2.3 and 2.5 below). To deduce the meromorphy of \(I_{\mathrm{BC}(\pi)}\), the local constancy of fiber rank in the \(\mathrm{GL}_{n}\)-case is needed (see Proposition 2.9 below).
**Acknowledgement** We express our sincere gratitude to Prof. Y. Tian for his consistent encouragement. We thank Prof. A. Burungale for inspiring discussions.
L. Cai is partially supported by NSFC grant No.11971254.
## 2. The unitary Gan-Gross-Prasad case
In this section, we establish Theorem 1.6. Throughout,
* let \(\mathcal{O}\subset F\) be the ring of integers and \(q\) be the cardinality of the residue field of \(\mathcal{O}\);
* fix an unramified nontrivial additive character \(\psi:\ F\to\bar{E}^{\times}\) and set \(E_{\psi}:=E(\psi(a)|a\in F)\);
* all measures on \(F\)-points of \(F\)-groups are Haar measures such that volumes of open compact subgroups are rational numbers.
To simplify notation, we assume \(E\) contains a square root \(\sqrt{q}\) of \(q\). Otherwise, we can first work over \(E(\sqrt{q})\) and then descend by Corollary 1.5.
### Zeta integral in families
We start with the meromorphy property of Rankin-Selberg and Asai zeta integral in families. Set \(G_{n}:=\mathrm{GL}_{n}(F)\) and let \(N_{n}\subset P_{n}\subset G_{n}\) be the upper-triangular unipotent subgroup and the mirabolic subgroup respectively. Then by [6, Section 3.1], there are
* the functor \(\pi\mapsto\mathcal{J}(\pi)\) from smooth \(R[G_{n}]\)-modules to \(R[P_{n}]\)-modules
* the functor \(\pi\mapsto\pi^{(n)}\) from smooth \(R[G_{n}]\)-modules to \(R\)-modules
such that for any \(R\)-module \(M\),
\[\mathcal{J}(\pi\otimes_{R}M)=\mathcal{J}(\pi)\otimes_{R}M,\quad(\pi\otimes_{R} M)^{(n)}=\pi^{(n)}\otimes_{R}M.\]
Extend the additive character \(\psi\) to \(N_{n}\) by \(n\in N_{n}\mapsto\psi(\sum_{i=1}^{n-1}n_{i,i+1})\). Then after base change to \(R\otimes_{E}E_{\psi}\) (see [5, Proposition 4.1.2]),
\[\pi^{(n)}=\pi/\pi(N_{n},\psi),\quad\pi(N_{n},\psi):=\langle n\cdot v-\psi(n)v \mid n\in N_{n},\ v\in\pi\rangle.\]
**Definition 2.1**.: A smooth admissible \(R[G_{n}]\)-module \(\pi\) is _of Whittaker type_ if the \(R\)-module \(\pi^{(n)}\) is locally free of rank one. A \(R[G_{n}]\)-module \(\pi\) of Whittaker type is called _co-Whittaker_ if \(\sigma^{(n)}\neq 0\) for any non-zero \(R[G_{n}]\)-quotient \(\sigma\) of \(\pi\), or equivalently \(\mathcal{J}(\pi)\) generates \(\pi\) (see [5, Lemma 4.2.1]).
For any smooth admissible \(R[G_{n}]\)-module \(\pi\) of Whittaker type, the _space of Whittaker functions_\(\mathcal{W}(\pi,\psi)\) of \(\pi\) is the image of the map
\[(\pi^{(n)})^{*}\otimes_{R}\pi\otimes_{E}E_{\psi}\longrightarrow(\mathrm{Ind}_{ N_{n}}^{G_{n}}E_{\psi})\otimes_{E}R\]
induced by the canonical isomorphism of \(R\otimes_{E}E_{\psi}\)-modules
\[(\pi^{(n)}\otimes_{E}E_{\psi})^{*}\cong\mathrm{Hom}_{G_{n}}(\pi\otimes_{E}E_{ \psi},\mathrm{Ind}_{N_{n}}^{G_{n}}E_{\psi}\otimes_{E}R).\]
Here in \(\mathrm{Ind}_{N_{n}}^{G_{n}}E_{\psi}\), \(N_{n}\) acts on \(E_{\psi}\) via \(\psi\). Note that for any closed point \(x\in X=\mathrm{Spec}\,(R)\), there is a natural surjection \(\mathcal{W}(\pi,\psi)|_{x}\to\mathcal{W}(\pi|_{x},\psi)\) which becomes an isomorphism when \(\pi|_{x}\) is irreducible.
For any smooth \(R[G_{n}]\)-module \(\pi\) and any standard parabolic \(P=MN\subset G_{n}\) with Levi factor \(M\), the Jacquet module \(J_{M}(\pi):=\pi/\langle n\cdot v-v\mid n\in N,\ v\in\pi\rangle\) is a smooth \(R[M]\)-module.
**Proposition 2.2**.: _If \(\pi\) is co-Whittaker, \(J_{M}(\pi)\) is finitely generated and admissible. In particular, the \(R\)-module \(\mathrm{End}_{R[M]}(J_{M}(\pi))\) is coherent._
Proof.: By [14, Lemma 2.29], \(\pi\) is finitely generated and consequently, \(J_{M}(\pi)\) is finitely generated. The admissibility of \(J_{M}(\pi)\) is a special case of [4, Corollary 1.5], see also [10, Theorem 10.7 & 10.9]. By [5, lemma 4.1.1], the coherence of \(\mathrm{End}_{R[M]}(J_{M}(\pi))\) follows.
Now we consider the Rankin-Selberg zeta integral in families. Take positive integers \(m\leq n\) and let \(\pi_{1}\) and \(\pi_{2}\) be smooth admissible \(R[G_{n}]\)-module and \(R[G_{m}]\)-module of Whittaker type respectively. For \(W_{1}\in\mathcal{W}(\pi_{1},\psi)\), \(W_{2}\in\mathcal{W}(\pi_{2},\psi^{-1})\) and \(\Phi\in\mathcal{S}(F^{n},E)\otimes_{E}R\), consider the following formal series
\[J_{RS}(W_{1},W_{2},T):=\sum_{j\in\mathbb{Z}}J_{RS}^{j}(W_{1},W_{2})T^{j},\quad Z_{RS}(W_{1},W_{2},(\Phi),T):=\sum_{j\in\mathbb{Z}}Z_{RS}^{j}(W_{1},W_{2},(\Phi))T^{j}\]
in the variable \(T\) when \(m=n\) and \(m=n-1\) where
* for \(m=n\), \[J_{RS}^{j}(W_{1},W_{2}):=\int_{N_{n-1}(F)\setminus\mathrm{GL}_{n-1}^{j}(F)}W_{ 1}(g)W_{2}(g)dg,\] \[Z_{RS}^{j}(W_{1},W_{2},\Phi):=\int_{N_{n}(F)\setminus\mathrm{GL}_{n}^{j}(F)}W _{1}(g)W_{2}(g)\Phi(e_{n}g)dg\] with \(F^{n}\) viewed as row vectors and \(e_{n}=(0,\cdots,0,1)\),
* for \(m=n-1\), \[Z_{RS}^{j}(W_{1},W_{2}):=\int_{N_{m}(F)\setminus\mathrm{GL}_{m}^{j}(F)}W_{1}( \begin{pmatrix}g&0\\ 0&1\end{pmatrix})W_{2}(g)dg.\]
Here for any subgroup \(H\subset G_{n}\) and any \(j\in\mathbb{Z}\), \(H^{j}:=\{g\in H\mid\mathrm{val}_{F}(\det(g))=j\}\). Note that by the Iwasawa decomposition, the integrals \(Z_{RS}^{j}(W_{1},W_{2},(\Phi))\) and \(J_{RS}^{j}(W_{1},W_{2})\) are actually finite sums for each \(j\), and hence \(Z_{RS}(W_{1},W_{2},(\Phi),T)\) and \(J_{RS}(W_{1},W_{2},T)\) are well-defined. Moreover when \(X=\mathrm{Spec}\,(\mathbb{C})\) and \(\mathrm{Re}(s)\gg 0\), the substitution \(T=q^{-(s-\frac{n-m}{2})}\) in \(Z_{RS}(W_{1},W_{2},(\Phi),T)\) gives the usual Rankin-Selberg zeta integral.
**Proposition 2.3**.: _Let \(S\subset R[T]\) be the multiplicative subset consisting of polynomials whose leading and trailing coefficients are units. Let \(\pi_{1}\) and \(\pi_{2}\) be co-Whittaker \(R[G_{n}]\) and \(R[G_{m}]\)-modules respectively. Then for any \(W_{1}\in\mathcal{W}(\pi_{1},\psi)\), \(W_{2}\in\mathcal{W}(\pi_{2},\psi^{-1})\) and \(\Phi\in\mathcal{S}(F^{n},E)\otimes_{E}R\), \(Z_{RS}(W_{1},W_{2},(\Phi),T)\) and \(J(W_{1},W_{2},T)\) belong to \(S^{-1}(R[T,T^{-1}])\otimes_{E}E_{\psi}\)._
Proof.: The case \(Z_{RS}(W_{1},W_{2},T)\) with \(m=n-1\) is treated in [15, Theorem 3.2]. We now deal with \(Z_{RS}(W_{1},W_{2},\Phi,T)\) and \(J(W_{1},W_{2},T)\) when \(n=m\).
Let \(A_{n}\subset G_{n}\) be the subgroup of diagonal matrices and equip \(A_{n}\) with the coordinates
\[(F^{\times})^{n}\xrightarrow{\sim}A_{n};\quad a=(a_{1},\cdots,a_{n})\mapsto \mathrm{diag}\{a_{1}\cdots a_{n},a_{2}\cdots a_{n},\cdots,a_{n}\}.\]
Then (see [14, Lemma 3.2]) for any \(W\in\mathcal{W}(\pi,\psi)\), there exists a constant \(C\) such that \(W(a)=0\) unless \(\mathrm{val}_{F}(a_{i})>-C\) for \(i=1,\cdots,n-1\). Then \(Z(W_{1},W_{2},\Phi,T)\) and \(J(W_{1},W_{2},T)\) are actually formal Laurent series. Moreover by the Iwasawa decomposition, it suffices to show the formal Laurent series
\[Z^{\prime}(W_{1},W_{2},\Phi,T): =\sum_{j\in\mathbb{Z}}\big{(}\int_{A_{n}^{j}(F)}W_{1}(a)W_{2}(a) \Phi(e_{n}a)da\big{)}T^{j},\] \[J^{\prime}(W_{1},W_{2},T): =\sum_{j\in\mathbb{Z}}\big{(}\int_{A_{n-1}^{j}(F)}W_{1}(\mathrm{ diag}\{a^{\prime},1\})W_{2}(\mathrm{diag}\{a^{\prime},1\})da^{\prime}\big{)}T^{j}\]
belong to \(S^{-1}(R[T,T^{-1}])\otimes_{E}E_{\psi}\).
By [9, Proposition 6.2], \(\pi_{i}\) admits a central character \(\chi_{i}\) for \(i=1,2\). Let
\[Z(\chi,\Phi_{n},T):=\sum_{j\in\mathbb{Z}}\big{(}\int_{\varpi^{j}\mathcal{O}_{F} ^{\times}}\chi(x)\Phi_{n}(x)dx\big{)}T^{j}\]
where \(\Phi_{n}(x):=\Phi(0,\cdots,0,x)\in\mathcal{S}(F,E)\otimes_{E}R\) and \(\chi:=\chi_{1}\chi_{2}\). Then in \(R[[T]][T^{-1}]\otimes_{E}E_{\psi}\),
\[Z^{\prime}(W_{1},W_{2},\Phi,T)=J^{\prime}(W_{1},W_{2},T)Z(\chi,\Phi_{n},T^{n}).\]
Straightforward computation shows
\[Z(\chi,\Phi_{n},T^{n})\in S^{-1}(R[T,T^{-1}])\otimes_{E}E_{\psi}.\]
Hence we only need to show
\[J^{\prime}(W_{1},W_{2},T)\in S^{-1}(R[T,T^{-1}])\otimes_{E}E_{\psi}.\]
Consider the coordinates
\[(F^{\times})^{n-1}\xrightarrow{\sim}A_{n-1};\quad a^{\prime}=(a_{1},\cdots,a_{ n-1})\mapsto\mathrm{diag}\{a_{1}\cdots a_{n-1},a_{2}\cdots a_{n-1},\cdots,a_{n-1}\}.\]
Let \(\mathcal{V}\subset C^{\infty}(A_{n-1},R\otimes_{E}E_{\psi})\) be the subspace generated by functions of the form
\[a^{\prime}\mapsto W_{1}(\operatorname{diag}\{a^{\prime},1\})W_{2}(\operatorname{ diag}\{a^{\prime},1\}),W_{1}\in\mathcal{W}(\pi_{1},\psi),\ W_{2}\in\mathcal{W}(\pi_{2},\psi^{-1})\]
and \(\mathcal{V}_{i}\subset\mathcal{V}\) be the subspace of functions \(\phi\) satisfying that \(\phi(a)=0\) when \(\operatorname{val}_{F}(a_{i})\geq C\) for some constant \(C\) (depending on \(\phi\)) for each \(i\leq n-1\). Moreover, let \(M_{i}\subset G_{n}\) be the standard Levi subgroup \(G_{i}\times G_{n-i}\) for \(1\leq i\leq n-1\). Then similar as [15, Lemma 3.3-3.5], the map
\[\mathcal{W}(\pi_{1},\psi)\otimes\mathcal{W}(\pi_{2},\psi^{-1})\to\mathcal{V} \to\mathcal{V}/\mathcal{V}_{i}\]
factors through \(J_{M_{i}}(\mathcal{W}(\pi_{1},\psi))\otimes J_{M_{i}}(\mathcal{W}(\pi_{2},\psi^{-1}))\). Let \(\rho_{i}(\varpi)\) be the right translation by \(\operatorname{diag}\{\varpi,\cdots,\varpi,1,\cdots,1\}\) (the first \(i\) entries are \(\varpi\)) acting diagonally on \(\pi_{1}\otimes\pi_{2}\). Then \(\rho_{i}(\varpi)\) induces an element in \(\operatorname{End}_{R[M_{i}\times M_{i}]}(J_{M_{i}}(\pi_{1})\otimes J_{M_{i}}(\pi_{2}))\), which is coherent by Proposition 2.2. Then as in [15, Lemma 3.6], there exist polynomials \(f_{i}\in S\), \(1\leq i\leq n-1\), such that
\[\big{(}\prod_{i=1}^{n-1}f_{i}(\rho_{i}(\varpi))\big{)}(W_{1}\otimes W_{2})\mid_ {A_{n-1}}\in\cap_{i=1}^{n-1}\mathcal{V}_{i}.\]
Consequently, we have
\[J^{\prime}\big{(}(\prod_{i=1}^{n-1}f_{i}(\rho_{i}(\varpi)))W_{1},(\prod_{i=1}^ {n-1}f_{i}(\rho_{i}(\varpi)))W_{2},T\big{)}\in R[T,T^{-1}]\otimes_{E}E_{\psi}.\]
Note that
\[T^{i}J^{\prime}(\rho_{i}(\varpi)W_{1},\rho_{i}(\varpi)W_{2},T)=J^{\prime}(W_{ 1},W_{2},T),\]
thus one deduces \(J^{\prime}(W_{1},W_{2},T)\in S^{-1}(R[T,T^{-1}])\otimes_{E}E_{\psi}\). We are done.
Take \(x\in\Sigma\) and assume there exists a field embedding \(\tau:\ k(x)\hookrightarrow\mathbb{C}\) such that \((\pi|_{x})_{\tau}\) is essentially unitary and irreducible. Then by [11], for any ring morphism \(\tau:\ k(x)\otimes_{E}E_{\psi}\to\mathbb{C}\) extending \(\tau\), the pairing given by absolutely convergent integration
\[\mathcal{W}(\pi|_{x},\psi)\otimes_{k(x)\otimes_{E}E_{\psi},\tau}\mathbb{C}\times\mathcal{W}(\pi|_{x},\psi^{-1})\otimes_{k(x)\otimes_{E}E_{\psi},\tau}\mathbb{C}\to\mathbb{C}\]
\[(W,\widetilde{W})\mapsto\int_{N_{n-1}\setminus G_{n-1}}W(g)\widetilde{W}(g)dg =\sum_{j\in\mathbb{Z}}J_{RS}^{j}(W,\widetilde{W})\]
is \(G_{n}\)-equivariant and non-degenerate. Consequently,
* \(T=1\) is not a pole of \(J_{RS}(W_{x},\widetilde{W}_{x},T)\) for any \(W_{x}\in\mathcal{W}(\pi|_{x},\psi)\) and \(\widetilde{W}_{x}\in\mathcal{W}(\widetilde{\pi}|_{x},\psi^{-1})\);
* the pairing \[\langle-,-\rangle_{x}:\ \mathcal{W}(\pi|_{x},\psi)\times\mathcal{W}( \widetilde{\pi}|_{x},\psi^{-1})\to k(x)\otimes_{E}E_{\psi};\quad(W_{x}, \widetilde{W}_{x})\mapsto J_{RS}(W_{x},\widetilde{W}_{x},1)\] is \(G_{n}\)-equivariant and non-degenerate.
From these observations, we immediately deduce the following corollary of Proposition 2.3.
**Corollary 2.4**.: _Let \(\pi\) be a co-Whittaker \(R[G_{n}]\)-module such that for any \(x\in\Sigma\), \((\pi|_{x})_{\tau}\) is essentially unitary and irreducible for some field embedding \(\tau:\ k(x)\hookrightarrow\mathbb{C}\). Then there exists an open subset \(X^{\prime}\subset\operatorname{Spec}\left(R\right)\) containing \(\Sigma\) such that for any \(W\in\mathcal{W}(\pi,\psi)\) and \(\widetilde{W}\in\mathcal{W}(\widetilde{\pi},\psi^{-1})\), \(J_{RS}(W,\widetilde{W},1)\) is regular on \(X^{\prime}\times_{E}E_{\psi}\). Moreover, the pairing_
\[\langle-,-\rangle:\ \mathcal{W}(\pi|_{X^{\prime}},\psi)\times\mathcal{W}( \tilde{\pi}|_{X^{\prime}},\psi^{-1})\to\mathcal{O}_{X^{\prime}}\otimes_{E}E_{ \psi};\quad(W,\widetilde{W})\mapsto J_{RS}(W,\widetilde{W},1)\]
_is \(G_{n}(F)\)-invariant and interpolates \(\langle-,-\rangle_{x}\) for any \(x\in\Sigma\)._
Now we consider Asai zeta integrals in families. Let \(G^{\prime}_{n}=\operatorname{GL}_{n}(F^{\prime})\) and \(\pi\) be a finitely generated smooth admissible \(R[F^{\times}\setminus G^{\prime}_{n}]\)-module of Whittaker type. Let \(\psi_{F^{\prime}}\) be the additive character of \(F^{\prime}\) given by \(a\mapsto\psi(\frac{1}{2}\mathrm{Tr}_{F^{\prime}/F}(a))\). Fix \(\xi\neq 0\in F^{\prime}\) such that \(\mathrm{Tr}_{F^{\prime}/F}(\xi)=0\) and set \(\epsilon_{n}(\xi)=\operatorname{diag}\{\xi^{n-1},\cdots,\xi,1\}\in G^{\prime}_ {n}\). For any Whittaker function \(W\in\mathcal{W}(\pi,\psi_{F^{\prime}})\) and any Schwartz function \(\Phi\in\mathcal{S}(F^{n},E)\otimes R\), consider the formal power series
\[Z_{As}(W,\Phi,T):=\sum_{j\in\mathbb{Z}}Z_{As}^{j}(W,\Phi)T^{j},\quad Z_{As}^{j}(W,\Phi):=\int_{N_{n}\setminus G^{j}_{n}}W(\epsilon_{n}(\xi)g)\Phi(e_{n}g)\eta( \det(g))^{n-1}dg;\]
\[J_{As}(W,T):=\sum_{j\in\mathbb{Z}}J_{As}^{j}(W)T^{j},\quad J_{As}^{j}(W):=\int_{N _{n-1}\setminus G^{j}_{n-1}}W(\epsilon_{n}(\xi)g)\eta(\det(g))^{n-1}dg.\]
Here \(\eta\) is the quadratic character associated to the quadratic field extension \(F^{\prime}/F\). As in the Rankin-Selberg case, the integrals \(Z_{As}^{j}(W,\Phi)\) and \(J_{As}^{j}(W)\) are actually finite sums for each \(j\in\mathbb{Z}\), and
\(Z_{As}(W,\Phi,T)\) and \(J_{As}(W,T)\) are formal Laurent series. Moreover, the substitution \(T=q^{-s}\) in \(Z_{As}(W,\Phi,T)\) gives the usual Asai zeta integral when \(X=\operatorname{Spec}\left(\mathbb{C}\right)\) and \(\operatorname{Re}(s)\gg 0\).
We have the following analogue of Proposition 2.3.
**Proposition 2.5**.: _Let \(\pi\) be a co-Whittaker \(R[G^{\prime}_{n}]\)-module. Then for any \(W\in\mathcal{W}(\pi,\psi_{F^{\prime}})\) and \(\Phi\in\mathcal{S}(F^{n},E)\otimes_{E}R\), \(Z_{As}(W,\Phi,T)\) and \(J_{As}(W,T)\) belong to \(S^{-1}(R[T,T^{-1}])\otimes_{E}E_{\psi}\)._
Let \(\pi\) be a co-Whittaker \(R[F^{\times}\backslash G^{\prime}_{n}]\)-module. Take \(x\in\Sigma\) and assume there exists a field embedding \(\tau:\ k(x)\hookrightarrow\mathbb{C}\) such that \((\pi|_{x})_{\tau}\) is essentially unitary and irreducible. Then by [8, Page 185] and [19, Section 3.2], for any morphism \(\tau:\,k(x)\otimes_{E}E_{\psi}\to\mathbb{C}\) extending \(\tau\), the linear functional given by absolutely convergent integration
\[\mathcal{W}(\pi|_{x},\psi_{F^{\prime}})\otimes_{k(x)\otimes_{E}E_{\psi},\tau }\mathbb{C}\to\mathbb{C}\]
\[W\mapsto\int_{N_{n-1}\backslash G_{n-1}}W(\epsilon_{n}(\xi)g)\eta(\det(g))^{n -1}dg=\sum_{j\in\mathbb{Z}}J^{j}_{As}(W)\]
is non-zero and \(\eta^{n-1}\)-equivariant (with respect to the \(G_{n}\)-action). Consequently,
* \(T=1\) is not a pole of \(J_{As}(W_{x},T)\) for any \(W_{x}\in\mathcal{W}(\pi|_{x},\psi_{F^{\prime}})\)
* the linear functional \[\ell_{x}:\ \mathcal{W}(\pi|_{x},\psi_{F^{\prime}})\to k(x)\otimes_{E}E_{\psi}; \quad W_{x}\mapsto J_{As}(W_{x},1)\] is \(\eta^{n-1}\)-equivariant (with respect to the \(G_{n}\)-action).
Similar to the Rankin-Selberg case, we have the following corollary of Proposition 2.5:
**Corollary 2.6**.: _Let \(\pi\) be a co-Whittaker \(R[F^{\times}\backslash G^{\prime}_{n}]\)-module such that for any \(x\in\Sigma\), there exists a field embedding \(\tau:\,k(x)\hookrightarrow\mathbb{C}\) such that \((\pi|_{x})_{\tau}\) is essentially unitary and irreducible. Then there exists an open subset \(X^{\prime}\subset\operatorname{Spec}\left(R\right)\) containing \(\Sigma\) such that \(J_{As}(W,1)\) is regular on \(X^{\prime}\times_{E}E_{\psi}\) for any \(W\in\mathcal{W}(\pi,\psi_{F^{\prime}})\). Moreover, the linear functional_
\[\ell:\ \mathcal{W}(\pi|_{X^{\prime}},\psi_{F^{\prime}})\to\mathcal{O}_{X^{ \prime}}\otimes_{E}E_{\psi};\quad W\mapsto J_{As}(W,1)\]
_is \(\eta^{n-1}\)-equivariant and interpolates \(\ell_{x}\) for all \(x\in\Sigma\)._
### Rationality of smooth matching
Consider the \(F\)-subgroups \(H^{\prime}_{1}:=\operatorname{Res}_{F^{\prime}/F}\mathrm{GL}_{n-1}\) (embedded diagonally) and \(H^{\prime}_{2}:=\mathrm{GL}_{n}\times\mathrm{GL}_{n-1}\) of \(G^{\prime}\). According to [20, Section 3.1], there is a canonical isomorphism between categorical quotients
\[H\backslash G/H\cong H^{\prime}_{1}\backslash G^{\prime}/H^{\prime}_{2}.\]
A point \(\delta\in G\) (resp. \(\gamma\in G^{\prime}\)) is called _regular semisimple_ with respect to the action of \(H\times H\) (resp. \(H^{\prime}_{1}\times H^{\prime}_{2}\)) if its orbit is closed and its stabilizer is trivial. Let \(G_{rs}\subset G\) (resp. \(G^{\prime}_{rs}\subset G^{\prime}\)) be the open subset consisting of regular semisimple elements. Then we have an induced injection
\[H(F)\backslash G_{rs}(F)/H(F)\hookrightarrow H^{\prime}_{1}(F)\backslash G^{ \prime}_{rs}(F)/H^{\prime}_{2}(F). \tag{2.1}\]
The orbital integrals associated to \(\delta\in G_{rs}(F)\) and \(\gamma\in G^{\prime}_{rs}(F)\) are the following distributions respectively
\[f\in\mathcal{S}(G(F),\mathbb{C})\mapsto O(\delta,f):=\int_{(H\times H)(F)}f(h \delta h^{\prime})d\mu_{H}(h)d\mu_{H}(h^{\prime}),\]
\[f^{\prime}\in\mathcal{S}(G^{\prime}(F),\mathbb{C})\mapsto O(\gamma,f^{\prime} ):=\int_{(H^{\prime}_{1}\times H^{\prime}_{2})(F)}f^{\prime}(h_{1}^{-1}\gamma h _{2})\eta(\det(h_{2}))d\mu_{H^{\prime}_{1}}(h_{1})d\mu_{H^{\prime}_{2}}(h_{2})\]
where \(\eta\) is the character
\[\eta:\ H^{\prime}_{2}(F)=\mathrm{GL}_{n}(F)\times\mathrm{GL}_{n-1}(F)\to\{\pm 1 \};\quad(h_{1},h_{2})\mapsto\eta(\det(h_{2}))^{n}\eta(\det(h_{1}))^{n-1}.\]
As \(\delta\) and \(\gamma\) are regular semisimple, the orbital integrals are actually finite sums and \(\mathrm{Aut}(\mathbb{C})\)-equivariant in the sense that for any \(f\in\mathcal{S}(G(F),\mathbb{C})\), \(f^{\prime}\in\mathcal{S}(G^{\prime}(F),\mathbb{C})\) and \(\sigma\in\mathrm{Aut}(\mathbb{C})\),
\[O(\delta,f)^{\sigma}=O(\delta,\sigma\circ f),\quad O(\gamma,f^{\prime})^{\sigma}=O(\gamma,\sigma\circ f^{\prime}).\]
Attached to the unique quadratic character on \(F^{\prime,\times}\) which extends \(\eta\), there exists (see [1, Section 3.4]) a _transfer factor_
\[\Omega:G^{\prime}_{rs}(F)\to\{\pm 1\},\quad\Omega(r\cdot h)=\eta(h)\Omega(r) \quad\forall\ h\in H^{\prime}_{2}(F).\]
A _pure smooth transfer_ of \(f\in\mathcal{S}(G(F),E)\) is a Schwartz function \(f^{\prime}\in\mathcal{S}(G^{\prime}(F),E)\) such that
* For every \(\delta\in G_{rs}(F)\) and \(\gamma\in G^{\prime}_{rs}(F)\) whose orbits correspond to each other via the injection (2.1), one has \(O(\delta,f)=\Omega(\gamma)O(\gamma,f^{\prime})\).
* For every \(\gamma\in G^{\prime}_{rs}(F)\) whose orbit lies outside the image of the injection (2.1), one has \(O(\gamma,f^{\prime})=0\).
**Proposition 2.7**.: _The pure smooth transfer exists for any \(f\in\mathcal{S}(G(F),E)\)._
Proof.: Fix an embedding \(E\subset\mathbb{C}\). For any \(f\in\mathcal{S}(G(F),E)\subset\mathcal{S}(G(F),\mathbb{C})\), there exists a pure smooth transfer \(f^{\prime}\in\mathcal{S}(G^{\prime}(F),\mathbb{C})\) by [20, Theorem 2.6]. Take any \(E\)-linear splitting \(p_{E}:\ \mathbb{C}\to E\) of the inclusion \(E\subset\mathbb{C}\) and set \(f^{\prime}_{E}:=p_{E}\circ f^{\prime}\in\mathcal{S}(G^{\prime}(F),E)\). Let \(N:=\ker p_{E}\) and \(f^{\prime}_{N}:=f^{\prime}-f^{\prime}_{E}\). Then for any \(\gamma\in G^{\prime}_{rs}(F)\), we have
\[\Omega(\gamma)O(\gamma,f^{\prime}_{N})=(1-p_{E})\Omega(\gamma)O(\gamma,f^{ \prime})\in N.\]
If the orbit of \(\gamma\) matches with that of \(\delta\in G_{rs}(F)\), then \(\Omega(\gamma)O(\gamma,f^{\prime}_{N})\in E\). In fact, the \(\operatorname{Aut}(\mathbb{C})\)-equivariance of the orbital integrals implies that for any \(\sigma\in\operatorname{Aut}(\mathbb{C}/E)\) (note that \(\Omega(\gamma)\in\{\pm 1\}\))
\[\Omega(\gamma)O(\gamma,f^{\prime}_{N})^{\sigma} =O(\delta,f)^{\sigma}-\Omega(\gamma)O(\gamma,f^{\prime}_{E})^{ \sigma}\] \[=O(\delta,f)-\Omega(\gamma)O(\gamma,f^{\prime}_{E})=\Omega( \gamma)O(\gamma,f^{\prime}_{N})\]
Combining these, we find \(\Omega(\gamma)O(\gamma,f^{\prime}_{N})=0\) and \(f^{\prime}_{E}\) is a pure smooth transfer of \(f\).
### GGP spherical character in families
Let \(\sigma=\sigma_{n}\otimes_{E}\sigma_{n-1}\) be an irreducible smooth admissible \(E\)-representation of \((F^{\times}\times F^{\times})\backslash G^{\prime}(F)\) such that \(\sigma_{\tau}\) is tempered for some field embedding \(\tau:E\hookrightarrow\mathbb{C}\). Set
\[\mathcal{W}(\sigma,\psi_{F^{\prime}}):=\mathcal{W}(\sigma_{n},\psi_{F^{\prime }})\otimes_{E_{\psi}}\mathcal{W}(\sigma_{n-1},\psi_{F^{\prime}}^{-1}).\]
By Proposition 2.3 and 2.5 for \(R=E\), there exist \(E_{\psi}\)-linear functionals
\[\ell_{RS,\sigma}:\mathcal{W}(\sigma,\psi_{F^{\prime}})\to E_{\psi},\quad W_{n }\otimes W_{n-1}\mapsto Z_{RS}(W_{n},W_{n-1},1);\]
\[\ell_{As,\sigma^{\vee}}:\mathcal{W}(\sigma^{\vee},\psi_{F^{\prime}}^{-1}) \to E_{\psi},\quad W_{n}^{\vee}\otimes W_{n-1}^{\vee}\mapsto J_{As}(W_{n}^{ \vee},1)J_{As}(W_{n-1}^{\vee},1).\]
Moreover by Corollary 2.4, there is a \(G^{\prime}(F)\)-invariant non-degenerate pairing
\[(-,-):\ \mathcal{W}(\sigma,\psi_{F^{\prime}})\times\mathcal{W}(\sigma^{\vee}, \psi_{F^{\prime}}^{-1})\to E_{\psi}\]
\[(W_{n}\otimes W_{n-1},W_{n}^{\vee}\otimes W_{n-1}^{\vee})\mapsto J_{RS}(W_{n },W_{n}^{\vee},1)J_{RS}(W_{n-1},W_{n-1}^{\vee},1).\]
**Definition 2.8**.: Fix isomorphisms of \(G^{\prime}(F)\)-representations
\[\rho:\ \sigma\otimes_{E}E_{\psi}\cong\mathcal{W}(\sigma,\psi_{F^{\prime}});\quad\rho^{\vee}:\ \sigma^{\vee}\otimes_{E}E_{\psi}\cong\mathcal{W}(\sigma^{\vee},\psi_{F^{\prime}}^{-1}).\]
Let \(I_{\sigma}\) be the character
\[\mathcal{H}(G^{\prime}(F),E)\to E_{\psi};\quad f^{\prime}\in\mathcal{H}(K,E) \mapsto\epsilon(1/2,\eta,\psi)^{-\frac{n(n-1)}{2}}\sum_{i}\frac{\ell_{RS, \sigma}(\sigma(f^{\prime})\rho(v_{i}))\ell_{As,\sigma^{\vee}}(\rho^{\vee}(v^{ i}))}{(\rho(v_{i}),\rho^{\vee}(v^{i}))}.\]
Here \(\epsilon(1/2,\eta,\psi)\in E_{\psi}^{\times}\) is the usual epsilon factor given by Gauss sum and \(\{v_{i}\}\), \(\{v^{i}\}\) are \(E\)-bases of \(\sigma^{K}\) and \(\sigma^{\vee,K}\) respectively such that \(\langle v_{i},v^{j}\rangle=\delta_{ij}\) for any non-degenerate \(G^{\prime}(F)\)-invariant \(E\)-linear pairing \(\langle-,-\rangle:\ \sigma\times\sigma^{\vee}\to E.\) Note that \(I_{\sigma}\) is independent of \(\langle-,-\rangle\).
To consider \(I_{\sigma}\) in families, we need the local constancy of fiber rank in the \(\operatorname{GL}_{n}\)-case.
**Proposition 2.9**.: _Let \(\pi\) be a smooth admissible finitely generated torsion-free \(R[G_{n}]\)-module such that for any \(x\in\Sigma\), \(\pi|_{x}\) is absolutely irreducible and generic. Then for any open compact subgroup \(K\subset G_{n}\), the function_
\[\phi_{\pi^{K}}:\ X\to\mathbb{N},\quad x\mapsto\dim_{k(x)}(\pi^{K}|_{x})\]
_is locally constant on \(\Sigma\)._
Proof.: We use freely the notations in [5, Sections 3-5]. By [5, Lemma 4.2.3], shrinking \(X\) to an open subset containing \(\Sigma\) if necessary, we may assume \(\pi\) is co-Whittaker. Since \(\pi|_{x}\) is absolutely irreducible for \(x\in\Sigma\), one can deduce that \(\pi|_{\eta}\) is absolutely irreducible and generic for each generic point \(\eta\in X\). It suffices to deal with the case where \(X\) is connected. Then by [9, Theorem 2.2] and [5, Section 3.3], the classifying map \(\alpha:\ X\to\mathfrak{X}_{n,E}\) from \(X\) to the Bernstein variety factors through the component \(\mathfrak{X}_{[s],[t]}\) determined by the supercuspidal support of \(\pi|_{\eta}\) in the extended Bernstein variety. As explained in the proof of [5, Lemma 5.2.2], there exists an explicitly constructed torsion-free co-Whittaker module \(\mathfrak{M}\) over \(\mathfrak{X}_{[s],[t]}\) such that \(\pi|_{\eta}\cong\alpha^{*}\mathfrak{M}|_{\eta}\) and the fiber rank of \(\mathfrak{M}\) is locally constant. Since \(\pi\) and \(\alpha^{*}\mathfrak{M}\) are both torsion-free co-Whittaker \(R[G_{n}]\)-modules, one deduces \(\pi\cong\alpha^{*}\mathfrak{M}\) from [5, Lemma 4.2.7]. Consequently, the fiber rank of \(\pi\) is locally constant.
Now we can prove Theorem 1.6.
Proof of Theorem 1.6.: By [5, Lemma 4.2.6], shrinking \(X\) to an open subset containing \(\Sigma\) if necessary, there exist a torsion-free co-Whittaker \(R[G^{\prime}_{n}]\)-module \(\sigma_{n}\) and a torsion-free co-Whittaker \(R[G^{\prime}_{n-1}]\)-module \(\sigma_{n-1}\) such that \(\mathrm{BC}(\pi)\cong\sigma_{n}\otimes\sigma_{n-1}\) as \(R[G^{\prime}(F)]\)-modules. Let \(\tilde{\sigma}_{n}\) be the \(R\)-module \(\sigma_{n}\) equipped with the twisted \(G_{n}\)-action \(\tilde{\sigma}_{n}(g)v:=\sigma_{n}(^{t}g^{-1})v.\) Then for any point \(x\in\Sigma\), \(\tilde{\sigma}_{n}|_{x}\cong(\sigma_{n}|_{x})^{\vee}\). A similar construction and result apply to \(\sigma_{n-1}\). Then by Propositions 2.3 and 2.5, there are \(R\otimes_{E}E_{\psi}\)-linear functionals \(\ell_{RS}\) on \(\sigma_{n}\otimes\sigma_{n-1}\otimes_{E}E_{\psi}\) and \(\ell_{As}\) on \(\tilde{\sigma}_{n}\otimes\tilde{\sigma}_{n-1}\otimes_{E}E_{\psi}\) interpolating \(\ell_{RS,\mathrm{BC}(\pi|_{x})}\) and \(\ell_{As,\mathrm{BC}(\pi|_{x})}\) for all \(x\in\Sigma\) respectively.
By Proposition 1.3 and Proposition 2.9, there exists a unique character
\[I_{\sigma_{n}\otimes\sigma_{n-1}}:\ \mathcal{S}(G^{\prime}(F),E)\to \mathrm{Frac}R\otimes_{E}E_{\psi}\]
interpolating \(I_{\mathrm{BC}(\pi|_{x})}\), \(x\in\Sigma\). Consider the character
\[J_{\pi}:\ \mathcal{S}(G(F),E)\to\mathrm{Frac}(R)\otimes_{E}E_{\psi},\quad f \mapsto J_{\pi}(f):=I_{\sigma_{n}\otimes\sigma_{n-1}}(f^{\prime})\]
where \(f^{\prime}\in\mathcal{S}(G^{\prime}(F),E)\) is any smooth transfer of \(f\) provided by Proposition 2.7. By the character identity in [1, Theorem 3.5.7], \(J_{\pi}\) is well-defined and interpolates \(J_{\pi|_{x}}(f)\). By Corollary 1.5, one has \(J_{\pi}(f)\in\mathrm{Frac}R\) and we are done.
_Remark 2.10_.: Actually by the character identity, one can show \(I_{\sigma}\) takes value in \(E\) for any irreducible smooth admissible \(E\)-representation of \((F^{\times}\times F^{\times})\backslash G^{\prime}(F)\) with non-empty \(\mathcal{E}(\sigma)\).
|
2307.09563
|
Combining dependency, grades, and adjoint logic
|
We propose two new dependent type systems. The first, is a dependent
graded/linear type system where a graded dependent type system is connected via
modal operators to a linear type system in the style of Linear/Non-linear
logic. We then generalize this system to support many graded systems connected
by many modal operators through the introduction of modes from Adjoint Logic.
Finally, we prove several meta-theoretic properties of these two systems
including graded substitution.
|
Peter Hanukaev, Harley Eades III
|
2023-07-18T19:21:20Z
|
http://arxiv.org/abs/2307.09563v1
|
# Combining dependency, grades, and adjoint logic
###### Abstract.
We propose two new dependent type systems. The first, is a dependent graded/linear type system where a graded dependent type system is connected via modal operators to a linear type system in the style of Linear/Non-linear logic. We then generalize this system to support many graded systems connected by many modal operators through the introduction of modes from Adjoint Logic. Finally, we prove several meta-theoretic properties of these two systems including graded substitution.
**Theory of computation \(\rightarrow\) Linear logic; Type theory;**_Proof theory; Categorical semantics._
**Keywords:** linear logic, graded types, adjoint logic, dependent types, semantics, resource tracking
Adjoint Logic [18] generalizes LNL by combining a family of logics with varying degrees of structural rules, as opposed to just the two fragments of non-linear logic (allowing weakening and contraction) and linear logic (no structural rules). This is accomplished by taking a linear base system where all types are annotated with a mode. Each mode is then assigned the structural rules that are allowed in the fragment the mode represents. Finally, one can transport types from one mode to another through modalities similar to the ones found in LNL. This generalization greatly increases the expressiveness of the logic. For example, LNL is easily an instance: take two modes, one allowing both weakening and contraction and one allowing neither. We could also add a third mode allowing only contraction, resulting in a combination of non-linear logic, relevance logic, and linear logic.
Dependent types allow one to specify and prove properties of programs within the same language in which they are written [14]. Linear logic has the benefit of affording the ability to specify and prove properties of imperative programs. Krishnaswami et al. [11] show how to integrate dependent types with linear types by generalizing the non-linear fragment of LNL to a dependent type system \(\mathrm{LNL_{D}}\), where the modality from the non-linear fragment now transports a dependent type to the linear fragment. Then, using this new mixture of dependent and linear types, they show how to specify and prove properties of imperative programs in the style of Bunched Implications. The modes found in adjoint logic have also been used to design dependent type systems similar to \(\mathrm{LNL_{D}}\) with more than two fragments, but with an eye towards combining dependent types and a family of modal logics [8] rather than just controlling the existence of structural rules.
Graded types are a rather recent addition to linear types in which types are annotated with a resource annotation, a grade, describing how variables of those types can be used, essentially controlling their dataflow. The type system is parameterized by an ordered semiring whose elements are the grades. The grades on types offer more fine-grained control over resource usage. For example, if the ordered semiring is taken to be the natural numbers, then the grade describes exactly the number of times the variable is allowed to be used. Furthermore, graded types have also been shown to be a means of combining linear types with dependent types [13, 17, 4, 1].
**Contributions.** We combine dependent types, graded types, and the modes of adjoint logic to define a new system capable of combining many substructural logics. First, we generalize \(\mathrm{LNL_{D}}\) into a new system called \(\mathrm{dmGL}\), and then we generalize \(\mathrm{dmGL}\) by adding modes, producing our final system called \(\mathrm{\textsc{G}\textsc{l}\textsc{a}\textsc{d}}\). All of our contributions are as follows:
* We replace the dependent type system of \(\mathrm{LNL_{D}}\) with GraD, a graded dependent type system. This system, called \(\mathrm{dmGL}\), gives more control over resource usage [4], producing a graded dependent linear/non-linear system. Then we prove:
* Substitution for the entire system ensuring that typed graded composition is preserved.
* Context and type well formedness.
* Graded contraction and weakening are admissible in the mixed linear/non-linear fragment.
* Subject reduction for the entire system.
* The previous system has two explicit fragments, but with grading on one side. Now we generalize this system one step further by introducing the modes from adjoint logic. This system is parameterized by a family of modes and preordered semirings where each mode is paired with a potentially different preordered semiring. Then each type is annotated with both a mode and a grade (element of the preordered semiring). Then we prove:
* Substitution for the entire system ensuring that typed graded composition is preserved.
* Contraction is admissible in the mixed fragment.
## 2. A Dependent Mixed Graded and Linear Type System
In this section we present our first type system, which combines the graded dependently typed system GraD with linear logic. We call this system \(\mathrm{dmGL}\) (dependent mixed graded linear). As with previously proposed dependently typed graded systems [13, 1, 4], variables in \(\mathrm{dmGL}\) are annotated by grades drawn from a semiring which captures a computational notion of resource usage.
**Definition 2.1** (Grades).: \(\mathrm{dmGL}\) is parametrized by a preordered semiring \((R,0,1,+,\cdot,\leq)\), that is \(R\) is equipped with a preorder \(\leq\) and a semiring structure \((R,0,1,+,\cdot)\) such that the operations (\(+\)) and (\(\cdot\)) are monotonic in both arguments. Elements of \(R\) are called _grades_ and denoted \(r,p,q\).
**Example 2.2** (Variable Re-use).: We take \(R=\mathbb{N}\), the semiring of natural numbers with the usual addition and multiplication. In the judgment
\[x:^{2}\mathbf{Nat}\vdash\mathrm{if\;Even}(x)\mathrm{\;then\;}x/2\mathrm{\;else\;}3x+1:\mathbf{Nat}\]
the annotation \(2\in\mathbb{N}\) indicates that the variable \(x\) is used two times in the computation of the consequent term. This grading was originally introduced by Girard for Bounded Linear Logic [7] and used to characterize polynomial time computation, but has also been used, for example, for automated garbage collection [4]. The preorder we choose on \(\mathbb{N}\) is also relevant: it will be used to control the discarding of resources. If we choose the ordinary preorder \(\leq\) on \(\mathbb{N}\), then we could replace the annotation \(2\) above by some other integer \(k\geq 2\). In this case, the annotation \(k\) would mean that the variable \(x\) is used up to \(k\) times in the computation. On the other hand, choosing the preorder to be the trivial one with \(m\leq n\iff m=n\) would guarantee a usage of exactly two times.
**Example 2.3** (Quantitative Semirings).: Continuing from the previous example, call a semiring \(R\) _quantitative_\({}^{1}\) if it satisfies
Footnote 1: Here, we follow terminology by Moon et al. [13]
i) \(0\neq 1\)
ii) \(r+p=0\implies r=p=0\)
iii) \(r\cdot p=0\implies r=0\lor p=0\)
Choudhury et al. proved for GraD that a variable graded with \(0\) in such a semiring is guaranteed to be computationally irrelevant, and since our system is based on GraD, a similar result is expected to hold for dmGL. Therefore, such semirings allow the tracking of computationally relevant vs. irrelevant data. This is particularly relevant in dependently typed programs, as it allows a distinction between variables which are only used in type checking and those which are used in the execution of a program. Examples of such semirings are the natural numbers, and the following two which we elaborate upon in more detail.
The boolean semiring is \(R=\{0,1\}\) with \(1+1=1\). This semiring's tracking of variable usage is coarse grained, with \(0\) meaning computational irrelevance and \(1\) representing some usage. We have not yet discussed whether \(0\leq 1\) should hold in this semiring. dmGL features a subusage rule which asserts that a variable graded with \(r\in R\) may also be graded with \(q\), so long as \(r\leq q\). If we choose \(0\leq 1\) to be true, the subusage rule will allow us to discard variables graded \(1\). On the other hand, if we choose \(0\leq 1\) to not be true, variables graded \(1\) are guaranteed to be computationally relevant.
The _none-one-tons_ semiring is \(R=\{0,1,\omega\}\) in which we have \(1+1=1+\omega=\omega\). This semiring offers slightly more fine grained tracking, with \(1\) now representing linear use, and \(\omega\) representing unrestricted use. We take \(0\leq\omega\) to allow the discarding of unrestricted variables, and \(1\leq\omega\) to allow promotion of linear variables to the unrestricted case. If we make \(1\) incomparable by \(\leq\) with the other elements, we can guarantee that variables graded \(1\) are in fact used linearly.
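To make these three example semirings concrete, the following is a minimal Haskell sketch, our own illustration and not part of dmGL, of a preordered-semiring interface together with the natural-number, boolean, and none-one-tons instances discussed above; all class, type, and function names are assumptions made for the example.

```haskell
-- A hypothetical interface for preordered semirings of grades (Definition 2.1).
class Semiring r where
  zero, one   :: r
  plus, times :: r -> r -> r
  leq         :: r -> r -> Bool   -- the preorder, used for subusage and discarding

-- Natural-number grades (Example 2.2); Integer is used for simplicity, only
-- non-negative values are intended.  The ordinary <= allows over-approximation.
instance Semiring Integer where
  zero = 0; one = 1
  plus = (+); times = (*)
  leq  = (<=)

-- Boolean grades: 0 = computationally irrelevant, 1 = some usage.
data B = B0 | B1 deriving (Eq, Show)

instance Semiring B where
  zero = B0; one = B1
  plus B0 B0 = B0
  plus _  _  = B1
  times B1 B1 = B1
  times _  _  = B0
  leq B1 B0 = False   -- choosing 0 <= 1 (and not 1 <= 0)
  leq _  _  = True

-- None-one-tons grades: N = unused, O = linear, W = unrestricted, with 1 + 1 = W.
data NOT = N | O | W deriving (Eq, Show)

instance Semiring NOT where
  zero = N; one = O
  plus N x = x
  plus x N = x
  plus _ _ = W
  times N _ = N
  times _ N = N
  times O x = x
  times x O = x
  times _ _ = W
  leq x y = x == y || y == W   -- 0 <= W and 1 <= W; 0 and 1 stay incomparable
```

The `leq` methods encode the design choices discussed above: for `B` we picked \(0\leq 1\), so grade-\(1\) variables may be discarded via subusage, while for `NOT` only \(0\leq\omega\) and \(1\leq\omega\) hold, so grade-\(1\) variables remain genuinely linear.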
For the remainder of this section, we fix a preordered semiring \((R,0,1,+,\cdot,\leq)\).
The syntax of terms and types in dmGL is given in Figure 1 and will be explained throughout the remainder of this section as it becomes relevant.
**Definition 2.4**.: _Grade vectors_ are finite lists of grades, and denoted by \(\delta\). They have the syntax
\[\delta:=\emptyset\mid\delta,r\]
with \(\emptyset\) denoting the empty grade vector. We write \(\delta,\delta^{\prime}\) for the concatenation of grade vectors \(\delta\) and \(\delta^{\prime}\) and use \(\vec{0}\) to denote any grade vector consisting of only \(0\)s. We extend the operations \(+\) and \(\leq\) to grade vectors of equal length pointwise and define scalar multiplication \(r\cdot\delta\) in the obvious way.
\[\emptyset+\emptyset=\emptyset\qquad\quad(\delta,r)+(\delta^{\prime},r^{ \prime})=\delta+\delta^{\prime},r+r^{\prime}\]
\[\emptyset\leq\emptyset\iff\text{True}\qquad(\delta,r)\leq(\delta^{\prime},r^ {\prime})\iff\delta\leq\delta^{\prime}\wedge r\leq r^{\prime}\]
\[r\cdot\emptyset=\emptyset\qquad\quad r\cdot(\delta,q)=r\cdot\delta,r\cdot q\]
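As a small continuation of the same hypothetical sketch, the pointwise operations of Definition 2.4 transcribe directly:

```haskell
-- Grade vectors as lists, with the pointwise operations of Definition 2.4.
-- Continues the hypothetical Semiring class from the previous sketch.
type GradeVec r = [r]

vzero :: Semiring r => Int -> GradeVec r
vzero n = replicate n zero                 -- the all-zero vector written as a 0-vector above

vplus :: Semiring r => GradeVec r -> GradeVec r -> GradeVec r
vplus = zipWith plus                       -- intended only for vectors of equal length

vleq :: Semiring r => GradeVec r -> GradeVec r -> Bool
vleq d d' = length d == length d' && and (zipWith leq d d')

scale :: Semiring r => r -> GradeVec r -> GradeVec r
scale r = map (times r)                    -- scalar multiplication r . delta
```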
The basic structure of dmGL is similar to other mixed linear/non-linear type systems [2, 11, 19]. dmGL consists of two fragments: a purely graded fragment and a mixed fragment. Terms and types are divided into graded and linear as well. Typing judgments in the graded fragment may only have graded hypotheses, and produce _graded terms_ which belong to _graded types_. In the mixed fragment, typing judgments produce _linear terms_ belonging to _linear types_, but assumptions may consist of both linear and graded formulas. The graded fragment is the dependently typed system GraD, a type system by Choudhury et al. [4]. That work also provides a more detailed discussion of the rules presented here. In the mixed fragment, linear types may depend on graded variables but not on linear ones. Because of this, we treat both graded and linear types as graded terms, belonging to type universes Type and Linear respectively. Graded types are denoted \(X,Y,Z,W\) and linear types are denoted \(A,B,C\). Their syntax is given as part of the complete syntax of dmGL in Figure 1.
**Definition 2.5** (Contexts).: Contexts are lists of typing assignments to variables. We use the letters \(x,y,z\) for variables. While we make no syntactic distinction between variables assigned to graded or linear types, we do distinguish between _graded_ and _linear contexts_, denoted by \(\Delta\) and \(\Gamma\) respectively and assigning variables to only graded or linear types respectively. The graded fragment is dependently typed, and in the mixed fragment we allow types in the linear context to depend on variables appearing in the graded context. Because of this, our system requires judgments asserting that contexts are well formed. These judgment forms are \(\delta\odot\Delta\vdash_{G}\text{ctx}\) and \(\delta\odot\Delta;\Gamma\vdash_{M}\text{ctx}\) respectively and their rules are given in Figure 2. Here, the formulas \(x\not\in\text{dom }\Delta\) and \(x\not\in\text{dom }\Gamma\) indicate that the variable \(x\) is not bound in context \(\Delta\) or \(\Gamma\) respectively.
The former of the above judgment forms means that \(\Delta\) is well-formed context and also ensures that the attached grade vector has the same length as \(\Delta\). The latter ensures that all types appearing in linear context \(\Gamma\) are well formed over the graded context \(\Delta\).
**Remark 2.6**.: Note that in the context extension rule for graded contexts, the grade vector \(\delta\) is extended by an arbitrary grade \(r\). Because of this it is actually provable that if \(\delta\odot\Delta\vdash_{G}\text{ctx}\), and \(\delta^{\prime}\) is any grade vector with the same length as \(\delta\), then \(\delta^{\prime}\odot\Delta\vdash_{G}\text{ctx}\) and similarly for mixed contexts.
Aside from the two type universes, our system only contains two basic types, namely the graded and linear unit types \(\mathbf{J}\) and \(\mathbf{I}\) respectively. To construct more complex graded types, our system includes coproduct types \(X_{1}\boxplus X_{2}\), dependent function types \((x:^{r}X)\to Y\) and a dependent pair type \((x:^{r}X)\boxplus Y\). We explain the roles of the grade annotations in the latter two below. To form more complex linear types, we have the linear function type \(A\multimap B\) and tensor product type \(A\otimes B\) at our disposal. Lastly, we have the modal operators \(\mathcal{F}\) and \(\mathcal{G}\), which mediate between the two fragments, transforming linear types into graded ones and vice versa. We give the complete type formation rules in Figure 3.
We now explain the typing rules of our type system in more detail. The typing judgments for the graded and mixed fragment have the forms
\[\delta\odot\Delta\vdash_{\mathrm{G}}t:X\quad\text{and}\quad\delta\odot\Delta;\Gamma\vdash_{\mathrm{M}}l:A\]
respectively. The annotations \(\mathrm{G}\) and \(\mathsf{M}\) on the turnstiles indicate whether the judgment is in the graded or mixed fragment. The rules enforce that the length of \(\delta\) and \(\Delta\) are equal in any provable judgment. If \(\delta=r_{1},\ldots,r_{n}\) and \(\Delta=x_{1}:X_{1},\ldots,x_{n}:X_{n}\), the above judgment forms indicate that variable \(x_{i}\) is used with grade \(r_{i}\) in the construction of the term \(t\) (resp. \(l\)). Notice that both graded and linear types are themselves graded terms, but the above judgment forms contain no information about the grades used in the construction of the type \(X\) (resp. \(A\)).
The rule \(\mathrm{G}\)-weak allows weakening, provided the newly added variable is used with grade \(0\). Similarly, for the variable rule \(\mathrm{G}\)-var we require that the variable in the conclusion of the rule is used exactly with grade \(1\) and all other variables are used with grade \(0\). Finally, we include a sub-usage rule \(\mathrm{G}\)-subusage which asserts that we can make typing judgments with higher grades than necessary. The graded unit type \(\mathbf{J}\) has one closed constructor \(\mathbf{j}\) and a term of
Figure 1. Syntax of dmGL
Figure 3. Rules for type formation
Figure 2. Rules for well formed contexts
unit type can be eliminated by pattern matching. The graded dependent pair type \((x:^{r}X)\boxplus Y\) comes with a grade annotation \(r\). This annotation means that to eliminate a term of the form \((t_{1},t_{2})\) of this type, the grade at which the first component \(t_{1}\) of the pair is used must be \(r\) times the grade at which \(t_{2}\) is used.
The coproduct type \(X_{1}\boxplus X_{2}\) has the expected left and right injections inl and inr as constructors and its elimination form \(\operatorname{case}_{q}t\) of \(s_{1};s_{2}\) works by case distinction. The one caveat is that the functions \(s_{1}\) and \(s_{2}\), which describe the two cases, must use their input with the same grade \(q\), which we include as an annotation on the elimination form.
The dependent function type \((x:^{r}X)\to Y\) has a grade annotation which indicates at which grade the variable \(x\) of type \(X\) must be used: Introduction is done via lambda abstraction, with the constraint that the variable that we are abstracting over must be used at grade \(r\). Similarly, if \(t\) is of type \((x:^{r}X)\to Y\) and \(t^{\prime}\) of type \(X\), then the grades used to construct \(t^{\prime}\) are multiplied by \(r\) when constructing the application \(t\,t^{\prime}\).
Since term judgments contain no information about the grades used in the type, the grade annotations in the dependent function type \((x:^{r}X)\to Y\) and \((x:^{r}X)\boxplus Y\) do not need to be the same as the grade with which \(x:X\) is used in the construction of \(Y\). A similar remark holds for the left adjoint \(\mathcal{F}(x:^{r}X).A\) below.
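The grade bookkeeping in the rules just described can be pictured with a toy usage-counting pass over a tiny fragment of terms. This is our own simplification for illustration only, not the typing algorithm of dmGL: it uses de Bruijn indices, fixes natural-number grades, and stores the grade \(r\) on the application node instead of reading it off the function type.

```haskell
-- A toy usage count: a variable occurrence contributes grade 1 in its own
-- position and 0 elsewhere (as in rule G-var), and an application adds the
-- usages of the function to r times the usages of the argument.
data Tm
  = Var Int            -- de Bruijn index into the context
  | Lam Integer Tm     -- \x :^r X. t, keeping only the grade r
  | App Integer Tm Tm  -- t t', annotated with the grade r of t's function type

usage :: Int -> Tm -> [Integer]    -- one entry per context position
usage n (Var i)     = [ if j == i then 1 else 0 | j <- [0 .. n - 1] ]
usage n (Lam _ t)   = tail (usage (n + 1) t)   -- drop the bound variable's entry;
                                               -- the real rule also checks it against r
usage n (App r t u) = zipWith (+) (usage n t) (map (r *) (usage n u))
```

For instance, `usage 1 (App 2 (Var 0) (Var 0))` evaluates to `[3]`: one direct use of the context variable plus \(2\cdot 1\) uses through the argument, mirroring how the grades of an argument are multiplied by \(r\) in the application rule.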
The mixed fragment behaves like a linear logic, with an additional context of graded variables available. When linear contexts are concatenated, the grade vectors of the shared graded context are added. The mixed fragment features a linear unit type I with inhabitant \(\mathbf{i}\), a linear function type \(A\multimap B\) and a pair type \(A\otimes B\). The rules the mixed fragment are specified in Figure 5 and Figure 6. Some rules in the mixed fragment feature concatenation of linear contexts. In those cases we assume that variables are renamed to avoid name clashes.
Finally, we discuss the modal operators \(\mathcal{F}\) and \(\mathcal{G}\). The operator \(\mathcal{G}\) takes a linear type \(A\) and produces a graded type \(\mathcal{G}\,A\). Its function is analogous to the operators of Benton [2] and Krishnaswami et al. [11]. It transforms linear terms \(l\) of type \(A\) with no free linear variables into graded terms of type \(\mathcal{G}\,A\). Elimination of terms of type \(\mathcal{G}\,A\) is handled by the operator \(\mathcal{G}^{-1}\,t\) which produces terms of type \(A\) from terms of type \(\mathcal{G}\,A\). Here, we can see that linearity and grading line up: if \(l\) is a linear term with free variable \(x\) of linear type \(A\), there is a corresponding term \(\left[\mathcal{G}^{-1}\,y/x\right]l\) with free variable \(y\) of type \(\mathcal{G}\,A\) and variable \(y\) is graded \(1\), see Proposition 2.13 below. The operator \(\mathcal{F}\) is again similar to that of Krishnaswami et al. [11]. The type \(\mathcal{F}(x:^{r}X).A\) behaves like a dependent pair type where the first component belongs to graded type \(X\) and the second component belongs
Figure 4: Graded system type assignment
to linear type \(A\). Elimination from this type is done by pattern matching let \(\mathcal{F}\left(x,y\right)=l_{1}\) in \(l_{2}\) and the grade annotation \(r\) in the type forces the first component to be used with grade \(r\) for the eliminator to be invoked.
Figure 7 contains the reduction rules (full \(\beta\)-reduction) for dmGL. Since reduction rules only see the syntactic form of terms, the grades are not involved at all, and hence the reduction rules are as one would expect. The application of a lambda abstraction to some term reduces by substitution, and pattern-matching expressions reduce when the term being matched on is exactly of the form of the pattern, in which case the reduction is also by substitution. Finally, the case expression for the coproduct eliminator reduces if the scrutinee was constructed by one of the injections, in which case the reduction works by applying the respective function. We write \(\equiv\) for the congruence closure of \(\leadsto\) on graded terms; that is, \(\equiv\) is the smallest equivalence relation on graded terms that contains \(\leadsto\) and such that \(t_{1}\equiv t_{2}\) implies \([t_{1}/x]t\equiv[t_{2}/x]t\) for all terms \(t,t_{1},t_{2}\). Similarly, we write \(\equiv\) for the smallest equivalence relation on linear terms which contains \(\leadsto\) and satisfies the implications \(t_{1}\equiv t_{2}\implies[t_{1}/x]l\equiv[t_{2}/x]l\) and \(l_{1}\equiv l_{2}\implies[l_{1}/x]l\equiv[l_{2}/x]l\) for all \(t_{1},t_{2},l_{1},l_{2}\) and \(l\). We will only use the relation \(\equiv\) in the type conversion rules, which are standard.
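To emphasize that reduction never consults the grades, here is the \(\beta\)-step for the toy term representation from the earlier sketch. The substitution below assumes the substituted argument is closed (so no index shifting of the argument is needed); it illustrates the shape of the rule and is not the formal definition used by dmGL.

```haskell
-- subst0 u t replaces de Bruijn index 0 in t by the closed term u.
subst0 :: Tm -> Tm -> Tm
subst0 u = go 0
  where
    go k (Var i)
      | i == k    = u
      | i > k     = Var (i - 1)     -- the binder disappeared, shift down
      | otherwise = Var i
    go k (Lam r t)   = Lam r (go (k + 1) t)
    go k (App r t v) = App r (go k t) (go k v)

-- One beta step: (\x. t) u  ~>  [u/x]t.  The grade annotations are ignored.
step :: Tm -> Maybe Tm
step (App _ (Lam _ t) u) = Just (subst0 u t)
step _                   = Nothing
```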
#### Metatheory
We now turn our attention to the metatheory of dmGL. First we state some well-formedness conditions. Then we show how grading interacts with substitution and linearity. Finally, we show that the full \(\beta\)-reduction rules preserve typing and grading. Proofs of theorems stated in this section can be found in Appendix A.
In provable typing judgments, the context in which the typing occurs is well-formed. Furthermore, terms always have well-formed types.
**Proposition 2.7**.: _The following hold by mutual induction:_
1. _If_ \(\delta\odot\Delta\vdash_{\mathsf{G}}t:X\)_, then_ \(\delta\odot\Delta\vdash_{\mathsf{G}}\mathsf{ctx}\)_._
2. _If_ \(\delta\odot\Delta;\Gamma\vdash_{\mathsf{M}}l:A\)_, then_ \(\delta\odot\Delta\vdash_{\mathsf{G}}\mathsf{ctx}\) _and_ \(\delta\odot\Delta;\Gamma\vdash_{\mathsf{M}}\mathsf{ctx}\)_._
Figure 5. Mixed system type assignment
Figure 6. Mixed system type assignment continued
**Proposition 2.8**.: _The following hold by mutual induction:_
1. _If_ \(\delta\odot\Delta\vdash_{\mathsf{G}}t:X\)_, then_ \(\delta^{\prime}\odot\Delta\vdash_{\mathsf{G}}X:\)__Type _for some grade vector_ \(\delta^{\prime}\)_._
2. _If_ \(\delta\odot\Delta;\Gamma\vdash_{\mathsf{M}}l:A\)_, then_ \(\delta^{\prime}\odot\Delta\vdash_{\mathsf{G}}A:\)__Linear _for some grade vector_ \(\delta^{\prime}\)_._
Next, we consider substitution. Since we have a graded and a linear fragment we need to state substitution for both fragments, additionally, in the linear fragment substitution is split further into cases where a variable is replaced by a graded or a linear term. Since our system has dependent typing, when a graded term is substituted for a variable we also need to substitute it in part of the context. We therefore make the following definition:
**Definition 2.9**.: Let \(\Delta\) be a graded context, \(\Gamma\) a linear context, \(x\) a term variable and suppose \(x\not\in\)dom\(\Delta\) and \(x\not\in\)dom\(\Gamma\). We define \([t/x]\Delta\) and \([t/x]\Gamma\) as follows:
\[[t/x]\emptyset=\emptyset\qquad[t/x](\Delta,y:Y)=[t/x]\Delta,y:[t/x]Y\]
\[[t/x](\Gamma,y:A)=[t/x]\Gamma,y:[t/x]A\]
We use an additional notational convention:
**Convention 2.10**.: We assume that lengths of grade vectors and corresponding contexts match in judgments. For example, when we write \(\delta,r,\delta^{\prime}\odot\Delta,x:X,\Delta^{\prime}\vdash_{\mathsf{G}}t:Y\), we assume that \(\delta\) has the same length as \(\Delta\) and similarly for \(\delta^{\prime}\) and \(\Delta^{\prime}\).
We can now state the substitution theorem. Parallel composition is modeled by addition in the semiring, while sequential composition is modeled by multiplication.
**Theorem 2.11** (Substitution).: _The following hold by mutual induction:_
1. _(Graded Contexts) If_ \(\delta_{0}\odot\Delta\vdash_{\mathsf{G}}t_{0}:X\)_, and_ \(\delta,r,\delta^{\prime}\odot\Delta,x:X,\Delta^{\prime}\vdash_{\mathsf{G}}\mathsf{ctx}\)_, then_ \[\delta+r\cdot\delta_{0},\delta^{\prime}\odot\Delta,[t_{0}/x]\Delta^{\prime}\vdash_{\mathsf{G}}\mathsf{ctx}\]
2. _(Mixed Contexts) If_ \(\delta_{0}\odot\Delta\vdash_{\mathsf{G}}t_{0}:X\)_, and_ \(\delta,r,\delta^{\prime}\odot\Delta,x:X,\Delta^{\prime};\Gamma\vdash_{\mathsf{M}}\mathsf{ctx}\)_, then_ \[\delta+r\cdot\delta_{0},\delta^{\prime}\odot\Delta,[t_{0}/x]\Delta^{\prime};[t_{0}/x]\Gamma\vdash_{\mathsf{M}}\mathsf{ctx}\]
3. _(Graded Terms) If_ \(\delta_{0}\odot\Delta\vdash_{\mathsf{G}}t_{0}:X\)_, and_ \(\delta,r,\delta^{\prime}\odot\Delta,x:X,\Delta^{\prime}\vdash_{\mathsf{G}}t:Y\)_, then_ \[\delta+r\cdot\delta_{0},\delta^{\prime}\odot\Delta,[t_{0}/x]\Delta^{\prime}\vdash_{\mathsf{G}}[t_{0}/x]t:[t_{0}/x]Y\]
4. _(Mixed Terms, graded substitution) If_ \(\delta_{0}\odot\Delta\vdash_{\mathsf{G}}t_{0}:X\)_, and_ \(\delta,r,\delta^{\prime}\odot\Delta,x:X,\Delta^{\prime};\Gamma\vdash_{\mathsf{M}}l:A\)_, then_ \[\delta+r\cdot\delta_{0},\delta^{\prime}\odot\Delta,[t_{0}/x]\Delta^{\prime};[t_{0}/x]\Gamma\vdash_{\mathsf{M}}[t_{0}/x]l:[t_{0}/x]A\]
5. _(Mixed Terms, linear substitution) If_ \(\delta_{1}\odot\Delta;\Gamma_{1}\vdash_{\mathsf{M}}l_{1}:A\) _and_ \(\delta_{2}\odot\Delta;\Gamma,x:A,\Gamma^{\prime}\vdash_{\mathsf{M}}l_{2}:B\)_, then_ \[\delta_{1}+\delta_{2}\odot\Delta;\Gamma,\Gamma_{1},\Gamma^{\prime}\vdash_{\mathsf{M}}[l_{1}/x]l_{2}:B\]
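As a concrete instance of this grade arithmetic (our own illustration, taking \(R=\mathbb{N}\) and a single ambient variable \(y\)): suppose \(t_{0}\) uses \(y\) with grade \(\delta_{0}=3\), while \(t\) uses \(y\) with grade \(\delta=1\) and the variable \(x\) with grade \(r=2\). The term-level clause then predicts that \([t_{0}/x]t\) uses \(y\) with grade \(\delta+r\cdot\delta_{0}=1+2\cdot 3=7\): the direct uses of \(y\) in \(t\) are composed in parallel (addition) with the copies of \(t_{0}\) that enter sequentially through \(x\) (multiplication by \(r\)).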
**Theorem 2.14** (Subject Reduction).:
1. _If_ \(\delta\odot\Delta\vdash_{G}t:X\) _and_ \(t\leadsto t^{\prime}\)_, then_ \(\delta\odot\Delta\vdash_{G}t^{\prime}:X\)_._
2. _If_ \(\delta\odot\Delta;\Gamma\vdash_{M}l:A\) _and_ \(l\leadsto l^{\prime}\)_, then_ \(\delta\odot\Delta;\Gamma\vdash_{M}l^{\prime}:A\)_._
## 3. A graded type system in the style of adjoint logic
We now present a further generalization of the previous type system, which is inspired by Adjoint Logic [18] and which we call glad. Adjoint Logic offers a smooth way of combining an arbitrary number of substructural logics which are identified by _modes_. Each mode \(\mathfrak{m}\) is assigned a set \(\sigma(\mathfrak{m})\subseteq\{\mathsf{W},\mathsf{C}\}\) of the structural rules weakening (\(\mathsf{W}\)) and contraction (\(\mathsf{C}\)) satisfied by its logic, and the modes are arranged in a preorder such that the map \(\sigma\) is monotone, i.e. if \(\mathfrak{m}_{1}\leq\mathfrak{m}_{2}\), then \(\sigma(\mathfrak{m}_{1})\subseteq\sigma(\mathfrak{m}_{2})\). One of the key insights of Adjoint Logic is that judgments
\[A^{1}_{\mathfrak{m}_{1}},\ldots,A^{n}_{\mathfrak{m}_{n}}\vdash B_{\mathfrak{m}}\]
must satisfy \(\mathfrak{m}_{i}\geq\mathfrak{m}\) for each \(i\), where the subscripts indicate the mode a proposition belongs to.
In the system of this section, modes come equipped with a preordered semiring controlling the resource structure. For any one of these semirings, this system is a dependently typed system with the same rules as dmGL. We can control weakening in each of these fragments, but in the presence of dependent types, controlling contraction becomes difficult. We will discuss these issues later.
**Definition 3.1**.: Let \(R,S\) be preordered semirings. A morphism of preordered semirings \(R\to S\) is a map \(f\colon R\to S\) such that \(f(0)=0\), \(f(1)=1\) and for all \(a,b\in R\), we have \(f(a+b)=f(a)+f(b)\) and \(f(a\cdot b)=f(a)\cdot f(b)\) and \(a\leq b\implies f(a)\leq f(b)\). If \(f\colon R\to S\) is a morphism of preordered semirings and \(r\in R\) and \(s\in S\), then we write \(r\cdot s:=f(r)\cdot s\in S\).
**Definition 3.2**.: A _mode_\(\mathfrak{m}\) is a pair \((R_{\mathfrak{m}},\mathsf{Weak}(\mathfrak{m}))\), where \(R_{\mathfrak{m}}\) is a preordered semiring and \(\mathsf{Weak}(\mathfrak{m})\) is either true or false. We will write \(r\colon\mathfrak{m}\) to mean \(r\in R_{\mathfrak{m}}\). Modes are denoted by the lowercase fraktur letters \(\mathfrak{m},\mathfrak{n},\mathfrak{l}\).
Let \(\mathfrak{m}\) and \(\mathfrak{n}\) be modes and assume that the proposition \(\mathsf{Weak}(\mathfrak{m})\implies\mathsf{Weak}(\mathfrak{n})\) is true. A morphism of modes \(\mathfrak{m}\to\mathfrak{n}\) is a morphism \(R_{\mathfrak{m}}\to R_{n}\) of the underlying preordered semirings. There is a category of modes, denoted by Modes.
For the rest of this section, fix a preordered set \(I\) and a functor \(I\to\mathsf{Modes}\). That is, fix: for each \(i\in I\), a mode \(\mathfrak{m}_{i}\), and for each \(i\leq j\) in \(I\), a morphism of modes \(f_{ij}\colon\mathfrak{m}_{i}\to\mathfrak{m}_{j}\) such that for \(i\leq j\leq k\), \(f_{jk}\circ f_{ij}=f_{ik}\) holds. We will write \(\mathfrak{m}_{i}\leq\mathfrak{m}_{j}\) when \(i\leq j\). glad is parametrized by this data, and we give examples below.
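Definitions 3.1 and 3.2 can likewise be sketched in code, continuing the hypothetical `Semiring` class and instances from the earlier listings; the concrete morphism at the end, collapsing natural-number grades to boolean "used at all" grades, is our own example rather than one fixed by glad.

```haskell
-- A mode packages a carrier of grades with a flag saying whether weakening is
-- allowed (Definition 3.2); the type parameter r stands for the semiring R_m.
data Mode r = Mode { weak :: Bool }

-- On grades, a morphism of modes m -> n is a homomorphism f : R_m -> R_n of
-- preordered semirings, available only when Weak(m) implies Weak(n).
newtype ModeHom rm rn = ModeHom { apply :: rm -> rn }

-- The derived scalar action r . s := f(r) . s from Definition 3.1.
scaleAcross :: Semiring rn => ModeHom rm rn -> rm -> rn -> rn
scaleAcross f r s = times (apply f r) s

-- An example morphism (an assumption for illustration): forget multiplicities,
-- sending a natural-number grade to B0 if it is 0 and to B1 otherwise.
natToBool :: ModeHom Integer B
natToBool = ModeHom (\k -> if k == 0 then B0 else B1)
```

Here `scaleAcross` is exactly the operation used when a grade of one mode multiplies grades that live in another mode further up the preorder.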
The syntax of types and terms in glad is given in Figure 8. It strongly resembles that of dmGL, with a unit type \(\mathbf{I}_{\mathfrak{m}}\), dependent function and pair types \((x:^{r:\mathfrak{m}}A)\multimap B\) and \((x:^{r:\mathfrak{m}}A)\otimes B\), and a coproduct type \(A\oplus B\). The modal operators \(\uparrow^{\mathfrak{m}_{2}}_{\mathfrak{m}_{1}}a\) and \(\downarrow^{\mathfrak{m}_{2}}_{\mathfrak{m}_{1}}a\) take the role of \(\mathcal{G}\) and \(\mathcal{G}^{-1}\) from before, while the role of the modal operator \(\mathcal{F}\) is now subsumed by the dependent pair type \((x:^{r:\mathfrak{m}}A)\otimes B\), which allows the first and second component of the pair to belong to different modes. The mode of any type appearing in a valid judgment will always be determined uniquely, and therefore annotations on types to specify their mode are not necessary. We choose to annotate the unit type with its mode, as this will make things easier in the future.
The judgment for well-formed contexts in glad has the form
\[\delta\mid\mathcal{M}\odot\Gamma\vdash\mathsf{ctx}.\]
The rules for this judgment are given in Figure 9. In this judgment form, \(\delta\) is a list of grades, \(\mathcal{M}\) is a list of modes, and \(\Gamma\) is a context. If \(\delta=(r_{1},\ldots,r_{n})\), \(\mathcal{M}=(\mathfrak{m}_{1},\ldots,\mathfrak{m}_{n})\) and \(\Gamma=x_{1}:A_{1},\ldots,x_{n}:A_{n}\), then the judgment above indicates that \(r_{i}\colon\mathfrak{m}_{i}\), that the variable \(x_{i}\) is graded with grade \(r_{i}\) and that the type \(A_{i}\) belongs to mode \(\mathfrak{m}_{i}\). The same is true for typing judgments:
\[\delta\mid\mathcal{M}\odot\Gamma\vdash_{\mathfrak{m}}a\colon A.\]
The annotation \(\mathfrak{m}\) on the turnstile indicates that the judgment is made in mode \(\mathfrak{m}\), and that \(a\) and \(A\) belong to mode \(\mathfrak{m}\), therefore supporting the structural rules allowed by \(\mathfrak{m}\). As in adjoint logic, we demand that in such a judgment, we have \(\mathfrak{m}\leq\mathcal{M}\), that is \(\mathfrak{m}\leq\mathfrak{n}\) for each mode \(\mathfrak{n}\) appearing in \(\mathcal{M}\). This property is enforced by the typing rules: If we assume that all premise judgments of a rule satisfy this property, then the conclusion necessarily does, too.
We give the full rules for type formation in Figure 10 and those for typing in Figure 11 and Figure 12. The rules strongly resemble the ones of dmGL, so we will omit most explanations, focusing primarily on the differences. glad has more
Figure 8. Syntax of types and terms in glad
Figure 9. Glad well-formed context judgment
general control over weakening: we may add unused variables of mode \(\mathfrak{m}\) to the context, so long as they are graded with the grade \(0\) and the mode \(\mathfrak{m}\) allows weakening. If \(\mathfrak{m}\) is a mode for which \(\text{Weak}(\mathfrak{m})\) is true, the functionality of the preorder on \(R_{\mathfrak{m}}\) is overloaded in the following way: it captures both the subusage relation and gives control over grades which may be computationally irrelevant. For nonzero elements of \(R_{\mathfrak{m}}\), the preorder captures the subusage behavior, allowing one to use a higher grade of resources than necessary to construct a term. However, for elements \(q\) with \(0\leq q\), the preorder captures that variables graded with \(q\) may be discarded.
Some of the rules feature a scalar multiplication \(r\cdot\delta\). There are two things to note here: first, the vector \(\delta\) is a list of grades coming from potentially different semirings, but this doesn't affect the definition of scalar multiplication as entrywise multiplication with \(r\). Second, let \(\mathfrak{m}_{i}\) be the mode of the \(i\)-th entry of \(\delta\) and \(\mathfrak{m}\) the mode of \(r\). Observe that the rules where scalar multiplication occurs guarantee that \(\mathfrak{m}\leq\mathfrak{m}_{i}\) for each \(i\) and therefore the scalar multiplication is indeed well-defined according to Definition 3.1. Similarly, some rules feature addition of grade vectors \(\delta+\delta^{\prime}\). Notice that whenever this is the case, the annotation by mode vectors \(\mathcal{M}\) forces the \(i\)-th entries of \(\delta\) and \(\delta^{\prime}\) to belong to the same mode, ensuring that the sums of grade vectors are well-defined. Finally, the subusage rule features the preorder relation \(\delta_{1}\leq\delta_{2}\) on grade vectors. This is defined in the natural way as componentwise \(\leq\), with the additional condition that for each \(i\), the \(i\)-th component of \(\delta_{1}\) and \(\delta_{2}\) must be from the same mode.
The dependent pair type \((x:^{r:\mathfrak{m}}A)\otimes B\) now carries an additional mode annotation, indicating that type \(A\) belongs to mode \(\mathfrak{m}\) and that \(r\colon\mathfrak{m}\). Due to the construction of the dependent pair type, if \(\delta\mid\mathcal{M}\odot\Gamma\vdash_{\mathfrak{n}}(x:^{r:\mathfrak{m}}A)\otimes B\colon\mathsf{Type}\), then \(B\) and \((x:^{r:\mathfrak{m}}A)\otimes B\) belong to mode \(\mathfrak{n}\). If \(\mathfrak{n}\) and \(\mathfrak{m}\) are the same mode, then we recover ordinary versions of the dependent pair type belonging to mode \(\mathfrak{m}\). On the other hand, if the modes are distinct, the first component of the pair is "moved" from the higher mode \(\mathfrak{m}\) to the lower mode \(\mathfrak{n}\). This is the way the left adjoint \(\mathcal{F}(x:^{r}X).A\) functions in dmGL. Since the dependent pair and the left adjoint \(\mathcal{F}\) have the same introduction and elimination rules (mutatis mutandis), we treat them as special instances of the same type in Glad, subsuming both functionalities.
The dependent function type \((x:^{r:\mathfrak{m}}A)\multimap B\) also carries a mode annotation now. This is because we allow the modes of \(A\) and \(B\) to be distinct, and the annotation indicates the mode of \(A\). Like with the dependent pair type, if \(B\) has mode \(\mathfrak{n}\), then so does \((x:^{r:\mathfrak{m}}A)\multimap B\) and we necessarily have \(\mathfrak{m}\geq\mathfrak{n}\). A similar construction exists in \(\text{INL}_{\text{D}}\)[11], where
Figure 11. Glad typing rules
Figure 10. Glad type formation rules
there are two dependent function types, one between intuitionistic types and one whose functions take intuitionistic arguments and produce linear terms.
#### 3.3.1 Metatheory
We discuss some metatheory of Glad, and return to the point of controlling contraction in Glad. Substitution holds in Glad. We use Convention 2.10 again, but extend it to also imply that lists of modes \(\mathcal{M}\) have the same length as the corresponding grade vectors and contexts.
**Theorem 3.3** (Glad substitution).: _The following hold by mutual induction:_
1. _(Contexts) If_ \(\delta,r,\delta^{\prime}\mid\mathcal{M},\mathfrak{m}_{0},\mathcal{M}^{\prime}\odot\Gamma,x\colon A,\Gamma^{\prime}\vdash\mathsf{ctx}\) _and_ \(\delta_{0}\mid\mathcal{M}\odot\Gamma\vdash_{\mathfrak{m}_{0}}a_{0}\colon A\)_, then_ \[\delta+r\cdot\delta_{0},\delta^{\prime}\mid\mathcal{M},\mathcal{M}^{\prime}\odot\Gamma,[a_{0}/x]\Gamma^{\prime}\vdash\mathsf{ctx}.\]
2. _(Terms) If_ \(\delta,r,\delta^{\prime}\mid\mathcal{M},\mathfrak{m}_{0},\mathcal{M}^{\prime}\odot\Gamma,x\colon A,\Gamma^{\prime}\vdash_{\mathfrak{m}}b\colon B\) _and_ \(\delta_{0}\mid\mathcal{M}\odot\Gamma\vdash_{\mathfrak{m}_{0}}a_{0}\colon A\)_, then_ \[\delta+r\cdot\delta_{0},\delta^{\prime}\mid\mathcal{M},\mathcal{M}^{\prime}\odot\Gamma,[a_{0}/x]\Gamma^{\prime}\vdash_{\mathfrak{m}}[a_{0}/x]b\colon[a_{0}/x]B.\]
The proof is by induction and given in Appendix B. As for dmGL we obtain a graded contraction rule as a corollary.
**Corollary 3.4** (Glad contraction).: _If_
\[\delta,r_{1},r_{2},\delta^{\prime}\mid\mathcal{M},\mathfrak{m},\mathfrak{m}, \mathcal{M}^{\prime}\odot\Gamma,x\colon A,y\colon A,\Gamma^{\prime}\vdash_{ \mathfrak{m}}b\colon B,\]
_then_ \[\delta,r_{1}+r_{2},\delta^{\prime}\mid\mathcal{M},\mathfrak{m},\mathcal{M}^{ \prime}\odot\Gamma,x\colon A,[x/y]\Gamma^{\prime}\vdash_{\mathfrak{m}}[x/y]b \colon[x/y]B.\]
Proof.: Apply substitution with the judgment
\[\vec{0},1\mid\mathcal{M},\mathfrak{m}\odot\Gamma,x\colon A\vdash_{\mathfrak{ m}}x\colon A\]
obtained from the glad-var rule.
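To spell out the grade bookkeeping in this proof (our own unfolding, purely for illustration): instantiating Theorem 3.3 with the substituted variable \(y\) (grade \(r_{2}\)), the substituting term \(a_{0}=x\), and \(\delta_{0}=(\vec{0},1)\) over the context \(\Gamma,x\colon A\), the resulting grade vector is
\[(\delta,r_{1})+r_{2}\cdot(\vec{0},1),\ \delta^{\prime}\;=\;\delta,\ r_{1}+r_{2},\ \delta^{\prime},\]
since \(r_{2}\cdot 0=0\) and \(r_{2}\cdot 1=r_{2}\), which is exactly the grading in the conclusion of Corollary 3.4.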
Upon closer inspection, what makes contraction work in our system is the form of the substitution theorem. In its statement, the grades \(\delta_{0}\) used for constructing \(a_{0}\colon A\) are added to the ones used in the construction of \(b\colon B\). In other words, the substitution theorem implicitly contains a contraction. In fact, contraction is implicitly included in the rules of Glad: For example, in the rule glad-unitElim, the grade vectors \(\delta\) and \(\delta^{\prime}\) are added. This is implicitly a contraction, as we are adding grades instead of concatenating contexts.
But concatenating contexts is not well-suited to the dependently typed setting. When we concatenate contexts in a simply typed system that has control over contraction, the variables in the context are renamed to avoid name clashes. While such a renaming is possible in a dependently typed system, it immediately becomes very difficult to type any terms, as the variables occurring in a term may also be part of its type. For example, consider the rule glad-app: If we opted to rename variables, then in order to know that the application \(c\,a\) is well typed, we would need to remember that the domain type of \(c\) and the type of \(a\) were equal prior to the renaming. It is not clear to us at the moment how to handle this renaming and how to incorporate it with grades. We leave an investigation of this for future work.
#### 3.3.2 Examples
In this section, we give some example instantiations of Glad and show how we may recover existing graded and mixed systems.
**Example 3.5** (Recovering dmGL).: We explore the relationship between dmGL and Glad. Let \(R\) be a semiring. We ask the following question: How closely can we approximate dmGL graded by \(R\) using modes of Glad? A natural approach is to take Glad with two modes, one with the semiring \(R\) and one with a semiring that captures linearity. But since Glad admits contraction, while the linear fragment of dmGL does not, the latter semiring cannot
Figure 12. Glad typing rules continued
exist. Because of this, we can only hope to recover a version of dmGL where the linear fragment is graded as in Example 2.2. This turns out to work: We take as modes \(L\) (linear) and \(G\) (graded) with \(L\leq G\) and set \(R_{G}=R\) and \(R_{L}=\mathbb{N}\) with the trivial preorder \(n\leq m\iff n=m\). We also take \(\mathsf{Weak}(G)=\mathsf{True}\) and \(\mathsf{Weak}(L)=\mathsf{False}\). There exists a unique morphism of modes \(\phi\colon L\to G\). Glad instantiated with this data produces a system with two fragments, one graded by \(R\) and which has the same rules as the graded fragment of dmGL. The other fragment corresponds to the mixed fragment of dmGL and has assumptions graded by \(R\) as well as by \(\mathbb{N}\), with the assumptions graded by \(\mathbb{N}\) behaving in a way similar to Bounded Linear Logic.
**Example 3.6** (Recovering \(\mathrm{LNL}_{D}\)[11]).: Let \(U\) be the unrestricted mode where \(R_{U}=0\), the trivial semiring with \(0=1\) and \(\mathsf{Weak}(U)=\mathsf{True}\). Furthermore, let \(L\) be the mode with \(R_{L}\) the none-one-tons semiring of Example 2.3 with the reflexive preorder relation and \(\mathsf{Weak}(L)=\mathsf{False}\). Variables in mode \(U\) have no grading information attached to them, and therefore can be used intuitionistically. On the other hand, variables in mode \(L\) are used linearly by default and may not be weakened or duplicated. We have a unique morphism of modes \(L\to U\). Instantiating Glad with this data allows us to recover a system similar to \(\mathrm{LNL}_{D}\).
**Example 3.7**.: In this example we consider two modes \(W\), \(R\) with the semiring for both modes being the none-one-tons semiring of Example 2.3. We set \(\mathsf{Weak}(W)=\mathsf{True}\) and \(\mathsf{Weak}(R)=\mathsf{False}\). We choose the preorder generated by \(0\leq\omega\) for \(W\) and the reflexive preorder for \(R\). There is now a morphism of modes \(R\to W\) which is the identity morphism on the underlying semirings. The mode \(W\) admits weakening for variables used with grades \(0\) or \(\omega\), while the mode \(R\) does not. In other words, \(R\) behaves like a relevance logic.
Similar to the _of course_ modality \(!\) of linear logic, and its decomposition into two adjoints \(F\) and \(G\) in \(\mathrm{LNL}\), we can use the mode shifting operators of Glad to introduce irrelevantly used variables into the mode \(R\) in a controlled way.
**Example 3.8**.: Consider the semiring \(\mathsf{Var}=\{{\sim},{\uparrow},{\downarrow},{?}\}\), with the preorder generated by \({\sim}\leq{\uparrow},{\downarrow}\leq{?}\), with \(0={?}\), \(1={\uparrow}\), addition defined by \(a+b=\inf(a,b)\), the greatest lower bound of \(\{a,b\}\), and multiplication determined by the equations
\[{\downarrow}\cdot{\downarrow}={\uparrow},\qquad{\sim}\cdot{\downarrow}={\sim}\cdot{\sim}={\sim},\]
and the requirement that multiplication is commutative. This is the _variance_ semiring introduced by Wood and Atkey [20], and it allows one to track whether a term depends on a variable covariantly (\({\uparrow}\)), contravariantly (\({\downarrow}\)), invariantly (\({\sim}\)), or with no guarantees (\({?}\)). We define the mode \(V\) to have \(R_{V}=\mathsf{Var}\) and \(\mathsf{Weak}\,V=\mathsf{True}\).
We add two more modes: \(L\) with \(R_{L}=\mathbb{N}\) and \(M\) with \(R_{M}\) the none-one-tons semiring. We take the preorders on these semirings to be the trivial reflexive preorders. Furthermore, we set \(\mathsf{Weak}\,L=\mathsf{False}\) and \(\mathsf{Weak}\,M=\mathsf{True}\). There are unique morphisms of modes \(L\to M\to V\).
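To make Example 3.8 concrete, the following is a minimal executable sketch (ours, not part of the formal development) of the variance semiring in Python. The element names and helper functions are our own; addition is the meet of the order \({\sim}\leq{\uparrow},{\downarrow}\leq{?}\), and multiplication is completed from the generating equations by commutativity and the semiring laws (\(1={\uparrow}\) is the multiplicative unit and \(0={?}\) annihilates).

```python
# A small model of the variance semiring Var = {~, up, down, ?} from Example 3.8.
# Order: "~" <= "up", "down" <= "?", i.e. "~" is the bottom and "?" the top.
INV, UP, DOWN, ANY = "~", "up", "down", "?"
ELEMS = [INV, UP, DOWN, ANY]

# Rank in the lattice: "~" below "up"/"down", which are below "?".
RANK = {INV: 0, UP: 1, DOWN: 1, ANY: 2}

def leq(a, b):
    """Preorder generated by ~ <= up, down <= ?."""
    return a == b or RANK[a] < RANK[b]

def add(a, b):
    """Addition is the greatest lower bound (meet); 0 = '?' is the additive unit."""
    lower_bounds = [c for c in ELEMS if leq(c, a) and leq(c, b)]
    # The meet is the maximal lower bound in this small lattice.
    return max(lower_bounds, key=lambda c: RANK[c])

def mul(a, b):
    """Multiplication: 1 = 'up' is the unit, 0 = '?' annihilates,
    down*down = up, ~*down = ~*~ = ~, completed by commutativity."""
    if a == ANY or b == ANY:
        return ANY
    if a == UP:
        return b
    if b == UP:
        return a
    if a == DOWN and b == DOWN:
        return UP
    return INV  # remaining cases involve '~'

if __name__ == "__main__":
    # Sanity checks of the semiring laws on all elements.
    for a in ELEMS:
        assert add(ANY, a) == a            # 0 + a = a
        assert mul(UP, a) == a             # 1 * a = a
        assert mul(ANY, a) == ANY          # 0 * a = 0
        for b in ELEMS:
            assert add(a, b) == add(b, a)
            assert mul(a, b) == mul(b, a)
            for c in ELEMS:
                assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
    print("variance semiring laws hold on all elements")
```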
## 4. Discussion, Future Work, Related Work
### Related Work
Closely related to Glad is the framework of Licata et al. [12]. Their system is a simply typed linear sequent calculus equipped with a mode theory where every formula and judgment is annotated with a mode that constrains the structural rules allowed within the context. The modes found in Glad are not as elaborate as the mode theory found in their system. Their modes are generic and have morphisms between them, but our modes are specifically semirings. In addition, Glad is dependently typed.
Glad is based on Grad of Choudhury et al. [4]. Grad is essentially identical to the graded side of dmGL with the addition of the modal operators. However, Glad differs substantially from Grad in that the former now supports multiple semirings and a theory of modes.
The Graded Modal Dependent Type Theory (GMDTT) of Moon et al. [13] is very similar to the graded side of dmGL and our second system Glad, but GMDTT strives to track resource usage in types as well as terms, whereas Glad and dmGL only track resource usage in terms. In addition, GMDTT is not based on the theory of adjoints and modal operators in line with \(\mathrm{LNL}\) and Adjoint Logic.
Gratzer et al. [8] propose modal dependent type theory which uses modes to support the embedding of a family of modal logics. Their mode theory is similar to the one found in Glad, but our system's goal is to relate graded type systems and theirs is to relate modal logics.
### Simply Typed Version with Control over Contraction
We presented Glad as a dependently typed system with grading and modes, similar to adjoint logic. The fact that Glad is dependently typed makes it difficult to control for contraction in a manner that's similar to adjoint logic, and we have given an argument for why this is the case. It appears that the difficulties with controlling graded contraction disappear if one considers a simply typed system instead. As a next step, we will investigate a simply typed version of Glad in which we can control for contraction. We will take the approach of equipping a mode \(m\) with a subset \(\mathsf{Cont}(m)\subseteq R_{m}\) which is closed under addition, but may also need to satisfy other algebraic properties. We can then introduce an explicit graded contraction rule such as
\[\frac{r,q\in\mathsf{Cont}(m)\qquad\delta,r,q,\delta^{\prime}\mid\mathcal{M},m,m,\mathcal{M}^{\prime}\odot\Gamma,x\colon A,y\colon A,\Gamma^{\prime}\vdash_{m}b\colon B}{\delta,r+q,\delta^{\prime}\mid\mathcal{M},m,\mathcal{M}^{\prime}\odot\Gamma,x\colon A,\Gamma^{\prime}\vdash_{m}[x/y]b\colon B}\]
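For instance (an illustration of ours, taking a mode \(m\) whose semiring is the natural numbers and \(\mathsf{Cont}(m)=\mathbb{N}\)), two assumptions of the same type used \(2\) and \(3\) times respectively could be contracted into a single assumption used \(5\) times:
\[\frac{2,3\in\mathsf{Cont}(m)\qquad\delta,2,3,\delta^{\prime}\mid\mathcal{M},m,m,\mathcal{M}^{\prime}\odot\Gamma,x\colon A,y\colon A,\Gamma^{\prime}\vdash_{m}b\colon B}{\delta,5,\delta^{\prime}\mid\mathcal{M},m,\mathcal{M}^{\prime}\odot\Gamma,x\colon A,\Gamma^{\prime}\vdash_{m}[x/y]b\colon B}\]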
### Categorical Semantics
Categorical semantics for dependent graded type systems are not well explored at the time of writing. The only approach known to us is the one presented for Atkey's QTT [1]. However, Katsumata [10] has developed a general approach to the categorical semantics using graded linear exponential comonads and formulates the coherence conditions on such comonads in a compact way using double categories. In our preliminary considerations on the categorical semantics of Glad, we have recovered Katsumata's approach exactly.
A common approach to categorical semantics of dependent type theory is through categories with families [5], and this is also the approach taken by Atkey via quantitative categories with families (QCwF's). A similar approach is to use comprehension categories [9]. We believe the latter to be slightly nicer, as the category of comprehension categories embeds into the 2-category of cartesian fibrations over a base category \(\mathcal{B}\), and this 2-category enjoys nice properties. Furthermore, a comprehension category can be very compactly described as a morphism of cartesian fibrations into the codomain fibration, with \(\mathcal{B}\) cartesian closed, \(\mathcal{B}^{\rightarrow}\) the category of arrows in \(\mathcal{B}\), and \(\operatorname{cod}\colon\mathcal{B}^{\rightarrow}\to\mathcal{B}\) the codomain fibration. In this regard, one minor criticism we have of Atkey's QCwF's is that they do not (or at least are not known to) arise as an instantiation of a more general categorical concept, like CwF's do with fibrations.
In category theory, it is often helpful to formulate specific concepts as instances of more general ones. Our goal for the future is to combine the general categorical pictures provided by Katsumata's graded linear exponential comonads and comprehension categories to develop categorical semantics for graded dependent type theory.
## Conclusion
In the present work we have presented two graded dependent type systems. The first was obtained by replacing the dependent fragment of \(\operatorname{LNL}_{\text{D}}\) with the graded dependent type system Grad. The second type system is a further generalization of the first, allowing different assumptions to be graded by grades coming from different semirings. This system resembles adjoint logic in its structure and employs a similar construct of _modes_. We proved meta-theoretic properties of these systems: For the former we proved substitution and presented a reduction relation which we showed to preserve grading and types. The latter system was proven to admit substitution and full graded contraction.
## Acknowledgments
This work is supported by the National Science Foundation under Grant No.: 2104535.
|
2305.07337
|
Observational predictions for Thorne-Żytkow objects
|
Thorne-$\.Z$ytkow objects (T$\.Z$O) are potential end products of the merger
of a neutron star with a non-degenerate star. In this work, we have computed
the first grid of evolutionary models of T$\.Z$Os with the MESA stellar
evolution code. With these models, we predict several observational properties
of T$\.Z$Os, including their surface temperatures and luminosities, pulsation
periods, and nucleosynthetic products. We expand the range of possible T$\.Z$O
solutions to cover $3.45 \lesssim \log \left(T/K\right) \lesssim 3.65$ and
$4.85 \lesssim \log \left(L/L_{\odot}\right) \lesssim 5.5$. Due to the much
higher densities our T$\.Z$Os reach compared to previous models, if T$\.Z$Os
form we expect them to be stable over a larger mass range than previously
predicted, without exhibiting a gap in their mass distribution. Using the GYRE
stellar pulsation code we show that T$\.Z$Os should have fundamental pulsation
periods of 1000--2000 days, and period ratios of $\approx$0.2--0.3. Models
computed with a large 399 isotope fully-coupled nuclear network show a
nucleosynthetic signal that is different to previously predicted. We propose a
new nucleosynthetic signal to determine a star's status as a T$\.Z$O: the
isotopologues $^{44}\rm{Ti} \rm{O}_2$ and $^{44}\rm{Ti} \rm{O}$, which will
have a shift in their spectral features as compared to stable
titanium-containing molecules. We find that in the local Universe (~SMC
metallicities and above) T$\.Z$Os show little heavy metal enrichment,
potentially explaining the difficulty in finding T$\.Z$Os to-date.
|
R. Farmer, M. Renzo, Y. Götberg, E. Bellinger, S. Justham, S. E de Mink
|
2023-05-12T09:31:34Z
|
http://arxiv.org/abs/2305.07337v2
|
# Observational predictions for Thorne-Żytkow objects
###### Abstract
Thorne-Zytkow objects (TZO) are potential end products of the merger of a neutron star with a non-degenerate star. In this work, we have computed the first grid of evolutionary models of TZOs with the MESA stellar evolution code. With these models, we predict several observational properties of TZOs, including their surface temperatures and luminosities, pulsation periods, and nucleosynthetic products. We expand the range of possible TZO solutions to cover \(3.45\lesssim\log\left(\mathrm{T_{eff}/K}\right)\lesssim 3.65\) and \(4.85\lesssim\log\left(\mathrm{L/L_{\odot}}\right)\lesssim 5.5\). Due to the much higher densities our TZOs reach compared to previous models, if TZOs form we expect them to be stable over a larger mass range than previously predicted, without exhibiting a gap in their mass distribution. Using the GYRE stellar pulsation code we show that TZOs should have fundamental pulsation periods of 1000-2000 days, and period ratios of \(\approx\)0.2-0.3. Models computed with a large 399 isotope fully-coupled nuclear network show a nucleosynthetic signal that is different to previously predicted. We propose a new nucleosynthetic signal to determine a star's status as a TZO: the isotopologues \({}^{44}\)TiO\({}_{2}\) and \({}^{44}\)TiO, which will have a shift in their spectral features as compared to stable titanium-containing molecules. We find that in the local Universe (\(\sim\)SMC metallicities and above) TZOs show little heavy metal enrichment, potentially explaining the difficulty in finding TZOs to-date.
keywords: stars: evolution - stars: abundances - stars: interiors - stars: variables
## 1 Introduction
Thorne-Żytkow objects (TZOs) are the hypothetical unique product of the merger of a neutron star (NS) with a non-degenerate star leading to the formation of a single object (Thorne & Zytkow, 1975, 1977). Depending on the mass of the combined star, it can be supported either via nuclear burning on or near the surface of the NS or by accretion onto the NS (Eich et al., 1989; Biehle, 1991).
A number of potential TZO candidates have been suggested: U Aqr (Vanture et al., 1999), HV 2112 (Levesque et al., 2014), HV 11417 (Beasor et al., 2018), and VX Sgr (Tabernero et al., 2021), though they are not without controversy (Coe & Pightling, 1998; Vanture et al., 1999; Tout et al., 2014; Maccarone & de Mink, 2016; Tabernero et al., 2021). Objects have also been proposed that may form a TZO (TIC 470710327; Eisner et al., 2022), or may be the remnants of a TZO (1E161348-5055; Liu et al., 2015). The issue with identifying a TZO is distinguishing it from other similar stars, such as asymptotic giant branch (AGB) or super-AGB (SAGB) stars (Biehle, 1991, 1994; O'Grady et al., 2020, 2023). TZOs are expected to appear as cool red supergiants (RSGs) (Cannon et al., 1992, hereafter C92), though many challenges remain in observing and modelling RSGs, complicating their analysis (Levesque et al., 2005; Davies et al., 2018). Observations to date have relied on indirect measurements, such as their predicted unique nucleosynthetic signatures (Biehle, 1994; van Paradijs et al., 1995). Future gravitational wave observations may provide an opportunity to detect either the formation of a TZO (Nazin & Postnov, 1995; Moran-Fraile et al., 2023), a rotating NS inside a TZO (DeMarchi et al., 2021), or a post-TZO black hole (BH) (Cholis et al., 2022).
TZOs are expected to be fully convective down to a "knee", where the structure then transitions to a flat temperature profile until the material reaches the surface of the NS (Eich et al., 1989). At the knee, nuclear-processed material will be mixed outwards into the convective envelope, while fresh hydrogen is mixed into the region below the knee. Material may also be burnt at the base of the convective envelope. It is expected that the material is burnt via the interrupted rapid proton process (irp-process; Cannon, 1993, hereafter C93), where rapid proton captures (Wallace & Woosley, 1981; van Wormer et al., 1994; Fisker et al., 2008) are interrupted when the material is mixed outwards by convection. This material then beta decays before being mixed back into the inner regions where additional proton captures can occur.
This irp-process can lead to the production of heavy elements
such as rubidium, strontium, yttrium, and molybdenum (C93). It is also expected that TZOs will be enriched in \({}^{40}\)Ca (Biehle, 1991, 1994), which in normal stars is difficult to produce and mix to the surface (Tout et al., 2014). Lithium is also expected to be produced via \({}^{3}\)He\((\alpha,\gamma)^{7}\)Be\((e^{-},\nu)^{7}\)Li (Cameron, 1955). In non-TZO stars the \({}^{7}\)Li would be destroyed by high temperatures inside the star, but in a fully convective star the \({}^{7}\)Be can be mixed outwards to cooler regions before it captures an electron (Podsiadlowski et al., 1995). However, the nucleosynthetic signal is not unique; there are also TZO imposters: SAGB stars (Kuchner et al., 2002), or stars that were polluted by the winds of an SAGB star and are now AGB stars themselves (Maccarone and de Mink, 2016), which may have nucleosynthetic signals similar to a TZO.
There are a number of potential formation mechanisms of TZOs: engulfment of the NS during a common envelope (Taam et al., 1978; Terman et al., 1995; Ablimit et al., 2022); the NS receiving a kick such that it has a direct impact with the companion (Leonard et al., 1994); or a dynamical merger either in a dense stellar cluster (Ray et al., 1987) or a triple system (Eisner et al., 2022). The formation of a TZO is likely to produce a transient event (Hirai and Podsiadlowski, 2022). Finally, TZOs are expected to die either when they run out of rp-seed material to burn forcing the material near the NS to heat up and enter the pair-instability region. This causes a runaway increase in the neutrino losses, causing the accretion onto the NS to no longer be Eddington limited, and the NS collapses into a BH (Podsiadlowski et al., 1995). This may possibly form a transient event (Moriya, 2018; Moriya and Blinnikov, 2021). Alternatively, they may eject their envelope via wind mass loss, leaving behind a "bare" NS (Bisnovatyi-Kogan and Lamzin, 1984).
The number of TZOs in the Galaxy will depend on the rate of mergers, the fraction of mergers that successfully produce a TZO, and the lifetime of the resulting TZO. Podsiadlowski et al. (1995) estimated a birth rate of \(>10^{-4}\) yr\({}^{-1}\) from common-envelope evolution, and \(\sim 10^{-4}\) yr\({}^{-1}\) from NS kicks. Renzo et al. (2019) calculated the rate of collision of a NS kicked during a SN with its companion to be \(\approx 10^{-4}\) times the core-collapse SN rate, while Ablimit et al. (2022) calculated formation rates for the merger of an ONeMg WD with a non-degenerate companion, where the core collapses and forms a NS, of between \(\sim 10^{-5}\)-\(10^{-4}\) yr\({}^{-1}\). Population synthesis calculations have shown an expected merger rate of a NS with a giant star as high as \(\approx 1\) per 100 core-collapse supernovae, depending on the assumed physics of the common envelope merger (Grichener, 2023). N-body simulations of globular clusters show dynamical mergers of a NS with a main sequence star at a rate of order 1 per \(\sim 4000\) NS-containing binaries (Kremer et al., 2020). The actual formation rate of TZOs will, however, depend on the probability that the merger is successful and does not lead to a transient or complete disruption of the system (Schrøder et al., 2020).
The first models of TZOs presented in Thorne and Zytkow (1975, 1977) were equilibrium models. Here the TZO was split into three regions: an outer region which encompasses the envelope down to the knee; a middle region between the knee and the surface of the NS; and an inner region for the NS itself. These models were not evolved in time; instead, static solutions were found based on the assumed NS and envelope properties. Biehle (1991) improved on the static models by including a simplified model of rp-burning, instead of just assuming CNO burning. Finally, C92 & C93 created a set of evolutionary models where the NS was modelled by altering the EOS such that the electrons become increasingly degenerate in the vicinity of the NS. However, these models ignored wind mass loss and changes in the composition.
In Section 2 we show how we build a TZO in MESA and discuss our default model. In Section 3 we show the evolution of a grid of TZO models. In Section 4 we make predictions for the expected pulsation signal. In Section 5 we explore the nucleosynthetic signal from TZOs. We discuss the suitability of our model assumptions in Section 6, and show the possible final fates of TZOs in Section 7. Finally, we discuss our results in Section 8 and conclude in Section 9.
## 2 Building a TZO model
Realistic modelling of the formation of a TZO is a complex multi-dimensional problem involving the merger of a NS with the core of another star. In this work, we ignore both the NS and the merger process. Doing so, we construct a spherically symmetric post-merger structure of the envelope of the TZO that is then allowed to evolve. There are three phases to creating our models: 1) producing an initial seed model, 2) modifying the inner boundary of our models, and 3) adjusting the global parameters of the model to match a chosen set of starting conditions. Our inlists and models are available at [https://doi.org/10.5281/zenodo.4534425](https://doi.org/10.5281/zenodo.4534425). An example model is also available in MESA's star/test_suite as of version a7c411b1.
### Seed model
Using MESA r22.11.1 (Paxton et al., 2011, 2013, 2015, 2018, 2019; Jermyn et al., 2023) we evolve a \(20\,\mathrm{M}_{\odot}\) star from the pre-main sequence until midway through the main sequence (MS). This model is used to initialise the stellar structure equations for all TZOs, independent of their final mass and composition. The precise choice of physics does not matter here, as the formation process will wipe away any knowledge the model had of its pre-TZO state.
### Adjusting the inner boundary
With our initial stellar model, we then model the NS by adjusting the inner boundaries of the stellar model. We gradually increase the inner mass, radius, and luminosity at the inner boundary of the model using MESA's relax_core method. This gradually changes the inner boundary while smoothly increasing the core density and core energy generation rate. By changing the inner mass boundary we do not change the total mass of the star, thus after forming a TZO, we assume the total mass remains constant.
### Starting conditions
With the base TZO formed, we then further alter the model to match our required starting conditions. We use MESA's relax_mass to add (remove) mass via wind accretion (loss) to alter the total mass of the TZO. We then change the initial metal composition \(Z_{\mathrm{init}}\), assuming a solar-scaled composition, and the initial helium fraction \(Y_{\mathrm{init}}\)with relax_composition. This allows us to build a TZO with an arbitrary initial mass and composition, which can be used to approximate the post-merger structure for different formation scenarios.
We allow for arbitrary helium compositions, as the merger between a NS and a star may occur at any point in the star's lifetime (ignoring selection effects in terms of which stars are likely to successfully merge and form a TZO). If a TZO forms from a NS kick or a dynamical merger then the age of the companion star is a free parameter1,
while mergers from a common envelope will likely, but not necessarily, involve the companion star having evolved beyond the MS. The mixing of the helium produced in the centre of a MS/post-MS star into the envelope will raise the helium mass fraction of the TZO envelope compared to a canonical RSG. Mass loss from the envelope of the star during the merger will likely preferentially remove H-rich material and thus increase the average helium content of the TZO. Thus when the star is fully mixed the average helium content over the entire star will increase. In the limit of when the entire hydrogen envelope is removed, a pure helium TZO could be formed if it survives the merger process. However, whether a TZO may form from the merger of a NS with an evolved star or not is even more uncertain than TZO formation with main sequence stars (Papish et al., 2015; Metzger, 2022).
At this point, we can also set other physics options, such as the choice of the nuclear network, accretion rate onto the NS, wind mass loss efficiency, or the mixing length. These are discussed further in Appendix A. Although orbital angular momentum is involved in the merger process, the amount retained post-merger is still an open question (e.g., Schneider et al., 2019). Thus, for simplicity, in this work we consider only non-rotating models.
### Default TZO model
Here we describe our default TZO model and the choices for uncertain physical and numerical quantities we have made. The effect of varying these choices is explored further in Appendix A. We assume a default NS mass of \(M_{\rm NS}=1.4\) M\({}_{\odot}\). The radius of a NS is still uncertain and depends on the chosen NS equation of state (EOS) (Steiner et al., 2010; Miller et al., 2019). A realistic assumption would be to assume \(R_{\rm NS}=10\)-\(20\) km for the radius of the NS and use that as the inner boundary of our model2. However, we have found that to be numerically difficult to model. Thus we move the inner boundary out to a radius of \(\approx 650\) km. This implies an average core density of \(\langle\rho_{c}\rangle=10^{9.3}\) g cm\({}^{-3}\), instead of \(\langle\rho_{c}\rangle\approx 10^{14}\) g cm\({}^{-3}\) for a \(10\) km NS.
Footnote 2: The inner boundary includes both the NS and the “middle” radiative burning region. However this radiative region is geometrically thin (\(\sim 100\)m) and contains little mass (\(\sim 10^{-8}\) M\({}_{\odot}\)) (Thorne & Zytkow, 1977).
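As a rough consistency check of the quoted average densities (a back-of-the-envelope sketch of ours, not part of the MESA setup), one can compute the mean density of the assumed 1.4 M\({}_{\odot}\) inner mass enclosed within the two boundary radii:

```python
import math

M_sun = 1.989e33          # g
M_ns = 1.4 * M_sun        # assumed NS mass (g)

def mean_density(radius_cm):
    """Average density of M_ns enclosed within a sphere of the given radius."""
    volume = 4.0 / 3.0 * math.pi * radius_cm**3
    return M_ns / volume

for label, r_km in [("inner boundary at ~650 km", 650.0), ("10 km NS", 10.0)]:
    rho = mean_density(r_km * 1.0e5)   # km -> cm
    print(f"{label}: log10(rho / g cm^-3) = {math.log10(rho):.1f}")

# Approximate output:
#   inner boundary at ~650 km: log10(rho) ~ 9.4  (consistent with the quoted 10^9.3,
#                                                 given the approximate boundary radius)
#   10 km NS: log10(rho) ~ 14.8
```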
To compensate for the fact that we do not fully calculate the structure of the TZO down to the surface of the NS, we inject energy at the inner boundary of our model. This energy injection approximates the missing energy generated from accretion and nuclear burning below the knee in the "middle" region. These assumptions are tested in Section 6. We parameterise this inner energy injection based on the Eddington luminosity onto the NS as:
\[L_{\rm knee}=\epsilon_{L}L_{\rm Edd} \tag{1}\]
where the Eddington luminosity, \(L_{\rm Edd}\), is:
\[L_{\rm Edd}=\frac{4\pi cGM_{\rm NS}}{\kappa_{c}} \tag{2}\]
with \(G\) the standard gravitational constant, \(M_{\rm NS}\) the mass of the neutron star, \(c\) the speed of light, and \(\kappa_{c}\) the opacity of the material at the inner boundary of the model. \(L_{\rm knee}\) is allowed to evolve with time as the mass of the NS and the opacity at the inner boundary of the TZO change. Finally, \(\epsilon_{L}\) is an efficiency factor, for which by default we use \(\epsilon_{L}=1.0\). We use \(L_{*}\) to denote the outer surface luminosity of our models.
By parameterising the energy in this way we can control how deep below the knee we model. The point where the local luminosity \(L>L_{\rm Edd}\) is the point where radiation can no longer carry the energy and the envelope becomes convective. Thus when \(\epsilon_{L}=1.0\) the inner boundary of the structure we are calculating corresponds to the knee, which is also the base of the convection zone (C93). Smaller values of \(\epsilon_{L}\) include more of the material below the knee in the calculation but come at a greater computational cost. By injecting energy in this way we are assuming that the material below the knee is actually able to generate that much energy. It is possible that some of our models would not be able to do this, in which case the corresponding TZO may not form. We are also assuming that the material below the knee is not mixed into the envelope, which may change the nucleosynthetic signatures.
We assume the NS grows at a rate set by the Eddington accretion rate on the NS:
\[\dot{M}_{\rm Edd}=\epsilon_{\dot{M}}\frac{L_{\rm Edd}}{c^{2}} \tag{3}\]
where \(\epsilon_{\dot{M}}\)is a scale factor which by default we set as \(\epsilon_{\dot{M}}=1.0\). This leads to typical accretion rates of \(\sim 10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\) for \(\epsilon_{\dot{M}}=1\).
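To give a feel for the numbers entering Equations 1-3, here is a rough sketch (ours; it uses a representative electron-scattering opacity rather than the Compton-scattering opacity MESA evaluates at the inner boundary) of \(L_{\rm Edd}\), \(L_{\rm knee}\), and \(\dot{M}_{\rm Edd}\) for the default \(M_{\rm NS}=1.4\) M\({}_{\odot}\):

```python
import math

# Physical constants (cgs)
G = 6.674e-8          # gravitational constant
c = 2.998e10          # speed of light
M_sun = 1.989e33      # g
L_sun = 3.828e33      # erg/s
sec_per_yr = 3.156e7

M_ns = 1.4 * M_sun    # default NS mass
kappa_c = 0.34        # cm^2/g; electron-scattering value for H-rich gas (an assumption --
                      # the Compton opacity at the hot knee is somewhat lower, raising L_Edd)
eps_L = 1.0           # efficiency factor in Eq. 1
eps_Mdot = 1.0        # scale factor in Eq. 3

# Eq. 2: Eddington luminosity at the inner boundary
L_edd = 4.0 * math.pi * c * G * M_ns / kappa_c
# Eq. 1: energy injected at the inner boundary
L_knee = eps_L * L_edd
# Eq. 3: Eddington-limited accretion rate onto the NS
Mdot_edd = eps_Mdot * L_edd / c**2

print(f"L_Edd  ~ {L_edd:.2e} erg/s  (log L/L_sun ~ {math.log10(L_edd / L_sun):.2f})")
print(f"L_knee ~ {L_knee:.2e} erg/s")
print(f"Mdot_Edd ~ {Mdot_edd * sec_per_yr / M_sun:.1e} M_sun/yr")
# With these assumptions: L_Edd ~ 2e38 erg/s (log L/L_sun ~ 4.7) and
# Mdot_Edd ~ 4e-9 M_sun/yr, of the same order as the rates quoted in the text.
```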
Our default model assumes an initial metallicity of \(Z_{\rm init}=10^{-4}\) and initial helium fraction Y\({}_{\rm init}=0.28\). Our default TZO has an initial total mass of M\({}_{\rm init}=5\) M\({}_{\odot}\), which includes a default NS \(M_{\rm NS}=1.4\) M\({}_{\odot}\), thus having an envelope mass of \(3.6\) M\({}_{\odot}\). After 100 years of evolution, we enable MESA's hydrodynamic capabilities (Paxton et al., 2015). This wait is to allow the star to return to gravothermal equilibrium after the TZO formation process. We also explore a series of model grids of TZOs with masses between M\({}_{\rm init}=5\)-\(20\) M\({}_{\odot}\), Z\({}_{\rm init}=10^{-5}\)-\(0.03\), and Y\({}_{\rm init}=0.28\)-\(0.65\).
We conservatively stop our models once the velocity of the surface layers exceeds 10% of the escape velocity. At this stage the models undergo RSG pulsations, where there are large-amplitude surface pulsations which cause spiral patterns (Heger et al., 1997) in a Hertzsprung-Russell diagram (HRD). This can change the stellar radius by factors of \(\approx 2\) over timescales of years. At this point the envelope begins expanding and contracting supersonically, and can reach \(\sim 40\)% of the escape velocity before we can no longer follow the evolution. These pulsations are resolvable in models of RSGs in MESA when the timestep becomes much shorter (\(\sim\)days) than the pulsation timescale (\(\sim\)years) (Paxton et al., 2013). The timestep becomes this small due to changes in the nuclear burning (see Section 4.2). While the evolution can be continued beyond this point by suppressing the hydrodynamics either globally or only in the cool outer envelope, the results become numerically unstable. More work is needed to determine how to couple the instabilities in the nuclear burning with the pulsational instabilities in the envelope.
We evolve the majority of our models using MESA's approx21.net nuclear network. This contains the main PP, CNO, and alpha capture reactions up to \({}^{56}\)Fe. This is a computational convenience. While TZOs are expected to undergo irp-burning, most of the energy is generated via CNO burning. We also test a large 399 nuclear network for a limited set of models. This network was built by first adding all stable isotopes up to Ru, then adding an additional neutron to the heaviest stable isotope for each element included. We then added all proton-rich isotopes with half-lives \(>1\)s. Finally, there was some hand tuning to make sure there were sufficient beta-decay pathways for all isotopes, as well as adding the light isotopes needed for PP, CNO, and the CNO breakout reactions. This network is available in the online Zenodo material.
We assume a mixing-length alpha parameter of \(\sigma_{\rm mll}=1.8\), and use
MESA's time-dependent convection (TDC) mixing treatment (Jermyn et al., 2023). TDC has been shown to improve numerical stability during dynamical phases of evolution while reducing to the standard Cox-MLT prescription (Cox and Giuli, 1968) over longer timescales (Jermyn et al., 2023). In models that have material below the base of the convection zone (when \(\epsilon_{L}<1.0\)), we assume that there is an additional weak mixing process occurring, with a diffusion coefficient of \(10^{6}\,\mathrm{cm^{2}/s}\) in the material below the knee. This helps to prevent compositional gradients from building up near the surface of the NS (Piro and Bildsten, 2007; Keek et al., 2009). We do not include convective overshoot, semiconvective, or thermohaline mixing, as the star is almost fully convective.
Our EOS for the stellar component of the TZO is a combination of freeEOS (Irwin, 2004), HELM (Timmes and Swesty, 2000), and Skye (Jermyn et al., 2021). We make no assumptions about the NS EOS, as we cannot make models with \(10\lesssim R_{\mathrm{NS}}/\mathrm{km}\lesssim 20\), where NS radii are expected to be. We use the wind prescription of van Loon et al. (2005), which is based on observations of cool, dusty AGB & RSG stars, with a wind scaling factor of \(\eta_{\mathrm{VL}}=1.0\). This leads to typical wind mass loss rates of \(\dot{M}\approx 10^{-5}\)-\(10^{-4}\,\mathrm{M_{\odot}}\,\mathrm{yr^{-1}}\).
MESA does not include any general relativistic (GR) correction factors. For this work, we add a GR correction factor to correct the continuity equation (Thorne, 1977; Ayasli and Joss, 1982). This prescription adjusts the gravitational constant, \(G\), as a function of the mass coordinate inside the star. All masses in this work are baryonic. Other MESA choices are specified in Appendix C and all options can be found in the online Zenodo inlists.
Our models have on average \(\approx 1700\)-\(2000\) mesh points and we artificially cap the maximum timestep at \(\delta t=2\times 10^{8}\,\mathrm{s}\) (\(\approx 6\) years). While models can take longer timesteps, this comes at the cost of an increased number of timesteps that need to be rejected and taken again with a smaller \(\delta t\), as MESA could not find a valid solution that satisfies our required numerical constraints. Variations in spatial and temporal resolution by factors of two (smaller or larger) lead to changes of order \(\Delta\log\left(\mathrm{L_{*}/L_{\odot}}\right)\approx 0.01\) dex.
## 3 Structure and Evolution of a TZO
Figure 1 shows the evolution of the surface temperature and luminosity of our default TZO models. The models are shown once they are at least 100 years old (when we turn on the hydrodynamics), where they start from a zero-age TZO (ZATZO) line. The endpoint of the models will be discussed further in Section 7; the endpoint shown is 1000 years before we can no longer follow the evolution. We can see that models with higher initial masses have higher initial luminosities and temperatures compared to lower-mass models. Higher-mass models also evolve to lower final surface temperatures and live longer.
Contrary to the models of C92 our models always evolve to lower luminosities. The total luminosity of our TZOs is a combination of the injected energy, \(L_{\mathrm{knee}}\), and nuclear burning above the knee. The nuclear burning above the knee provides \(\approx 1\%\)-\(10\%\) of the total luminosity, and this fraction decreases with time as the knee cools. Thus the evolution is driven by changes in \(L_{\mathrm{knee}}\), where \(L_{\mathrm{knee}}\propto M_{\mathrm{NS}}/\kappa_{c}\). The accretion rate on to the NS is \(10^{-9}\)-\(10^{-8}\,\mathrm{M_{\odot}}\,\mathrm{yr^{-1}}\), which combined with a lifetime of \(\approx 100,000\) years means the NS can only gain \(10^{-4}\)-\(10^{-3}\)\(\mathrm{M_{\odot}}\). Thus the evolution of our TZOs can be approximated as entirely driven by the increase in the opacity of material at the knee, which lowers \(L_{\mathrm{knee}}\).
The opacity for the material at the knee is provided by Compton scattering (Poutanen, 2017) and thus depends on the number of free
Figure 1: Hertzsprung-Russell diagram (HRD) of the TZO as a function of their initial mass, at fixed initial composition. Models are evolved with our default assumptions and a \(M_{\mathrm{NS}}=1.4\)\(\mathrm{M_{\odot}}\). Colours indicate the initial mass of the TZOs. The direction of evolution is always towards _decreasing_ luminosity. The grey boxes show the observational constraints on HV 2112 (Levesque et al., 2014) and VX Sgr (Tabernero et al., 2021). The horizontal dashed grey line shows the quoted luminosity of HV 11417 (Beasor et al., 2018), which has a variability amplitude of \(\Delta m=1.86\)(Soszynski et al., 2009). The grey lines show the models of C92, where evolution is always to _increasing_ luminosity. Arrows mark the midpoint of the TZOs lifetime.
Figure 2: Top panel: Temperature-density profiles of our default TZO models as a function of initial mass, at fixed initial composition. Models are evolved with our default assumptions and \(M_{\mathrm{NS}}=1.4\)\(\mathrm{M_{\odot}}\). Colours indicate the initial mass of the TZOs. Dash-dotted lines show Models A and C from figures 6 and 7 of C92. Bottom panel: The opacity as a function of the density inside the star; dash-dotted lines are from tables 1 and 2 of C92. Models shown are \(\approx 40,000\) years post TZO formation. The red line marks the edge of the pair-instability region where electron-positron production becomes significant.
electrons per nucleon. Three factors will drive the evolution of the number of free electrons: H burning into He will decrease the number of free electrons, proton captures in rp-burning will decrease the number of nucleons, and \(e^{\pm}\) production will increase the number of free electrons. If we turn off composition changes from nuclear burning (by setting dxdt_nuc_factor=0) while preserving the energy generated from nuclear burning, we find \(L_{\rm knee}\) still decreases. However, it does this at a much faster rate than when the composition is allowed to change. Therefore, nuclear burning slows the decrease in \(L_{\rm knee}\) but can not stop it. Thus, the changes in \(L_{\rm knee}\) are driven by the production of \(e^{\pm}\) pairs, while not entering the pair-instability region.
Our results differ from those of C92 due to differences in the chosen stopping criteria. Our models stop when supersonic pulsational instabilities form in the envelope, preventing MESA from continuing the evolution. The end condition for C92 is when the NS grows to the Oppenheimer-Volkoff (OV) mass limit (Oppenheimer & Volkoff, 1939) for their assumed EOS. As the NS mass increases, \(L_{\rm knee}\) will increase, increasing the total luminosity of the star, in agreement with the results of C92. However, our models stop significantly earlier than C92, such that the NS only gains \(10^{-4}\)-\(10^{-3}\)\(\rm M_{\odot}\), which does not cause \(L_{\rm knee}\) to increase by a significant amount. We note in passing that C92 state they can take single timesteps of \(\approx 10^{5}\) years, which is comparable to the entire lifetime of many of our models, which is between 50,000 and 200,000 years for the TZOs shown in Figure 1.
We can replicate the results of C92 by increasing the accretion rate onto the NS. If the accretion rate onto the NS is high enough that the mass of the NS can grow significantly (at least \(\approx 0.1\)\(\rm M_{\odot}\)), then \(L_{\rm Edd}\) will increase in time. This causes the TZO to become more luminous and more closely follow the tracks of C92, though still at higher surface temperatures and lower surface luminosities. This requires accretion rates between \(10^{3}\)-\(10^{4}\)\(\dot{M}_{\rm Edd}\) (see Appendix A). The remaining differences between our models can likely be attributed to changes in the microphysics (EOS, opacities, and nuclear reaction rates) and the choice of metallicity.
Figure 2 shows the temperature-density profile and opacity-density profile inside our TZOs arbitrarily at \(\approx 40,000\) years after TZO formation. As the initial mass of the model increases the density also increases and the models reach higher temperatures at the base of the convection zone. Our models do not form a knee, due to the energy injection at the inner boundary of the models, so we are only modelling material above the knee. The dash-dotted lines show models A & C of C92. We can see that our models are significantly denser (by a factor \(\approx 100\)) than previously predicted, though this is still a factor \(\approx 100\) less dense than a typical AGB/RSG.
None of our models ever evolve significantly into the pair-instability region (where \(\Gamma_{1}<4/3\)). Instead, models evolve _around_ the edge of the pair-instability region (See Section 6). Note that the temperature and density of the knee evolves with time, but always stays outside of the pair-instability region.
The bottom panel of Figure 2 shows the opacity of our models in colour, and models A & C of C92 with grey dash-dotted lines. When \(\log(\rho/\rm g\,cm^{-3})\leq 0.0\) our models have slightly higher opacities, as they are cooler for the same density. Once \(\log(\rho/\rm g\,cm^{-3})>0.0\) our models have higher opacities than model A, but lower opacities than those of model C (which enters the pair-instability region and so \(e^{\pm}\) production dominates the opacity).
In C93 they find a set of "high" and "low" mass solutions, which depends on how the energy is generated at the knee, and the production of \(e^{\pm}\) due to models entering the pair-instability region which changes the required \(L_{\rm knee}\) as the opacity increases (these are also the "giant" and "supergiant" models of Thorne & Zytkow, 1977). C93 finds a gap where, for certain masses, models are unable to produce sufficient energy to keep the envelope convective. Our models do not exhibit this "luminosity gap" (which can be mapped into a mass gap) in the mass distribution of TZOs. This is due to the higher density of our models, such that they do not enter the pair-instability region. Therefore our models would not be limited by the same mechanism as proposed by C93 and thus we should not expect a split into "high" and "low" mass solutions, even if we did model the knee. C92 also finds valid solutions for all masses, where C93 attributes this difference due to changes in the nuclear reaction rates used.
### Composition effects
Figure 3 shows the average metal fraction of our TZOs as a function of time since the TZO formed. These models start at \(Z=10^{-4}\) but can increase their total metal fractions by factors of 10-100 within \(\approx 10,000\)-100,000 years. Higher initial mass models evolve to higher total metal fractions, reaching near solar values for their total metallicity, though this is a significantly non-solar scaled composition. They also evolve faster to higher metallicities than the low-mass models and live longer. The ages shown are only lower limits on the lifetime of a TZO (see Section 7), but likely lead to lifetimes of \(\approx 10^{4}\)-\(10^{5}\) years.
Figure 4(a) shows the HRD of TZOs with varying \(Z_{\rm init}\) with initial masses of 5 and \(10\,\rm M_{\odot}\). The TZOs can be broken into two groups of behaviour, a low initial metallicity and a high initial metallicity behaviour. The exact initial metallicity at which this behaviour changes depends on the initial mass, for the \(5\,\rm M_{\odot}\) models this is at \(Z_{\rm init}\approx 3\times 10^{-4}\) while for the \(10\,\rm M_{\odot}\) models this occurs at \(Z_{\rm init}\approx 10^{-3}\). This limit is approximately the final metallicity shown for each mass in Figure 3 (though the match is not exact). Effectively the low-metallicity models evolve up to the mass dependent critical \(Z\), with the evolution converging to the same final surface temperature and luminosity once they reach the critical \(Z\). In contrast, the models that start with metallicities above this critical limit follow
Figure 3: The evolution with time of the average metal fraction (Z) as a function of the initial mass, at fixed initial composition. Models are evolved with our default assumptions and \(M_{\rm NS}=1.4\)\(\rm M_{\odot}\). Colours indicate the initial mass of the TZOs. The final ages of these models are only lower limits to the lifetime of a TZO.
their own \(Z\)-dependent evolutionary tracks. We can also see in Figure 4(a) that the high-metallicity models evolve along similar tracks independent of the initial mass (except for their starting luminosities). We also note that as the initial metallicity increases the 5 M\({}_{\odot}\) models are more likely to become numerically unstable.
As the metallicity increases (either due to a higher initial metallicity or due to metals produced from nuclear burning), the opacity at the knee increases (Xin et al., 2022). This decreases \(L_{\rm knee}\) and thus decreases the temperature at the knee. This in turn decreases the amount of heavy metal burning, though the rate of CNO burning increases with metallicity due to the increased amount of CNO material. Thus the nuclear burning luminosity is greater in the high metallicity models, but the rate of production of all metals is lower. _Hence at LMC and SMC metallicities TZOs may show little metal enrichment_. The high metallicity models can have lifetimes up to a factor \(\approx 2\) longer than shown in Figure 3.
Figure 4(b) shows variations in the initial helium fraction Y\({}_{\rm init}\), for a 5 and 10 M\({}_{\odot}\) TZO at \(Z_{\rm init}=10^{-4}\). As the initial helium fraction increases, the models become more luminous and have higher surface temperatures. Variation in the initial helium fraction may be expected due to variations in the evolutionary state of the companion when the TZO merger takes place. As Y\({}_{\rm init}\) increases, the final luminosity the models reach increases and occurs at higher surface temperatures. These changes are due to the knee decreasing in temperature as Y\({}_{\rm init}\) increases. This temperature decrease also decreases the rate of metal production.
The upper limit of Y\({}_{\rm init}\) = 0.65 in Figure 4(b) is due to numerical issues, which prevents the modelling of TZOs with Y\({}_{\rm init}\) = 0.65-0.95. While there are differences between H-rich and He-rich TZOs, they both occupy similar regions of a HRD, appearing as RSGs. It is interesting to speculate that, if more He-rich TZOs can form, there may exist a population of H-poor RSGs. While we are unaware of any He-lines detectable in a cool RSG atmosphere, H-lines are readily detectable. Thus weak or no detectable H-lines in a RSG spectrum could indicate a very He-enriched TZO.
### Comparison with proposed TZO candidates
Here we briefly compare observed TZO candidates with our models at both our default composition (Fig. 1) and for alternative compositions (Fig. 4). We emphasize that our default model has Z\({}_{\rm init}=10^{-4}\), i.e. much lower than that of the SMC or LMC.
HV 2112's temperature and luminosity can be well approximated by models with an initial mass in the range \(5\lesssim\) M\({}_{\rm init}\)/ M\({}_{\odot}\lesssim 8\) when adopting our default composition. HV 2112 can also be well fitted by our 5 M\({}_{\odot}\) TZO models with Z\({}_{\rm init}\approx 10^{-3}\), though for models at this metallicity we would not expect to see any metal enrichment (see Section 3.1). Thus, while HV 2112 was initially proposed as a TZO candidate due to its unusual chemical composition, _we believe this unusual composition now rules out HV 2112 being a TZO_.
VX Sgr's position in the HRD can be matched with a TZO model of mass M\({}_{\rm init}\gtrsim 8\) M\({}_{\odot}\) at our default metallicity, and is also consistent with our 10 M\({}_{\odot}\) models with modified composition.
HV 11417 has a luminosity which is lower than any of our default-composition models in Fig. 1. However, lower-mass NSs can allow models to drop to the luminosity of HV 11417 with our default composition (see Appendix A), and several of the higher-metallicity 5 M\({}_{\odot}\) models shown in Fig. 4 reach the luminosity of HV 11417 even for our canonical NS mass.
## 4 Pulsations
### Hydrostatic pulsations
RSGs may pulsate due to the \(\kappa\)-mechanism in hydrogen ionisation zones, with periods between \(\approx 100\)-\(1000\) days (Fox & Wood, 1982; Heger et al., 1997). Longer period pulsations may also occur, but observations can be hindered by a lack of long baseline photometry (Soraisam et al., 2018). Observations of RSG pulsations are complicated by irregular photometric variability, presumed to be caused by the interaction of large convection cells (which can have sizes of order the stellar radius) and the pulsation modes (Kiss et al., 2006).
Using the GYRE v6.0 (Townsend & Teitler, 2013; Townsend et al., 2018) stellar oscillation code, we compute the adiabatic pulsations
Figure 4: HRD for variations in the initial metallicity (panel a) and initial helium fraction (panel b). Solid lines denote a 5 M\({}_{\odot}\) TZO, while dashed lines denote a 10 M\({}_{\odot}\) TZO. Grey lines and grey boxes have the same meaning as in Figure 1.
for our TZOs with M\({}_{\rm init}=5\)-20 M\({}_{\odot}\) for the radial (\(l=0\)) modes. Figure 5(a) shows the time evolution of the pulsation period, Figure 5(b) shows a Petersen diagram (Petersen, 1973) of the period ratios, while Figure 5(c) shows the period-luminosity relationship.
In Figure 5(a) we can see that the fundamental mode is always greater than 1000 days, and this increases with time. The first and second overtones are \(\approx 500\) and 250 days respectively. The highest-mass models start with lower periods than the low-mass models. While exploring other parameters, we find that the period of the fundamental mode is predominantly sensitive to Y\({}_{\rm init}\), potentially providing a way to constrain the helium composition of a TZO.
Figure 5(b) shows a Petersen diagram of the period ratios compared with the fundamental period. We can see that in general the period ratio is small (\(\approx 0.2\)-0.3), decreases as the period increases (and thus decreases with time), and also decreases with initial mass. OGLE observations of the SMC and LMC (Soszynski et al., 2004) show that most objects with \(\log(\rm P/days)>3\) have period ratios \(\approx 0.1\). Therefore our TZOs exist in a region of parameter space distinct from most other objects, potentially providing an easy method to search for TZO candidates.
Finally, Figure 5(c) shows the period-luminosity relationship for our TZOs. We can see a tight relationship between their pulsation period and luminosity. Thus TZOs may prove useful for determining distances if they can be detected and identified.
HV 2112 has a measured set of pulsation periods from OGLE of 165.59, 604.4, and 887.3 days (Soszynski et al., 2011). From Figure 5(a) it is difficult to match the models to the measured periods of HV 2112. To match the 604.4 and 887.3 day periods will require the first and second overtones to increase their periods by a factor of \(\approx 2\).
Variations in other model parameters (Appendix A) lead to changes in the predicted pulsation periods. However, no parameter variation we have tested is able to fit the measured periods of HV 2112 over the time we have evolved our TZOs for (though our grid is not exhaustive). The pulsation periods are predominantly set by the initial mass, the mass of the NS, \(\alpha_{\rm mlt}\), and the initial helium fraction, while not being sensitive to the initial metallicity. The fundamental mode is most strongly affected by Y\({}_{\rm init}\), with Y\({}_{\rm init}\)=0.65 able to have \(P>10^{5}\) days, while the first overtone is more strongly affected by \(\alpha_{\rm mlt}\).
The closest match in our (non-comprehensive) grid is from the 5 M\({}_{\odot}\) TZO with \(\alpha_{\rm mlt}=3.0\), with a first overtone period of \(\approx 750\) days. Based on 3D models, values of \(\alpha_{\rm mlt}=3\)-4 have been proposed for RSG envelopes (Goldberg et al., 2022). We speculate that a slightly more massive TZO with a relatively high \(\alpha_{\rm mlt}\) would be able to better fit the pulsation periods of HV 2112. If HV 2112 were a TZO with a high \(\alpha_{\rm mlt}\), then we would predict that we have observed the first, second, and third overtones, and that the fundamental mode is currently undetected with a period in the 1500-3000 day range.
VX Sgr has a measured pulsation period of 757 days, with a possible longer period at 28,279 days (Tabernero et al., 2021). Though Tabernero et al. (2021) caution that the 28,279 days is comparable to the total time span over which VX Sgr has been observed, and thus may be an artifact of the observing period. There are also a number of shorter pulsations that vary in period and amplitude, around 600 days. The lack of a detected period between 1000-3000 days rules out most of our TZO models. The only model that can match the 28,279 day period (assuming that it is real) is our most helium-enriched model with Y\({}_{\rm init}=0.65\). This model has a fundamental mode that increases with age and can be between \(P\approx 10^{3}-10^{5}\) days. Thus if VX Sgr were a TZO it would imply the merger occurred with an evolved
Figure 5: Pulsation properties of our TZOs with default model assumptions. Panel a: the evolution of the pulsation period as a function of time for the fundamental (solid), first (dashed), and second overtones (dot-dashed). Panel b: Petersen diagram showing the ratio of the first overtone to the fundamental (solid), and the second to the fundamental (dashed). Panel c: The pulsation period as a function of the surface luminosity, lines have the same meaning as panel a. Colours indicate the initial mass of the TZO. Grey dashed lines in panel a mark the observed pulsation periods of HV 2112 (Soszynski et al., 2011).
star after significant amounts of helium had been produced in the companion.
HV 11417 has measured pulsation periods of 214, 793, and 1092 days (Soszynski et al., 2009, 2011). Based on Figure 5, the 1092 day period would imply it was a very young (\(\lesssim 2000\) year) and massive (\(\sim 20\) M\({}_{\odot}\)) TZO. However models in this mass and age range would be inconsistent with the reported luminosity of HV 11417 by \(\sim 0.6\) dex. Thus we conclude HV 11417 is inconsistent with our TZO models.
### Oscillations in the nuclear burning rate
Figure 6 shows the total nuclear energy integrated over our default 5 M\({}_{\odot}\) TZO for the \({}^{12}\)C(p,\(\gamma\))\({}^{13}\)N and \({}^{14}\)N(p,\(\gamma\))\({}^{15}\)O reactions. These two rates constitute 85% of the nuclear energy generated in our models (with the approx21.net network). As the approx21.net network approximates CNO burning, \({}^{12}\)C(p,\(\gamma\))\({}^{13}\)N is a proxy for the chain \({}^{12}\)C(p,\(\gamma\))\({}^{13}\)N(e\({}^{+}\nu\))\({}^{13}\)C(p,\(\gamma\))\({}^{14}\)N, while \({}^{14}\)N(p,\(\gamma\))\({}^{15}\)O provides the chain \({}^{14}\)N(p,\(\gamma\))\({}^{15}\)O(e\({}^{+}\nu\))\({}^{15}\)N.
While the TZOs are younger than \(\approx 24,000\) years the variations in the reaction rates are small and approximately the size of the linewidth in Figure 6. Once the TZO reaches \(\approx 24,000\) years the reaction rates begin showing a medium level of variability, with a short period variability of a few years embedded in a longer \(\approx 100\) year period cycle (which is shown in the inset). This leads to a variation in the surface luminosity of \(\Delta\log\left(\mathrm{L}/\mathrm{L}_{\odot}\right)\approx 0.001\) dex. Then when the TZO reaches \(\approx 50,000\) years, the model undergoes large variations in the reaction rates. The energy generation rate increases by a factor \(\approx 15\) to \(\approx 10^{38}\) erg/s and the surface luminosity can vary by \(\Delta\log\left(\mathrm{L}/\mathrm{L}_{\odot}\right)\approx 0.5\) dex. This is when we can no longer follow the evolution with hydrodynamics included in the model.
We can locally suppress the hydrodynamics in the envelope by using velocity_logt_lower_bound=5, which turns off the hydrodynamics in zones where the local temperature is \(\log\left(\mathrm{T}/\mathrm{K}\right)<5.0\); this allows the evolution to continue for another \(\approx 40,000\) years. As the TZO evolves past this point the nuclear energy generation rate drops, returning to its pre-large-amplitude burning values, before once again becoming numerically unstable, at which point the evolution ceases.
### Hydrodynamic pulsations
Convective envelopes are known to be susceptible to dynamical instabilities that can lead to mass loss (Paczynski, 1969; Tuchman et al., 1978, 1979). Given a suitable excitation these dynamical instabilities can lead to mass loss rates of \(\sim 10^{-3}\)M\({}_{\odot}\) yr\({}^{-1}\) through pulsational mass loss events (Clayton et al., 2017). Here we show what happens to our TZOs as they undergo RSG pulsations.
Figure 7 shows the HRD of our 5 M\({}_{\odot}\) default TZO during this dynamical phase. Note this is only a representative plot; the exact shape of the spiral and the number of cycles we can follow depend sensitively on the input physics and numerical resolution. We can see that as the star evolves the change in surface temperature and luminosity increases with each additional cycle.
Figure 8 shows the radial velocity of the surface layers of our default 5 M\({}_{\odot}\) TZO. As the TZO evolves the surface layers can reach velocities of \(\approx 10\) km s\({}^{-1}\), and during the final contraction phase the motion becomes supersonic. These velocities are of a similar magnitude to those found in Yoon & Cantiello (2010) for the evolution of massive stars with pulsation-driven superwinds. This suggests that we should expect pulsation-driven mass loss in a TZO. Yoon & Cantiello (2010) find the mass loss rate may increase up to \(10^{-2}\) M\({}_{\odot}\) yr\({}^{-1}\). These RSG pulsations are resolved when the timestep of our models drops significantly below the pulsation period. This occurs in our TZOs when the nuclear burning rate spikes, as seen in Figure 6, but can occur at earlier times if we artificially enforce a short timestep. If the mass loss rates are at the upper end of those predicted by Yoon & Cantiello (2010) then this implies a lifetime of \(\approx 100\)-\(1000\) years. However, our wind mass loss is based on that of van Loon et al. (2005), which is derived from observations of RSGs and thus has this mass loss built into the time-averaged mass-loss rates. More work is needed to understand the wind/pulsational mass loss rates in RSGs, as this will set the lifetime of a TZO.
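As a rough cross-check of this lifetime argument, one can simply divide the remaining envelope mass by an assumed pulsational mass-loss rate. The sketch below is not part of our MESA pipeline; the envelope masses (those quoted later in Section 7, before the pulsations begin) and the Yoon & Cantiello (2010) rate range are the assumed inputs.

```python
# Order-of-magnitude estimate (not a MESA calculation): envelope survival time
# if pulsation-driven mass loss proceeds at the Yoon & Cantiello (2010) rates.
envelope_mass = {"5 Msun TZO": 2.5, "20 Msun TZO": 5.5}    # Msun, before pulsations begin
mass_loss_rates = {"moderate": 1e-3, "superwind": 1e-2}    # Msun/yr (assumed range)

for label, m_env in envelope_mass.items():
    for regime, mdot in mass_loss_rates.items():
        print(f"{label}, {regime} rate: envelope removed in ~{m_env / mdot:,.0f} yr")
# At the superwind rate the envelope is gone in ~250-550 yr, consistent with
# the ~100-1000 yr lifetime quoted above; at the lower rate it is a few kyr.
```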
Figure 6: The nuclear energy released, integrated over the TZO for the two most energetic reactions in our default 5 M\({}_{\odot}\) TZO evolved with the approx21.net. These two reactions provide \(\approx 85\)% of the stars’ nuclear energy. The inset shows a zoom-in where the reaction rates begin to undergo medium oscillations. These oscillations are still within the TZO’s hydrostatic phase of evolution. At \(\approx 50,000\) years the nuclear energy generation rate increases rapidly and exceeds the plot limits while becoming hydrodynamic. Text labels denote the approximate boundaries between the different amplitudes of the pulsations.
Figure 7: The surface temperature and luminosity for our default 5 M\({}_{\odot}\) TZO during the dynamical pulsations. This plot shows \(\approx 180\) years of evolution.
## 5 Nucleosynthesis
Figure 9 shows the surface composition relative to the initial composition for TZOs evolved with a fully coupled 399 isotope nuclear network at \(\approx 10,000\) years after formation. Figure 9(a) shows the composition as Y\({}_{\rm init}\) is varied, for a fixed M\({}_{\rm init}=5\) M\({}_{\odot}\) and Z\({}_{\rm init}=10^{-4}\). Figure 9(b) shows the composition as M\({}_{\rm init}\) is varied for fixed Y\({}_{\rm init}=0.28\) and Z\({}_{\rm init}=10^{-3}\). A HRD for the models in Figure 9(a) can be found in Appendix A. We include all isotopes produced in our models, without taking into account any radioactive decay. A comparison with the results of C93 can be found in Appendix B. We note, as a word of caution, that this abundance pattern is sensitive to the choice of the initial composition, as the initial metals act as seed nuclei for the rp-burning.
For the Z\({}_{\rm init}=10^{-4}\) models we find that for \(Z<20\), there is no enhancement relative to the initial composition. For \(21\leq Z\leq 25\) (Sc to Mn) there is an enhancement that increases as Y\({}_{\rm init}\) increases, while both Fe and Ca are not enhanced. For higher atomic numbers the enhancement relative to the starting composition becomes larger, up to \(\approx 10^{5}\) times their starting values. As Y\({}_{\rm init}\) increases the most enhanced element decreases in atomic number: for Y\({}_{\rm init}=0.28\) it is Br, for Y\({}_{\rm init}=0.4\) it is As, and for Y\({}_{\rm init}=0.6\) it is Ga. After this peak element, there is a rapid decline in the production of heavier elements. Mo is only enhanced in our Y\({}_{\rm init}\)=0.28 model and is thus not a good element for determining a TZO status.
These differences are due to the knee temperature decreasing as Y\({}_{\rm init}\) increases. This lowers the maximum mass of an element that can be produced in the irp-process. We reconfirm the previous findings of Biehle (1991) and show that we can have significant enhancement of the elements Rb, Sr, Y, Zr and Mo, but this depends sensitively on the initial He composition. We find that models with very high initial helium fractions can lack Rb. Thus there could be a population of helium-rich TZOs without Rb, possibly explaining the difficulty in confirming the detection of a TZO with the Rb mass fractions alone.
For the Z\({}_{\rm init}=10^{-3}\) models we see almost no enhancement of metals at all. There is a slight enhancement of N due to CNO burning as well as an enhancement of S-Ar for the M\({}_{\rm init}=5\) M\({}_{\odot}\) model, and enhancements in Ca-Mn for the M\({}_{\rm init}=10\) M\({}_{\odot}\) and \(15\) M\({}_{\odot}\) models. However there is no production of elements with Z \(>25\). This is due to the higher opacity at the knee, lowering the knee luminosity, which lowers the knee temperature. Thus, at metallicities comparable to the SMC, LMC, or MW, TZOs are unlikely to be distinguishable from non-TZO stars based on their surface abundances of elements such as Rb or Mo alone.
Table 1 shows the surface mass-fraction ratios for a selected set of elements for the Z\({}_{\rm init}=10^{-4}\) models. Also shown is a comparison with Model A of C93. We can see that the estimates for Ni/Fe are similar for all initial helium fractions. Estimates for Rb/Ni and Rb/Fe depend on the initial helium fraction but can be consistent with C93 for \(0.28<\) Y\({}_{\rm init}<0.4\) (C93 assumes Y\({}_{\rm init}\)=0.32). Finally, Mo/Fe is extremely different, with our models predicting very little Mo, and with the value decreasing as Y\({}_{\rm init}\) increases. Other ratios including K/Ca and Ca/Fe are a factor 10 smaller than C93.
Figure 10 shows how the isotopic composition will change over time, by showing the reaction flow for the dominant reaction for each isotope. Diagonal lines to the lower right are due to beta decays. There are insufficient free neutrons for the \((n,p)\) reactions to dominate as both \({}^{13}\)C and \({}^{22}\)Ne have mass fractions \(X\lesssim 10^{-17}\) at the base of the envelope. Vertical lines indicate a proton capture, while horizontal lines denote neutron captures. \(\alpha\)-captures are included in the network but are rarely significant in this mass range. Isotopes with no lines are included in the network but have all their reaction rates \(<10^{10}\) reactions/s.
Isotopes are predominantly decaying via beta-decays faster than proton captures can increase the atomic number of the isotope. Thus the flow is towards more neutron-rich isotopes, rather than more proton-rich isotopes. This prevents significant rp-burning from occurring and causes the limited amount of Mo shown in Figure 9(a). The maximum atomic number that is reached before beta-decays outpace the proton captures will depend on the peak temperature reached by the TZO at the base of the convection zone (Fisker et al., 2008). Thus, given Figure 2, the peak atomic number reached depends on the initial mass as well.
We have tested our choice of nuclear network in Appendix B. Changing our nuclear network to one that closely matches that of C93 has little effect on the results. However, by extending to higher atomic numbers we can see a turn-over in the C93 mass fraction pattern similar to the one we find, except that for C93 it occurs at higher atomic numbers.
Neutrino losses from our TZOs are \(\approx 10^{37}\)erg/s. This is dominated by the losses due to beta-decays, while the thermal neutrino losses are negligible. This is only a lower limit on the neutrino flux, as there may be additional neutrino emissions from the material below the
\begin{table}
\begin{tabular}{c c c c c} \hline Element & Y\({}_{\rm init}=0.28\) & Y\({}_{\rm init}=0.40\) & Y\({}_{\rm init}=0.60\) & C93 \\ ratio & & & & Model A \\ \hline Rb/Ni & 2.18E+00 & 3.00E-03 & 9.94E-06 & 1.82E-01 \\ Rb/Fe & 3.38E-01 & 1.07E-03 & 8.04E-06 & 1.48E-01 \\ Li/Ca & 1.55E-04 & 1.49E-04 & 1.23E-04 & \\ Li/K & 2.67E-03 & 2.67E-03 & 2.64E-03 & \\ Mo/Fe & 3.99E-04 & 4.44E-06 & 3.36E-06 & 1.23E-01 \\ Ni/Fe & 1.55E-01 & 3.57E-01 & 8.08E-01 & 8.13E-01 \\ K/Ca & 5.80E-02 & 5.59E-02 & 4.64E-02 & 1.58E-01 \\ Ca/Fe & 4.91E-02 & 4.58E-02 & 4.23E-02 & 8.91E-01 \\ Rb/Zr & 1.76E+01 & 5.99E+01 & 5.97E-01 & 4.17E+00 \\ \hline \end{tabular}
\end{table}
Table 1: Ratio of surface mass fractions for selected elements for our 5 M\({}_{\odot}\) TZOs, evolved with the large 399 isotope nuclear network, at 10,000 years after TZO formation at Z\({}_{\rm init}=10^{-4}\). The final column contains the Model A data from C93, which only provides mass fractions for carbon and heavier elements and thus lacks lithium for comparison.
Figure 8: The radial velocity of the surface layers of our 5 M\({}_{\odot}\) default TZO over time. The right-hand axis shows the approximate fraction of the escape velocity that the material reaches. The x-axis is the time until we can no longer follow the evolution with MESA.
knee which we do not model. This neutrino flux is comparable to a solar-mass star at the tip of the red giant branch (Farag et al., 2020), and is unlikely to be detectable with current detectors (Patton et al., 2017, 2017).
### \({}^{44}\)TiO\({}_{2}\) and \({}^{44}\)TiO
While the predicted unique nucleosynthetic signal of a TZO has been used previously to make claims for the detection (or not) of a TZO, it is not without controversy (Tout et al., 2014). Thus we propose a new, more constraining nucleosynthetic signal, namely the detection of molecules of TiO\({}_{2}\) and TiO containing the radioactive isotope \({}^{44}\)Ti.
\({}^{44}\)Ti has a half life of \(\approx 60\) years (Audi et al., 2003; Ahmad et al., 2006) and is usually found in the ejecta of core-collapse supernovae (Iyudin et al., 1994). Typical core-collapse supernovae have ejecta of \(10^{-5}\)-\(10^{-4}\) M\({}_{\odot}\) of \({}^{44}\)Ti rich material (Magkotsios et al., 2010). This suggests that the detection of \({}^{44}\)Ti in a TZO could be the result of contamination from the supernovae that formed the NS initially. However, given its short half-life, unless we detect a TZO shortly after the birth of the NS (when the SN remnant should still be visible) then the detection of \({}^{44}\)Ti requires a continuous production site.
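A quick way to see why this matters is to track how fast any supernova-deposited \({}^{44}\)Ti disappears; the short script below is purely illustrative (the ages are arbitrary choices, not model output) and uses only the \(\approx 60\) yr half-life quoted above.

```python
half_life = 60.0  # yr, 44Ti half-life (Audi et al. 2003; Ahmad et al. 2006)

def surviving_fraction(age_yr, t_half=half_life):
    """Fraction of an initial 44Ti inventory that survives after age_yr years."""
    return 2.0 ** (-age_yr / t_half)

for age in (600, 3_000, 10_000):
    print(f"after {age:>6,} yr: {surviving_fraction(age):.1e} of the initial 44Ti remains")
# after    600 yr: ~1e-03
# after  3,000 yr: ~1e-15
# after 10,000 yr: ~1e-50 -> any 44Ti seen at typical TZO ages must be freshly produced
```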
In non-TZO stars, \({}^{44}\)Ti is normally produced deep in the stellar core, via \({}^{40}\)Ca(\(\alpha,\gamma\))\({}^{44}\)Ti during the late stages of stellar evolution and during explosive burning episodes (Timmes et al., 1996). Coupled with this, is that we need to be able to efficiently mix the \({}^{44}\)Ti to the surface of the star, which is difficult to achieve unless the convective envelope penetrates deep into the core which is unexpected when \({}^{40}\)Ca(\(\alpha,\gamma\))\({}^{44}\)Ti is active.
For reference we computed a M\({}_{\rm init}=5\) M\({}_{\odot}\) AGB star, M\({}_{\rm init}=8\) M\({}_{\odot}\) SAGB star and M\({}_{\rm init}=10\) M\({}_{\odot}\) massive star with \(Z_{\rm init}=0.00142\) (\(\sim Z_{\odot}/10\)). For computational reasons we used a truncated version of our 399 isotope nuclear network. We took our 399 network and removed all elements heavier than Fe, as we are only interested in the Ti isotopes, which brings the total isotope count down to 172.
The 5 M\({}_{\odot}\) model is evolved until \(\approx 40\) thermal pulses while the 8 and 10 M\({}_{\odot}\) models are evolved up to carbon ignition. We find the maximum surface mass fraction ratio \({}^{44}\)Ti/\({}^{48}\)Ti remains effectively zero (and always lower than the numerical tolerance imposed on the nuclear network solver) throughout the evolution.
Figure 10: The total isotopic flow rate due to nuclear reactions for a selected set of isotopes in our default 5 M\({}_{\odot}\) TZO evolved with a 399 isotope nuclear network. The coloured lines show the logarithm of the net number of reactions per second for each isotope, showing only the most significant reaction, taking into account both forward and reverse reactions and summed over the entire TZO. Arrows show the direction of flow. The model shown is \(\approx 10,000\) years post-TZO formation. Isotopes with no arrows have total reaction rates less than the lower limit of \(10^{10}\) reactions/s. The atomic mass is quoted for each isotope in each box.
Figure 9: The surface composition relative to the initial composition at 10,000 years post-TZO formation. Left panel: The composition as a function of the proton number Z for a 5 M\({}_{\odot}\) TZO and \(Z_{\rm init}=10^{-4}\) with Y\({}_{\rm init}=0.28\) (red square), Y\({}_{\rm init}=0.4\) (orange triangle), Y\({}_{\rm init}=0.6\) (blue star). Right panel: The composition as a function of the proton number Z for Y\({}_{\rm init}=0.28\) and \(Z_{\rm init}=10^{-3}\), with M\({}_{\rm init}=5\) M\({}_{\odot}\) (red square), M\({}_{\rm init}=10\) M\({}_{\odot}\) (orange triangle), and M\({}_{\rm init}=15\) M\({}_{\odot}\) (blue star). All models were evolved with a fully coupled 399 isotope nuclear network. Vertical lines mark elements that may be useful for detecting TZOs. Note the change in the \(y\)-scale between panels. Data tables are available in the online Zenodo material with the time evolution of the composition.
However, distinguishing different isotopes directly in a spectrum can be challenging. Thus we propose looking for isotopologues of TiO\({}_{2}\) and TiO. Our TZOs almost always have surface temperatures log (\(T_{\rm eff}/K\)) \(<\) 3.6, where TiO\({}_{2}\) and TiO molecules are expected to form (Pavelenko et al., 2020); thus there should be a significant amount of TiO\({}_{2}\) and TiO in the atmospheres of the TZOs.
The most common Ti isotope is \({}^{48}\)Ti, with contributions from \({}^{46-50}\)Ti (Asplund et al., 2009). Detecting the different isotopologues has been achieved for these isotopes, due to a shift in the molecular lines as the mass of the molecules TiO\({}_{2}\) and TiO changes with the different isotopic compositions (Breier et al., 2019). The size of this shift scales with the change in the molecular mass between the isotopologues (Herzberg, 1950). Thus a molecule containing \({}^{44}\)Ti will have a larger shift in its molecular lines than the already detectable isotopologues containing \({}^{46-50}\)Ti (Pavelenko et al., 2020; Serindag et al., 2021). It may also be possible to use millimetre/submillimetre observations to detect \({}^{44}\)TiO given the detection of other TiO isotopologues (Kaminski et al., 2013; Lincowski et al., 2016).
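To give a feel for the size of the effect, a back-of-the-envelope estimate using the standard diatomic scaling \(\omega\propto\mu^{-1/2}\) (Herzberg, 1950) and integer mass numbers is sketched below; this is only indicative and is no substitute for proper line lists of the \({}^{44}\)Ti-bearing molecules.

```python
import math

# Rough isotopic shift of TiO vibrational frequencies relative to 48TiO,
# using omega ~ mu^(-1/2) and integer mass numbers (an approximation).
M_O = 16.0

def reduced_mass(m_ti, m_o=M_O):
    return m_ti * m_o / (m_ti + m_o)

mu_48 = reduced_mass(48.0)  # the most abundant isotopologue
for m_ti in (44.0, 46.0, 50.0):
    shift = math.sqrt(mu_48 / reduced_mass(m_ti)) - 1.0
    print(f"{int(m_ti)}TiO: fractional frequency shift ~ {shift:+.2%}")
# 44TiO: ~ +1.1%, roughly twice the ~ +0.5% shift of 46TiO, i.e. 44TiO lines
# sit further from the 48TiO lines than the isotopologues already detected.
```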
In our TZO models \({}^{44}\)Ti is produced in the CaScTi cycle via \({}^{43}\)Sc (\(p\), \(\gamma\)) \({}^{44}\)Ti (Fisker et al., 2008). The \({}^{44}\)Ti is then mixed outwards before it can capture another proton. Figure 11 shows the surface mass-fraction ratio \({}^{44}\)Ti/\({}^{48}\)Ti. Depending on Y\({}_{\rm init}\), Z\({}_{\rm init}\), and M\({}_{\rm init}\) the ratio starts between 10\({}^{-4}\)-10\({}^{-2}\) and decreases with time. On long timescales, we can see that the ratio tends towards \(X\)(\({}^{44}\)Ti/\({}^{48}\)Ti) \(\approx\) 10\({}^{-4}\). This is due to increasing amounts of \({}^{48}\)Ti (other stable Ti isotopes are also increasing with time), while the absolute amount of \({}^{44}\)Ti remains approximately constant. For the TZOs with Z\({}_{\rm init}=10^{-3}\) the ratio decreases with M\({}_{\rm init}\). A 5 M\({}_{\odot}\) TZO with Z\({}_{\rm init}=10^{-3}\) did not show any enrichment of \({}^{44}\)Ti. These ratios are much greater than the values found for the 5, 8, and 10 M\({}_{\odot}\) non-TZO stars.
Footnote 3: While MESA’s approx21 net does contain \({}^{44}\)Ti, it is only produced via \({}^{40}\)Ca(\(\alpha\), \(\gamma\)) \({}^{44}\)Ti. Thus the approx21 net shows significantly smaller amounts of \({}^{44}\)Ti than when using our large nuclear network.
Table 2 shows the absolute mass fractions of the Ti isotopes in our large nuclear network. We can see that the amount of all Ti isotopes slightly increases with increasing initial helium fraction. There is however a factor \(\sim\) 10 increase in \({}^{44}\)Ti as Y\({}_{\rm init}\) increases. Given the small absolute mass of \({}^{44}\)Ti, which is much less than that typically seen in supernovae ejecta (Magkotsios et al., 2010), it may be difficult to directly detect the gamma-rays from the decay of \({}^{44}\)Ti.
We note though that before this becomes a viable method it is likely we will need improved theoretical models of the molecular lines of \({}^{44}\)TiO\({}_{2}\) and \({}^{44}\)TiO, as to date experiments have concentrated on the more common and stable isotopes of titanium, namely 46-50 (Brunken et al., 2008; Breier et al., 2019; McKemmish et al., 2019; Witsch et al., 2021).
## 6 Suitability of model assumptions
The models presented in this work depend on two key approximations that make the TZO models computationally feasible. Both approximations deal with the fact that we cannot model down to the surface of the NS while simultaneously including the RSG envelope. First, we assume a log (\(\rho_{c}/\)g cm\({}^{-3}\)) = 9.3, which implies an effective R\({}_{\rm NS}\)\(\approx\) 600 km; secondly, we inject additional energy at the base of our models. We will now test those assumptions, as far as possible, to determine their effect on our results. For this test, we turn off the hydrodynamics, as it can cause numerical convergence issues.
Figure 12 shows the total luminosity of the TZO as a function of the age since formation, for variations in the core density (12(a)), and the injected energy (12(b)). Firstly we can see that both sets of models evolve to lower luminosities over time. Figure 12(a) compares models with \(\rho_{c}\) between 10\({}^{9}\) and 10\({}^{12}\) g cm\({}^{-3}\) (this equates to effective NS radii between \(\approx\) 90-900 km). It shows that as the core density increases the total luminosity increases. This is due to a lower opacity at the knee, causing \(L_{\rm knee}\) to increase. Figure 12(b) also shows that as the injected energy decreases, the surface luminosity also decreases. Thus, surprisingly, over the range of parameters explored here, the two approximations act with different signs but similar magnitudes, in terms of the total luminosity. Therefore they approximately cancel out. Comparing the surface metal mass fraction, the low \(\epsilon_{L}\) models converge to values about a factor 2 greater than the \(\epsilon_{L}=1\) models, while the highest core density models have a factor 2 lower surface metal mass fractions. Again, our two assumptions are approximately cancelling each other out.
The large luminosity jumps, after which the models evolve along a different luminosity track, are due to the lack of hydrodynamics. In models with hydrodynamics included, those jumps occur at the time when the energy generated by nuclear burning spikes, and the models would normally have been stopped at that point because large amplitude surface pulsations become resolved.
In Figure 13 we show the temperature-density profiles inside the TZOs as a function of \(\epsilon_{L}\). As \(\epsilon_{L}\) decreases, less energy is injected into the model, therefore the TZO must provide more of the energy itself. As the mass of the NS varies little in our models this is also equivalent to changing the opacity of the material at the base of the convection zone. The material below the knee then moves to higher temperatures and densities to provide the necessary energy via nuclear burning and mass accretion to support the star. The region below the knee does not evolve at a constant temperature, as previously found in Thorne & Zytkow (1975, 1977) and C92. Instead as the temperature increases, the material avoids the pair-instability region by evolving around
the instability region. The T-\(\rho\) tracks converge for \(\epsilon_{L}\leq 0.1\), until \(\log(\rho/\mathrm{g\,cm^{-3}})\approx 6.0\).
At \(\log(\rho/\mathrm{g\,cm^{-3}})\approx 6.0\) the core begins evolving to cooler temperatures. At this temperature both hydrogen and helium are depleted in the material below the knee. With a very low mass fraction of carbon and oxygen, due to the low initial metallicity, and our use of the approx21.net nuclear network, the nuclear energy generation rate goes to 0 as there are no available nuclear reactions. The inner regions then begin to cool due to an increase in thermal neutrino losses as the density increases.
Finally, the increase in temperature at \(\log(\rho/\mathrm{g\,cm^{-3}})\approx 8.0\) is due to a new convection zone setting in near the inner boundary of the model. This is due to the (small) amount of energy that is still being injected into the model. Models with less injected energy cool further and reach higher densities before showing this uptick.
## 7 Final fate
Once the large amplitude surface pulsations begin the computational timescale decreases and we begin evolving the models on the pulsation timescale. This leads to timesteps of order \(10^{2}\)-\(10^{3}\) seconds, which is much smaller than the \(10^{8}\) seconds we take during the normal phase of a TZO's evolution, and thus it is infeasible to evolve over longer time frames. Numerical convergence issues also occur as shocks form in the outer envelope, preventing the models from being evolved for more than a few tens of years at this point.
Thus we can only speculate what happens next. The TZOs are undergoing large pulsations, and in other \(\epsilon\)-mechanism pulsators this is expected to lead to pulsation-driven mass loss (Barraffe et al., 2001; Nakauchi et al., 2020) or, as in normal RSGs, pulsations may lead to pulsational driven superwinds (Yoon & Cantiello, 2010). This could rapidly decrease the envelope mass over \(\approx 100\) years, leaving behind a bare NS. Or perhaps the TZO will undergo a neutrino runaway when nuclear burning ceases and energy losses from neutrinos cause the TZO to collapse (Podsiadlowski et al., 1995), leaving a BH behind, though this seems less likely as our models avoid the pair-instability region.
If the envelope was entirely ejected from the NS, it seems unlikely that the NS mass will have increased significantly enough to be visible as a higher mass NS. The accretion rate onto the NS is \(\approx 10^{-9}\)-\(10^{-8}\,\mathrm{M_{\odot}\,yr^{-1}}\). In our models, this leads to the NS gaining at most
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline M\({}_{\mathrm{init}}\) & Y\({}_{\mathrm{init}}\) & Z\({}_{\mathrm{init}}\) & \({}^{44}\)Ti & \({}^{46}\)Ti & \({}^{47}\)Ti & \({}^{48}\)Ti & \({}^{49}\)Ti & \({}^{50}\)Ti \\ \hline
5 & 0.28 & \(10^{-4}\) & 3.85E-12 & 2.10E-09 & 2.76E-09 & 2.02E-08 & 6.50E-09 & 9.71E-10 \\
5 & 0.40 & \(10^{-4}\) & 8.39E-11 & 1.03E-08 & 2.12E-08 & 1.01E-07 & 6.88E-08 & 9.96E-10 \\
5 & 0.60 & \(10^{-4}\) & 1.31E-11 & 3.38E-09 & 5.66E-09 & 3.35E-08 & 1.71E-08 & 9.77E-10 \\
5 & 0.28 & \(10^{-3}\) & 5.56E-16 & 1.43E-08 & 1.32E-08 & 1.34E-07 & 1.00E-08 & 9.86E-09 \\
10 & 0.28 & \(10^{-3}\) & 5.46E-11 & 4.45E-08 & 4.57E-08 & 0.20E-07 & 6.47E-08 & 1.07E-08 \\
15 & 0.28 & \(10^{-3}\) & 1.51E-10 & 3.96E-08 & 4.95E-08 & 2.55E-07 & 1.80E-07 & 1.02E-08 \\ & Sun & & & & & & & \\
5 & & \(Z_{\odot}/10\) & 3.05E-26 & 2.02E-08 & 1.85E-08 & 1.88E-07 & 1.44E-08 & 1.39E-08 \\
8 & & \(Z_{\odot}/10\) & 2.16E-98 & 2.05E-08 & 1.89E-08 & 1.91E-07 & 1.43E-08 & 1.41E-08 \\
10 & & \(Z_{\odot}/10\) & 2.57E-98 & 2.05E-08 & 1.89E-08 & 1.91E-07 & 1.43E-08 & 1.41E-08 \\ \hline \end{tabular}
\end{table}
Table 2: Surface mass fractions for different Ti isotopes for our large nuclear network models. The composition is taken at \(\approx 10,000\) years post TZO formation. The 5, 8, and 10 M\({}_{\odot}\) non-TZO stars are measured at the time of maximum surface \({}^{44}\)Ti. Solar values from Grevesse & Sauval (1998).
Figure 12: Panel a: The evolution of the surface luminosity as a function of the assumed average core density. Light colours denote higher core densities and smaller assumed NS radii. Panel b: The evolution of the surface luminosity as a function of the efficiency factor \(\epsilon_{L}\). Light colours denote higher efficiencies and thus the energy injected into the inner boundary is closer to \(L_{\mathrm{Edd}}\). Evolution was arbitrarily stopped when either the model reached 100,000 years, 100,000 timesteps, or when MESA could no longer follow the evolution.
\(\approx 0.002\) M\({}_{\odot}\) during its evolution. The NS may also be spun down by braking between the NS's magnetic field and the envelope (Liu et al., 2015). In this scenario the NS will become a slowly spinning NS inside a slow moving CSM that appears like a supernova remnant, as has been proposed for RCW 103 (Liu et al., 2015).
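An order-of-magnitude check of this statement only needs the accretion rate quoted above and the lifetime ranges discussed elsewhere in this work; both inputs in the sketch below are assumptions taken from those ranges rather than new model output.

```python
# Back-of-the-envelope check that the NS barely grows while inside the TZO.
accretion_rates = (1e-9, 1e-8)   # Msun/yr, range quoted above
lifetimes = (1e4, 1e5)           # yr, range of TZO lifetimes estimated in this work

delta_m_min = min(accretion_rates) * min(lifetimes)
delta_m_max = max(accretion_rates) * max(lifetimes)
print(f"NS mass gain: {delta_m_min:.0e} to {delta_m_max:.0e} Msun")
# -> ~1e-5 to ~1e-3 Msun, the same order as the <~0.002 Msun found in our
#    models, and far too small to reveal the former TZO as an over-massive NS.
```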
The other option is that the TZO runs out of CNO material, which provides the bulk of the nuclear energy generated (Biehle, 1994). In this case, the envelope may collapse onto the NS forming a BH, possibly leading to a transient event (Moriya, 2018; Moriya and Blinnikov, 2021). In a 5 M\({}_{\odot}\) TZO model the total mass of CNO elements decreases at a rate of \(\approx 10^{-10}\) M\({}_{\odot}\) yr\({}^{-1}\). With an initial mass of \(\approx 10^{-5}\) M\({}_{\odot}\) of CNO elements (for \(Z_{\rm init}=10^{-5}\)), this gives an upper limit of \(\approx 10^{6}\) years, assuming the burn rate continues at the same rate. This is comparable to the lifetime given assuming a steady state wind mass loss.
If the NS inside the TZO did collapse into a BH and form a transient event, this may look like a type IIn SN. The pre-SN pulsations likely removed 1-10 M\({}_{\odot}\) of material in the 100-1000 years before the final collapse. At collapse it is then likely that there are only a few solar masses, at most, of H-rich envelope left. Our default 5 M\({}_{\odot}\) TZO has only a \(\approx 2.5\) M\({}_{\odot}\) envelope left before the pulsations begin, while the 20 M\({}_{\odot}\) TZO has a \(\approx 5.5\) M\({}_{\odot}\) envelope before the pulsations begin. We would not expect any significant \({}^{56}\)Ni production in the event, and as the TZO was fully mixed there would not be any observed change in the composition at late times. Powering a SN-like transient event would require it either to be entirely powered by CSM interaction, by energy released as the NS collapses into a BH, or by a jet generated by the BH from additional mass accretion (Fryer et al., 1996; Qin et al., 1998; Fryer et al., 2014).
## 8 Discussion
In this work, we have been agnostic as to how a TZO formed, whether it was a direct impact, common envelope merger, or dynamical merger. Each of the formation pathways may impact the resulting TZO. Common-envelope mergers require that the companion star is evolved, with a helium-rich core and an envelope that is increasing in radius. Direct impacts and dynamical mergers are less sensitive to the companion star's evolutionary state, at least regarding when a merger can occur. Merging at different points in the star's pre-TZO life will lead to a different initial metal fraction than our assumed solar-scaled metal distribution, and limits what values of Y\({}_{\rm init}\) are possible. There could be differences in whether a merger results in a TZO or instead causes enhanced mass loss, leaving behind a tight binary with an NS; such failed TZOs would be progenitors for double NS (DNS) systems. Detections of TZOs could place constraints on the uncertain merger rate of DNS systems as currently being probed by gravitational wave detections.
We assumed that the starting point for a TZO can be approximated as a normal star, which then becomes fully convective. However, it is possible that the companion was the accretor in a binary system. The material it gains will be enriched by the nuclear burning in the donor (Farmer et al., 2021, 2023) and the internal structure of the accretor will be altered by the mass accretion (Renzo and Gotberg, 2021; Renzo et al., 2023). This may lead to different outcomes during the merger, which may include additional mass loss, changes in the probability of a successful TZO formation, and an initial enrichment of the TZO in additional metals.
It was argued in [17] that it was necessary to use a two-stream convection model to follow the nucleosynthesis. Here the composition is tracked separately for material that is convectively moving towards the knee and away from the knee. As material moves towards the NS it undergoes proton captures, before mixing away from the knee, where it can then undergo beta decays, before being mixed back towards the knee. As the burning timescale is comparable to the mixing timescale at the base of the envelope, the chemical composition of material flowing up and down may be significantly different. MESA instead assumes a diffusion equation for chemical transport. The effect of having a diffusion model is that we likely overpredict the production of heavy nuclei. In the MESA models some heavy nuclei will be able to stay near the base of the envelope, where the temperatures are highest, for much longer than they would if they were being mixed outwards on a timescale similar to their burning timescale. Thus they can undergo additional proton captures and produce even heavier nuclei. This may suggest we are even less likely to see a detectable nucleosynthetic signal in TZOs in the local Universe.
## 9 Conclusions
In this work, we have computed the first set of MESA TZOs. We did this by adjusting the inner boundary of the model to approximate the NS at the centre of each TZO. We have then followed the evolution of the TZOs. We have also explored in detail the pulsation periods with the GYRE stellar oscillation code, and computed detailed nucleosynthetic signatures with a large fully-coupled 399 isotope nuclear network. Our results can be summarised as follows:
* We find that TZOs evolve to lower luminosities and lower temperatures during their lifetime. We have also expanded the range of possible locations for TZOs to be between log (T\({}_{\rm eff}/{\rm K})\approx 3.47\)-3.6 and log (L/L\({}_{\odot}\)) \(\approx 5.0\)-5.5.
* We do not find a gap in the parameter space where models cannot exist. This is because our models are denser than previously predicted, which prevents models from evolving into the pair-instability region.
* We have computed the pulsation periods of our TZOs and find periods of \(\approx 250\) and 500 days for the second and first overtones, and 1000-2000 days for the fundamental mode.
* If HV 2112 is a TZO, we predict there should be a currently undetected 1500-3000 day pulsation period. If detected this will also imply \(\alpha_{\rm mlt}\approx 3\) in the envelopes of TZOs.
Figure 13: Top Panel: The temperature-density (T-\(\rho\)) profile for the models shown in Figure 12(b) at a time \(\approx 10,000\) years post TZO formation. The \(\epsilon_{L}=0\) model did not reach 10,000 years and is thus not shown. Dash-dotted lines are the models of [17] while the red region is the pair-instability region.
* If VX Sgr were a TZO and its \(\sim 28,000\) day pulsation period were real, this would imply it is a very helium enriched TZO.
* Based on the measured pulsation periods of HV 11417 we would infer it to be a massive, but very young TZO. However this is inconsistent with the measured luminosity of HV 11417. Thus we rule out HV 11417 as a TZO.
* Our results and the predicted lifetimes depend strongly on the mass-loss rates due to RSG pulsations. The lifetime of the TZO will depend on the total mass-loss rate experienced by the TZO. If RSG pulsations remove significant amounts of material then the lifetime of a TZO may only be 100-1000 years.
* Assuming the RSGs do not experience significantly higher mass loss than we assume, then we estimate a lifetime in the range of \(10^{4}\)-\(10^{5}\) years. Contrary to non-TZO stars, the higher the initial mass of the TZO the longer it lives.
* We have computed several of our models with a large 399 isotope fully-coupled nuclear network. We reconfirm the previous findings that Rb, Sr, Y, Zr can be enhanced due to the irp-burning. However, the level of enhancement is sensitive to the initial composition of the TZOs.
* At higher initial metallicities TZOs do not show any metal enrichment due to a lower knee temperature, caused by the increasing opacity at the knee as the metallicity increases. Thus in the local Universe TZOs may not be distinguishable from non-TZOs based on their nucleosynthesis alone.
* We propose a new observational signal, that of molecules containing \({}^{44}\)Ti. Due to the high-temperature burning and fully convective envelope of the TZO, \({}^{44}\)Ti can be mixed to the surface before it decays. There in the cooler envelope, it can form \({}^{44}\)TiO\({}_{2}\) and \({}^{44}\)TiO, which could be detectable due to the shift in their molecular lines compared to stable Ti-containing molecules.
* TZOs represent a class of stars that are exceptional tests of the numerical capabilities of a stellar evolution code. This work has led to many improvements and code fixes in the MESA stellar evolution code that have applications far outside that of TZOs.
## Acknowledgements
We acknowledge helpful discussions with T. Maccarone, F. Timmes, B. Paxton, R. Smolec, J. Schwab, A. Jermyn, R. Townsend. This work has been supported by the following grants at some point in time; NASA under TCAN grant NNX14AB53G (PI F. Timmes), NSF under SI2 grant 1339600 (PI F. Timmes), the Netherlands Organization for Scientific Research (NWO) through a top module 2 grant with project number 614.001.501 (PI S.E. de Mink). Support for this work was provided by NASA through the NASA Hubble Fellowship Program grant #HST-HF2-51457.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. This work was also supported by the Cost Action Program ChETEC CA16117. This research was supported by the Munich Institute for Astro, Particle and BioPhysics (MIAPbP) which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. This research has made use of NASA's Astrophysics Data System.
## Data Availability
All input files and all output data is made available at [https://doi.org/10.5281/zenodo.4534425](https://doi.org/10.5281/zenodo.4534425).
|
2306.11005
|
Superconductivity near spin and valley orders in graphene multilayers: a
systematic study
|
Spin excitations that soften near the onset of magnetic order have long been
known to act as `paramagnon' pairing glue that can drive spin-triplet
superconductivity. Recent findings of superconductivity in graphene bilayers
and trilayers, occurring in the proximity of different itinerant ordered phases
polarized in isospin (spin and valley), have motivated us to conduct a
comprehensive investigation of an isospin extension of the paramagnon pairing
mechanism in the vicinity of spin/isospin orders. In each case, we identify a
soft mode, associated with the order parameter fluctuations, that mediates
pairing interaction. We develop an approach that relates the soft mode
described through summation of the contributions most strongly divergent at the
onset of spin/valley isospin orders. This interaction is not always attractive,
but if it is, it gives rise to an enhancement of superconducting $T_c$ in an
appropriate pairing channel. In the cases when the pairing interaction is
attractive, it leads to the formation of a superconducting state which can be
either spin-triplet and valley-singlet or vice versa, depending on the specific
isospin order type. These findings demonstrate that the occurrence of
superconductivity in the vicinity of an itinerant magnetic phase is a generic
phenomenon, closely mirroring experimental observations.
|
Zhiyu Dong, Leonid Levitov, Andrey V. Chubukov
|
2023-06-19T15:12:42Z
|
http://arxiv.org/abs/2306.11005v1
|
# Superconductivity near spin and valley orders in graphene multilayers: a systematic study
###### Abstract
Spin excitations that soften near the onset of magnetic order have long been known to act as 'paramagnon' pairing glue that can drive spin-triplet superconductivity. Recent findings of superconductivity in graphene bilayers and trilayers, occurring in the proximity of different itinerant ordered phases polarized in isospin (spin and valley), have motivated us to conduct a comprehensive investigation of an isospin extension of the paramagnon pairing mechanism in the vicinity of spin/isospin orders. In each case, we identify a soft mode, associated with the order parameter fluctuations, that mediates the pairing interaction. We develop an approach in which the soft mode is described through summation of the contributions most strongly divergent at the onset of spin/valley isospin orders. This interaction is not always attractive, but if it is, it gives rise to an enhancement of superconducting \(T_{c}\) in an appropriate pairing channel. In the cases when the pairing interaction is attractive, it leads to the formation of a superconducting state which can be either spin-triplet and valley-singlet or vice versa, depending on the specific isospin order type. These findings demonstrate that the occurrence of superconductivity in the vicinity of an itinerant magnetic phase is a generic phenomenon, closely mirroring experimental observations.
## I Introduction
Recent experiments on graphene multilayers in a transverse electric field, such as Bernal bilayer graphene (BBG), revealed a complex phase diagram encompassing different spin and valley-ordered states with a cascade of phase transitions between them [1; 2] and also found multiple superconducting phases near the onset of valley polarization [3; 4; 5]. Overall, the pattern of spin and valley polarized magnetic phases intertwined with superconductivity bears some similarity to the magnetic and superconducting phases seen in twisted bilayer graphene [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17], a system that received considerable attention recently (see [18] and references therein). This triggered a renewal of interest in the low-energy physics of BBG, and also rhombohedral trilayer graphene (RTG), where a similar behavior has been reported [19; 20] and analyzed [21; 22; 23; 24].
The electronic structure of BBG consists of two bands [25; 26], with the wavefunctions supported by the A sublattice in one and the B sublattice in the other layer. Without a transverse field, the two bands have quadratic dispersion and touch at the high-symmetry points \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\). Applying a transverse electric field opens up a band gap at charge neutrality [26; 27] and changes the dispersion from quadratic to quartic. While the bandwidth remains large, of order 1eV, transverse field flattens the dispersion near the top of the valence band and the bottom of the conduction band. This gives rise to qualitative changes in the Fermi surface geometry and the ordered phases as compared to those previously studied in unbiased bilayer graphene [28; 29; 30; 31; 32; 33; 34; 35; 36; 37].
When the chemical potential \(\mu\) is near the bottom of the conduction band (or the top of the valence band), filled states of fermions from one valley are located near \(\mathbf{K}\) and from the other valley near \(\mathbf{K}^{\prime}\). The Fermi surface near \(\mathbf{K}\) (\(\mathbf{K}^{\prime}\)) does evolve with \(\mu\) from three separate small pockets very near charge neutrality to a single Fermi surface at somewhat larger doping. In between, the system passes through a van Hove singularity and features an annulus-type Fermi surface.
Flattened bands lead to a Stoner-type instability and several different spin- and valley-polarized phases. Several groups analyzed collective instabilities and superconductivity, specific to a particular geometry of the Fermi surface [38]. The superconducting phases seen in BBG and RTG share several common aspects of which the main one is that superconductivity occurs near the onset of one of spin/valley orders. Motivated by this, in this
paper, we aim to understand the relation between spin/valley orders and superconductivity using a simple but broadly applicable framework that captures the essential aspects of BBG and RTG systems. Namely, we employ a two-valley model that describes spin-1/2 fermions in valleys \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\) coupled by an electron-electron (e-e) repulsion interaction. The intravalley interactions between fermions (\(\mathbf{K}\) - \(\mathbf{K}\) or \(\mathbf{K}^{\prime}\) - \(\mathbf{K}^{\prime}\)) are taken to be different from the intervalley interactions (\(\mathbf{K}\) - \(\mathbf{K}^{\prime}\)). In particular, we assume that the interactions are insensitive to the fine details of the electronic structure in the valleys (i.e., whether there is a single Fermi surface near each of the points \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\), or two annular Fermi surfaces, or three even smaller Fermi surfaces). We also account for exchange processes in which the interacting fermions undergo intervalley scattering, \(\mathbf{K}\rightarrow\mathbf{K}^{\prime}\) and \(\mathbf{K}^{\prime}\rightarrow\mathbf{K}\).
Our problem is simplified because several potentially viable effects happen to be small or absent in Bernal bilayer system. First, there is no term with pair-hopping between the valleys as neither \(2\mathbf{K}\) nor \(2\mathbf{K}^{{}^{\prime}}\) is a reciprocal lattice vector. We can also legitimately ignore the Bloch wavefunction effects such as Berry phases and form factors in the e-e interaction. These effects are small at realistic carrier densities, so long as the Fermi energy is much smaller than the bandgap induced by a transverse field [39].
Under these assumptions, the single-particle Hamiltonian reads:
\[H=\sum_{p,\alpha}\epsilon_{p}\psi^{\dagger}_{1,p,\alpha}\psi_{1,p,\alpha}+\sum _{p^{\prime},\alpha}\epsilon_{p^{\prime}}\psi^{\dagger}_{2,p^{\prime},\alpha} \psi_{2,p^{\prime},\alpha} \tag{1}\]
where the subscript \(1,2\) denotes valleys \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\), respectively. The momenta \(p\) and \(p^{\prime}\) label states near \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\), respectively. For a weak electron doping, filled states \(\epsilon_{p}<\mu\) form small pockets near \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\).
The interaction Hamiltonian describing these low-energy fermions contains three main parts: \(H_{\rm ee}=H_{1}+H_{2}+H_{3}\). The first term is the density-density interaction between fermions in the same valley:
\[H_{1}=\frac{U_{1}}{2}\sum_{\alpha\beta;\gamma\delta}\left[\sum_{pkq}\psi^{\dagger}_{1,p+q,\alpha}\psi^{\dagger}_{1,k-q,\beta}\psi_{1,k,\delta}\psi_{1,p,\gamma}+\sum_{p^{\prime}k^{\prime}q}\psi^{\dagger}_{2,p^{\prime}+q,\alpha}\psi^{\dagger}_{2,k^{\prime}-q,\beta}\psi_{2,k^{\prime},\delta}\psi_{2,p^{\prime},\gamma}\right]\delta_{\alpha\gamma}\delta_{\beta\delta} \tag{2}\]
where \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) denote spin projections \(\uparrow\) and \(\downarrow\), and the quantity \(\delta_{\alpha\alpha^{\prime}}\) is the Kronecker delta function. Here, the momenta \(\mathbf{p}\), \(\mathbf{k}\) are near \(\mathbf{K}\), and \(\mathbf{p}^{\prime}\), \(\mathbf{k}^{\prime}\) are near \(\mathbf{K}^{\prime}\), whereas the momentum transfer \(\mathbf{q}\) is much smaller than \(\mathbf{Q}=\mathbf{K}-\mathbf{K}^{\prime}\) since typical \(\mathbf{q}\) values are comparable to the Fermi momentum \(p_{F}\) in each valley. The second term in the Hamiltonian \(H_{\rm ee}\) is the
interaction between fermion densities in different valleys
\[H_{2}=U_{2}\sum_{\alpha\beta;\gamma\delta}\sum_{pk^{\prime}q}\psi^{\dagger}_{1,p+ q,\alpha}\psi^{\dagger}_{2,k^{\prime}-q,\beta}\psi_{2,k^{\prime},\delta}\psi_{1,p, \gamma}\delta_{\alpha\gamma}\delta_{\beta\delta} \tag{3}\]
The third term involves simultaneous inter-valley scattering \(\mathbf{K}\rightarrow\mathbf{K}^{\prime}\), \(\mathbf{K}^{\prime}\rightarrow\mathbf{K}\),
\[H_{3}=U_{3}\sum_{\alpha\beta;\gamma\delta}\sum_{pk^{\prime}q}\psi^{\dagger}_{1,p+ q,\alpha}\psi^{\dagger}_{2,k^{\prime}-q,\beta}\psi_{1,p,\delta}\psi_{2,k^{ \prime},\gamma}\delta_{\alpha\gamma}\delta_{\beta\delta}. \tag{4}\]
The first two terms in the interaction Hamiltonian, \(H_{1}\) and \(H_{2}\), are the interactions with small momentum transfer. The couplings \(U_{1}\) and \(U_{2}\) are of the same order of magnitude though generally taking non-identical values. The interaction \(H_{3}\) describes processes with momentum transfer near \(\mathbf{Q}=\mathbf{K}-\mathbf{K}^{{}^{\prime}}\). The coupling strength \(U_{3}\) is expected to be much smaller than \(U_{1}\) and \(U_{2}\) because dressed Coulomb interaction is substantially smaller at momentum \(\mathbf{Q}\) than at momentum transfer of order \(k_{F}\) within a given valley. For this reason, the inter-valley scattering \(U_{3}\) is often neglected. Here it will be retained because, as we will see, it lifts the degeneracy between several types of instabilities and is relevant for superconductivity.
The sign of \(U_{3}\) is governed by the interplay of several effects. In a simple microscopic model, due to the exchange of two fermions in valleys \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\), \(U_{3}\) takes a negative value, corresponding to the intervalley attraction. However, as shown in Ref.[23], the value of \(U_{3}\) may become positive under the renormalization group flow in certain regimes of carrier density and transverse field. We therefore do not specify the sign of \(U_{3}\) here, and keep the discussion generic.
Within this model, there are four potential instabilities towards spin/valley orders, bilinear in fermions, two with zero momentum transfer, and two with momentum transfer \(\mathbf{Q}=\mathbf{K}-\mathbf{K}^{\prime}\). Of the two \(\mathbf{q}=0\) orders, one is valley polarization - a valley imbalance type density order that makes Fermi energies in the two valleys unequal. The other is intra-valley ferromagnetism, with either ferromagnetic or antiferromagnetic ordering between valleys, each with a three-component order parameter due to the global spin rotation \(SU(2)\) symmetry that acts on electrons in both valleys. Of the two orders with the momentum transfer \(\mathbf{Q}\), one is charge-density-wave (CDW) with a two-component complex order parameter due to the valley conservation \(U(1)_{v}\) symmetry. The other is spin-density-wave (SDW) with a \(3\times 2=6\)-component complex order parameter due to \(U(1)_{v}\times SU(2)\) symmetry. We note that, the CDW and SDW orders are also known as spin-singlet and spin-triplet inter-valley
coherence (IVC) orders, respectively[47]. The total number of the order parameter components is 15 (one for valley polarization, \(3+3\) for intra-valley ferromagnetism, 2 for a complex CDW order, and 6 for a complex SDW order).
When inter-valley scattering is negligible and density-density interactions for fermions within a given valley and in different valleys are identical, the conditions for all four instabilities are identical. In this case, 15 order parameters form the adjoint representation of the SU(4) symmetry group [40]. The ordered state can be a single order, one of the four, or a combination of different orders, e.g. a valley polarization order and ferromagnetism within just one valley [24; 40]. Here we differentiate between density-density interactions within a valley and between valleys and include inter-valley scattering (which, as we show, is relevant to superconductivity). In this situation, different orders emerge at different temperatures/doping levels, and one can consider separately a given order and superconductivity near it.
In a recent paper [41] we analyzed the instability towards valley polarization and superconductivity near its onset. In this work, the results of the analysis of three other potential orders in BBG and superconductivity near each of these orders are reported. We specifically address three main questions:
1. Can the pairing interaction be enhanced near the onset of a given order?
2. Can the pairing interaction be viewed as mediated by soft fluctuations of the corresponding density or spin order parameter?
3. Can the pairing interaction become attractive in at least one of the channels?
It is known that for systems without an additional valley degree of freedom, the answer to all three questions is generally in the affirmative (see Sec. II). In the present case of Bernal bilayer graphene the situation is more involved since Cooper pairs with zero total momentum comprise fermions in different valleys, \(\mathbf{K}\) and \(\mathbf{K}^{\prime}=-\mathbf{K}\). Without inter-valley scattering, the pairing vertex for such fermions does not have a component coming from anti-symmetrization (fermionic exchange) and hence has no component that would couple to soft spin fluctuations near a magnetic transition. Near the onset of density order, the valley exchange interaction is not relevant. This constitutes a new problem that has not been analyzed in previous literature.
In Ref. [41] we identified a pairing interaction mediated by soft density fluctuations associated with valley-polarization instability. While the sign of this interaction turns out to be repulsive, we argued in [41] that superconductivity is still possible if an additional in-plane magnetic field is applied which couples to electron spin by Zeeman interaction. A similar pairing mechanism was found to be operational in the presence of an Ising spin-orbit interaction which effectively induces a valley-odd Zeeman field. Superconductivity at a finite field is consistent with the data in [42], and the one at a finite spin-orbit coupling is consistent with the data in [4] for BBG placed on top of a monolayer of tungsten diselenide as by all accounts WSe\({}_{2}\) induces spin-orbit coupling.
Here we consider the problem near the onset of intra-valley ferromagnetism, wherein the pairing interaction is again enhanced and can be viewed as mediated by spin fluctuations. We identify pairing channels in which attraction arises in a more robust manner than in Ref.[41]. There are two types of intra-valley ferromagnetism: an inter-valley ferromagnetism, a phase in which the spin polarizations in two valleys are parallel, and an inter-valley antiferromagnetism, a phase in which spin polarizations in the two valleys are antiparallel. Near the onset of inter-valley ferromagnetism, spin-mediated interaction is attractive in spin-triplet, valley singlet channel. Near the onset of inter-valley antiferromagnetism, the attractive interaction is in spin-singlet, valley-triplet channel. While this result agrees with what one could expect on general grounds, we emphasize that this holds only when we include \(U_{3}\) scattering. We show that to obtain such spin-mediated attraction, one has to include diagrammatic series to all orders in \(U_{3}\). Near CDW and SDW transitions, the pairing interaction can be viewed as mediated by density and spin fluctuations, respectively. Yet, the pairing interaction near an SDW instability is repulsive in both singlet and triplet channels and does not give rise to superconductivity, at least at zero fields and without spin-orbit coupling. The one near CDW instability is repulsive in the spin-triplet channel, but attractive in the spin-singlet channel. We emphasize that this holds for repulsive density-density interaction and an arbitrary sign of inter-valley scattering.
## II An SU(2) spin-\(1/2\) model
Before addressing the full two-valley problem, Eqs.(1)-(4), here we consider a simpler problem - a one-valley problem described by an SU(2) spin-\(1/2\) model. We will work out
a solution to this SU(2) problem as an illustration of our approach. Later we will use the same approach to study an SU(4) problem that encompasses spin and valley degrees of freedom, which is the main focus of this paper.
The single-particle Hamiltonian in this one-valley model is simply \(H_{0}^{\text{SU(2)}}=\sum_{p,\alpha}\epsilon_{p}\psi_{p,\alpha}^{\dagger}\psi_{p,\alpha}\), and the interaction between electrons can be modeled using a short-range repulsion:
\[H_{\text{int}}^{\text{SU(2)}}=\frac{U}{2}\sum_{\alpha\beta;\gamma\delta}\sum_{pk ^{\prime}q}\psi_{p+q,\alpha}^{\dagger}\psi_{k^{\prime}-q,\beta}^{\dagger}\psi_{ k^{\prime},\delta}\psi_{p,\gamma}\delta_{\alpha\gamma}\delta_{\beta\delta} \tag{5}\]
where \(\alpha,\beta,\gamma,\delta\) are spin indices.
We consider the pairing interaction arising from this Hamiltonian. A generic pairing interaction between spin \(1/2\) fermions is
\[\Gamma_{\alpha\beta;\gamma\delta}(k,-k;p,-p) =U_{a}^{\text{SU(2)}}\delta_{\alpha\gamma}\delta_{\beta\delta}-U _{b}^{\text{SU(2)}}\delta_{\alpha\delta}\delta_{\beta\gamma}\] \[=\left(U_{a}^{\text{SU(2)}}-\frac{U_{b}^{\text{SU(2)}}}{2}\right) \delta_{\alpha\gamma}\delta_{\beta\delta}-\frac{U_{b}^{\text{SU(2)}}}{2} \boldsymbol{\sigma}_{\alpha\gamma}\cdot\boldsymbol{\sigma}_{\beta\delta} \tag{6}\]
where \(U_{a}^{\text{SU(2)}}\) is the fully dressed irreducible interaction with momentum transfer \(k-p\) and \(U_{b}^{\text{SU(2)}}\) is the fully dressed interaction with momentum transfer \(k+p\). Each dressed interaction is given by infinite series of diagrams in which Kohn-Luttinger terms are the leading ones. In the last line in (6) we used the Fierz identity \(\delta_{\alpha\delta}\delta_{\beta\gamma}=(1/2)(\delta_{\alpha\gamma}\delta_{ \beta\delta}+\boldsymbol{\sigma}_{\alpha\gamma}\cdot\boldsymbol{\sigma}_{ \beta\delta})\).
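As a quick consistency check of the Fierz identity quoted above, one can start from the standard SU(2) completeness relation for the Pauli matrices,

\[\boldsymbol{\sigma}_{\alpha\gamma}\cdot\boldsymbol{\sigma}_{\beta\delta}=2\delta_{\alpha\delta}\delta_{\beta\gamma}-\delta_{\alpha\gamma}\delta_{\beta\delta},\]

which, upon rearranging, reproduces \(\delta_{\alpha\delta}\delta_{\beta\gamma}=\tfrac{1}{2}\left(\delta_{\alpha\gamma}\delta_{\beta\delta}+\boldsymbol{\sigma}_{\alpha\gamma}\cdot\boldsymbol{\sigma}_{\beta\delta}\right)\), the form used in Eq. (6).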
Our goal is to obtain the pairing interaction near an instability in the particle-hole channel. It is well known that in a one-valley system with a repulsive interaction, a potential instability is a Stoner-like one, towards a magnetic order. For simplicity, we consider a ferromagnetic instability. It occurs when \(U\Pi(0)=1\), where \(\Pi(0)\) is the static polarization function at vanishing momentum. Our goal then is to identify the series of diagrams for \(U_{a}^{\text{SU(2)}}\) and \(U_{b}^{\text{SU(2)}}\) that contain the factor \(1/(1-U\Pi(0))\). Accordingly, we consider the pairing vertex \(\Gamma_{\alpha\beta;\gamma\delta}(k,-k;p,-p)\) with \(p\approx k\). Simple experimentation shows that for \(U_{a}^{\text{SU(2)}}\) (the dressed interaction that scatters a pair from momenta \((k,-k)\) to momenta \((k,-k)\) with zero momentum transfer), the relevant diagrammatic series are the ones shown in Fig. 1 a-b. These series yield
\[U_{a}^{\text{SU(2)}}=\left(\frac{1}{1-U\Pi(0)}\right)^{2}\frac{U}{1+\frac{2U\Pi(0)}{1-U\Pi(0)}}=\frac{U}{1-U^{2}\Pi^{2}(0)} \tag{7}\]
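For completeness, the simplification in Eq. (7) is elementary. Writing \(\gamma=1/(1-U\Pi(0))\) for the ladder-dressed vertex,

\[\frac{\gamma^{2}U}{1+2U\Pi(0)\gamma}=\frac{U}{(1-U\Pi(0))^{2}+2U\Pi(0)\left(1-U\Pi(0)\right)}=\frac{U}{(1-U\Pi(0))(1+U\Pi(0))}=\frac{U}{1-U^{2}\Pi^{2}(0)}.\]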
The relevant diagrammatic series for the dressed \(U_{b}^{\rm SU(2)}\) (the interaction that scatters a pair from momenta \((k,-k)\) to momenta \((-k,k)\) with a momentum transfer of \(2k\)) are the ones
shown in Fig. 1 c. These series yield
\[U_{b}^{\rm SU(2)}=\frac{U}{1-U\Pi(0)} \tag{8}\]
Adding these contributions, we obtain the effective pairing interaction in the form
\[\Gamma_{\alpha\beta;\gamma\delta}(k,-k;k,-k)=\frac{U}{2}\left(\frac{\delta_{ \alpha\gamma}\delta_{\beta\delta}}{1+U\Pi(0)}-\frac{\mathbf{\sigma}_{\alpha\gamma} \cdot\mathbf{\sigma}_{\beta\delta}}{1-U\Pi(0)}\right) \tag{9}\]
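For the reader's convenience, Eq. (9) follows from Eqs. (7) and (8) via the Fierz rearrangement of Eq. (6):

\[U_{a}^{\text{SU(2)}}-\frac{U_{b}^{\text{SU(2)}}}{2}=\frac{U}{1-U^{2}\Pi^{2}(0)}-\frac{U}{2\left(1-U\Pi(0)\right)}=\frac{U\left[2-(1+U\Pi(0))\right]}{2\left(1-U^{2}\Pi^{2}(0)\right)}=\frac{U}{2\left(1+U\Pi(0)\right)},\qquad\frac{U_{b}^{\text{SU(2)}}}{2}=\frac{U}{2\left(1-U\Pi(0)\right)}.\]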
The divergence of the pairing interaction near an FM instability arises from the second term, which can be viewed as the pairing interaction mediated by soft ferromagnetic spin fluctuations [43]. Such an interaction is attractive in the spin-triplet channel and leads to spin-triplet pairing. A similar analysis near an antiferromagnetic instability yields a pairing interaction mediated by soft antiferromagnetic fluctuations. Such an interaction is attractive in the spin-singlet channel (see, e.g., [44]).
## III The two-valley model
Below we apply the diagrammatic approach introduced above to study the two-valley model defined in Eqs.(1)-(4). As we stated in the Introduction, we neglect momentum variations of the interactions, i.e., treat \(U_{i}\) as constants. Along the same lines, we treat the polarization bubbles \(\Pi(q)\) with small \(q\), both for fermions in the same valley and in different valleys, as some positive \(\Pi(0)\), and treat the polarization bubble \(\Pi(q)\) with \(q\approx{\bf Q}={\bf K}-{\bf K^{\prime}}\) as some positive \(\Pi(Q)\). We assume that \(\Pi(0)\) and \(\Pi(Q)\) are enhanced due to the large density of states in flattened bands, but neglect the fine structure of \(\Pi(q)\) which occurs when (i) the Fermi sea in each isospin consists of more than one pocket or (ii) the Fermi pocket is single-piece but non-circular. Because low-energy fermionic states exist only near \({\bf K}\) and \({\bf K^{\prime}}\) and hence the momenta \(k\) and \(p\) in (6) are constrained to the vicinity of these points, this last restriction implies that the pairing gap \(\Delta(k)=\Delta({\bf K})\) is a constant. Without the isospin variable, this would restrict the pairing to only the spin-singlet channel. In our case, however, both spin-singlet and spin-triplet pairing are possible because a fermionic pair with momenta \(\mathbf{K}\) and \(-\mathbf{K}\) is made of fermions from different valleys, and the gap function can be either valley-symmetric or valley-antisymmetric. A generic requirement that a superconducting order parameter must change sign upon the exchange of the two fermions then implies that the valley-symmetric gap function is spin-singlet and the valley-antisymmetric gap function is spin-triplet. The valley-antisymmetric gap changes sign upon replacement \({\bf k}\to-{\bf k}\) and in this respect is analogous to a gap function near a FM instability in a one-valley system.
## IV Ordered states
In this section, we analyze the instability conditions for different ordered states with order parameters, bilinear in fermions. Simple experimentation shows that the order parameters, which one can create out of \(\psi_{1}\) and \(\psi_{2}\), include
* Valley polarization: an order that splits fermionic densities in the two valleys \[\Delta_{\rm VP}=\frac{2}{N}\sum_{\alpha\beta}\left[\sum_{p}\left\langle\psi^{ \dagger}_{1,p,\alpha}\delta_{\alpha\beta}\psi_{1,p,\beta}\right\rangle-\sum_{p^ {\prime}}\left\langle\psi^{\dagger}_{2,p^{\prime},\alpha}\delta_{\alpha\beta }\psi_{2,p^{\prime},\beta}\right\rangle\right]\] (10) This is a single scalar order parameter.
* Intra-valley ferromagnetism \[\mathbf{\Delta}_{1,\text{FM}}=\frac{2}{N}\sum_{\alpha\beta}\sum_{p}\left\langle \psi_{1,p,\alpha}^{\dagger}\boldsymbol{\sigma}_{\alpha\beta}\psi_{1,p,\beta} \right\rangle,\;\;\mathbf{\Delta}_{2,\text{FM}}=\frac{2}{N}\sum_{\alpha\beta} \sum_{p^{\prime}}\left\langle\psi_{2,p^{\prime},\alpha}^{\dagger}\boldsymbol{ \sigma}_{\alpha\beta}\psi_{2,p^{\prime},\beta}\right\rangle,\] (11) Each order parameter is a 3-component vector, so the total number of order parameter components here is six. When \(\mathbf{\Delta}_{1,\text{FM}}\) and \(\mathbf{\Delta}_{2,\text{FM}}\) are parallel to each other, the ordered state is an inter-valley ferromagnet, when the two are antiparallel to each other the order is ferromagnetic within the valley and antiferromagnetic between the valleys.
* A CDW order with momentum \(\boldsymbol{Q}=\boldsymbol{K}-\boldsymbol{K}^{{}^{\prime}}\) \[\Delta_{\text{CDW}}=\frac{2}{N}\sum_{\alpha\beta}\sum_{p,p^{\prime}}\left\langle \psi_{1,p,\alpha}^{\dagger}\delta_{\alpha\beta}\psi_{2,p^{\prime},\beta} \right\rangle\delta_{\mathbf{p}-\mathbf{p^{\prime}}-\boldsymbol{Q}}\] (12) This order parameter is a complex function, i.e., \(\Delta_{\text{CDW}}\) and \(\Delta_{\text{CDW}}^{*}\) are not identical. Both \(\Delta_{\text{CDW}}\) and \(\Delta_{\text{CDW}}^{*}\) are scalar order parameters, so the total number of order parameter components is two.
* A SDW order with momentum \(\boldsymbol{Q}=\boldsymbol{K}-\boldsymbol{K}^{{}^{\prime}}\) \[\mathbf{\Delta}_{\text{SDW}}=\frac{2}{N}\sum_{\alpha\beta}\sum_{p,p^{\prime}} \left\langle\psi_{1,p,\alpha}^{\dagger}\boldsymbol{\sigma}_{\alpha\beta}\psi_ {2,p^{\prime},\beta}\right\rangle\delta_{\mathbf{p}-\mathbf{p^{\prime}}- \boldsymbol{Q}}\] (13) This is again a complex order parameter: \(\mathbf{\Delta}_{\text{SDW}}\) and \(\mathbf{\Delta}_{\text{SDW}}^{*}\) are not identical. Each order parameter has three spin components. Hence the total number of components is six.
Altogether, there are 15 order parameter components, some of which are degenerate. To analyze when each order develops spontaneously, we follow a standard analysis [45]: introduce a trial order parameter and construct a self-consistent equation for it by collecting ladder and bubble renormalizations with transferred momentum either zero or \(\boldsymbol{Q}\). We show the self-consistent equations diagrammatically in Fig. 2. Each self-consistent equation has a non-zero solution at the onset of the corresponding order. Alternatively, one could introduce an infinitesimally small trial vertex, renormalize it by inserting ladder and bubble diagrams with the corresponding momentum transfer, and obtain the susceptibility. This susceptibility diverges at the onset of spontaneous order.
Solving the diagrammatic equations we find that
* Valley polarization instability emerges when \[(2U_{2}-U_{1}-U_{3})\Pi(0)=1,\] (14) where we remind that \(\Pi(0)\) is the particle-hole polarization bubble at zero momentum transfer. As noted above, for our purposes it is sufficient that \(\Pi(0)\) is positive and is enhanced due to the flat dispersion \(\epsilon_{p}\) near \(\mathbf{K}\) and \(\mathbf{K}^{{}^{\prime}}\).
* To detect the onset of intra-valley ferromagnetism, leading to either ferromagnetic or antiferromagnetic ordering of \(\mathbf{\Delta}_{\rm 1,FM}\) and \(\mathbf{\Delta}_{\rm 2,FM}\), we need to solve the set of two
Figure 2: Diagrammatic representation of the order parameters for different order types arising within the model given in Eqs.(1)–(4). Solid and dashed lines colored in black and blue represent fermions in valleys \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\), respectively. Panels (a), (b), (c) and (d) detail the Stoner mean field description for valley-polarization order, ferromagnetic order, CDW order and SDW order, respectively. The CDW and SDW orders are also known in the literature as the spin singlet and spin triplet inter-valley coherence orders, respectively. In each panel the diagrams illustrate the mean-field self-consistency relations described in the text, see Eqs. (14), (15), (16), (17) and (18).
coupled equations for these order parameters. The coupling is via \(U_{3}\). The results are: an inter-valley ferromagnetic order develops at \[(U_{1}+U_{3})\Pi(0)=1,\] (15) and an inter-valley antiferromagnetic order with ferromagnetism within a valley develops at \[(U_{1}-U_{3})\Pi(0)=1.\] (16) These two orders are described by the order parameters \(\mathbf{\Delta}_{+}=\mathbf{\Delta}_{1,\text{FM}}+\mathbf{\Delta}_{2,\text{FM}}\) and \(\mathbf{\Delta}_{-}=\mathbf{\Delta}_{1,\text{FM}}-\mathbf{\Delta}_{2,\text{FM}}\), respectively. We call the former order FM\({}^{+}\) and the latter FM\({}^{-}\). For \(U_{3}>0\), the instability towards inter-valley ferromagnetism develops first, and for \(U_{3}<0\), the instability towards inter-valley antiferromagnetism develops before the one towards inter-valley ferromagnetism.
* A CDW instability develops at \[(U_{2}-2U_{3})\Pi(Q)=1,\] (17) where, we remind, \(\Pi(Q)\) is the particle-hole polarization bubble at momentum transfer \(\mathbf{Q}=\mathbf{K}-\mathbf{K}^{{}^{\prime}}\). It is comparable in magnitude but not identical to \(\Pi(0)\) (see e.g., Ref. [46]).
* An SDW instability develops at \[U_{2}\Pi(Q)=1,\] (18) For \(U_{3}>0\), SDW develops before CDW, and for \(U_{3}<0\) CDW order develops first.
If \(U_{3}\), the difference between \(U_{1}\) and \(U_{2}\), and the difference between \(\Pi(0)\) and \(\Pi(Q)\) were all negligibly small (i.e., \(\Pi(0)\approx\Pi(Q)\approx\Pi\)), all instabilities would occur at the same \(U_{1}\Pi=1\). The fifteen order parameters would then form the adjoint representation of the SU(4) symmetry group [40]. In our analysis, we keep \(U_{1}\) and \(U_{2}\) different, \(U_{3}\) finite, and \(\Pi(Q)\neq\Pi(0)\). In this situation, each order develops separately, and one can find the condition at which a given order develops before the others.
## V Pairing interaction
In this section, we analyze the pairing interaction near the onset of each of the potential orders. As stated above, the three key questions of interest are: (i) whether the pairing interaction is enhanced, (ii) whether it can be viewed as mediated by soft fluctuations of the
Figure 3: a) Pairing interaction between carriers in valleys \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\), represented by solid and dashed lines colored in black and blue, respectively. Shown is a diagrammatic representation of a self-consistency relation for the superconducting order parameter. The shaded wavy line labeled \(U_{\text{total}}\) is a properly antisymmetrized pairing interaction detailed in panels (b) and (c). The pairing interactions shown in b) by open wavy lines account for the intra-valley and inter-valley scattering processes, with the minus sign arising due to anticommutation of fermions in the states \(\mathbf{K}\alpha\) and \(\mathbf{K}^{\prime}\beta\). The interactions \(U_{a}\) and \(U_{b}\) are the renormalized inter-valley density-density interaction and the inter-valley exchange interaction, as detailed in panel (c).
order parameter, either in the density or in the spin channel, and (iii) whether the pairing interaction is attractive.
To analyze the pairing, we introduce a trial order parameter in the particle-particle channel, \(\Delta\), and obtain a self-consistent equation on this order parameter. The self-consistent equation is shown diagrammatically in Fig. 3a. The pairing interaction has the same form as in Eq. (6), but now the two components are \(U_{a}^{\rm SU(4)}\) and \(U_{b}^{\rm SU(4)}\). To simplify notations, below we label them as just \(U_{a}\) and \(U_{b}\). We have
\[\Gamma_{\alpha\beta;\gamma\delta}(k,-k;k,-k)=U_{a}\delta_{\alpha\gamma}\delta_ {\beta\delta}-U_{b}\delta_{\alpha\delta}\delta_{\beta\gamma} \tag{19}\]
Using the Fierz identity, as before, this can be equivalently expressed as
\[\Gamma_{\alpha\beta;\gamma\delta}(k,-k;k,-k)=U_{d}\delta_{\alpha\gamma}\delta_{\beta\delta}+U_{s}\mathbf{\sigma}_{\alpha\gamma}\cdot\mathbf{\sigma}_{\beta\delta} \tag{20}\]
Here \(U_{d}=U_{a}-U_{b}/2\) and \(U_{s}=-U_{b}/2\) are density and spin components of the pairing interaction. At the bare level, \(U_{a}=U_{2}\) and \(U_{b}=U_{3}\) (see Fig. 3c). The fully dressed \(U_{a}\) and \(U_{b}\) include the renormalizations from insertions of particle-hole bubbles.
### Valley polarization
The vertex function near the onset of valley polarization has been analyzed in Ref. [41] for the case \(U_{3}=0\). In this situation, both the bare and dressed \(U_{b}=0\), and the vertex function has only the density component \(U_{a}\). Relevant diagrams for \(U_{a}\), which contain the polarization \(\Pi(0)\), are shown in Fig. 4. They include ladder series for vertex renormalization (Fig. 4b), which sum up into \(\gamma=1/(1-U_{1}\Pi(0))\), and series of bubbles, each of which includes ladder series of vertex renormalizations, which give one factor of \(\gamma\). The bubble diagrams can be summed up directly, in an order-by-order analysis. A more elegant way to sum them up is to re-express the diagrammatic series in terms of the dressed \(\overline{U}_{2}\) (same as \(U_{a}\)) and \(\overline{U}_{1}\) and solve the set of two coupled equations. The equations are shown diagrammatically in Fig. 5. In analytic form we have
\[\overline{U}_{1}=U_{1}\gamma^{2}-2U_{1}\Pi(0)\overline{U}_{1}\gamma-2U_{2}\Pi(0)\overline{U}_{2}\gamma \tag{21}\] \[\overline{U}_{2}=U_{2}\gamma^{2}-2U_{1}\Pi(0)\overline{U}_{2}\gamma-2U_{2}\Pi(0)\overline{U}_{1}\gamma \tag{22}\]
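One way to solve this set is to note that in the combinations \(\overline{U}_{\pm}=\overline{U}_{1}\pm\overline{U}_{2}\) the two equations decouple,

\[\overline{U}_{\pm}=\left(U_{1}\pm U_{2}\right)\gamma^{2}-2\left(U_{1}\pm U_{2}\right)\Pi(0)\gamma\,\overline{U}_{\pm},\]

from which the result quoted in Eq. (23) below follows immediately.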
The solutions are
\[\overline{U}_{\pm}=\frac{\left(U_{1}\pm U_{2}\right)\gamma^{2}}{1+2\Pi(0) \left(U_{1}\pm U_{2}\right)\gamma}, \tag{23}\]
Figure 4: Panel (a) – diagrammatic expression for the effective interaction \(U_{a}\) when \(U_{3}\) is set to zero. We keep only the diagrams which contain \(\Pi(0)\). The momenta along the upper line are approximately \((k,k)\) and the ones along the lower line are \((-k,-k)\). The dressed vertices (the black dots) are given by the diagrams in panel (b)
Figure 5: Diagrammatic representation of the coupled equations for the renormalized interactions \(\bar{U}_{1}\) and \(\bar{U}_{2}\) at zero momentum transfer.
where \(\overline{U}_{\pm}=\overline{U}_{1}\pm\overline{U}_{2}\). Extracting \(\overline{U}_{2}\), we obtain [41]
\[U_{a}=\overline{U}_{2}=\frac{\gamma^{2}}{2}\left[\frac{U_{1}+U_{2}}{1+2(U_{1}+U_{2})\Pi(0)\gamma}-\frac{U_{1}-U_{2}}{1+2(U_{1}-U_{2})\Pi(0)\gamma}\right], \tag{24}\]
Substituting \(\gamma=1/(1-U_{1}\Pi(0))\), we find that (24) simplifies to
\[\overline{U}_{2}=\frac{U_{2}}{\left(1+(U_{1}+2U_{2})\Pi(0)\right)\left(1-(2U_ {2}-U_{1})\Pi(0)\right)} \tag{25}\]
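The reduction from Eq. (24) to Eq. (25) is a few lines of rational-function algebra; it can also be verified symbolically. The snippet below is an illustrative check of the equivalence (our own, not part of the original analysis), using sympy:

```python
# A minimal symbolic check that Eq. (24), with gamma = 1/(1 - U1*Pi0),
# collapses to the compact form of Eq. (25).
import sympy as sp

U1, U2, Pi0 = sp.symbols('U1 U2 Pi0', positive=True)
gamma = 1/(1 - U1*Pi0)

# Eq. (24): dressed inter-valley density interaction U_a = \bar{U}_2
Ua_eq24 = sp.Rational(1, 2)*gamma**2*(
    (U1 + U2)/(1 + 2*(U1 + U2)*Pi0*gamma)
    - (U1 - U2)/(1 + 2*(U1 - U2)*Pi0*gamma)
)

# Eq. (25): claimed compact form
Ua_eq25 = U2/((1 + (U1 + 2*U2)*Pi0)*(1 - (2*U2 - U1)*Pi0))

# The difference simplifies to zero, confirming the equivalence
print(sp.simplify(Ua_eq24 - Ua_eq25))  # expected output: 0
```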
We see that \(\overline{U}_{2}\) diverges at the onset of the valley polarization order, which at \(U_{3}=0\) is at \((2U_{2}-U_{1})\Pi(0)=1\), Eq. (14). Near the onset,
\[\overline{U}_{2}\approx\frac{1}{4\Pi(0)}\frac{1}{1-(2U_{2}-U_{1})\Pi(0)} \tag{26}\]
We extended the analysis of Ref. [41] to include the contribution from \(U_{3}\). Simple experimentation with diagrammatic series to first order in \(U_{3}\) shows that \(U_{b}\) is finite but not singular near the onset of valley polarization. The relevant terms are the two extra contributions to \(U_{a}\), which we show in Fig. 6. They contain \(U_{3}\Pi^{2}(0)\) multiplied by either fully dressed \(\overline{U}_{2}^{2}\) or fully dressed \(\overline{U}_{1}^{2}\). With these terms,
\[U_{a}=\overline{U}_{2}-2U_{3}\Pi^{2}(0)\left(\overline{U}_{2}^{2}+\overline{ U}_{1}^{2}\right) \tag{27}\]
The dressed \(\overline{U}_{1}\) is extracted from (23):
\[\overline{U}_{1}= \frac{U_{1}}{1-(U_{1}\Pi(0))^{2}}-\frac{2U_{2}^{2}\Pi(0)}{1+U_{1} \Pi(0)}\] \[\times\frac{1}{\left(1+(2U_{2}+U_{1})\Pi(0)\right)\left(1-(2U_{2} -U_{1})\Pi(0)\right)} \tag{28}\]
Figure 6: Diagrams representing the correction to the effective interaction \(U_{a}\) due to finite \(U_{3}\), to first order in \(U_{3}\). The momenta along the upper line are approximately \((k,k)\) and the ones along the lower line are \((-k,-k)\).
Near the valley-polarization instability, the relevant term in \(\overline{U}_{1}\) is the divergent second one. Near the onset,
\[\overline{U}_{1}\approx-\frac{1}{4\Pi(0)}\frac{1}{(1-(2U_{2}-U_{1})\Pi(0))} \tag{29}\]
Comparing (26) and (29), we see that both \(\overline{U}_{1}\) and \(\overline{U}_{2}\) are proportional to the susceptibility of the order parameter for the valley polarization. Substituting these singular terms into (27), we find
\[U_{a}\approx\frac{1}{4\Pi(0)}\frac{1}{1-(2U_{2}-U_{1})\Pi(0)}\left(1-\frac{U_{3}\Pi(0)}{1-(2U_{2}-U_{1})\Pi(0)}\right)\] \[\approx\frac{1}{4\Pi(0)}\frac{1}{1-(2U_{2}-U_{1}-U_{3})\Pi(0)} \tag{30}\]
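Explicitly, denoting \(x\equiv(2U_{2}-U_{1})\Pi(0)\) and using the singular parts of \(\overline{U}_{2}\) and \(\overline{U}_{1}\) from Eqs. (26) and (29) in Eq. (27),

\[U_{a}\approx\frac{1}{4\Pi(0)(1-x)}-\frac{U_{3}}{4(1-x)^{2}}=\frac{1}{4\Pi(0)(1-x)}\left(1-\frac{U_{3}\Pi(0)}{1-x}\right)\approx\frac{1}{4\Pi(0)}\,\frac{1}{1-x+U_{3}\Pi(0)},\]

which is the second line of Eq. (30).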
Comparing with (14), we see that the inclusion of the \(U_{3}\) shifts the singular point, where \(U_{a}\) diverges, to exactly the onset of the valley polarization order.
Substituting \(U_{a}\) from (30) into the pairing vertex, we obtain
\[\Gamma_{\alpha\beta;\gamma\delta}(k,k^{\prime}=-k;p=k,p^{\prime}=-k)\approx\frac{1}{4\Pi(0)}\frac{\delta_{\alpha\gamma}\delta_{\beta\delta}}{1-(2U_{2}-U_{1}-U_{3})\Pi(0)} \tag{31}\]
This form of \(\Gamma\) implies that the pairing vertex is (i) enhanced near the onset of valley polarization and (ii) can be viewed as mediated by singular charge fluctuations. This is similar to the form of the pairing interaction near the onset of density order in single-valley systems. The similarity is only partial because, in single-valley systems, the boson-mediated pairing interaction is attractive. In contrast, here the pairing interaction is repulsive (positive) and, although strong, does not lead to superconductivity. One needs to add a magnetic field or spin-orbit coupling to obtain unconventional superconductivity with a frequency-dependent gap function \(\Delta(\omega_{m})\) that changes sign at a finite Matsubara frequency (Ref. [41]).
### Intra-valley ferromagnetism
We recall that the instability towards a ferromagnetic order within a valley occurs at \(1=U_{1}\Pi(0)\) if we neglect \(U_{3}\), and at \(1=(U_{1}\pm U_{3})\Pi(0)\) if we keep \(U_{3}\). In the last case, the two instabilities are towards ferromagnetic and antiferromagnetic order between valleys.
To obtain the pairing vertex \(\Gamma\), we again need to include the renormalizations with polarization bubble \(\Pi(0)\) and check whether there are contributions that become singular near a transition to ferromagnetism. Without \(U_{3}\), this is not the case as the renormalizations are
the same as in the previous section, i.e., the pairing interaction is mediated by charge fluctuations and contains the dressed \(\overline{U}_{2}\). The latter diverges at the valley polarization transition, but not at the transition toward a ferromagnetic order.
To verify whether there is any enhanced interaction near the onset of intra-valley ferromagnetism, we then need to include the intra-valley scattering \(U_{3}\) into consideration and analyze the structure of the terms in the pairing interaction that scale with \(U_{3}\). Because \(U_{3}\) is much smaller than \(U_{1}\) and \(U_{2}\), we first consider these terms to leading order in \(U_{3}\). Specifically, we verify whether there exists a component of the pairing interaction that scales with \(U_{3}\) and diverges at the onset of valley ferromagnetism at \(U_{1}\Pi(0)=1\).
We analyze both \(U_{a}\) and \(U_{b}\). The component \(U_{b}\) equals \(U_{3}\) at the bare level. On inspecting the perturbation series, we find the series, shown in Fig. 7 a), that becomes singular at the onset of intra-valley ferromagnetism. It represents ladder insertions of the intra-valley density-density interaction \(U_{1}\) on _both_ sides of the intra-valley scattering. Collecting these contributions, we find
\[U_{b}=U_{3}\gamma^{2}=\frac{U_{3}}{(1-U_{1}\Pi(0))^{2}} \tag{32}\]
We see that \(U_{b}\) diverges at the onset of ferromagnetism. Notice, however, that the divergent
piece scales as the square of the susceptibility of the ferromagnetic order parameter. We show below that this is an artifact of keeping only the terms to leading order in \(U_{3}\).
There is also a singular contribution to \(U_{a}\) near the onset of ferromagnetism. The corresponding term is \(-2U_{3}\Pi^{2}(0)\overline{U}_{1}^{2}\) in (27), but now we keep only the first term in the expression for \(\overline{U}_{1}\) in (28), as it contains \(1/(1-U_{1}\Pi(0))\) and diverges when \(U_{1}\Pi(0)=1\). We then obtain
\[U_{a}\approx-\frac{U_{3}}{2}\frac{(U_{1}\Pi(0))^{2}}{(1-U_{1}\Pi(0))^{2}} \approx-\frac{U_{3}}{2(1-U_{1}\Pi(0))^{2}} \tag{33}\]
Substituting singular parts of \(U_{a}\) and \(U_{b}\) from (32) and (33) into the pairing vertex, we obtain
\[\Gamma_{\alpha\beta;\gamma\delta}(k,-k;k,-k)=-\frac{U_{3}}{(1-U_{1}\Pi(0))^{2 }}\left(\delta_{\alpha\gamma}\delta_{\beta\delta}+\frac{1}{2}\mathbf{\sigma}_{ \alpha\gamma}\cdot\mathbf{\sigma}_{\beta\delta}\right) \tag{34}\]
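Equation (34) is obtained by the same Fierz rearrangement as in Eq. (6): with the singular parts \(U_{a}\approx-U_{3}/[2(1-U_{1}\Pi(0))^{2}]\) and \(U_{b}\approx U_{3}/(1-U_{1}\Pi(0))^{2}\),

\[U_{a}-\frac{U_{b}}{2}=-\frac{U_{3}}{(1-U_{1}\Pi(0))^{2}},\qquad-\frac{U_{b}}{2}=-\frac{U_{3}}{2(1-U_{1}\Pi(0))^{2}},\]

so that the density and spin components enter with relative weight \(1:\tfrac{1}{2}\), as in Eq. (34).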
We see that \(\Gamma_{\alpha\beta;\gamma\delta}(k,-k;k,-k)\) does diverge at the onset of intra-valley ferromagnetism, but it is quadratic in \(1/(1-U_{1}\Pi(0))\), and its density and spin components are comparable, i.e., the divergent \(\Gamma_{\alpha\beta;\gamma\delta}(k,-k;k,-k)\) cannot be viewed as coming solely from spin fluctuations.
We now show that this result is an artifact of restricting to linear order in \(U_{3}\), as at this level we do not distinguish between inter-valley ferromagnetism and inter-valley antiferromagnetism, i.e., between instabilities towards FM\({}^{+}\) and FM\({}^{-}\). From the analysis in the previous section and from general reasoning, it is natural to expect that higher-order terms in \(U_{3}\) replace \((1-U_{1}\Pi(0))^{2}\) in the denominator in (34) by \((1-U_{1}\Pi(0))^{2}-(U_{3}\Pi(0))^{2}=(1-(U_{1}+U_{3})\Pi(0))(1-(U_{1}-U_{3}) \Pi(0))\), such that the pairing interaction becomes singular right at the onset of either FM\({}^{+}\) or FM\({}^{-}\) and scales with the corresponding susceptibility. We now verify whether the divergent component comes from spin fluctuations.
We start with \(U_{b}\). Simple experimentation shows that the relevant diagrams form the ladder series shown in Fig. 7b. There are other seemingly relevant diagrams, like the ones which we earlier included in the renormalization of \(U_{1}\) into \(\overline{U}_{1}\). However, one can verify that these diagrams are not expressed in terms of \(\Pi(0)\) and for this reason are irrelevant to our analysis. An infinite series of ladder diagrams can be summed up in a manner similar to how it was done for \(\overline{U}_{1}\) and \(\overline{U}_{2}\) - by introducing dressed vertices and re-expressing the infinite series as a set of coupled equations for the dressed vertices. We show this diagrammatically in Fig. 8. In analytic form, the equations for the dressed \(\tilde{U}_{3}=U_{b}\) and the corresponding
\(\tilde{U}_{1}\) (different from \(\overline{U}_{1}\) in Fig. 5 and Eq. (21)) are
\[\tilde{U}_{3} =U_{3}+\tilde{U}_{1}\Pi(0)U_{3}+\tilde{U}_{3}\Pi(0)U_{1} \tag{35}\] \[\tilde{U}_{1} =U_{1}+\tilde{U}_{1}\Pi(0)U_{1}+\tilde{U}_{3}\Pi(0)U_{3} \tag{36}\]
Solving for \(\tilde{U}_{3}=U_{b}\), we obtain
\[U_{b}=\frac{1}{2}\left[\left(U_{1}+U_{3}\right)\gamma_{+}-\left(U_{1}-U_{3} \right)\gamma_{-}\right] \tag{37}\]
where
\[\gamma_{\pm}=\frac{1}{1-\left(U_{1}\pm U_{3}\right)\Pi(0)} \tag{38}\]
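Equations (35) and (36) decouple in the combinations \(\tilde{U}_{\pm}=\tilde{U}_{1}\pm\tilde{U}_{3}\):

\[\tilde{U}_{\pm}=\left(U_{1}\pm U_{3}\right)+\left(U_{1}\pm U_{3}\right)\Pi(0)\,\tilde{U}_{\pm}\;\;\Longrightarrow\;\;\tilde{U}_{\pm}=\left(U_{1}\pm U_{3}\right)\gamma_{\pm},\]

and \(U_{b}=\tilde{U}_{3}=(\tilde{U}_{+}-\tilde{U}_{-})/2\) reproduces Eq. (37).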
The component \(U_{a}\) is obtained in a similar way. We skip the details and present the result: \(U_{a}\) is given by almost the same formula as (24), but now \((U_{1}+U_{2})\) is multiplied by \(\gamma_{+}\) and \((U_{1}-U_{2})\) by \(\gamma_{-}\), instead of the \(\gamma\)'s in Eq.(24). In explicit form,
\[U_{a}=\frac{1}{2}\left[\frac{(U_{1}+U_{2})\gamma_{+}^{2}}{1+2(U_{1}+U_{2})\Pi(0)\gamma_{+}}-\frac{(U_{1}-U_{2})\gamma_{-}^{2}}{1+2(U_{1}-U_{2})\Pi(0)\gamma_{-}}\right], \tag{39}\]
Using these results, we can analyze superconductivity near the onset of each of the two instabilities, FM\({}^{+}\) and FM\({}^{-}\):
* Near FM\({}^{+}\) where \(\gamma_{+}\rightarrow\infty\), we find \[U_{a}\approx\frac{\gamma_{+}}{4\Pi(0)},\quad U_{b}\approx\frac{\gamma_{+}}{2 \Pi(0)}\] (40) The pairing vertex \(\Gamma_{\alpha\beta;\gamma\delta}(k,-k;k,-k)\) is \[\Gamma_{\alpha\beta;\gamma\delta}(k,-k;k,-k) =-\frac{1}{4\Pi(0)}\frac{\mathbf{\sigma}_{\alpha\beta}\cdot\mathbf{ \sigma}_{\gamma\delta}}{1-(U_{1}+U_{3})\Pi(0)}\] (41)
Figure 8: Diagrammatic representation of the coupled equations for the dressed \(\tilde{U}_{3}\) (the \(U_{b}\) component of the pairing vertex) and the dressed \(\tilde{U}_{1}\). The latter is different from \(\overline{U}_{1}\) in Fig. 5 as in these series the diagrams that contribute to \(\overline{U}_{1}\) are not expressed solely in terms of \(\Pi(0)\).
We see that the singular pairing vertex has only the spin component, as one could anticipate, and scales linearly with \(1/(1-(U_{1}+U_{3})\Pi(0))\), i.e., it is proportional to the susceptibility of bosonic excitations associated with inter-valley ferromagnetism. The sign of the pairing interaction in the singlet and the triplet channel is determined by the sign of \(\Gamma\) convoluted with \((\sigma^{y}_{\alpha\beta})(\sigma^{y}_{\gamma\delta})\) for singlet and \((\sigma^{x}_{\alpha\beta})(\sigma^{x}_{\gamma\delta})\) for triplet. One can also read off the sign of the interaction using \(\langle\boldsymbol{\sigma}\cdot\boldsymbol{\sigma}\rangle=+1\) for spin-triplet and \(-3\) for spin-singlet. We find that the pairing interaction is repulsive (positive) for spin-singlet pairing and attractive (negative) for spin-triplet pairing. We, therefore, predict that near the onset of FM\({}^{+}\) the system becomes unstable against superconductivity in the spin-triplet, valley-singlet channel.
* Near the FM\({}^{-}\) instability, where \(\gamma_{-}\rightarrow\infty\), we have \[\Gamma_{\alpha\beta;\gamma\delta}(k,-k;k,-k)=\frac{\gamma_{-}}{4\Pi(0)} \mathbf{\sigma}_{\alpha\beta}\cdot\mathbf{\sigma}_{\gamma\delta}=\frac{1}{4\Pi(0)} \frac{\mathbf{\sigma}_{\alpha\beta}\cdot\mathbf{\sigma}_{\gamma\delta}}{1-(U_{1}-U_{3} )\Pi(0)}\] (42) The pairing vertex is again singular _only_ in the spin channel and is proportional to the susceptibility of bosonic excitations associated with inter-valley antiferromagnetism. However, the sign of the vertex function is different from the one near FM\({}^{+}\). As a result, the attraction is now in the spin-singlet, valley-triplet channel, while the interaction in the spin-triplet channel is repulsive. We, therefore, predict that near the onset of FM\({}^{-}\) the system becomes unstable against superconductivity in the spin-singlet, valley-triplet channel.
We also emphasize that near each instability, the overall factor \(U_{3}\) in the pairing interaction cancels out with \(U_{3}\) in the denominator. Hence the dimensionless pairing coupling does not contain \(U_{3}\). The corresponding \(T_{c}\) still scales with \(U_{3}\), as the latter sets the width of the range where the interaction is independent of \(U_{3}\). Still, there is no \(1/U_{3}\) dependence in the exponent for \(T_{c}\).
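As a consistency check of the channel assignments above, one can project the vertex onto the two spin channels, using the standard values \(\langle\boldsymbol{\sigma}\cdot\boldsymbol{\sigma}\rangle=+1\) for spin-triplet and \(-3\) for spin-singlet (the shorthand \(\Gamma_{\text{triplet/singlet}}\) below is ours). For \((U_{1}+U_{3})\Pi(0)<1\), Eq. (41) gives

\[\Gamma_{\text{triplet}}^{\text{FM}^{+}}\approx-\frac{1}{4\Pi(0)}\,\frac{1}{1-(U_{1}+U_{3})\Pi(0)}<0,\qquad\Gamma_{\text{singlet}}^{\text{FM}^{+}}\approx+\frac{3}{4\Pi(0)}\,\frac{1}{1-(U_{1}+U_{3})\Pi(0)}>0,\]

i.e., attraction only in the spin-triplet channel (and hence, by the antisymmetry argument of Sec. III, in the valley-singlet channel), while Eq. (42) has the opposite overall sign and yields attraction in the spin-singlet, valley-triplet channel.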
### CDW and SDW transitions at \(\mathbf{Q=K-K^{{}^{\prime}}}\)
We now analyze the pairing interaction near the two transitions at a finite \(\mathbf{Q=K-K^{{}^{\prime}}}\) - a CDW and an SDW instability. We again compute the two components of the pairing vertex \(\Gamma_{\alpha\beta;\gamma\delta}(k,k^{\prime}=-k,p,p^{\prime}=-p)\), but now we choose terms with \(\Pi(Q)=\Pi(k+p)\), where
\(k\) and \(p\) are near two Dirac points in the same valley. By inspecting the diagrammatic series, it becomes apparent that for \(U_{a}\), the terms in question originate from the diagrammatic series depicted in Fig. 9 a). This series can be interpreted as either a ladder series in \(U_{2}\) or as a series of maximally crossed diagrams, depending on how one chooses to represent the interaction \(U_{2}\) diagrammatically. Summing up these series, we obtain
\[U_{a}=\frac{U_{2}}{1-U_{2}\Pi(k+p)} \tag{43}\]
Relevant diagrams for \(U_{b}\) are shown in Fig. 9 b). The dressed vertex \(\bar{\gamma}\) in these series is presented in Fig. 9 c). In analytic form, \(\bar{\gamma}=1/(1-U_{2}\Pi(Q))\). On top of this, there are insertions of bubbles, again made out of fermions from different valleys. Summing up the bubbles and dressing each bubble and the two side vertices by \(\bar{\gamma}\), we obtain
\[U_{b}=\bar{\gamma}^{2}\frac{U_{3}}{1+2U_{3}\bar{\gamma}\Pi(Q)}=\frac{U_{3}}{(1-U_{2}\Pi(Q))(1-(U_{2}-2U_{3})\Pi(Q))} \tag{44}\]
Substituting into the pairing vertex, we obtain
\[\Gamma_{\alpha\beta;\gamma\delta}(k,-k;k,-k)=\frac{1}{1-U_{2}\Pi(Q)}\left(U_{2 }\delta_{\alpha\gamma}\delta_{\beta\delta}-\frac{U_{3}}{1-(U_{2}-2U_{3})\Pi(Q )}\delta_{\alpha\delta}\delta_{\beta\gamma}\right) \tag{45}\]
Figure 9: Diagrammatic representation of \(U_{a}\) (panel (a)) and \(U_{b}\) (panel (b)) near SDW and CDW transitions with a finite \(\mathbf{Q}=\mathbf{K}-\mathbf{K}^{{}^{\prime}}\). The diagrams for the dressed vertex (a black dot) are shown in panel (c).
This vertex can be equivalently re-expressed as
\[\Gamma_{\alpha\beta;\gamma\delta}(k,-k;k,-k)=\frac{U_{2}}{2}\frac{\mathbf{\sigma}_{ \alpha\delta}\cdot\mathbf{\sigma}_{\beta\gamma}}{1-U_{2}\Pi(Q)}+\frac{U_{2}-2U_{3}} {2}\frac{\delta_{\alpha\delta}\delta_{\beta\gamma}}{1-(U_{2}-2U_{3})\Pi(Q)} \tag{46}\]
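The rearrangement from Eq. (45) to Eq. (46) uses the Fierz identity in the form \(\delta_{\alpha\gamma}\delta_{\beta\delta}=\tfrac{1}{2}\left(\delta_{\alpha\delta}\delta_{\beta\gamma}+\boldsymbol{\sigma}_{\alpha\delta}\cdot\boldsymbol{\sigma}_{\beta\gamma}\right)\). The coefficient of \(\delta_{\alpha\delta}\delta_{\beta\gamma}\) is then

\[\frac{U_{a}}{2}-U_{b}=\frac{U_{2}\left[1-(U_{2}-2U_{3})\Pi(Q)\right]-2U_{3}}{2\left(1-U_{2}\Pi(Q)\right)\left(1-(U_{2}-2U_{3})\Pi(Q)\right)}=\frac{U_{2}-2U_{3}}{2\left[1-(U_{2}-2U_{3})\Pi(Q)\right]},\]

while the coefficient of \(\boldsymbol{\sigma}_{\alpha\delta}\cdot\boldsymbol{\sigma}_{\beta\gamma}\) is \(U_{a}/2=\tfrac{U_{2}}{2}/(1-U_{2}\Pi(Q))\), which reproduces Eq. (46).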
Note that the spin indices, combined into \(\delta\)- and \(\sigma\)-functions, are for momenta \(k\) and \(-p\), as in the polarization bubble \(\Pi(k+p)=\Pi(\mathbf{Q})\). We see that the spin component of the vertex function is enhanced near an SDW transition at \(1=U_{2}\Pi(\mathbf{Q})\) and the charge component is enhanced near a CDW transition at \(1=(U_{2}-2U_{3})\Pi(\mathbf{Q})\). Near each of these two transitions, the effective pairing interaction can be viewed as mediated by fluctuations of the SDW or CDW order parameters.
To get the sign of the effective pairing interaction, we again convolute \(\Gamma_{\alpha\beta;\gamma\delta}\) with \((\sigma^{y}_{\alpha\beta})(\sigma^{y}_{\gamma\delta})\) for spin-singlet and \((\sigma^{x}_{\alpha\beta})(\sigma^{x}_{\gamma\delta})\) for spin-triplet. We find that near an SDW transition, both spin-singlet and spin-triplet components are positive (repulsive). Hence the interaction mediated by soft SDW fluctuations does not give rise to superconductivity with a momentum-independent gap function. Near a CDW transition, the effective interaction is repulsive in the spin-triplet channel, but attractive in the spin-singlet, valley-triplet channel.
A comment is in order. In our analysis, we approximated intra-valley and inter-valley interactions as constants. As a consequence, we neglected a potential momentum dependence of the pairing interaction at momenta of order \(k_{F}\) and treated the gap function as \(\Delta_{\alpha\beta}=\Delta T^{s}_{\alpha\beta}T^{v}\), where \(T^{s}\) and \(T^{v}\) are the spin and valley Pauli matrices. For a generic momentum-dependent pairing interaction, the gap function \(\Delta_{\alpha\beta}=\Delta(k)T^{s}_{\alpha\beta}T^{v}\). For the pairing, mediated by CDW or SDW fluctuations, the pairing vertex couples \(\Delta(k)\) and \(\Delta(-k^{\prime})\), where both \(k\) and \(-k^{\prime}\) are near \(\mathbf{K}\). This allows, at least in principle, two types of solutions with respect to the deviation from the center of the Fermi surface at \(\mathbf{K}\), which we label as \(\tilde{k}\): an even parity solution \(\Delta(\tilde{k})=\Delta(-\tilde{k})\) and an odd parity solution \(\Delta(\tilde{k})=-\Delta(-\tilde{k})\). For a constant \(U_{3}\), only an even-parity solution is possible, but if the dressed \(U_{3}(\mathbf{p})\) varies substantially at \(|\mathbf{p}-\mathbf{Q}|\sim k_{F}\), an odd-parity solution is possible. For odd-parity pairing, it is natural to expect that the effective pairing vertex has an opposite sign compared to the even-parity one. Then the pairing interaction mediated by CDW fluctuations is attractive in the spin-triplet channel and repulsive in the spin-singlet channel. In contrast, the pairing interaction mediated by SDW fluctuations is attractive in both spin-triplet and spin-singlet channels.
The same separation holds for the case when there are two Fermi surfaces near both
\(\mathbf{K}\) and \(\mathbf{K}^{{}^{\prime}}\), one inside the other. In this case, one can have either a sign-preserving or a sign-changing gap between the inner and outer hole pockets.
This last scenario has been studied in detail for RTG [47], and both spin-triplet and spin-singlet superconducting states have been argued to develop near the onset of SDW ("spin-polarized IVC" states, in the nomenclature of Ref.[47]).
## VI Conclusions
This work presents a general framework to describe superconductivity triggered by correlated electronic orders in a system of interacting fermions in graphene-like bands with Fermi pockets near the \(\mathbf{K}\) and \(\mathbf{K}^{{}^{\prime}}\) points in the Brillouin zone. This model, while general enough, is argued to mimic well the physics of graphene multilayers in a displacement field. We explore four possible ordered states: valley polarization, intra-valley ferromagnetism (this order further splits into an inter-valley ferromagnetism where spin polarizations in the two valleys are parallel and an inter-valley antiferromagnetism where spin polarizations in the two valleys are antiparallel), and CDW or SDW order at momentum \(\mathbf{Q}=\mathbf{K}-\mathbf{K}^{{}^{\prime}}\), also known in the literature as intervalley coherence (IVC) orders of a spin-singlet and spin-triplet type. Electron interactions are described by a Hamiltonian that includes a density-density interaction within a single valley (\(U_{1}\)), an interaction between fermion densities in different valleys (\(U_{2}\)), and an inter-valley exchange interaction that involves inter-valley scattering (\(U_{3}\)). We derived five independent conditions for the onset of these orders and considered superconductivity near each of the ordered states. We found that the effective pairing interaction is enhanced in each case. Near a valley polarization and a CDW instability, the interaction can be viewed as mediated by soft fluctuations of the corresponding density (charge) order parameter. The enhanced interaction is repulsive near valley polarization and attractive in the spin-singlet/valley-triplet channel near the onset of the CDW order. Near the onset of the SDW order, the enhanced pairing interaction can be viewed as mediated by soft spin fluctuations with momenta near \(\mathbf{Q}=\mathbf{K}-\mathbf{K}^{\prime}\). This interaction is, however, repulsive and as such does not give rise to superconductivity in s-wave channels. We showed that the pairing interaction near the onset of a ferromagnetic order within and between valleys (FM\({}^{+}\) order) and the one near the onset of a ferromagnetic order within a valley and antiferromagnetic order between valleys (FM\({}^{-}\) order) is mediated by spin fluctuations and is attractive in the spin-triplet/valley-singlet
channel for FM\({}^{+}\) and in the spin-singlet/valley-triplet channel for FM\({}^{-}\). We argued that to demonstrate this one has to sum up infinite series in the intra-valley scattering \(U_{3}\). In both cases, the pairing interaction scales with \(U_{3}\) at some distance from a magnetic transition but becomes independent of \(U_{3}\) near the onset of the transition. Consequently, \(T_{c}\) at the onset of FM\({}^{+}\) or FM\({}^{-}\) order does not contain \(1/U_{3}\) in the exponent (i.e., is not exponentially small in \(1/U_{3}\)).
These results substantiate the notion of graphene systems offering a versatile platform to realize and explore a wide range of possible scenarios for unconventional superconductivity driven by electron interactions.
###### Acknowledgements.
We thank E. Berg, D. Efetov, A. MacDonald, and A. Young for fruitful discussions. The work by L.L. was supported by the Science and Technology Center for Integrated Quantum Materials, National Science Foundation Grant No. DMR1231319, and Army Research Office Grant No. W911NF-18-1-0116. The work by A.V.C. was supported by U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0014402.